[incubator-mxnet] branch master updated (71b6272 -> 44cd63e)

2019-12-08 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 71b6272  use identity_with_cast (#16913)
 add 44cd63e  [Numpy] Implement numpy operator 'average' (#16720)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  | 100 +-
 python/mxnet/numpy/multiarray.py   | 101 +-
 python/mxnet/symbol/numpy/_symbol.py   |  98 +-
 src/operator/numpy/np_broadcast_reduce_op.h| 348 +
 src/operator/numpy/np_broadcast_reduce_op_value.cc |  71 +
 src/operator/numpy/np_broadcast_reduce_op_value.cu |   6 +
 tests/python/unittest/test_numpy_op.py | 113 +++
 7 files changed, 823 insertions(+), 14 deletions(-)
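
For context, numpy's `average` computes a (optionally weighted) mean; the new `mx.np.average` is expected to mirror the official NumPy semantics sketched below with plain NumPy, since the exact mxnet signature lives in PR #16720 itself:

```
import numpy as np

a = np.arange(6).reshape(2, 3)
# unweighted: identical to a.mean(axis=1)
np.average(a, axis=1)                     # array([1., 4.])
# weighted: sum(a * w) / sum(w) along the axis
np.average(a, axis=1, weights=[1, 2, 3])  # array([1.33..., 4.33...])
```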



[GitHub] [incubator-mxnet] haojin2 merged pull request #16720: [Numpy] Implement numpy operator 'average'

2019-12-08 Thread GitBox
haojin2 merged pull request #16720: [Numpy] Implement numpy operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (7736bfd -> 71b6272)

2019-12-08 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 7736bfd  [Numpy] add op full_like, c++ impl, fix zeros_like, ones_like 
type inference (#16804)
 add 71b6272  use identity_with_cast (#16913)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/linalg.py   |  56 ++-
 python/mxnet/numpy/linalg.py   |  56 ++-
 python/mxnet/numpy_dispatch_protocol.py|   1 +
 python/mxnet/symbol/numpy/linalg.py|  55 ++-
 src/operator/c_lapack_api.cc   |  10 +
 src/operator/c_lapack_api.h|  39 +-
 src/operator/numpy/linalg/np_solve-inl.h   | 496 +
 src/operator/numpy/linalg/np_solve.cc  | 116 +
 .../numpy/linalg/{np_gesvd.cu => np_solve.cu}  |  14 +-
 .../python/unittest/test_numpy_interoperability.py |  22 +
 tests/python/unittest/test_numpy_op.py |  91 
 11 files changed, 944 insertions(+), 12 deletions(-)
 create mode 100644 src/operator/numpy/linalg/np_solve-inl.h
 create mode 100644 src/operator/numpy/linalg/np_solve.cc
 copy src/operator/numpy/linalg/{np_gesvd.cu => np_solve.cu} (74%)
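
For context, the new `np_solve` operator targets numpy's `linalg.solve` semantics; a minimal sketch in plain NumPy, which dispatches to the same LAPACK `gesv` routine this commit wires up through `c_lapack_api`:

```
import numpy as np

A = np.array([[3., 1.],
              [1., 2.]])
b = np.array([9., 8.])
x = np.linalg.solve(A, b)      # solves A @ x = b
print(x)                       # [2. 3.]
print(np.allclose(A @ x, b))   # True
```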




[GitHub] [incubator-mxnet] haojin2 merged pull request #16913: [numpy] add op linalg solve

2019-12-08 Thread GitBox
haojin2 merged pull request #16913: [numpy] add op linalg solve 
URL: https://github.com/apache/incubator-mxnet/pull/16913
 
 
   




[GitHub] [incubator-mxnet] junrushao1994 commented on issue #17018: Replace mxnet_option macro with standard CMAKE_DEPENDENT_OPTION

2019-12-08 Thread GitBox
junrushao1994 commented on issue #17018: Replace mxnet_option macro with 
standard CMAKE_DEPENDENT_OPTION
URL: https://github.com/apache/incubator-mxnet/pull/17018#issuecomment-563108671
 
 
   I think I am very much a beginner at using cmake. Looks like `CMAKE_DEPENDENT_OPTION` provides a native way to express cmake options, which is great :-)




[GitHub] [incubator-mxnet] perdasilva commented on issue #16966: [CD] dynamic libmxnet pipeline fix + small fixes

2019-12-08 Thread GitBox
perdasilva commented on issue #16966: [CD] dynamic libmxnet pipeline fix + small fixes
URL: https://github.com/apache/incubator-mxnet/pull/16966#issuecomment-563106814
 
 
   @DickJC123 I've created an issue to track this problem: #17020 - thanks 
again for looking into it




[GitHub] [incubator-mxnet] perdasilva commented on issue #17020: [CUDA 9.0] NVRTC Compilation failed

2019-12-08 Thread GitBox
perdasilva commented on issue #17020: [CUDA 9.0] NVRTC Compilation failed
URL: 
https://github.com/apache/incubator-mxnet/issues/17020#issuecomment-563106604
 
 
   @DickJC123 I've created a proper issue so we can track this problem - thanks 
for looking into it!




[GitHub] [incubator-mxnet] perdasilva opened a new issue #17020: [CUDA 9.0] NVRTC Compilation failed

2019-12-08 Thread GitBox
perdasilva opened a new issue #17020: [CUDA 9.0] NVRTC Compilation failed
URL: https://github.com/apache/incubator-mxnet/issues/17020
 
 
   ## Description
   Since #15167, the CD pipeline for CUDA 9.0 has been failing.
   
   ### Error Message
   
   Many examples can be taken from the CD pipeline, [e.g.](http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/restricted-mxnet-cd%2Fmxnet-cd-release-job/detail/mxnet-cd-release-job/276/pipeline)
   
   ```
   ======================================================================
   ERROR: test_operator_gpu.test_batchnorm_training
   ----------------------------------------------------------------------
   Traceback (most recent call last):
     File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
       self.test(*self.arg)
     File "/usr/local/lib/python2.7/dist-packages/nose/util.py", line 620, in newfunc
       return func(*arg, **kw)
     File "/work/mxnet/tests/python/gpu/../unittest/common.py", line 177, in test_new
       orig_test(*args, **kwargs)
     File "/work/mxnet/tests/python/gpu/../unittest/test_operator.py", line 1830, in test_batchnorm_training
       check_batchnorm_training('default')
     File "/work/mxnet/tests/python/gpu/../unittest/test_operator.py", line 1769, in check_batchnorm_training
       check_numeric_gradient(test, in_location, mean_std, numeric_eps=1e-2, rtol=0.16, atol=1e-2)
     File "/work/mxnet/python/mxnet/test_utils.py", line 1101, in check_numeric_gradient
       symbolic_grads = {k:executor.grad_dict[k].asnumpy() for k in grad_nodes}
     File "/work/mxnet/python/mxnet/test_utils.py", line 1101, in <dictcomp>
       symbolic_grads = {k:executor.grad_dict[k].asnumpy() for k in grad_nodes}
     File "/work/mxnet/python/mxnet/ndarray/ndarray.py", line 2532, in asnumpy
       ctypes.c_size_t(data.size)))
     File "/work/mxnet/python/mxnet/base.py", line 255, in check_call
       raise MXNetError(py_str(_LIB.MXGetLastError()))
   MXNetError: [21:10:06] src/operator/fusion/fused_op.cu:558: Check failed: compileResult == NVRTC_SUCCESS (6 vs. 0) : NVRTC Compilation failed. Please set environment variable MXNET_USE_FUSION to 0.
   ```
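   The error text above already names the workaround: disable pointwise fusion via `MXNET_USE_FUSION`. A minimal sketch, assuming the variable is read at initialization time and therefore must be set before mxnet comes up:
   ```
   import os
   # must be set before mxnet initializes, otherwise it has no effect
   os.environ["MXNET_USE_FUSION"] = "0"

   import mxnet as mx  # NVRTC compilation of fused ops is now skipped
   ```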
   
   ## To Reproduce
   
   Run the CD pipeline for cu90 and/or cu90mkl
   
   ## What have you tried to solve it?
   
   I noticed that USE_NVTX=1 wasn't set in the [make configuration](https://github.com/apache/incubator-mxnet/blob/master/make/pip/pip_linux_cu90.mk) for CUDA 9.0 - but this had no effect.




[incubator-mxnet] branch master updated (3b8fdac -> 7736bfd)

2019-12-08 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3b8fdac  skip quantized conv flaky case (#16866)
 add 7736bfd  [Numpy] add op full_like, c++ impl, fix zeros_like, ones_like 
type inference (#16804)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/_numpy_op_doc.py  |  71 -
 python/mxnet/ndarray/numpy/_op.py  | 175 -
 python/mxnet/numpy/multiarray.py   | 173 +++-
 python/mxnet/numpy_dispatch_protocol.py|   3 +-
 python/mxnet/symbol/numpy/_symbol.py   | 132 +++-
 src/operator/numpy/np_init_op.cc   |  29 +---
 src/operator/numpy/np_init_op.cu   |   7 +-
 src/operator/tensor/init_op.h  |  46 ++
 .../python/unittest/test_numpy_interoperability.py |  20 ++-
 tests/python/unittest/test_numpy_op.py |  48 ++
 10 files changed, 596 insertions(+), 108 deletions(-)
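
For context, the point of the `full_like` family is that the output dtype is inferred from the input array unless explicitly overridden; the commit brings `mx.np` in line with the NumPy behavior sketched here in plain NumPy:

```
import numpy as np

x = np.arange(4, dtype=np.int64)
np.full_like(x, 7)                       # array([7, 7, 7, 7]), int64 inferred from x
np.zeros_like(x).dtype                   # dtype('int64'), type inference follows the input
np.full_like(x, 0.5, dtype=np.float32)   # explicit dtype override
```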






[GitHub] [incubator-mxnet] haojin2 merged pull request #16804: add numpy op full_like, c++ impl, fix zeros_like, ones_like type infe…

2019-12-08 Thread GitBox
haojin2 merged pull request #16804: add numpy op full_like, c++ impl, fix 
zeros_like, ones_like type infe…
URL: https://github.com/apache/incubator-mxnet/pull/16804
 
 
   




[GitHub] [incubator-mxnet] leezu opened a new pull request #17019: Fix CUDNN detection for CMake build

2019-12-08 Thread GitBox
leezu opened a new pull request #17019: Fix CUDNN detection for CMake build
URL: https://github.com/apache/incubator-mxnet/pull/17019
 
 
   ## Description ##
   - Use FindCUDNN.cmake instead of the previous macro
   - Previous macro did not detect CUDNN correctly on my system (Deep Learning AMI Ubuntu 18.04)
   - We now explicitly fail if the user does not provide CUDNN and hasn't manually set -DUSE_CUDNN=0
   
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [X] Feature1, tests, (and when applicable, API doc)
   
   ## Comments ##
   CC @szha @yajiedesign @junrushao1994 




[GitHub] [incubator-mxnet] szha commented on issue #17018: Replace mxnet_option macro with standard CMAKE_DEPENDENT_OPTION

2019-12-08 Thread GitBox
szha commented on issue #17018: Replace mxnet_option macro with standard 
CMAKE_DEPENDENT_OPTION
URL: https://github.com/apache/incubator-mxnet/pull/17018#issuecomment-563094643
 
 
   cc @junrushao1994 who may want to offer suggestions




[GitHub] [incubator-mxnet] leezu opened a new pull request #17018: Replace mxnet_option macro with standard CMAKE_DEPENDENT_OPTION

2019-12-08 Thread GitBox
leezu opened a new pull request #17018: Replace mxnet_option macro with 
standard CMAKE_DEPENDENT_OPTION
URL: https://github.com/apache/incubator-mxnet/pull/17018
 
 
   ## Description ##
   Replace mxnet_option macro with standard CMAKE_DEPENDENT_OPTION.
   Using standard language constructs improves maintainability and eases 
reading the code.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [X] Replace mxnet_option macro with CMAKE_DEPENDENT_OPTION part of CMake 
"standard library"
   
   ## Comments ##
   @cjolivier01 please confirm the change in `FindMKL.cmake` is correct. My understanding is that
   - `MKL_USE_STATIC_LIBS` should always be `OFF`, and the user can only set it to `ON` if `MKL_USE_SINGLE_DYNAMIC_LIBRARY` is `OFF`.
   - `MKL_MULTI_THREADED` should always be `ON`, and the user can only set it to `OFF` if `MKL_USE_SINGLE_DYNAMIC_LIBRARY` is `OFF`.
   - `MKL_USE_CLUSTER` should always be `OFF`, and the user can only set it to `ON` if `CMAKE_SIZEOF_VOID_P EQUAL 4`.
   
   Previously the variables were undefined when the condition was not true.
   
   




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv

2019-12-08 Thread GitBox
pengzhao-intel commented on issue #16830: CI error in unix gpu 
test_quantization_gpu.test_quantized_conv
URL: 
https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-563093760
 
 
   skip the test now and please merge the code.




[GitHub] [incubator-mxnet] haojin2 commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv

2019-12-08 Thread GitBox
haojin2 commented on issue #16830: CI error in unix gpu 
test_quantization_gpu.test_quantized_conv
URL: 
https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-563092812
 
 
   Happening again: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-17016/1/pipeline
   ```
   ======================================================================
   FAIL: test_quantization_mkldnn.test_quantized_conv
   ----------------------------------------------------------------------
   Traceback (most recent call last):
     File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in runTest
       self.test(*self.arg)
     File "/usr/local/lib/python3.5/dist-packages/nose/util.py", line 620, in newfunc
       return func(*arg, **kw)
     File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in test_new
       orig_test(*args, **kwargs)
     File "/work/mxnet/tests/python/mkl/../quantization/test_quantization.py", line 277, in test_quantized_conv
       check_quantized_conv((3, 4, 28, 28), (3, 3), 128, (1, 1), (1, 1), False, qdtype)
     File "/work/mxnet/tests/python/mkl/../quantization/test_quantization.py", line 273, in check_quantized_conv
       assert cond == 0
   AssertionError:
   -------------------- >> begin captured stdout << ---------------------
   skipped testing quantized_conv for mkldnn cpu int8 since it is not supported yet
   skipped testing quantized_conv for mkldnn cpu int8 since it is not supported yet
   --------------------- >> end captured stdout << ----------------------
   -------------------- >> begin captured logging << ---------------------
   common: INFO: Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1860264925 to reproduce.
   --------------------- >> end captured logging << ----------------------
   ```




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-12-08 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c3d424b  Bump the publish timestamp.
c3d424b is described below

commit c3d424b9a988aa2b99a43555af860866e3bda1f7
Author: mxnet-ci 
AuthorDate: Mon Dec 9 06:44:19 2019 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..6175172
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Dec  9 06:44:19 UTC 2019



[GitHub] [incubator-mxnet] rondogency commented on issue #17006: [RFC] Custom Operator Part 2

2019-12-08 Thread GitBox
rondogency commented on issue #17006: [RFC] Custom Operator Part 2
URL: 
https://github.com/apache/incubator-mxnet/issues/17006#issuecomment-563085461
 
 
   Need to include a fix for the test error 
https://github.com/apache/incubator-mxnet/pull/15921#pullrequestreview-328686634




[GitHub] [incubator-mxnet] janelu9 commented on issue #9359: Does gluon's dnn support data format of libsvm other than mxnet's?

2019-12-08 Thread GitBox
janelu9 commented on issue #9359: Does gluon's dnn support data format of 
libsvm other than mxnet's?
URL: 
https://github.com/apache/incubator-mxnet/issues/9359#issuecomment-563085016
 
 
   I think I can use embedding
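   For readers landing here: a minimal Gluon sketch of that idea, mapping sparse libsvm-style feature ids to dense vectors (the vocabulary size, embedding width, and ids below are made up for illustration):
   ```
   import mxnet as mx

   emb = mx.gluon.nn.Embedding(input_dim=10000, output_dim=16)
   emb.initialize()

   ids = mx.nd.array([[1, 42, 977]])  # feature indices parsed from a libsvm row
   vecs = emb(ids)                    # shape (1, 3, 16): one dense vector per id
   ```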




[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #15921: dynamic custom operator support

2019-12-08 Thread GitBox
rondogency commented on a change in pull request #15921: dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r355280560
 
 

 ##
 File path: tests/python/unittest/test_extensions.py
 ##
 @@ -21,15 +21,16 @@
 import platform
 import unittest
 import mxnet as mx
+import numpy as np
 from mxnet.base import MXNetError
-from mxnet.test_utils import download, is_cd_run
+from mxnet.test_utils import download, is_cd_run, assert_almost_equal
 
 def check_platform():
 return platform.machine() not in ['x86_64', 'AMD64']
 
 @unittest.skipIf(check_platform(), "not all machine types supported")
 @unittest.skipIf(is_cd_run(), "continuous delivery run - ignoring test")
-def test_library_loading():
+def test_custom_op():
 
 Review comment:
   Gotcha I will fix it in the next PR




[GitHub] [incubator-mxnet] Tommliu commented on issue #16989: Op_Diagonal [Numpy]

2019-12-08 Thread GitBox
Tommliu commented on issue #16989: Op_Diagonal [Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16989#issuecomment-563080401
 
 
   @haojin2 Diagonal done. 




[GitHub] [incubator-mxnet] szha commented on issue #17017: USE_BLAS=apple broken on OSX 10.15

2019-12-08 Thread GitBox
szha commented on issue #17017: USE_BLAS=apple broken on OSX 10.15
URL: 
https://github.com/apache/incubator-mxnet/issues/17017#issuecomment-563078396
 
 
   It looks broken too: 
https://github.com/apache/incubator-mxnet/blob/master/cmake/Modules/FindAccelerate.cmake#L27




[incubator-mxnet] branch master updated (b009864 -> 3b8fdac)

2019-12-08 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b009864  large tensor faq doc fix (#16953)
 add 3b8fdac  skip quantized conv flaky case (#16866)

No new revisions were added by this update.

Summary of changes:
 tests/python/quantization/test_quantization.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)



[GitHub] [incubator-mxnet] leezu commented on issue #17017: USE_BLAS=apple broken on OSX 10.15

2019-12-08 Thread GitBox
leezu commented on issue #17017: USE_BLAS=apple broken on OSX 10.15
URL: 
https://github.com/apache/incubator-mxnet/issues/17017#issuecomment-563076929
 
 
   How about CMake?







[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16866: skip quantized conv flaky case

2019-12-08 Thread GitBox
pengzhao-intel merged pull request #16866: skip quantized conv flaky case
URL: https://github.com/apache/incubator-mxnet/pull/16866
 
 
   




[GitHub] [incubator-mxnet] szha opened a new issue #17017: USE_BLAS=apple broken on OSX 10.15

2019-12-08 Thread GitBox
szha opened a new issue #17017: USE_BLAS=apple broken on OSX 10.15
URL: https://github.com/apache/incubator-mxnet/issues/17017
 
 
   ## Description
   Starting from 10.15, Accelerate/vecLib frameworks are not shipped by default ([macOS frameworks are now thinned for the x86-64 architecture](https://developer.apple.com/documentation/macos_release_notes/macos_catalina_10_15_release_notes)), and users need to install the macOS SDK separately.
   
   ### Error Message
   cblas.h not found
   
   ## To Reproduce
   on OSX 10.15, `make USE_BLAS=apple`
   
   ## What have you tried to solve it?
   
   1. solved by installing SDK for macOS 10.15 and updating https://github.com/apache/incubator-mxnet/blob/master/3rdparty/mshadow/make/mshadow.mk#L124 to
   ```
   MSHADOW_CFLAGS += -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Headers/
   ```
   
   ## Environment
   
   We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:
   ```
   curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python
   
   # paste outputs here
   --Python Info--
   ('Version  :', '2.7.16')
   ('Compiler :', 'GCC 4.2.1 Compatible Apple LLVM 11.0.0 
(clang-1100.0.32.4) (-macos10.15-objc-s')
   ('Build:', ('default', 'Oct 17 2019 17:14:30'))
   ('Arch :', ('64bit', ''))
   Pip Info---
   No corresponding pip install for current python.
   --MXNet Info---
   No MXNet installed.
   --System Info--
   ('Platform :', 'Darwin-19.0.0-x86_64-i386-64bit')
   ('system   :', 'Darwin')
   ('node :', 'a483e79ab3ab.ant.amazon.com')
   ('release  :', '19.0.0')
   ('version  :', 'Darwin Kernel Version 19.0.0: Thu Oct 17 16:17:15 PDT 
2019; root:xnu-6153.41.3~29/RELEASE_X86_64')
   --Hardware Info--
   ('machine  :', 'x86_64')
   ('processor:', 'i386')
   machdep.cpu.brand_string: Intel(R) Core(TM) i7-8569U CPU @ 2.80GHz
   machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE 
MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ 
DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE 
POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
   machdep.cpu.leaf7_features: RDWRFSGS TSC_THREAD_OFFSET SGX BMI1 AVX2 SMEP 
BMI2 ERMS INVPCID FPU_CSDS MPX RDSEED ADX SMAP CLFSOPT IPT MDCLEAR TSXFA IBRS 
STIBP L1DF SSBD
   machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW 
RDTSCP TSCI
   --Network Test--
   Setting timeout: 10
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0380 sec, LOAD: 
0.8266 sec.
   Timing for D2L: http://d2l.ai, DNS: 0.0967 sec, LOAD: 0.1057 sec.
   Timing for FashionMNIST: 
https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, 
DNS: 0.0721 sec, LOAD: 0.2089 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0322 sec, 
LOAD: 0.1277 sec.
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0006 
sec, LOAD: 0.6272 sec.
   Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.0689 sec, LOAD: 
0.2368 sec.
   Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0362 sec, LOAD: 0.1082 sec.
   Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0006 
sec, LOAD: 0.5805 sec.
   ```
   




[incubator-mxnet] branch master updated (d58f6cb -> b009864)

2019-12-08 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d58f6cb  Add micro averaging strategy to pearsonr metric (#16878)
 add b009864  large tensor faq doc fix (#16953)

No new revisions were added by this update.

Summary of changes:
 docs/static_site/src/pages/api/faq/large_tensor_support.md | 13 +
 1 file changed, 13 insertions(+)






[GitHub] [incubator-mxnet] leezu merged pull request #16953: large tensor faq doc fix

2019-12-08 Thread GitBox
leezu merged pull request #16953: large tensor faq doc fix
URL: https://github.com/apache/incubator-mxnet/pull/16953
 
 
   




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #15921: dynamic custom operator support

2019-12-08 Thread GitBox
wkcn commented on a change in pull request #15921: dynamic custom operator 
support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r355268745
 
 

 ##
 File path: tests/python/unittest/test_extensions.py
 ##
 @@ -21,15 +21,16 @@
 import platform
 import unittest
 import mxnet as mx
+import numpy as np
 from mxnet.base import MXNetError
-from mxnet.test_utils import download, is_cd_run
+from mxnet.test_utils import download, is_cd_run, assert_almost_equal
 
 def check_platform():
 return platform.machine() not in ['x86_64', 'AMD64']
 
 @unittest.skipIf(check_platform(), "not all machine types supported")
 @unittest.skipIf(is_cd_run(), "continuous delivery run - ignoring test")
-def test_library_loading():
+def test_custom_op():
 
 Review comment:
   Is it possible to use mx.libinfo.find_lib_path to find the library?
   
https://github.com/apache/incubator-mxnet/blob/93228649340bcacb8056d47d8f6f8a78a9805ae4/python/mxnet/libinfo.py#L26
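   A rough sketch of that suggestion (the `libsample_lib.so` install location next to libmxnet is an assumption, not what the test currently does):
   ```
   import os
   from mxnet import libinfo

   # anchor the lookup at the installed mxnet library instead of the
   # current working directory, so the test no longer depends on cwd
   lib_dir = os.path.dirname(libinfo.find_lib_path()[0])
   lib_path = os.path.join(lib_dir, 'libsample_lib.so')  # assumed location
   assert os.path.exists(lib_path), 'library %s not found' % lib_path
   ```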




[GitHub] [incubator-mxnet] xidulu opened a new pull request #17016: [Numpy] Fix axis=-1 bug in max

2019-12-08 Thread GitBox
xidulu opened a new pull request #17016: [Numpy] Fix axis=-1 bug in max
URL: https://github.com/apache/incubator-mxnet/pull/17016
 
 
   ## Description ##
   Fix
   https://github.com/apache/incubator-mxnet/issues/17011
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #15921: dynamic custom operator support

2019-12-08 Thread GitBox
TaoLv commented on a change in pull request #15921: dynamic custom operator 
support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r355262466
 
 

 ##
 File path: tests/python/unittest/test_extensions.py
 ##
 @@ -21,15 +21,16 @@
 import platform
 import unittest
 import mxnet as mx
+import numpy as np
 from mxnet.base import MXNetError
-from mxnet.test_utils import download, is_cd_run
+from mxnet.test_utils import download, is_cd_run, assert_almost_equal
 
 def check_platform():
 return platform.machine() not in ['x86_64', 'AMD64']
 
 @unittest.skipIf(check_platform(), "not all machine types supported")
 @unittest.skipIf(is_cd_run(), "continuous delivery run - ignoring test")
-def test_library_loading():
+def test_custom_op():
 
 Review comment:
   It makes a strong assumption that the test will be called from the mxnet root folder; otherwise, libsample_lib.so will not be found.
   
   ```
   $ cd tests/python/unittest/
   $ nosetests -v test_extensions:test_custom_op
   test_extensions.test_custom_op ... ERROR
   
   ======================================================================
   ERROR: test_extensions.test_custom_op
   ----------------------------------------------------------------------
   Traceback (most recent call last):
     File "/home/lvtao/miniconda3/envs/mxnet/lib/python3.6/site-packages/nose/case.py", line 198, in runTest
       self.test(*self.arg)
     File "/home/lvtao/Workspace/mxnet-official/tests/python/unittest/test_extensions.py", line 41, in test_custom_op
       raise MXNetError("library %s not found " % lib)
   mxnet.base.MXNetError: library libsample_lib.so not found
   
   ----------------------------------------------------------------------
   Ran 1 test in 0.005s
   
   FAILED (errors=1)
   ```




[GitHub] [incubator-mxnet] access2rohit commented on issue #16953: large tensor faq doc fix

2019-12-08 Thread GitBox
access2rohit commented on issue #16953: large tensor faq doc fix
URL: https://github.com/apache/incubator-mxnet/pull/16953#issuecomment-563059985
 
 
   @mxnet-label-bot add [pr-awaiting-merge]




[GitHub] [incubator-mxnet] sxjscience commented on issue #16990: [numpy] add op matmul

2019-12-08 Thread GitBox
sxjscience commented on issue #16990: [numpy] add op matmul
URL: https://github.com/apache/incubator-mxnet/pull/16990#issuecomment-563047925
 
 
   I think you may call the BatchedGEMMStrided Kernel directly: 
https://devblogs.nvidia.com/cublas-strided-batched-matrix-multiply/
   
   
   From: JiangZhaoh 
   Sent: Sunday, December 8, 2019 6:04:32 PM
   To: apache/incubator-mxnet 
   Cc: Xingjian SHI ; Comment 

   Subject: Re: [apache/incubator-mxnet] [numpy] add op matmul (#16990)
   
   
   @JiangZhaoh commented on this pull request.
   
   
   
   In 
src/operator/numpy/np_matmul_op-inl.h:
   
   > +It is treated as a stack of matrices residing in the last two 
indexes and broadcast accordingly.
   +   * \param out - output: insert 'value' to 'arr' according to 'index'.
   +   * \param a - input: the first argument.
   +   * \param b - input: the second argument.
   +   * \param ndim - ndim of a, b and output. Because of broadcast, regard 
their ndim as equal.
   +   */
   +  template
   +  MSHADOW_XINLINE static void Map(int i, DType* out,
   +  const DType* a, const DType* b,
   +  const mshadow::Shape<10> a_stride,
   +  const mshadow::Shape<10> b_stride,
   +  const mshadow::Shape<10> out_stride,
   +  const mshadow::Shape<10> a_shape,
   +  const mshadow::Shape<10> b_shape,
   +  const mshadow::Shape<10> out_shape,
   +  const size_t ndim){
   
   
   You may refer to the implementation of batch_dot. There is no need to store 
the strides and shapes as Shape<10>. You can reshape the array to a 3D array 
and just dot the last two dimensions:
   
   
https://github.com/apache/incubator-mxnet/blob/1aa1b5a9ab53bb57a3c653793fb824d01f2d5e81/src/operator/tensor/dot-inl.h#L1348-L1409
   
   Thanks for your advice. But, may I ask how I could broadcast the shape if I don't store strides and shapes?
   e.g. Matrix A in shape (2, 1, 3, 4, 5) and matrix B in shape (3, 1, 5, 2): C = np.matmul(A, B) would broadcast A and B to shape (2, 3, 3, 4, 5) and (3, 3, 5, 2) respectively, and C's shape would be (2, 3, 3, 4, 2).
   If I didn't store shape and stride, I think I should copy the content in each array to get the consistent shape first, and then use your method. Is that what you mean?
   
   




[incubator-mxnet] branch master updated (251e6f6 -> d58f6cb)

2019-12-08 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 251e6f6  Fix NDArrayIter cant pad when size is large (#17001)
 add d58f6cb  Add micro averaging strategy to pearsonr metric (#16878)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/metric.py   | 79 ++--
 tests/python/unittest/test_metric.py | 42 +++
 2 files changed, 102 insertions(+), 19 deletions(-)
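
For context, "micro" averaging here presumably pools every (label, prediction) pair across updates and computes one global correlation, while the existing "macro" behavior averages per-batch correlations; a rough sketch of the distinction in plain NumPy, not the metric's actual code:

```
import numpy as np

batches = [(np.array([1., 2., 3.]), np.array([1.1, 1.9, 3.2])),
           (np.array([4., 5., 6.]), np.array([3.8, 5.1, 6.0]))]

# macro: Pearson r per batch, then averaged
macro = np.mean([np.corrcoef(l, p)[0, 1] for l, p in batches])

# micro: pool all samples first, then one global Pearson r
labels = np.concatenate([l for l, _ in batches])
preds = np.concatenate([p for _, p in batches])
micro = np.corrcoef(labels, preds)[0, 1]
```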






[GitHub] [incubator-mxnet] leezu merged pull request #16878: add micro to pearsonr

2019-12-08 Thread GitBox
leezu merged pull request #16878: add micro to pearsonr
URL: https://github.com/apache/incubator-mxnet/pull/16878
 
 
   




[GitHub] [incubator-mxnet] kice opened a new pull request #17015: optimize onnx ops import

2019-12-08 Thread GitBox
kice opened a new pull request #17015: optimize onnx ops import
URL: https://github.com/apache/incubator-mxnet/pull/17015
 
 
   Set padding for convolution if it is symmetric
   Add 'CRD' support for depthtospace
   
   CRD mode is the default mode for PyTorch's PixelShuffle when exporting with opset >= 11
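   For reference, the two DepthToSpace layouts differ only in how the channel axis is split; a sketch of CRD mode following the rearrangement given in the ONNX operator spec:
   ```
   import numpy as np

   def depth_to_space_crd(x, b):
       # CRD: channels split as (C/(b*b), b, b); DCR would split as (b, b, C/(b*b))
       n, c, h, w = x.shape
       t = x.reshape(n, c // (b * b), b, b, h, w)
       t = t.transpose(0, 1, 4, 2, 5, 3)
       return t.reshape(n, c // (b * b), h * b, w * b)
   ```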




[GitHub] [incubator-mxnet] xidulu edited a comment on issue #17011: [Numpy] additional default parameter for mxnet.numpy.max: axis=-1

2019-12-08 Thread GitBox
xidulu edited a comment on issue #17011: [Numpy] additional default parameter 
for mxnet.numpy.max: axis=-1 
URL: 
https://github.com/apache/incubator-mxnet/issues/17011#issuecomment-562963658
 
 
   Looks like the problem occurs in this line
   
https://github.com/apache/incubator-mxnet/blob/9b25db05b4de18b5bfed8467cfd934f2bfa4f11a/src/operator/numpy/np_broadcast_reduce_op.h#L190
   
   I guess an additional check for bypassing the access to ishape tuple 
(`ishape[axes[i]]`) may solve the problem.
   
   __
   
   Working on it.
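   A rough sketch of that kind of guard (names are illustrative, not the actual fix in #17016): normalize negative axes into `[0, ndim)` before they are used to index `ishape`:
   ```
   def normalize_axes(axes, ndim):
       # map negative axes (e.g. -1) into [0, ndim) before touching ishape[axes[i]]
       out = []
       for ax in axes:
           ax = ax + ndim if ax < 0 else ax
           if not 0 <= ax < ndim:
               raise ValueError('axis %d out of bounds for %d-d input' % (ax, ndim))
           out.append(ax)
       return out

   normalize_axes([-1], 3)  # [2]
   ```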
   




[GitHub] [incubator-mxnet] leezu commented on issue #17012: Upgrade 3rdparty/openmp to release_90 version

2019-12-08 Thread GitBox
leezu commented on issue #17012: Upgrade 3rdparty/openmp to release_90 version
URL: https://github.com/apache/incubator-mxnet/pull/17012#issuecomment-563034941
 
 
   Force push to retrigger CI. Failed with Github API error for 5/11 jobs 
(other 6 passed):
   
   ```
   Pull request #17012 opened
   
   Connecting to https://api.github.com using anirudh2290/** 
(anirudh2290/** (Anirudh Subramanians personal access token to checkout the 
public MXNet repository. https://github.com/settings/tokens/324757639))
   
   ERROR: Error while retrieving pull request 17012 merge hash : 
org.kohsuke.github.HttpException: Server returned HTTP response code: -1, 
message: 'null' for URL: 
https://api.github.com/repos/apache/incubator-mxnet/commits/83a782e27e6a6b1f9d093a374ef042ba1994ea79
   
   Finished: FAILURE
   ```
   
   CC @marcoabreu




[GitHub] [incubator-mxnet] JiangZhaoh commented on a change in pull request #16990: [numpy] add op matmul

2019-12-08 Thread GitBox
JiangZhaoh commented on a change in pull request #16990: [numpy] add op matmul
URL: https://github.com/apache/incubator-mxnet/pull/16990#discussion_r355237851
 
 

 ##
 File path: src/operator/numpy/np_matmul_op-inl.h
 ##
 @@ -0,0 +1,356 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_matmul_op-inl.h
+ * \brief Function definition of matrix numpy-compatible matmul operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_MATMUL_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_MATMUL_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "np_tensordot_op-inl.h"
+#include "np_dot-inl.h"
+
+namespace mxnet {
+namespace op {
+
+template<int ndim>
+mshadow::Shape<ndim> GetStride(mshadow::Shape<ndim> shape, size_t N) {
+  /*!
+   * \brief Calculate stride of each dim from shape
+   */
+  mshadow::Shape<ndim> stride;
+  size_t tmp = 1;
+  for (int i = N - 1; i >= 0; --i) {
+    stride[i] = tmp;
+    tmp *= shape[i];
+  }
+  return stride;
+}
+
+template<int ndim>
+mshadow::Shape<ndim> GetKernelShape(const mxnet::TShape& shape,
+                                    size_t N, bool T = false) {
+  /*!
+   * \brief Get mshadow::Shape from mxnet::TShape. Extra dims are filled with 1.
+   * \param N - ndim of mshadow::Shape shape.
+   * \param T - If T is True, transpose the last two axes, otherwise not.
+   */
+  mshadow::Shape<ndim> k_shape;
+  for (int i = shape.ndim() - 1, j = N - 1; i >= 0 || j >= 0; --i, --j) {
+    if (i >= 0) {
+      k_shape[j] = shape[i];
+    } else {
+      k_shape[j] = 1;
+    }
+  }
+  if (T) {  // transpose the last two axes
+    size_t t = k_shape[N - 1];
+    k_shape[N - 1] = k_shape[N - 2];
+    k_shape[N - 2] = t;
+  }
+  return k_shape;
+}
+
+template<int ndim>
+mshadow::Shape<ndim> BroadcastKernelShape(mshadow::Shape<ndim> in_shape,
+                                          mshadow::Shape<ndim> broadcast_shape,
+                                          size_t N, size_t* size) {
+  /*!
+   * \brief Broadcast in_shape (ndim = N) to broadcast_shape (ndim = N) except the last two axes.
+   *        Make sure that: if i < N - 2 and in_shape[i] != broadcast_shape[i], in_shape[i] == 1.
+   * \param N - ndim of both in_shape and broadcast_shape.
+   * \param size - The size of the broadcast_shape.
+   */
+  mshadow::Shape<ndim> out_shape(in_shape);
+  *size = 1;
+  for (size_t i = 0; i < N - 2; ++i) {
+    out_shape[i] = std::max(in_shape[i], broadcast_shape[i]);
+    *size *= out_shape[i];
+  }
+  *size *= (out_shape[N - 2] * out_shape[N - 1]);
+  return out_shape;
+}
+
+template<int req>
+struct NDMatmul {
+  /*!
+   * \brief matmul(a, b) in the N-D (N >= 2) case.
+   *        It is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
+   * \param out - output: the result of matmul(a, b).
+   * \param a - input: the first argument.
+   * \param b - input: the second argument.
+   * \param ndim - ndim of a, b and output. Because of broadcast, regard their ndim as equal.
+   */
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, DType* out,
+                                  const DType* a, const DType* b,
+                                  const mshadow::Shape<10> a_stride,
+                                  const mshadow::Shape<10> b_stride,
+                                  const mshadow::Shape<10> out_stride,
+                                  const mshadow::Shape<10> a_shape,
+                                  const mshadow::Shape<10> b_shape,
+                                  const mshadow::Shape<10> out_shape,
+                                  const size_t ndim) {
 
 Review comment:
   > You may refer to the implementation of batch_dot. There is no need to store the strides and shapes as Shape<10>. You can reshape the array to a 3D array and just dot the last two dimensions:
   > 
   > https://github.com/apache/incubator-mxnet/blob/1aa1b5a9ab53bb57a3c653793fb824d01f2d5e81/src/operator/tensor/dot-inl.h#L1348-L1409
   
   Thanks for your advice. But, may I ask how I could broadcast the shape if I don't store strides and shapes?
   e.g. Matrix A in shape (2, 1, 3, 4, 5) and matrix B in shape (3, 1, 5, 2): C = np.matmul(A, B) would broadcast A and B to shape (2, 3, 3, 4, 5) and (3, 3, 5, 2) respectively, and C's shape would be (2, 3, 3, 4, 2).
   If I didn't store shape and stride, I think I should copy the content in each array to get the consistent shape first, and then use your method. Is that what you mean?

[GitHub] [incubator-mxnet] JiangZhaoh commented on a change in pull request #16990: [numpy] add op matmul

2019-12-08 Thread GitBox
JiangZhaoh commented on a change in pull request #16990: [numpy] add op matmul
URL: https://github.com/apache/incubator-mxnet/pull/16990#discussion_r355237789
 
 

 ##
 File path: src/operator/numpy/np_matmul_op-inl.h
 ##
 @@ -0,0 +1,356 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_matmul_op-inl.h
+ * \brief Function definition of matrix numpy-compatible matmul operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_MATMUL_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_MATMUL_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "np_tensordot_op-inl.h"
+#include "np_dot-inl.h"
+
+namespace mxnet {
+namespace op {
+
+template<int ndim>
+mshadow::Shape<ndim> GetStride(mshadow::Shape<ndim> shape, size_t N) {
+  /*!
+   * \brief Calculate stride of each dim from shape
+   */
+  mshadow::Shape<ndim> stride;
+  size_t tmp = 1;
+  for (int i = N - 1; i >= 0; --i) {
+    stride[i] = tmp;
+    tmp *= shape[i];
+  }
+  return stride;
+}
+
+template<int ndim>
+mshadow::Shape<ndim> GetKernelShape(const mxnet::TShape& shape,
+                                    size_t N, bool T = false) {
+  /*!
+   * \brief Get mshadow::Shape from mxnet::TShape. Extra dims are filled with 1.
+   * \param N - ndim of mshadow::Shape shape.
+   * \param T - If T is True, transpose the last two axes, otherwise not.
+   */
+  mshadow::Shape<ndim> k_shape;
+  for (int i = shape.ndim() - 1, j = N - 1; i >= 0 || j >= 0; --i, --j) {
+    if (i >= 0) {
+      k_shape[j] = shape[i];
+    } else {
+      k_shape[j] = 1;
+    }
+  }
+  if (T) {  // transpose the last two axes
+    size_t t = k_shape[N - 1];
+    k_shape[N - 1] = k_shape[N - 2];
+    k_shape[N - 2] = t;
+  }
+  return k_shape;
+}
+
+template<int ndim>
+mshadow::Shape<ndim> BroadcastKernelShape(mshadow::Shape<ndim> in_shape,
+                                          mshadow::Shape<ndim> broadcast_shape,
+                                          size_t N, size_t* size) {
+  /*!
+   * \brief Broadcast in_shape (ndim = N) to broadcast_shape (ndim = N) except the last two axes.
+   *        Make sure that: if i < N - 2 and in_shape[i] != broadcast_shape[i], in_shape[i] == 1.
+   * \param N - ndim of both in_shape and broadcast_shape.
+   * \param size - The size of the broadcast_shape.
+   */
+  mshadow::Shape<ndim> out_shape(in_shape);
+  *size = 1;
+  for (size_t i = 0; i < N - 2; ++i) {
+    out_shape[i] = std::max(in_shape[i], broadcast_shape[i]);
+    *size *= out_shape[i];
+  }
+  *size *= (out_shape[N - 2] * out_shape[N - 1]);
+  return out_shape;
+}
+
+template<int req>
+struct NDMatmul {
+  /*!
+   * \brief matmul(a, b) in the N-D (N >= 2) case.
+   *        It is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
+   * \param out - output: the result of matmul(a, b).
+   * \param a - input: the first argument.
+   * \param b - input: the second argument.
+   * \param ndim - ndim of a, b and output. Because of broadcast, regard their ndim as equal.
+   */
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, DType* out,
+                                  const DType* a, const DType* b,
+                                  const mshadow::Shape<10> a_stride,
+                                  const mshadow::Shape<10> b_stride,
+                                  const mshadow::Shape<10> out_stride,
+                                  const mshadow::Shape<10> a_shape,
+                                  const mshadow::Shape<10> b_shape,
+                                  const mshadow::Shape<10> out_shape,
+                                  const size_t ndim) {
 
 Review comment:
   Thanks for your advice. But, may I ask how I could broadcast the shape if I don't store strides and shapes?
   e.g. Matrix A in shape (2, 1, 3, 4, 5) and matrix B in shape (3, 1, 5, 2): C = np.matmul(A, B) would broadcast A and B to shape (2, 3, 3, 4, 5) and (3, 3, 5, 2) respectively, and C's shape would be (2, 3, 3, 4, 2).
   If I didn't store shape and stride, I think I should copy the content in each array to get the consistent shape first, and then use your method. Is that what you mean?
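   For readers following along, the broadcasting in question can be checked with plain NumPy; a sketch that also shows the reshape-to-3D route suggested above (`np.broadcast_shapes` needs NumPy >= 1.20, and the `reshape` of a broadcast view does copy the data, which is exactly the cost being discussed):
   ```
   import numpy as np

   A = np.random.rand(2, 1, 3, 4, 5)
   B = np.random.rand(3, 1, 5, 2)

   # broadcast only the stack dims (everything but the last two axes)
   stack = np.broadcast_shapes(A.shape[:-2], B.shape[:-2])  # (2, 3, 3)
   A3 = np.broadcast_to(A, stack + A.shape[-2:]).reshape((-1,) + A.shape[-2:])
   B3 = np.broadcast_to(B, stack + B.shape[-2:]).reshape((-1,) + B.shape[-2:])

   C = np.matmul(A3, B3).reshape(stack + (4, 2))            # (2, 3, 3, 4, 2)
   assert np.allclose(C, np.matmul(A, B))
   ```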



[GitHub] [incubator-mxnet] JiangZhaoh commented on a change in pull request #16990: [numpy] add op matmul

2019-12-08 Thread GitBox
JiangZhaoh commented on a change in pull request #16990: [numpy] add op matmul
URL: https://github.com/apache/incubator-mxnet/pull/16990#discussion_r355237789
 
 

 ##
 File path: src/operator/numpy/np_matmul_op-inl.h
 ##
 @@ -0,0 +1,356 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_matmul_op-inl.h
+ * \brief Function definition of matrix numpy-compatible matmul operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_MATMUL_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_MATMUL_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "np_tensordot_op-inl.h"
+#include "np_dot-inl.h"
+
+namespace mxnet {
+namespace op {
+
+template<int ndim>
+mshadow::Shape<ndim> GetStride(mshadow::Shape<ndim> shape, size_t N) {
+  /*!
+   * \brief Calculate the stride of each dim from the shape.
+   */
+  mshadow::Shape<ndim> stride;
+  size_t tmp = 1;
+  for (int i = N - 1; i >= 0; --i) {
+    stride[i] = tmp;
+    tmp *= shape[i];
+  }
+  return stride;
+}
+
+template<int ndim>
+mshadow::Shape<ndim> GetKernelShape(const mxnet::TShape& shape,
+                                    size_t N, bool T = false) {
+  /*!
+   * \brief Get mshadow::Shape from mxnet::TShape. Extra dims are filled with 1.
+   * \param N - ndim of the mshadow::Shape.
+   * \param T - if T is true, transpose the last two axes, otherwise not.
+   */
+  mshadow::Shape<ndim> k_shape;
+  for (int i = shape.ndim() - 1, j = N - 1; i >= 0 || j >= 0; --i, --j) {
+    if (i >= 0) {
+      k_shape[j] = shape[i];
+    } else {
+      k_shape[j] = 1;
+    }
+  }
+  if (T) {  // transpose the last two axes
+    size_t t = k_shape[N - 1];
+    k_shape[N - 1] = k_shape[N - 2];
+    k_shape[N - 2] = t;
+  }
+  return k_shape;
+}
+
+template<int ndim>
+mshadow::Shape<ndim> BroadcastKernelShape(mshadow::Shape<ndim> in_shape,
+                                          mshadow::Shape<ndim> broadcast_shape,
+                                          size_t N, size_t* size) {
+  /*!
+   * \brief Broadcast in_shape (ndim = N) to broadcast_shape (ndim = N), except the last two axes.
+   *        Requires that if i < N - 2 and in_shape[i] != broadcast_shape[i], then in_shape[i] == 1.
+   * \param N - ndim of both in_shape and broadcast_shape.
+   * \param size - the size of broadcast_shape.
+   */
+  mshadow::Shape<ndim> out_shape(in_shape);
+  *size = 1;
+  for (size_t i = 0; i < N - 2; ++i) {
+    out_shape[i] = std::max(in_shape[i], broadcast_shape[i]);
+    *size *= out_shape[i];
+  }
+  *size *= (out_shape[N - 2] * out_shape[N - 1]);
+  return out_shape;
+}
+
+template<int req>
+struct NDMatmul {
+  /*!
+   * \brief matmul(a, b) in the N-D (N >= 2) case.
+   *        Inputs are treated as stacks of matrices residing in the last two
+   *        indexes and broadcast accordingly.
+   * \param out - output: the result of matmul(a, b).
+   * \param a - input: the first argument.
+   * \param b - input: the second argument.
+   * \param ndim - ndim of a, b and out; because of broadcasting, their ndim is regarded as equal.
+   */
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, DType* out,
+                                  const DType* a, const DType* b,
+                                  const mshadow::Shape<10> a_stride,
+                                  const mshadow::Shape<10> b_stride,
+                                  const mshadow::Shape<10> out_stride,
+                                  const mshadow::Shape<10> a_shape,
+                                  const mshadow::Shape<10> b_shape,
+                                  const mshadow::Shape<10> out_shape,
+                                  const size_t ndim) {
 
 Review comment:
   Thanks for your advice. But may I ask how I could broadcast the shape if I 
don't store strides and shapes?
   e.g. for matrix A with shape (2, 1, 3, 4, 5) and matrix B with shape (3, 1, 5, 2), 
C = np.matmul(A, B) would broadcast A and B to shapes (2, 3, 3, 4, 5) and (3, 3, 
5, 2) respectively, and C's shape would be (2, 3, 3, 4, 2).
   If I didn't store shape and stride, I think I would first have to copy the 
contents of each array to get consistent shapes, and then use your method. Is 
this what you mean?
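   A hedged Python sketch of the stride-based alternative discussed above: 
rather than copying the arrays out to the broadcast shape, give each size-1 
batch axis a stride of 0, so a flat output index maps straight to input 
offsets. This mirrors what the a_stride/b_stride arguments in the kernel 
signature above are for; the helper name below is illustrative, not from the PR:
   ```
   def broadcast_strides(shape, strides, out_ndim):
       # left-pad to out_ndim, then zero the stride wherever the axis is 1
       pad = out_ndim - len(shape)
       shape = (1,) * pad + tuple(shape)
       strides = (0,) * pad + tuple(strides)
       return tuple(0 if dim == 1 else st for dim, st in zip(shape, strides))

   # batch part of A, shape (2, 1, 3), with row-major strides (3, 3, 1):
   print(broadcast_strides((2, 1, 3), (3, 3, 1), out_ndim=3))  # (3, 0, 1)
   ```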


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] wkcn edited a comment on issue #16994: DataLoader: Missing batch_size dim when num_workers > 0

2019-12-08 Thread GitBox
wkcn edited a comment on issue #16994: DataLoader: Missing batch_size dim when 
num_workers > 0
URL: 
https://github.com/apache/incubator-mxnet/issues/16994#issuecomment-563028046
 
 
   Hi @cuibuaa , please use the latest version of MXNet. It is a bug in the old 
version.
   
   https://github.com/apache/incubator-mxnet/pull/16233
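   
   A minimal, hypothetical repro sketch for the issue title, assuming the 
standard gluon data API; on a fixed build every batch should keep its 
batch_size dimension regardless of num_workers:
   ```
   import mxnet as mx
   from mxnet.gluon.data import ArrayDataset, DataLoader

   dataset = ArrayDataset(mx.nd.arange(8).reshape(8, 1))
   for batch in DataLoader(dataset, batch_size=4, num_workers=2):
       print(batch.shape)  # expected (4, 1) with or without worker processes
   ```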


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn commented on issue #16994: DataLoader: Missing batch_size dim when num_workers > 0

2019-12-08 Thread GitBox
wkcn commented on issue #16994: DataLoader: Missing batch_size dim when 
num_workers > 0
URL: 
https://github.com/apache/incubator-mxnet/issues/16994#issuecomment-563028046
 
 
   Hi @cuibuaa , please use the latest version of MXNet. It is a bug in the old 
version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] MediosZ closed issue #16961: Unable to hardcode the OMP_NUM_THREADS into program

2019-12-08 Thread GitBox
MediosZ closed issue #16961: Unable to hardcode the OMP_NUM_THREADS into program
URL: https://github.com/apache/incubator-mxnet/issues/16961
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] MediosZ commented on issue #16961: Unable to hardcode the OMP_NUM_THREADS into program

2019-12-08 Thread GitBox
MediosZ commented on issue #16961: Unable to hardcode the OMP_NUM_THREADS into 
program
URL: 
https://github.com/apache/incubator-mxnet/issues/16961#issuecomment-563027547
 
 
   Yes, the threads are created by openblas. I rebuilt openblas with the number 
of threads set to one, and that solved the problem.
   Thanks @ZhennanQin 
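   
   For reference, a hedged alternative that avoids rebuilding openblas: pin the 
thread counts from inside the program before mxnet (and its BLAS) initialize. 
The environment variables must be set before the first import:
   ```
   import os
   os.environ['OMP_NUM_THREADS'] = '1'        # OpenMP worker threads
   os.environ['OPENBLAS_NUM_THREADS'] = '1'   # OpenBLAS worker threads
   import mxnet as mx
   print(mx.nd.dot(mx.nd.ones((2, 2)), mx.nd.ones((2, 2))))
   ```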


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] kice edited a comment on issue #16590: import_onnx.py parser for onnx opset >= 9 has bug

2019-12-08 Thread GitBox
kice edited a comment on issue #16590: import_onnx.py parser for onnx opset >= 
9 has bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16590#issuecomment-563018787
 
 
   UPDATE:
   
   I think mxnet should look at the initializers first. I checked the docs for 
exporting an onnx model from pytorch, and found that I should set 
`keep_initializers_as_inputs=True` for the mxnet import to work. 
   
   Doc for 
[torch.onnx.export](https://pytorch.org/docs/stable/onnx.html#torch.onnx.export)
   
   > keep_initializers_as_inputs (bool, default None) – If True, all the 
initializers (typically corresponding to parameters) in the exported graph will 
also be added as inputs to the graph. If False, then initializers are not added 
as inputs to the graph, and only the non-parameter inputs are added as inputs. 
This may allow for better optimizations (such as constant folding etc.) by 
backends/runtimes that execute these graphs. If unspecified (default None), 
then the behavior is chosen automatically as follows. If operator_export_type 
is OperatorExportTypes.ONNX, the behavior is equivalent to setting this 
argument to False. For other values of operator_export_type, the behavior is 
equivalent to setting this argument to True.
   
   @oorqueda Would you like to fix it?
   
   ---
   In my case, I use pytorch to export onnx with an opset less than 8, and I 
still cannot import the model into mxnet. 
   
   ```
   >>>mxnet.__version__
   '1.6.0'
   >>> torch.__version__
   '1.3.1'
   >>> onnx.__version__
   '1.6.0'
   ```
   
   ```
   onnx_model = onnx.load(model_file)
   for node in onnx_model.graph.node:
   for i in node.input:
   print(i)
   ```
   Here is the output
   ```
   data
   head.0.weight
   head.0.bias
   19
   body.0.body.0.weight
   body.0.body.0.bias
   20
   21
   body.0.body.2.weight
   body.0.body.2.bias
   22
   19
   23
   body.1.body.0.weight
   body.1.body.0.bias
   24
   25
   body.1.body.2.weight
   body.1.body.2.bias
   26
   23
   27
   body.2.weight
   body.2.bias
   28
   19
   29
   tail.0.0.weight
   tail.0.0.bias
   31
   30
   43
   32
   34
   33
   44
   35
   tail.0.2.weight
   tail.0.2.bias
   37
   36
   45
   38
   40
   39
   46
   41
   tail.1.weight
   tail.1.bias
   ```
   
   Here is error when I import the model.
   ```
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_model.py in 
import_model(model_file)
57 # loads model file and returns ONNX protobuf object
58 model_proto = onnx.load_model(model_file)
   ---> 59 sym, arg_params, aux_params = graph.from_onnx(model_proto.graph)
60 return sym, arg_params, aux_params
61 
   
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_onnx.py in 
from_onnx(self, graph)
   113 node_name = node_name if node_name else None
   114 onnx_attr = self._parse_attr(node.attribute)
   --> 115 inputs = [self._nodes[i] for i in node.input]
   116 mxnet_sym = self._convert_operator(node_name, op_name, 
onnx_attr, inputs)
   117 
   
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_onnx.py in 
<listcomp>(.0)
   113 node_name = node_name if node_name else None
   114 onnx_attr = self._parse_attr(node.attribute)
   --> 115 inputs = [self._nodes[i] for i in node.input]
   116 mxnet_sym = self._convert_operator(node_name, op_name, 
onnx_attr, inputs)
   117 
   
   KeyError: 'head.0.weight'
   ```
   
   I even tried to add the model parameters to the input names for the pytorch 
export:
   ```
   opset = 7
   input_names = ['data'] + list(model.state_dict().keys())
   torch.onnx.export(model, # model being run
   x, # model input (or a tuple for multiple inputs)
   onnx_name, # where to save the model (can be a file or 
file-like object)
   export_params=True,# store the trained parameter weights inside 
the model file
   opset_version=opset,   # the ONNX version to export the model to
   do_constant_folding=True,  # whether to execute constant folding for 
optimization
   input_names = input_names, # the model's input names
   output_names = ['output']  # the model's output names
   )
   ```
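   
   A hedged sketch of the export call implied by the UPDATE above, keeping 
initializers as graph inputs so the current mxnet importer can resolve them 
(`model`, `x`, and `onnx_name` as defined in the snippet above):
   ```
   torch.onnx.export(model, x, onnx_name,
                     export_params=True,
                     opset_version=7,
                     keep_initializers_as_inputs=True,  # the key flag for mxnet import
                     input_names=['data'],
                     output_names=['output'])
   ```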


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17012: Upgrade 3rdparty/openmp to release_90 version

2019-12-08 Thread GitBox
leezu commented on issue #17012: Upgrade 3rdparty/openmp to release_90 version
URL: https://github.com/apache/incubator-mxnet/pull/17012#issuecomment-563020783
 
 
   Please see the last line of the PR description above. Given that the assert 
is still there and it is not failing anymore I think we can establish that 
there is no bug in MXNet. Thus I can't follow your reasoning that there's a bug 
in MXNet.
   I think it's more likely that there was a bug in the 2 year old openmp 
version.
   
   Then please rescind your veto or give some evidence that there is a bug. 
Currently I consider this to be a veto without technical justification.
   
   Thanks! 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] kice edited a comment on issue #16590: import_onnx.py parser for onnx opset >= 9 has bug

2019-12-08 Thread GitBox
kice edited a comment on issue #16590: import_onnx.py parser for onnx opset >= 
9 has bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16590#issuecomment-563018787
 
 
   In my case, I use pytorch to export onnx with an opset less than 8, and I 
still cannot import the model into mxnet. 
   
   ```
   >>>mxnet.__version__
   '1.6.0'
   >>> torch.__version__
   '1.3.1'
   >>> onnx.__version__
   '1.6.0'
   ```
   
   ```
   onnx_model = onnx.load(model_file)
   for node in onnx_model.graph.node:
   for i in node.input:
   print(i)
   ```
   Here is the output
   ```
   data
   head.0.weight
   head.0.bias
   19
   body.0.body.0.weight
   body.0.body.0.bias
   20
   21
   body.0.body.2.weight
   body.0.body.2.bias
   22
   19
   23
   body.1.body.0.weight
   body.1.body.0.bias
   24
   25
   body.1.body.2.weight
   body.1.body.2.bias
   26
   23
   27
   body.2.weight
   body.2.bias
   28
   19
   29
   tail.0.0.weight
   tail.0.0.bias
   31
   30
   43
   32
   34
   33
   44
   35
   tail.0.2.weight
   tail.0.2.bias
   37
   36
   45
   38
   40
   39
   46
   41
   tail.1.weight
   tail.1.bias
   ```
   
   Here is error when I import the model.
   ```
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_model.py in 
import_model(model_file)
57 # loads model file and returns ONNX protobuf object
58 model_proto = onnx.load_model(model_file)
   ---> 59 sym, arg_params, aux_params = graph.from_onnx(model_proto.graph)
60 return sym, arg_params, aux_params
61 
   
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_onnx.py in 
from_onnx(self, graph)
   113 node_name = node_name if node_name else None
   114 onnx_attr = self._parse_attr(node.attribute)
   --> 115 inputs = [self._nodes[i] for i in node.input]
   116 mxnet_sym = self._convert_operator(node_name, op_name, 
onnx_attr, inputs)
   117 
   
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_onnx.py in 
<listcomp>(.0)
   113 node_name = node_name if node_name else None
   114 onnx_attr = self._parse_attr(node.attribute)
   --> 115 inputs = [self._nodes[i] for i in node.input]
   116 mxnet_sym = self._convert_operator(node_name, op_name, 
onnx_attr, inputs)
   117 
   
   KeyError: 'head.0.weight'
   ```
   
   I even tried to add the model parameters to the input names for the pytorch 
export:
   ```
   opset = 7
   input_names = ['data'] + list(model.state_dict().keys())
   torch.onnx.export(model, # model being run
   x, # model input (or a tuple for multiple inputs)
   onnx_name, # where to save the model (can be a file or 
file-like object)
   export_params=True,# store the trained parameter weights inside 
the model file
   opset_version=opset,   # the ONNX version to export the model to
   do_constant_folding=True,  # whether to execute constant folding for 
optimization
   input_names = input_names, # the model's input names
   output_names = ['output']  # the model's output names
   )
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-12-08 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 66e55d5  Bump the publish timestamp.
66e55d5 is described below

commit 66e55d518697fa555790083c185e7402072aa6c5
Author: mxnet-ci 
AuthorDate: Mon Dec 9 00:43:11 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..fb92d37
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Dec  9 00:43:11 UTC 2019



[GitHub] [incubator-mxnet] kice commented on issue #16590: import_onnx.py parser for onnx opset >= 9 has bug

2019-12-08 Thread GitBox
kice commented on issue #16590: import_onnx.py parser for onnx opset >= 9 has 
bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16590#issuecomment-563018787
 
 
   In my case, I use pytorch to export onnx with an opset less than 8, and I 
still cannot import the model into mxnet. 
   
   ```
   >>>mxnet.__version__
   '1.6.0'
   >>> torch.__version__
   '1.3.1'
   >>> onnx.__version__
   '1.6.0'
   ```
   
   ```
   onnx_model = onnx.load(model_file)
   for node in onnx_model.graph.node:
   for i in node.input:
   print(i)
   ```
   Here is the output
   ```
   data
   head.0.weight
   head.0.bias
   19
   body.0.body.0.weight
   body.0.body.0.bias
   20
   21
   body.0.body.2.weight
   body.0.body.2.bias
   22
   19
   23
   body.1.body.0.weight
   body.1.body.0.bias
   24
   25
   body.1.body.2.weight
   body.1.body.2.bias
   26
   23
   27
   body.2.weight
   body.2.bias
   28
   19
   29
   tail.0.0.weight
   tail.0.0.bias
   31
   30
   43
   32
   34
   33
   44
   35
   tail.0.2.weight
   tail.0.2.bias
   37
   36
   45
   38
   40
   39
   46
   41
   tail.1.weight
   tail.1.bias
   ```
   
   Here is error when I import the model.
   ```
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_model.py in 
import_model(model_file)
57 # loads model file and returns ONNX protobuf object
58 model_proto = onnx.load_model(model_file)
   ---> 59 sym, arg_params, aux_params = graph.from_onnx(model_proto.graph)
60 return sym, arg_params, aux_params
61 
   
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_onnx.py in 
from_onnx(self, graph)
   113 node_name = node_name if node_name else None
   114 onnx_attr = self._parse_attr(node.attribute)
   --> 115 inputs = [self._nodes[i] for i in node.input]
   116 mxnet_sym = self._convert_operator(node_name, op_name, 
onnx_attr, inputs)
   117 
   
   c:\program 
files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_onnx.py in 
<listcomp>(.0)
   113 node_name = node_name if node_name else None
   114 onnx_attr = self._parse_attr(node.attribute)
   --> 115 inputs = [self._nodes[i] for i in node.input]
   116 mxnet_sym = self._convert_operator(node_name, op_name, 
onnx_attr, inputs)
   117 
   
   KeyError: 'head.0.weight'
   ```
   
   I even tried to add the model parameters to the input names for the pytorch 
export:
   ```
   input_names = ['data'] + list(model.state_dict().keys())
   torch.onnx.export(model, # model being run
   x, # model input (or a tuple for multiple inputs)
   onnx_name, # where to save the model (can be a file or 
file-like object)
   export_params=True,# store the trained parameter weights inside 
the model file
   opset_version=opset,   # the ONNX version to export the model to
   do_constant_folding=True,  # whether to execute constant folding for 
optimization
   input_names = input_names, # the model's input names
   output_names = ['output']  # the model's output names
   )
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Fix NDArrayIter cant pad when size is large (#17001)

2019-12-08 Thread roywei
This is an automated email from the ASF dual-hosted git repository.

roywei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 251e6f6  Fix NDArrayIter cant pad when size is large (#17001)
251e6f6 is described below

commit 251e6f6cb7ec7741434c35a26892fa7450f751c0
Author: Jake Lee 
AuthorDate: Sun Dec 8 16:09:11 2019 -0800

Fix NDArrayIter cant pad when size is large (#17001)

* Fix NDArrayIter cant pad when size is large

* ci
---
 python/mxnet/io/io.py| 44 +---
 tests/python/unittest/test_io.py |  6 +++---
 2 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/python/mxnet/io/io.py b/python/mxnet/io/io.py
index dcf964d..e36665e 100644
--- a/python/mxnet/io/io.py
+++ b/python/mxnet/io/io.py
@@ -36,7 +36,7 @@ from ..ndarray import NDArray
 from ..ndarray.sparse import CSRNDArray
 from ..ndarray import _ndarray_cls
 from ..ndarray import array
-from ..ndarray import concat
+from ..ndarray import concat, tile
 
 from .utils import _init_data, _has_instance, _getdata_by_idx
 
@@ -709,23 +709,27 @@ class NDArrayIter(DataIter):
 
     def _concat(self, first_data, second_data):
         """Helper function to concat two NDArrays."""
+        if (not first_data) or (not second_data):
+            return first_data if first_data else second_data
         assert len(first_data) == len(
             second_data), 'data source should contain the same size'
-        if first_data and second_data:
-            return [
-                concat(
-                    first_data[x],
-                    second_data[x],
-                    dim=0
-                ) for x in range(len(first_data))
-            ]
-        elif (not first_data) and (not second_data):
+        return [
+            concat(
+                first_data[i],
+                second_data[i],
+                dim=0
+            ) for i in range(len(first_data))
+        ]
+
+    def _tile(self, data, repeats):
+        if not data:
             return []
-        else:
-            return [
-                first_data[0] if first_data else second_data[0]
-                for x in range(len(first_data))
-            ]
+        res = []
+        for datum in data:
+            reps = [1] * len(datum.shape)
+            reps[0] = repeats
+            res.append(tile(datum, reps))
+        return res
 
     def _batchify(self, data_source):
         """Load data from underlying arrays, internal use only."""
@@ -749,12 +753,10 @@ class NDArrayIter(DataIter):
             pad = self.batch_size - self.num_data + self.cursor
             first_data = self._getdata(data_source, start=self.cursor)
             if pad > self.num_data:
-                while True:
-                    if pad <= self.num_data:
-                        break
-                    second_data = self._getdata(data_source, end=self.num_data)
-                    pad -= self.num_data
-                second_data = self._concat(second_data, self._getdata(data_source, end=pad))
+                repeats = pad // self.num_data
+                second_data = self._tile(self._getdata(data_source, end=self.num_data), repeats)
+                if pad % self.num_data != 0:
+                    second_data = self._concat(second_data, self._getdata(data_source, end=pad % self.num_data))
             else:
                 second_data = self._getdata(data_source, end=pad)
             return self._concat(first_data, second_data)
diff --git a/tests/python/unittest/test_io.py b/tests/python/unittest/test_io.py
index 2a806ef..a13addb 100644
--- a/tests/python/unittest/test_io.py
+++ b/tests/python/unittest/test_io.py
@@ -198,11 +198,11 @@ def _test_shuffle(data, labels=None):
         assert np.array_equal(batch.data[0].asnumpy(), batch_list[idx_list[i]])
         i += 1
 
-# fixes the issue https://github.com/apache/incubator-mxnet/issues/15535
+
 def _test_corner_case():
     data = np.arange(10)
-    data_iter = mx.io.NDArrayIter(data=data, batch_size=25, shuffle=False, last_batch_handle='pad')
-    expect = np.concatenate((np.tile(data, 2), np.arange(5)))
+    data_iter = mx.io.NDArrayIter(data=data, batch_size=205, shuffle=False, last_batch_handle='pad')
+    expect = np.concatenate((np.tile(data, 20), np.arange(5)))
     assert np.array_equal(data_iter.next().data[0].asnumpy(), expect)
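
A minimal numpy sketch of the padding logic in the patch above: when pad 
exceeds the dataset size, the tail of the batch is built by tiling the whole 
dataset `pad // num_data` times and concatenating the remainder (variable 
names here are illustrative):

```
import numpy as np

data, batch_size, cursor = np.arange(10), 25, 0
num_data = len(data)
pad = batch_size - num_data + cursor      # 15 elements still needed
repeats, rem = divmod(pad, num_data)      # 1 full tile plus 5 extra
batch = np.concatenate([data[cursor:], np.tile(data, repeats), data[:rem]])
assert batch.shape[0] == batch_size
```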
 
 



[GitHub] [incubator-mxnet] roywei closed issue #16996: mx.io.NDArrayIter cant pad when size is large

2019-12-08 Thread GitBox
roywei closed issue #16996: mx.io.NDArrayIter cant pad when size is large
URL: https://github.com/apache/incubator-mxnet/issues/16996
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei merged pull request #17001: Fix NDArrayIter cant pad when size is large

2019-12-08 Thread GitBox
roywei merged pull request #17001: Fix NDArrayIter cant pad when size is large
URL: https://github.com/apache/incubator-mxnet/pull/17001
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-12-08 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 1d6ba33  Bump the publish timestamp.
1d6ba33 is described below

commit 1d6ba33012ff1bbf5d9a0379128a2107282e4941
Author: mxnet-ci 
AuthorDate: Sun Dec 8 18:42:13 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..1d8b298
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Dec  8 18:42:13 UTC 2019



[GitHub] [incubator-mxnet] vexilligera opened a new pull request #17014: [NumPy][WIP] Add NumPy support for norm

2019-12-08 Thread GitBox
vexilligera opened a new pull request #17014: [NumPy][WIP] Add NumPy support 
for norm
URL: https://github.com/apache/incubator-mxnet/pull/17014
 
 
   ## Description ##
   implements np.linalg.norm
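   
   A hedged usage sketch of the operator this PR targets, using official numpy 
semantics as the reference behavior:
   ```
   import numpy as np
   x = np.arange(6.).reshape(2, 3)
   print(np.linalg.norm(x))          # Frobenius norm by default
   print(np.linalg.norm(x, axis=1))  # row-wise L2 norms
   ```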
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] xidulu commented on issue #17011: [Numpy] additional default parameter for mxnet.numpy.max: axis=-1

2019-12-08 Thread GitBox
xidulu commented on issue #17011: [Numpy] additional default parameter for 
mxnet.numpy.max: axis=-1 
URL: 
https://github.com/apache/incubator-mxnet/issues/17011#issuecomment-562963658
 
 
   Looks like the problem occurs in this line
   
https://github.com/apache/incubator-mxnet/blob/9b25db05b4de18b5bfed8467cfd934f2bfa4f11a/src/operator/numpy/np_broadcast_reduce_op.h#L190
   
   I guess an additional check for bypassing the access to ishape tuple 
(`ishape[axes[i]]`) may solve the problem.
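   
   A hedged Python sketch of the usual fix, normalizing negative axes against 
ndim before indexing into the shape tuple (mirroring what the C++ check would 
do; the helper name is illustrative):
   ```
   def normalize_axes(axes, ndim):
       out = []
       for ax in axes:
           if ax < 0:
               ax += ndim
           if not 0 <= ax < ndim:
               raise ValueError('axis %d out of range for %d dims' % (ax, ndim))
           out.append(ax)
       return tuple(out)

   assert normalize_axes((-1,), ndim=2) == (1,)
   ```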


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 commented on issue #14979: [BUG] Using a package with MKL and GPU versions, using python to open a new process will cause an error

2019-12-08 Thread GitBox
cjolivier01 commented on issue #14979: [BUG] Using a package with MKL and GPU 
versions, using python to open a new process will cause an error
URL: 
https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-562952406
 
 
   what is the source file and line number of that crash in libmxnet.so? What’s 
the line of code crashing?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] artor1os commented on issue #16720: [Numpy] Implement numpy operator 'average'

2019-12-08 Thread GitBox
artor1os commented on issue #16720: [Numpy] Implement numpy operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#issuecomment-562952064
 
 
   @haojin2 ready to be merged


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-12-08 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 6d26f64  Bump the publish timestamp.
6d26f64 is described below

commit 6d26f6413d7ca6cf2022181344a69a2906270062
Author: mxnet-ci 
AuthorDate: Sun Dec 8 12:42:06 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..35df771
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Dec  8 12:42:06 UTC 2019



[GitHub] [incubator-mxnet] fatherMatrix opened a new issue #17013: How can I use mshadow in C++ as a matrix compute framework like Eigen?

2019-12-08 Thread GitBox
fatherMatrix opened a new issue #17013: How can I use mshadow in C++ as a 
matrix compute framework like Eigen?
URL: https://github.com/apache/incubator-mxnet/issues/17013
 
 
   ## Description
   I like MXNet very much and I'd like to use tools from the mxnet community in 
my daily work. But I find that the C++ interface of mshadow or ndarray doesn't 
support matrix computations like equation solving or matrix decomposition. So I 
just wonder whether we could support this in the future.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on issue #16891: Upgrading MKLDNN to 1.0 causes performance regression.

2019-12-08 Thread GitBox
TaoLv commented on issue #16891: Upgrading MKLDNN to 1.0 causes performance 
regression.
URL: 
https://github.com/apache/incubator-mxnet/issues/16891#issuecomment-562940173
 
 
   @ChaiBapchya The file is used to build the mxnet-mkl pip package. If you 
want to change the configurations, I think you need to raise a proposal on dev@.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu opened a new pull request #17012: Upgrade 3rdparty/openmp to release_90 version

2019-12-08 Thread GitBox
leezu opened a new pull request #17012: Upgrade 3rdparty/openmp to release_90 
version
URL: https://github.com/apache/incubator-mxnet/pull/17012
 
 
   ## Description ##
   Fixes https://github.com/apache/incubator-mxnet/issues/10856
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [X] Upgrade 3rdparty/openmp to release_90 version
   
   ## Comments ##
   See https://github.com/apache/incubator-mxnet/issues/10856 for a discussion 
on the symptom. It seems it's due to a bug in llvm openmp fixed in the meantime.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu edited a comment on issue #14979: [BUG] Using a package with MKL and GPU versions, using python to open a new process will cause an error

2019-12-08 Thread GitBox
leezu edited a comment on issue #14979: [BUG] Using a package with MKL and GPU 
versions, using python to open a new process will cause an error
URL: 
https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-562926756
 
 
   There are currently two hypotheses about the root cause of this error 
(https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-525103793):
 a) bug in llvm / intel openmp b) interaction between gomp and llvm / intel 
openmp.
   
   I did some more investigation and conclude we can rule out option b. In 
particular, I compile `CC=clang-8 CXX=clang++-8 cmake -DUSE_CUDA=1 
-DUSE_MKLDNN=1 -DCMAKE_EXPORT_COMPILE_COMMANDS=1 -DBUILD_CYTHON_MODULES=1 
-DUSE_OPENCV=0 ..`. 
   
   We can investigate the shared library dependencies of the resulting 
`libmxnet.so`:
   
   ```
   % readelf -Wa libmxnet.so | grep NEEDED
0x0001 (NEEDED) Shared library: [libnvToolsExt.so.1]
0x0001 (NEEDED) Shared library: [libopenblas.so.0]
0x0001 (NEEDED) Shared library: [librt.so.1]
0x0001 (NEEDED) Shared library: [libjemalloc.so.1]
0x0001 (NEEDED) Shared library: [liblapack.so.3]
0x0001 (NEEDED) Shared library: [libcublas.so.10.0]
0x0001 (NEEDED) Shared library: [libcufft.so.10.0]
0x0001 (NEEDED) Shared library: 
[libcusolver.so.10.0]
0x0001 (NEEDED) Shared library: [libcurand.so.10.0]
0x0001 (NEEDED) Shared library: [libnvrtc.so.10.0]
0x0001 (NEEDED) Shared library: [libcuda.so.1]
0x0001 (NEEDED) Shared library: [libdl.so.2]
0x0001 (NEEDED) Shared library: [libpthread.so.0]
0x0001 (NEEDED) Shared library: [libomp.so.5]
0x0001 (NEEDED) Shared library: [libstdc++.so.6]
0x0001 (NEEDED) Shared library: [libm.so.6]
0x0001 (NEEDED) Shared library: [libgcc_s.so.1]
0x0001 (NEEDED) Shared library: [libc.so.6]
0x0001 (NEEDED) Shared library: 
[ld-linux-x86-64.so.2]
   ```
   
   among those, `libopenblas.so.0` is provided by the system and depends on 
`libgomp.so`. (If we compiled with OpenCV, it would also transitively depend 
on `libgomp.so`, so I just disable it for the purposes of this test.) We can 
see it shows up among the transitive shared library dependencies:
   
   ```
   % ldd libmxnet.so
   linux-vdso.so.1 (0x7ffd382ca000)
   libnvToolsExt.so.1 => /usr/local/cuda/lib64/libnvToolsExt.so.1 
(0x7efdc9594000)
   libopenblas.so.0 => /usr/local/lib/libopenblas.so.0 
(0x7efdc85fb000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7efdc83f3000)
   libjemalloc.so.1 => /usr/lib/x86_64-linux-gnu/libjemalloc.so.1 
(0x7efdc81bd000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 
(0x7efdc78fe000)
   libcublas.so.10.0 => /usr/local/cuda/lib64/libcublas.so.10.0 
(0x7efdc3368000)
   libcufft.so.10.0 => /usr/local/cuda/lib64/libcufft.so.10.0 
(0x7efdbceb4000)
   libcusolver.so.10.0 => /usr/local/cuda/lib64/libcusolver.so.10.0 
(0x7efdb47cd000)
   libcurand.so.10.0 => /usr/local/cuda/lib64/libcurand.so.10.0 
(0x7efdb0666000)
   libnvrtc.so.10.0 => /usr/local/cuda/lib64/libnvrtc.so.10.0 
(0x7efdaf04a000)
   libcuda.so.1 => /usr/lib/x86_64-linux-gnu/libcuda.so.1 
(0x7efdaded3000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7efdadccf000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7efdadab)
   libomp.so.5 => /usr/lib/x86_64-linux-gnu/libomp.so.5 
(0x7efe411b4000)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7efdad727000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7efdad389000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7efdad171000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7efdacd8)
   /lib64/ld-linux-x86-64.so.2 (0x7efe410a8000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 
(0x7efdac9a1000)
   libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 
(0x7efdac772000)
   libblas.so.3 => /usr/lib/x86_64-linux-gnu/libblas.so.3 
(0x7efdac1b)
   libnvidia-fatbinaryloader.so.418.87.01 => 
/usr/lib/x86_64-linux-gnu/libnvidia-fatbinaryloader.so.418.87.01 
(0x7efdabf62000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7efdabd22000)
   
   ```
   
   Thus I recompile OpenBLAS with clang. Then we can investigate the transitive 

[GitHub] [incubator-mxnet] leezu commented on issue #14979: [BUG] Using a package with MKL and GPU versions, using python to open a new process will cause an error

2019-12-08 Thread GitBox
leezu commented on issue #14979: [BUG] Using a package with MKL and GPU 
versions, using python to open a new process will cause an error
URL: 
https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-562926756
 
 
   There are currently two hypotheses about the root cause of this error 
(https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-525103793):
 a) bug in llvm / intel openmp b) interaction between gomp and llvm / intel 
openmp.
   
   I did some more investigation and conclude we can rule out option b. In 
particular, I compile `CC=clang-8 CXX=clang++-8 cmake -DUSE_CUDA=1 
-DUSE_MKLDNN=1 -DCMAKE_EXPORT_COMPILE_COMMANDS=1 -DBUILD_CYTHON_MODULES=1 
-DUSE_OPENCV=0 ..`. 
   
   We can investigate the shared library dependencies of the resulting 
`libmxnet.so`:
   
   ```
   % readelf -Wa libmxnet.so | grep NEEDED
0x0001 (NEEDED) Shared library: [libnvToolsExt.so.1]
0x0001 (NEEDED) Shared library: [libopenblas.so.0]
0x0001 (NEEDED) Shared library: [librt.so.1]
0x0001 (NEEDED) Shared library: [libjemalloc.so.1]
0x0001 (NEEDED) Shared library: [liblapack.so.3]
0x0001 (NEEDED) Shared library: [libcublas.so.10.0]
0x0001 (NEEDED) Shared library: [libcufft.so.10.0]
0x0001 (NEEDED) Shared library: 
[libcusolver.so.10.0]
0x0001 (NEEDED) Shared library: [libcurand.so.10.0]
0x0001 (NEEDED) Shared library: [libnvrtc.so.10.0]
0x0001 (NEEDED) Shared library: [libcuda.so.1]
0x0001 (NEEDED) Shared library: [libdl.so.2]
0x0001 (NEEDED) Shared library: [libpthread.so.0]
0x0001 (NEEDED) Shared library: [libomp.so.5]
0x0001 (NEEDED) Shared library: [libstdc++.so.6]
0x0001 (NEEDED) Shared library: [libm.so.6]
0x0001 (NEEDED) Shared library: [libgcc_s.so.1]
0x0001 (NEEDED) Shared library: [libc.so.6]
0x0001 (NEEDED) Shared library: 
[ld-linux-x86-64.so.2]
   ```
   
   among those, `libopenblas.so.0` is provided by the system and depends on 
`libgomp.so`. (If we compiled with OpenCV, it would also transitively depend 
on `libgomp.so`, so I just disable it for the purposes of this test.) We can 
see it shows up among the transitive shared library dependencies:
   
   ```
   % ldd libmxnet.so
   linux-vdso.so.1 (0x7ffd382ca000)
   libnvToolsExt.so.1 => /usr/local/cuda/lib64/libnvToolsExt.so.1 
(0x7efdc9594000)
   libopenblas.so.0 => /usr/local/lib/libopenblas.so.0 
(0x7efdc85fb000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7efdc83f3000)
   libjemalloc.so.1 => /usr/lib/x86_64-linux-gnu/libjemalloc.so.1 
(0x7efdc81bd000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 
(0x7efdc78fe000)
   libcublas.so.10.0 => /usr/local/cuda/lib64/libcublas.so.10.0 
(0x7efdc3368000)
   libcufft.so.10.0 => /usr/local/cuda/lib64/libcufft.so.10.0 
(0x7efdbceb4000)
   libcusolver.so.10.0 => /usr/local/cuda/lib64/libcusolver.so.10.0 
(0x7efdb47cd000)
   libcurand.so.10.0 => /usr/local/cuda/lib64/libcurand.so.10.0 
(0x7efdb0666000)
   libnvrtc.so.10.0 => /usr/local/cuda/lib64/libnvrtc.so.10.0 
(0x7efdaf04a000)
   libcuda.so.1 => /usr/lib/x86_64-linux-gnu/libcuda.so.1 
(0x7efdaded3000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7efdadccf000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7efdadab)
   libomp.so.5 => /usr/lib/x86_64-linux-gnu/libomp.so.5 
(0x7efe411b4000)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7efdad727000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7efdad389000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7efdad171000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7efdacd8)
   /lib64/ld-linux-x86-64.so.2 (0x7efe410a8000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 
(0x7efdac9a1000)
   libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 
(0x7efdac772000)
   libblas.so.3 => /usr/lib/x86_64-linux-gnu/libblas.so.3 
(0x7efdac1b)
   libnvidia-fatbinaryloader.so.418.87.01 => 
/usr/lib/x86_64-linux-gnu/libnvidia-fatbinaryloader.so.418.87.01 
(0x7efdabf62000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7efdabd22000)
   
   ```
   
   Thus I recompile OpenBLAS with clang. Then we can investigate the transitive 
dependencies 

[GitHub] [incubator-mxnet] Alicia1529 opened a new issue #17011: [Numpy] additional default parameter for mxnet.numpy.max: axis=-1

2019-12-08 Thread GitBox
Alicia1529 opened a new issue #17011: [Numpy] additional default parameter for 
mxnet.numpy.max: axis=-1 
URL: https://github.com/apache/incubator-mxnet/issues/17011
 
 
   The max operator in official numpy supports the argument axis=-1, returning 
the max along the last dimension. I wonder whether deepnumpy can support this 
argument too.
   ```
   >>> import numpy as _np
   >>> help(_np.max)
   
   >>> from mxnet import numpy as np
   >>> import numpy as _np
   >>> a = np.arange(10).reshape(2,5)
   >>> b = _np.arange(10).reshape(2,5)
   >>> a
   array([[0., 1., 2., 3., 4.],
  [5., 6., 7., 8., 9.]])
   >>> b
   array([[0, 1, 2, 3, 4],
  [5, 6, 7, 8, 9]])
   >>> _np.max(b, axis=-1)
   array([4, 9])
   >>> np.max(a, axis=-1)
   Traceback (most recent call last):
 File "", line 1, in 
 File "", line 44, in max
 File "/Users/luoting/Desktop/mxnet/python/mxnet/_ctypes/ndarray.py", line 
107, in _imperative_invoke
   ctypes.byref(out_stypes)))
 File "/Users/luoting/Desktop/mxnet/python/mxnet/base.py", line 278, in 
check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: [17:03:00] include/mxnet/tuple.h:220: Check failed: i 
>= 0 && i < ndim(): index = -1 must be in range [0, 2)
   ```
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] z01nl1o02 commented on issue #15957: error: call of overloaded is ambiguous

2019-12-08 Thread GitBox
z01nl1o02 commented on issue #15957: error: call of overloaded is ambiguous
URL: 
https://github.com/apache/incubator-mxnet/issues/15957#issuecomment-562922365
 
 
   Adding a copy constructor may fix it:
   ```
   NodeEntry(const struct NodeEntry& entry)
     : node(entry.node),  // plain copy; std::move on a const ref copies anyway
       index(entry.index),
       version(entry.version)
   {}
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services