[GitHub] [incubator-mxnet] aashudwivedi removed a comment on issue #14332: MXNet static library build results in error in centos, oracle linux and similar distros

2019-03-10 Thread GitBox
aashudwivedi removed a comment on issue #14332: MXNet static library build 
results in error in centos, oracle linux and similar distros
URL: 
https://github.com/apache/incubator-mxnet/issues/14332#issuecomment-470151109
 
 
   Thanks for the help @lanking520 
   I am not able to use the libs built on Ubuntu in CentOS; it runs into errors about missing .so files.
   To build on OL7 (Oracle Linux 7) / CentOS, I added the following lines to the script build_lib.sh, before the line `>&2 echo "Checking linked objects on libmxnet.so..."`:
   ```
   for libname in $(ls staticdeps/lib64/*.a | xargs -n 1 basename)
   do
       cp -fL "staticdeps/lib64/$libname" "staticdeps/lib/$libname"
   done
   
   cp -L /usr/lib64/libgfortran.so.3 lib/libgfortran.so.3
   cp -L /usr/gcc-4.8.5/release/x86_64-unknown-linux-gnu/libquadmath/.libs/libquadmath.so lib/libquadmath.so.0
   ```
   
   which resolves the previous error. However, the build still fails with the following error message:
   
   ```
   /usr/bin/ld: skipping incompatible /lib/librt.so when searching for -lrt
   /usr/bin/ld: cannot find -lgfortran
   /usr/bin/ld: skipping incompatible /lib/libdl.so when searching for -ldl
   /usr/bin/ld: skipping incompatible /lib/libm.so when searching for -lm
   /usr/bin/ld: skipping incompatible /lib/libpthread.so when searching for -lpthread
   collect2: error: ld returned 1 exit status
   make: *** [lib/libmxnet.so] Error 1
   make: *** Waiting for unfinished jobs
   /usr/bin/ld: skipping incompatible /lib/librt.so when searching for -lrt
   /usr/bin/ld: cannot find -lgfortran
   /usr/bin/ld: skipping incompatible /lib/libdl.so when searching for -ldl
   /usr/bin/ld: skipping incompatible /lib/libm.so when searching for -lm
   /usr/bin/ld: skipping incompatible /lib/libpthread.so when searching for -lpthread
   collect2: error: ld returned 1 exit status
   make: *** [bin/im2rec] Error 1
   ```
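
   Two distinct problems show up in that log: `cannot find -lgfortran` means no `libgfortran.so` link-time symlink is on the search path (on CentOS/OL7 this is typically provided by the gcc-gfortran package), while `skipping incompatible` means ld found 32-bit stubs under `/lib` during a 64-bit link, so `/usr/lib64` needs to come first in the search path. A small illustrative helper (the function name is hypothetical, not part of MXNet's build scripts) that separates the two classes of diagnostics in an ld log:

   ```python
   import re

   # Illustrative helper (not part of MXNet): classify GNU ld diagnostics into
   # libraries that were not found at all vs. ones skipped as incompatible
   # (e.g. 32-bit stubs in /lib encountered during a 64-bit link).
   def summarize_ld_errors(log):
       missing = sorted(set(re.findall(r"cannot find -l(\S+)", log)))
       incompatible = sorted(set(re.findall(r"skipping incompatible (\S+)", log)))
       return {"missing": missing, "incompatible": incompatible}

   log = """/usr/bin/ld: skipping incompatible /lib/librt.so when searching for -lrt
   /usr/bin/ld: cannot find -lgfortran"""
   print(summarize_ld_errors(log))
   # {'missing': ['gfortran'], 'incompatible': ['/lib/librt.so']}
   ```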


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on issue #14335: [MKLDNN] Question on installation and use of MKLDNN

2019-03-10 Thread GitBox
TaoLv commented on issue #14335: [MKLDNN] Question on installation and use of 
MKLDNN
URL: 
https://github.com/apache/incubator-mxnet/issues/14335#issuecomment-471408977
 
 
   @dbsxdbsx could you post the steps to reproduce the build issue, or the typical steps for building MXNet from a Windows user's perspective? I have also sent you an email for the details.
   
   Let me try to answer some of the questions:
   
   > Q2: From the official tutorial of GluonCV with C++, it seems feasible to build with MKL+MKLDNN from the command line. But with the CMake GUI, USE_MKLDNN is forbidden, as it needs (NOT MSVC). WHY?
   
   It's possible to build MXNet with MKL/MKL-DNN from source. Please take a 
look at what we do in CI:
   
https://github.com/apache/incubator-mxnet/blob/master/ci/docker/runtime_functions.sh#L553
   
https://github.com/apache/incubator-mxnet/blob/master/ci/docker/runtime_functions.sh#L708
   I'm not a CMake GUI user, so I'm not sure what the problem with it is.
   
   > Q4: I googled a lot about MKL with mxnet, and I found a discussion within the mxnet team. Given the content of that discussion and what I found in downloadMKLML.cmake, does it mean mxnet currently recommends installing MKLDNN with JUST the MKLML subset of MKL (and it is still unclear whether that MKL subset is needed when building mxnet from source)?
   
   Currently, it is much easier to get MKLML than the full MKL. It has a friendlier license for mxnet, a smaller binary size, and a more convenient download source. So the default behavior is to download MKLML and link against it when mxnet is built with USE_MKLDNN=1. But you're right: there is no need to link MKLML if the full MKL is installed and linked to mxnet.




[GitHub] [incubator-mxnet] wkcn commented on issue #14027: Julia: rename `mx.clip` to `clamp` for `NDArray`

2019-03-10 Thread GitBox
wkcn commented on issue #14027: Julia: rename `mx.clip` to `clamp` for `NDArray`
URL: https://github.com/apache/incubator-mxnet/pull/14027#issuecomment-471397243
 
 
   This PR has been merged.
   Thanks for your contribution!




[incubator-mxnet] branch master updated: Julia: rename `mx.clip` to `clamp` for `NDArray` (#14027)

2019-03-10 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new af41af5  Julia: rename `mx.clip` to `clamp` for `NDArray` (#14027)
af41af5 is described below

commit af41af527221ad9b2c975b3962417b5fb5e1595b
Author: Iblis Lin 
AuthorDate: Mon Mar 11 12:02:13 2019 +0800

Julia: rename `mx.clip` to `clamp` for `NDArray` (#14027)

- in order to match Julia `Base.clamp` interface

- depwarn for `mx.clip` included
---
 julia/NEWS.md   |  7 +++---
 julia/src/MXNet.jl  |  2 --
 julia/src/deprecated.jl |  8 +--
 julia/src/ndarray/arithmetic.jl | 52 +
 julia/src/ndarray/remap.jl  | 12 ++
 julia/src/optimizer.jl  |  2 +-
 julia/test/unittest/ndarray.jl  | 12 +-
 7 files changed, 60 insertions(+), 35 deletions(-)

diff --git a/julia/NEWS.md b/julia/NEWS.md
index 4a6c1a2..3cd6162 100644
--- a/julia/NEWS.md
+++ b/julia/NEWS.md
@@ -19,8 +19,6 @@
 
 * Following material from `mx` module got exported (#TBD):
 * `NDArray`
-* `clip()`
-* `clip!()`
 * `context()`
 * `expand_dims()`
 * `@inplace`
@@ -373,11 +371,12 @@
99.9889  100.533  100.072
   ```
 
-* Signature of `clip` changed, it doesn't require any keyword argument now.
+* Signature of `clip` changed and renamed to `clamp`.
+  It doesn't require any keyword argument now.
   (#TBD)
 
   Before: `clip(x, a_min = -4, a_max = 4)`
-  After: `clip(x, -4, 4)`
+  After: `clamp(x, -4, 4)`
 
 ### Optimizer
 
diff --git a/julia/src/MXNet.jl b/julia/src/MXNet.jl
index 68663d1..70eda96 100644
--- a/julia/src/MXNet.jl
+++ b/julia/src/MXNet.jl
@@ -50,8 +50,6 @@ export SymbolicNode,
 
 # ndarray.jl
 export NDArray,
-   clip,
-   clip!,
context,
expand_dims,
@inplace,
diff --git a/julia/src/deprecated.jl b/julia/src/deprecated.jl
index 70079b8..7c49b66 100644
--- a/julia/src/deprecated.jl
+++ b/julia/src/deprecated.jl
@@ -72,8 +72,6 @@ end
 @deprecate softmax(x::NDArray; axis = ndims(x)) softmax.(x, axis)
 @deprecate log_softmax(x::NDArray; axis = ndims(x)) log_softmax.(x, axis)
 
-@deprecate clip(x; a_min = 0, a_max = 0) clip(x, a_min, a_max)
-
 function broadcast_plus(x::NDArray, y::NDArray)
   @warn("broadcast_plus(x, y) is deprecated, use x .+ y instead.")
   x .+ y
@@ -194,3 +192,9 @@ function empty(dims::Int...)
 "use `NDArray(undef, dims...)` instead.")
   NDArray(undef, dims...)
 end
+
+# replaced by Base.clamp
+@deprecate clip(x::NDArray, lo::Real, hi::Real)  clamp(x, lo, hi)
+@deprecate clip!(x::NDArray, lo::Real, hi::Real) clamp!(x, lo, hi)
+@deprecate clip(x; a_min = 0, a_max = 0) clamp(x, a_min, a_max)
+
diff --git a/julia/src/ndarray/arithmetic.jl b/julia/src/ndarray/arithmetic.jl
index 60dde6b..4c467a2 100644
--- a/julia/src/ndarray/arithmetic.jl
+++ b/julia/src/ndarray/arithmetic.jl
@@ -218,40 +218,52 @@ broadcasted(::typeof(^), x::NDArray{T,N}, y::NDArray{T,N}) where {T,N} =
 broadcasted(::typeof(^), x::NDArray{T,N}, y::NDArray{T,M}) where {T,N,M} =
   _broadcast_power(x, y)
 
-_nddoc[:clip] = _nddoc[:clip!] =
 """
-clip(x::NDArray, min, max)
-clip!(x::NDArray, min, max)
+clamp(x::NDArray, lo, hi)
 
-Clips (limits) the values in `NDArray`.
+Clamps (limits) the values in `NDArray`.
 Given an interval, values outside the interval are clipped to the interval 
edges.
-Clipping `x` between `min` and `x` would be:
+Clamping `x` between low `lo` and high `hi` would be:
 
 ```julia
-clip(x, min_, max_) = max(min(x, max_), min_))
+clamp(x, lo, hi) = max(min(x, hi), lo)
 ```
 
+The storage type of clip output depends on storage types of inputs and the
+`lo`, `hi` parameter values:
+
+- clamp(default) -> default
+- clamp(row_sparse, lo <= 0, hi >= 0) -> row_sparse
+- clamp(csr, lo <= 0, hi >= 0) -> csr
+- clamp(row_sparse, lo < 0, hi < 0) -> default
+- clamp(row_sparse, lo > 0, hi > 0) -> default
+- clamp(csr, lo < 0, hi < 0) -> csr
+- clamp(csr, lo > 0, hi > 0) -> csr
+
+## Examples
+
 ```jldoctest
 julia> x = NDArray(1:9);
 
-julia> mx.clip(x, 2, 8)'
+julia> clamp(x, 2, 8)'
 1×9 mx.NDArray{Int64,2} @ CPU0:
  2  2  3  4  5  6  7  8  8
-```
 
-The storage type of clip output depends on storage types of inputs and the
-`min`, `max` parameter values:
-
-- clip(default) = default
-- clip(row_sparse, min <= 0, max >= 0) = row_sparse
-- clip(csr, min <= 0, max >= 0) = csr
-- clip(row_sparse, min < 0, max < 0) = default
-- clip(row_sparse, min > 0, max > 0) = default
-- clip(csr, min < 0, max < 0) = csr
-- clip(csr, min > 0, max > 0) = csr
+julia> clamp(x, 8, 2)'
+1×9 NDArray{Int64,2} @ CPU0:
+ 8  8  2  2  2  2  2  2  2
+ ```
+"""
+Base.clamp(x::NDArray, lo::Real, hi::Real) = _clamp(x, lo, hi)
+@_remap _clamp(x::NDArray, lo::Real, hi::Real) clip(x; a_min = lo, 

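For readers skimming the rename: `clamp` is the standard elementwise clamp, equivalent (for `lo <= hi`) to `max(min(x, hi), lo)`. A plain-Python sketch of the semantics (not the Julia/NDArray implementation), reproducing the doctest above:

```python
# Elementwise clamp, assuming lo <= hi: values below lo are raised to lo,
# values above hi are lowered to hi (plain-Python sketch, not NDArray code).
def clamp(xs, lo, hi):
    return [max(min(v, hi), lo) for v in xs]

print(clamp(range(1, 10), 2, 8))  # [2, 2, 3, 4, 5, 6, 7, 8, 8]
```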
[GitHub] [incubator-mxnet] wkcn merged pull request #14027: Julia: rename `mx.clip` to `clamp` for `NDArray`

2019-03-10 Thread GitBox
wkcn merged pull request #14027: Julia: rename `mx.clip` to `clamp` for 
`NDArray`
URL: https://github.com/apache/incubator-mxnet/pull/14027
 
 
   




[GitHub] [incubator-mxnet] wkcn merged pull request #14190: [Flaky Test] Python3: MKLDNN-GPU test_kvstore_gpu.test_rsp_push_pull

2019-03-10 Thread GitBox
wkcn merged pull request #14190: [Flaky Test] Python3: MKLDNN-GPU 
test_kvstore_gpu.test_rsp_push_pull
URL: https://github.com/apache/incubator-mxnet/pull/14190
 
 
   




[incubator-mxnet] branch master updated: Flaky test https://github.com/apache/incubator-mxnet/issues/14189 (#14190)

2019-03-10 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 47d4d66  Flaky test https://github.com/apache/incubator-mxnet/issues/14189 (#14190)
47d4d66 is described below

commit 47d4d66ac477ae560a0d34d06e9145ce422d0e9c
Author: Chance Bair 
AuthorDate: Mon Mar 11 05:00:37 2019 +0100

Flaky test https://github.com/apache/incubator-mxnet/issues/14189 (#14190)
---
 tests/python/gpu/test_kvstore_gpu.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/python/gpu/test_kvstore_gpu.py 
b/tests/python/gpu/test_kvstore_gpu.py
index 8ff8752..23bab53 100644
--- a/tests/python/gpu/test_kvstore_gpu.py
+++ b/tests/python/gpu/test_kvstore_gpu.py
@@ -43,6 +43,7 @@ def init_kv_with_str(stype='default', kv_type='local'):
 # Not reproducible, so this test is back on random seeds.
 @with_seed()
 @unittest.skipIf(mx.context.num_gpus() < 2, "test_rsp_push_pull needs more than 1 GPU")
+@unittest.skip("Flaky test https://github.com/apache/incubator-mxnet/issues/14189")
 def test_rsp_push_pull():
 def check_rsp_push_pull(kv_type, sparse_pull, is_push_cpu=True):
 kv = init_kv_with_str('row_sparse', kv_type)



[GitHub] [incubator-mxnet] wkcn commented on issue #14190: [Flaky Test] Python3: MKLDNN-GPU test_kvstore_gpu.test_rsp_push_pull

2019-03-10 Thread GitBox
wkcn commented on issue #14190: [Flaky Test] Python3: MKLDNN-GPU 
test_kvstore_gpu.test_rsp_push_pull
URL: https://github.com/apache/incubator-mxnet/pull/14190#issuecomment-471397039
 
 
   The PR has been merged.
   Thanks for your contribution!




[incubator-mxnet] branch master updated: fix engine crash in shutdown phase (#14382)

2019-03-10 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 4f5cba5  fix engine crash in shutdown phase (#14382)
4f5cba5 is described below

commit 4f5cba59dd44b91bfdd59025106e3bc116600a86
Author: Wang Jiajun 
AuthorDate: Mon Mar 11 11:56:53 2019 +0800

fix engine crash in shutdown phase (#14382)

* fix engine crash in shutdown phase

* fix lint

* Revert "Bypass ThreadedEngine in test_operator_gpu.py:test_convolution_multiple_streams. (#14338)"

This reverts commit d6eafca2555b58746f51052fdce96a264d02a84a.
---
 src/engine/threaded_engine.h  |  9 +
 tests/python/gpu/test_operator_gpu.py | 12 +---
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/src/engine/threaded_engine.h b/src/engine/threaded_engine.h
index ab06ca1..640eac4 100644
--- a/src/engine/threaded_engine.h
+++ b/src/engine/threaded_engine.h
@@ -30,6 +30,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -306,6 +307,8 @@ class ThreadedEngine : public Engine {
 objpool_varblk_ref_ = 
common::ObjectPool::_GetSharedRef();
 objpool_var_ref_= common::ObjectPool::_GetSharedRef();
 
+storage_ref_ = Storage::_GetSharedRef();
+
 // Get a ref to the profiler so that it doesn't get killed before us
 profiler::Profiler::Get(_);
   }
@@ -549,6 +552,12 @@ class ThreadedEngine : public Engine {
   std::shared_ptr > objpool_varblk_ref_;
   std::shared_ptr >   objpool_var_ref_;
 
+  /*!
+   * \brief Async destruction of some objects relies on storage;
+   *  prevent it from being destructed too early.
+   */
+  std::shared_ptr storage_ref_;
+
 #if MXNET_USE_CUDA
   /*! \brief Number of GPU devices available */
   std::atomic device_count_{-1};
diff --git a/tests/python/gpu/test_operator_gpu.py 
b/tests/python/gpu/test_operator_gpu.py
index 7d7c2ed..c12c94b 100644
--- a/tests/python/gpu/test_operator_gpu.py
+++ b/tests/python/gpu/test_operator_gpu.py
@@ -547,18 +547,8 @@ def _conv_with_num_streams(seed):
 
 @with_seed()
 def test_convolution_multiple_streams():
-engines = ['NaiveEngine', 'ThreadedEngine', 'ThreadedEnginePerDevice']
-
-if os.getenv('MXNET_ENGINE_TYPE') is not None:
-engines = [os.getenv('MXNET_ENGINE_TYPE'),]
-print("Only running against '%s'" % engines[0], file=sys.stderr, end='')
-# Remove this else clause when the ThreadedEngine can handle this test
-else:
-engines.remove('ThreadedEngine')
-print("SKIP: 'ThreadedEngine', only running against %s" % engines, file=sys.stderr, end='')
-
 for num_streams in [1, 2]:
-for engine in engines:
+for engine in ['NaiveEngine', 'ThreadedEngine', 'ThreadedEnginePerDevice']:
 print("Starting engine %s with %d streams." % (engine, num_streams), file=sys.stderr)
 run_in_spawned_process(_conv_with_num_streams,
 {'MXNET_GPU_WORKER_NSTREAMS' : num_streams, 'MXNET_ENGINE_TYPE' : engine})


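The fix above is an instance of a common shutdown-ordering pattern: an object whose cleanup still needs a singleton pins that singleton with a strong reference for its own lifetime. A hedged Python analogue (class names are illustrative, not MXNet's API):

```python
# Hedged sketch of the destruction-order fix: the engine holds a strong
# reference to the storage singleton, so storage cannot be torn down while
# the engine may still run async cleanups that touch storage-owned memory.
class Storage:
    def __init__(self):
        self.freed = []

    def free(self, handle):
        self.freed.append(handle)

class Engine:
    def __init__(self, storage):
        # pin the dependency: same idea as storage_ref_ = Storage::_GetSharedRef()
        self._storage_ref = storage

    def async_delete(self, handle):
        self._storage_ref.free(handle)

storage = Storage()
engine = Engine(storage)
engine.async_delete("var0")
print(storage.freed)  # ['var0']
```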

[GitHub] [incubator-mxnet] dbsxdbsx edited a comment on issue #14385: OpenCV 4.0 is currently not support on win10

2019-03-10 Thread GitBox
dbsxdbsx edited a comment on issue #14385: OpenCV 4.0 is currently not support 
on win10
URL: 
https://github.com/apache/incubator-mxnet/issues/14385#issuecomment-471390025
 
 
   @wkcn, for the context of the error message, below are the full errors:
   ```
   Severity  Code  Description  Project  File  Line  Suppression State
   Error   MSB3721 The command ""C:\Program Files\NVIDIA GPU Computing 
Toolkit\CUDA\v10.1\bin\nvcc.exe" 
-gencode=arch=compute_50,code=\"sm_50,compute_50\" --use-local-env -ccbin 
"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\x86_amd64" -x cu  
-I"C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2019.2.190\windows\mkl\include" 
-IC:\mxnet\3rdparty\mkldnn\include -IC:\mxnet\include -IC:\mxnet\src 
-IC:\mxnet\3rdparty\mshadow -IC:\mxnet\3rdparty\cub 
-IC:\mxnet\3rdparty\tvm\nnvm\include -IC:\mxnet\3rdparty\tvm\include 
-I"C:\mxnet\3rdparty\dmlc-core\include" -IC:\mxnet\3rdparty\dlpack\include 
-IC:\opencv4\build\include -I"C:\Program Files\NVIDIA GPU Computing 
Toolkit\CUDA\v10.1\include" -I"C:\Program Files\NVIDIA GPU Computing 
Toolkit\CUDA\v10.1\include" --keep-dir x64\Release -maxrregcount=0  
--machine 64 --compile -cudart static -Xcompiler="/MP /bigobj -openmp -Ob2 -Gy" 
   -DNDEBUG -DWIN32_LEAN_AND_MEAN -DDMLC_USE_CXX11 -DMSHADOW_IN_CXX11 
-D_SCL_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_WARNINGS -DMXNET_EXPORTS 
-DNNVM_EXPORTS -DDMLC_STRICT_CXX11 -DNOMINMAX -DUSE_MKL=1 -DCUB_MKL=1 
-DMXNET_USE_MKLDNN=1 -DMSHADOW_USE_CUDA=1 -DMXNET_USE_NCCL=0 -DUSE_MKL 
-DUSE_CBLAS -DMSHADOW_USE_CBLAS=0 -DMSHADOW_USE_MKL=1 -DMXNET_USE_BLAS_MKL=1 
-DMXNET_USE_OPENCV=1 -DMXNET_USE_OPENMP=1 -DMXNET_USE_LAPACK=1 -DUSE_CUDNN 
-DMSHADOW_USE_CUDNN=1 -DMXNET_ENABLE_CUDA_RTC=1 -DMXNET_USE_CUDA=1 
-D"CMAKE_INTDIR=\"Release\"" -Dmxnet_EXPORTS -DNDEBUG -DWIN32_LEAN_AND_MEAN 
-DDMLC_USE_CXX11 -DMSHADOW_IN_CXX11 -D_SCL_SECURE_NO_WARNINGS 
-D_CRT_SECURE_NO_WARNINGS -DMXNET_EXPORTS -DNNVM_EXPORTS -DDMLC_STRICT_CXX11 
-DNOMINMAX -DUSE_MKL=1 -DCUB_MKL=1 -DMXNET_USE_MKLDNN=1 -DMSHADOW_USE_CUDA=1 
-DMXNET_USE_NCCL=0 -DUSE_MKL -DUSE_CBLAS -DMSHADOW_USE_CBLAS=0 
-DMSHADOW_USE_MKL=1 -DMXNET_USE_BLAS_MKL=1 -DMXNET_USE_OPENCV=1 
-DMXNET_USE_OPENMP=1 -DMXNET_USE_LAPACK=1 -DUSE_CUDNN -DMSHADOW_USE_CUDNN=1 
-DMXNET_ENABLE_CUDA_RTC=1 -DMXNET_USE_CUDA=1 -D"CMAKE_INTDIR=\"Release\"" 
-Dmxnet_EXPORTS -D_WINDLL -D_MBCS -Xcompiler "/EHsc /W1 /nologo /O2 
/Fdmxnet.dir\Release\vc140.pdb /FS /Zi  /MT " -o 
mxnet.dir\Release\/src/operator/image/resize.cu.obj 
"C:\mxnet\src\operator\image\resize.cu"" exited with code 1.  mxnet   C:\Program 
Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140\BuildCustomizations\CUDA 
10.1.targets757 
   Error   template instantiation resulted in unexpected function type of 
"std::true_type (std::integral_constant<__nv_bool, false> *)" (the meaning of a 
name may have changed since the template declaration -- the type of the 
template is "std::true_type 
(std::is_same))>::type, void>::type *)")   
mxnet   c:\opencv4\build\include\opencv2\core\cvstd_wrapper.hpp 49  
   Error   template instantiation resulted in unexpected function type of 
"std::true_type (std::integral_constant<__nv_bool, false> *)" (the meaning of a 
name may have changed since the template declaration -- the type of the 
template is "std::true_type 
(std::is_same))>::type, void>::type *)")   
mxnet   c:\opencv4\build\include\opencv2\core\cvstd_wrapper.hpp 49  
   Error   template instantiation resulted in unexpected function type of 
"std::true_type (std::integral_constant<__nv_bool, false> *)" (the meaning of a 
name may have changed since the template declaration -- the type of the 
template is "std::true_type 
(std::is_same))>::type, void>::type *)")   
mxnet   c:\opencv4\build\include\opencv2\core\cvstd_wrapper.hpp 49  
   Error   template instantiation resulted in unexpected function type of 
"std::true_type (std::integral_constant<__nv_bool, false> *)" (the meaning of a 
name may have changed since the template declaration -- the type of the 
template is "std::true_type 
(std::is_same))>::type, void>::type *)")   
mxnet   c:\opencv4\build\include\opencv2\core\cvstd_wrapper.hpp 49  
   Error   template instantiation resulted in unexpected function type of 
"std::true_type (std::integral_constant<__nv_bool, false> *)" (the meaning of a 
name may have changed since the template declaration -- the type of the 
template is "std::true_type 
(std::is_same))>::type, void>::type *)")   
mxnet   c:\opencv4\build\include\opencv2\core\cvstd_wrapper.hpp 49  
   Error   template instantiation resulted in unexpected function type of 
"std::true_type (std::integral_constant<__nv_bool, false> *)" (the meaning of a 
name may have changed since the template declaration -- the type of the 
template is "std::true_type 
(std::is_same))>::type, void>::type *)")   
mxnet   c:\opencv4\build\include\opencv2\core\cvstd_wrapper.hpp 49  
   Error   template instantiation resulted in unexpected function type of 

[GitHub] [incubator-mxnet] wkcn commented on issue #14382: fix engine crash in shutdown phase

2019-03-10 Thread GitBox
wkcn commented on issue #14382: fix engine crash in shutdown phase
URL: https://github.com/apache/incubator-mxnet/pull/14382#issuecomment-471396556
 
 
   The PR has been merged.
   Thanks for your contribution!




[GitHub] [incubator-mxnet] wkcn merged pull request #14382: fix engine crash in shutdown phase

2019-03-10 Thread GitBox
wkcn merged pull request #14382: fix engine crash in shutdown phase
URL: https://github.com/apache/incubator-mxnet/pull/14382
 
 
   




[GitHub] [incubator-mxnet] wkcn merged pull request #14356: support leading dimension of -1 in ravel/unravel

2019-03-10 Thread GitBox
wkcn merged pull request #14356: support leading dimension of -1 in 
ravel/unravel
URL: https://github.com/apache/incubator-mxnet/pull/14356
 
 
   




[incubator-mxnet] branch master updated: support leading dimension of -1 in ravel/unravel (#14356)

2019-03-10 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 35098b8  support leading dimension of -1 in ravel/unravel (#14356)
35098b8 is described below

commit 35098b8ad27e5151fc266fbf1a766d937bd9ec8e
Author: moin 
AuthorDate: Mon Mar 11 04:53:30 2019 +0100

support leading dimension of -1 in ravel/unravel (#14356)
---
 src/operator/tensor/ravel.cc   | 6 --
 src/operator/tensor/ravel.h| 3 ++-
 tests/python/unittest/test_operator.py | 7 +++
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/src/operator/tensor/ravel.cc b/src/operator/tensor/ravel.cc
index 0a66ea8..94d79c7 100644
--- a/src/operator/tensor/ravel.cc
+++ b/src/operator/tensor/ravel.cc
@@ -31,12 +31,13 @@ DMLC_REGISTER_PARAMETER(RavelParam);
 
 NNVM_REGISTER_OP(_ravel_multi_index)
 .add_alias("ravel_multi_index")
-.describe(R"code(Converts a batch of index arrays into an array of flat 
indices. The operator follows numpy conventions so a single multi index is 
given by a column of the input matrix. 
+.describe(R"code(Converts a batch of index arrays into an array of flat 
indices. The operator follows numpy conventions so a single multi index is 
given by a column of the input matrix. The leading dimension may be left 
unspecified by using -1 as placeholder.  
 
 Examples::

A = [[3,6,6],[4,5,1]]
ravel(A, shape=(7,6)) = [22,41,37]
+   ravel(A, shape=(-1,6)) = [22,41,37]
 
 )code" ADD_FILELINE)
 .set_num_inputs(1)
@@ -55,12 +56,13 @@ Examples::
 
 NNVM_REGISTER_OP(_unravel_index)
 .add_alias("unravel_index")
-.describe(R"code(Converts an array of flat indices into a batch of index 
arrays. The operator follows numpy conventions so a single multi index is given 
by a column of the output matrix.
+.describe(R"code(Converts an array of flat indices into a batch of index 
arrays. The operator follows numpy conventions so a single multi index is given 
by a column of the output matrix. The leading dimension may be left unspecified 
by using -1 as placeholder.  
 
 Examples::
 
A = [22,41,37]
unravel(A, shape=(7,6)) = [[3,6,6],[4,5,1]]
+   unravel(A, shape=(-1,6)) = [[3,6,6],[4,5,1]]
 
 )code" ADD_FILELINE)
 .set_num_inputs(1)
diff --git a/src/operator/tensor/ravel.h b/src/operator/tensor/ravel.h
index 6d337dc..256fe33 100644
--- a/src/operator/tensor/ravel.h
+++ b/src/operator/tensor/ravel.h
@@ -110,11 +110,12 @@ struct unravel_index {
   DType *unravelled, DType *ravelled) {
 index_t idx(ravelled[i]);
 #pragma unroll
-for (int j = ndim; j--; ) {
+for (int j = ndim-1; j > 0; --j) {
   index_t tmp = idx / shape[j];
   unravelled[i+j*N] = idx - tmp*shape[j];
   idx = tmp;
 }
+unravelled[i] = idx;
   }
 };
 
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index 6bb8150..7169395 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -7106,6 +7106,13 @@ def test_ravel():
   check_symbolic_forward(b, location={'a': data}, expected=[ravel_npy])
   c = mx.sym.unravel_index(a, shape=shape)
   check_symbolic_forward(c, location={'a': ravel_npy}, expected=[data])
+  # Test with leading dimension set to -1.
+  shape2 = shape
+  shape2 = (-1,)+shape[1:]
+  b = mx.sym.ravel_multi_index(a, shape=shape2)
+  check_symbolic_forward(b, location={'a': data}, expected=[ravel_npy])
+  c = mx.sym.unravel_index(a, shape=shape2)
+  check_symbolic_forward(c, location={'a': ravel_npy}, expected=[data])
 
 def test_context_num_gpus():
 try:



[GitHub] [incubator-mxnet] wkcn commented on issue #14356: support leading dimension of -1 in ravel/unravel

2019-03-10 Thread GitBox
wkcn commented on issue #14356: support leading dimension of -1 in ravel/unravel
URL: https://github.com/apache/incubator-mxnet/pull/14356#issuecomment-471396126
 
 
   The PR has been merged.
   Thanks for your contribution!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn closed issue #13862: [1.4.0] unravel_index no longer works with magic '-1' in shape parameter as in 1.3.1

2019-03-10 Thread GitBox
wkcn closed issue #13862: [1.4.0] unravel_index no longer works with magic '-1' 
in shape parameter as in 1.3.1
URL: https://github.com/apache/incubator-mxnet/issues/13862
 
 
   




[GitHub] [incubator-mxnet] peterpaniff closed issue #14384: try to train ssdlite mobilenetv2, encounter the error.

2019-03-10 Thread GitBox
peterpaniff closed issue #14384: try to train ssdlite mobilenetv2, encounter 
the error.
URL: https://github.com/apache/incubator-mxnet/issues/14384
 
 
   




[GitHub] [incubator-mxnet] peterpaniff commented on issue #14384: try to train ssdlite mobilenetv2, encounter the error.

2019-03-10 Thread GitBox
peterpaniff commented on issue #14384: try to train ssdlite mobilenetv2, 
encounter the error.
URL: 
https://github.com/apache/incubator-mxnet/issues/14384#issuecomment-471395643
 
 
   I solved it myself.




[GitHub] [incubator-mxnet] ankkhedia commented on a change in pull request #14351: [MXNet-1348][WIP][Fit API]Adding CNN examples for fit() API

2019-03-10 Thread GitBox
ankkhedia commented on a change in pull request #14351: [MXNet-1348][WIP][Fit 
API]Adding CNN examples for fit() API
URL: https://github.com/apache/incubator-mxnet/pull/14351#discussion_r264084828
 
 

 ##
 File path: example/gluon/estimator_example/alexnet.py
 ##
 @@ -0,0 +1,156 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This example is inspired from
+# 
https://github.com/d2l-ai/d2l-en/blob/master/chapter_convolutional-neural-networks/alexnet.md
+# Model definition is from
+# https://github.com/dmlc/gluon-cv/blob/master/gluoncv/model_zoo/alexnet.py
+
+
+import os
+import sys
+import argparse
+import mxnet as mx
+from mxnet import gluon
+from mxnet.gluon import nn, data
+from mxnet.gluon.block import HybridBlock
+from mxnet.gluon.estimator import estimator, event_handler
+
+def parse_args():
+'''
+Command Line Interface
+'''
+parser = argparse.ArgumentParser(description='Train ResNet18 on 
Fashion-MNIST')
+parser.add_argument('--batch-size', type=int, default=128,
+help='training batch size per device (CPU/GPU).')
+parser.add_argument('--num-epochs', type=int, default=1,
+help='number of training epochs.')
+parser.add_argument('--input-size', type=int, default=224,
+help='size of the input image size. default is 224')
+parser.add_argument('--lr', type=float, default=0.001,
+help='learning rate. default is 0.001')
+parser.add_argument('-j', '--num-workers', default=None, type=int,
+help='number of preprocessing workers')
+opt = parser.parse_args()
+return opt
+
+class AlexNet(HybridBlock):
 
 Review comment:
   @abhinavs95 You could also use default arguments (e.g. learning rate, 
num_workers) instead of taking them from command-line arguments, to keep the 
example simple for beginners.
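The reviewer's suggestion — plain keyword defaults instead of a CLI parser — might look like the following sketch. The function name and returned dict are illustrative, not the example's actual API:

```python
def train(batch_size=128, num_epochs=1, input_size=224, lr=0.001, num_workers=0):
    """Hyperparameters as keyword defaults: a beginner can call train()
    with no arguments, or override only the one they care about."""
    config = {
        'batch_size': batch_size,
        'num_epochs': num_epochs,
        'input_size': input_size,
        'lr': lr,
        'num_workers': num_workers,
    }
    return config

# Defaults work out of the box; individual settings are easy to override.
print(train()['lr'])         # 0.001
print(train(lr=0.01)['lr'])  # 0.01
```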




[GitHub] [incubator-mxnet] peterpaniff opened a new issue #14386: train ssdlite mobilenetv2, enconter the name error

2019-03-10 Thread GitBox
peterpaniff opened a new issue #14386: train ssdlite mobilenetv2, enconter the 
name error
URL: https://github.com/apache/incubator-mxnet/issues/14386
 
 
   raise ValueError('Cannot find output that matches name \"%s\"' % index)
   ValueError: Cannot find output that matches name 
"mobilenetv20_features_relu61_relu6"
   
   but when I visualize the pretrained MobileNetV2 structure, it does have that name.
   
![image](https://user-images.githubusercontent.com/6010392/54099213-0d412d80-43f3-11e9-91d6-4ccf30272969.png)
   






[GitHub] [incubator-mxnet] stereomatchingkiss commented on issue #14376: I failed to install the mxnet-mkl version on Windows 10

2019-03-10 Thread GitBox
stereomatchingkiss commented on issue #14376: I  failed to install the 
mxnet-mkl version on Windows 10
URL: 
https://github.com/apache/incubator-mxnet/issues/14376#issuecomment-471394903
 
 
   > To reproduce the problem on the Windows
   
   Any chance to add a Travis build for Windows (if you have enough 
resources), as #14370 suggests?
   One with MKL + CUDA,
   another one with MKL only.
   
   This way, whenever you commit code written on another platform (macOS, Linux, 
etc.), you can ensure the new code does not break anything on Windows.
   It would be better to add Travis builds for every platform you plan to 
support, as OpenCV does.




[GitHub] [incubator-mxnet] ankkhedia commented on a change in pull request #14346: [MXNet-1334][WIP][Fit API]base class for estimator and eventhandler

2019-03-10 Thread GitBox
ankkhedia commented on a change in pull request #14346: [MXNet-1334][WIP][Fit 
API]base class for estimator and eventhandler
URL: https://github.com/apache/incubator-mxnet/pull/14346#discussion_r264084500
 
 

 ##
 File path: python/mxnet/gluon/estimator/estimator.py
 ##
 @@ -0,0 +1,203 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=wildcard-import
+"""Gluon Estimator"""
+
+
+import warnings
+
+from .event_handler import LoggingHandler
+from ... import *
+from ... import gluon, autograd
+from ...context import cpu, gpu, num_gpus
+from ...metric import EvalMetric, Loss
+
+__all__ = ['Estimator']
+
+
+class Estimator(object):
+"""
+Estimator Class for easy model training
+TODO: update doc
+"""
+
+def __init__(self, net,
+ loss=None,
+ metrics=None,
+ initializer=None,
+ trainers=None,
+ context=None):
+
+self.net = net
+if isinstance(loss, gluon.loss.Loss):
+self.loss = [loss]
+else:
+self.loss = loss or []
+if isinstance(metrics, EvalMetric):
+self.metrics = [metrics]
+else:
+self.metrics = metrics or []
+
+self.initializer = initializer
+# store training statistics
+self.train_stats = {}
+self.train_stats['epochs'] = []
+self.train_stats['learning_rate'] = []
+# time used for each epoch
+self.train_stats['step'] = ''
+for metric in self.metrics:
+# record a history of metrics over each epoch
+self.train_stats['train_' + metric.name] = []
+# only record the latest metric numbers after each batch
+self.train_stats['batch_' + metric.name] = 0.
+self.loss_metrics = []
+# using the metric wrapper for loss to record loss value
+for loss in self.loss:
+self.loss_metrics.append(Loss(loss.name))
+self.train_stats['train_' + loss.name] = []
+# only record the latest loss numbers after each batch
+self.train_stats['batch_' + loss.name] = 0.
+
+# handle context
+if isinstance(context, Context):
+self.context = [context]
+if not context:
+if num_gpus() > 0:
+# only use 1 GPU by default
+if num_gpus() > 1:
+warnings.warn("You have multiple GPUs, gpu(0) will be used 
by default."
+  "To utilize all your GPUs, specify context 
as a list of gpus, e.g. context=[mx.gpu(0), mx.gpu(2)] ")
+self.context = [gpu(0)]
+else:
+self.context = [cpu()]
+
+# initialize the network
+if self.initializer:
+if self._is_initialized():
+# if already initialized, re-init with user specified 
initializer
+warnings.warn("You have already initialized your net; it will 
be force re-initialized "
+  "with the initializer you specified. You don't 
need to pass an initializer if you already initialized your net.")
+self.net.initialize(init=self.initializer, ctx=self.context, 
force_reinit=True)
+else:
+# initialize with user specified initializer
+self.net.initialize(init=self.initializer, ctx=self.context, 
force_reinit=False)
+else:
+if not self._is_initialized():
+self.net.initialize(ctx=self.context)
+
+# handle trainers
+if isinstance(trainers, gluon.Trainer):
+self.trainers = [trainers]
+else:
+self.trainers = trainers or []
+if not self.trainers:
+warnings.warn("No trainer specified, default SGD optimizer with 
learning rate 0.001 is used.")
+self.trainers = [gluon.Trainer(self.net.collect_params(), 'sgd', 
{'learning_rate': 0.001})]
+
+def _is_initialized(self):
+param_dict = self.net.collect_params()
+for param in param_dict:
+try:
+param_dict[param].list_ctx()
+except RuntimeError:
+ 
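The context-selection logic in the quoted constructor (honor an explicit context, else default to a single GPU with a warning when several are present, else fall back to CPU) can be sketched without MXNet. `select_context` and the injected `num_gpus` count are illustrative stand-ins so the sketch stays self-contained:

```python
import warnings

def select_context(context, num_gpus):
    """Pick devices the way the quoted Estimator does: honor an explicit
    context, else use gpu(0) when GPUs exist (warning if several), else cpu."""
    if context is not None:
        return context if isinstance(context, list) else [context]
    if num_gpus > 0:
        if num_gpus > 1:
            warnings.warn("You have multiple GPUs, gpu(0) will be used by default. "
                          "To utilize all your GPUs, pass a list of contexts.")
        return ['gpu(0)']
    return ['cpu()']

print(select_context(None, num_gpus=0))  # ['cpu()']
print(select_context(None, num_gpus=2))  # ['gpu(0)'], with a warning emitted
```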

[GitHub] [incubator-mxnet] ankkhedia commented on a change in pull request #14346: [MXNet-1334][WIP][Fit API]base class for estimator and eventhandler

2019-03-10 Thread GitBox
ankkhedia commented on a change in pull request #14346: [MXNet-1334][WIP][Fit 
API]base class for estimator and eventhandler
URL: https://github.com/apache/incubator-mxnet/pull/14346#discussion_r264084052
 
 

 ##
 File path: python/mxnet/gluon/estimator/estimator.py
 ##
 @@ -0,0 +1,203 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=wildcard-import
+"""Gluon Estimator"""
+
+
+import warnings
+
+from .event_handler import LoggingHandler
+from ... import *
+from ... import gluon, autograd
+from ...context import cpu, gpu, num_gpus
+from ...metric import EvalMetric, Loss
+
+__all__ = ['Estimator']
+
+
+class Estimator(object):
+"""
+Estimator Class for easy model training
+TODO: update doc
+"""
+
+def __init__(self, net,
+ loss=None,
+ metrics=None,
+ initializer=None,
+ trainers=None,
+ context=None):
+
+self.net = net
+if isinstance(loss, gluon.loss.Loss):
+self.loss = [loss]
+else:
+self.loss = loss or []
+if isinstance(metrics, EvalMetric):
+self.metrics = [metrics]
+else:
+self.metrics = metrics or []
+
+self.initializer = initializer
+# store training statistics
+self.train_stats = {}
+self.train_stats['epochs'] = []
+self.train_stats['learning_rate'] = []
+# time used for each epoch
+self.train_stats['step'] = ''
+for metric in self.metrics:
+# record a history of metrics over each epoch
+self.train_stats['train_' + metric.name] = []
+# only record the latest metric numbers after each batch
+self.train_stats['batch_' + metric.name] = 0.
+self.loss_metrics = []
+# using the metric wrapper for loss to record loss value
+for loss in self.loss:
+self.loss_metrics.append(Loss(loss.name))
+self.train_stats['train_' + loss.name] = []
+# only record the latest loss numbers after each batch
+self.train_stats['batch_' + loss.name] = 0.
+
+# handle context
+if isinstance(context, Context):
+self.context = [context]
+if not context:
+if num_gpus() > 0:
+# only use 1 GPU by default
+if num_gpus() > 1:
+warnings.warn("You have multiple GPUs, gpu(0) will be used 
by default."
+  "To utilize all your GPUs, specify context 
as a list of gpus, e.g. context=[mx.gpu(0), mx.gpu(2)] ")
+self.context = [gpu(0)]
+else:
+self.context = [cpu()]
+
+# initialize the network
+if self.initializer:
+if self._is_initialized():
+# if already initialized, re-init with user specified 
initializer
+warnings.warn("You have already initialized your net; it will 
be force re-initialized "
+  "with the initializer you specified. You don't 
need to pass an initializer if you already initialized your net.")
+self.net.initialize(init=self.initializer, ctx=self.context, 
force_reinit=True)
+else:
+# initialize with user specified initializer
+self.net.initialize(init=self.initializer, ctx=self.context, 
force_reinit=False)
+else:
+if not self._is_initialized():
+self.net.initialize(ctx=self.context)
+
+# handle trainers
+if isinstance(trainers, gluon.Trainer):
+self.trainers = [trainers]
+else:
+self.trainers = trainers or []
+if not self.trainers:
 
 Review comment:
   Are we dealing with the multiple-trainer case here (e.g. multi-task 
classification)?
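The "single object or list" normalization pattern the constructor applies to loss, metrics, and trainers — which is what would let a multi-trainer setup such as multi-task classification pass a list — can be sketched generically. `as_list` and the stand-in `Trainer` class are illustrative, not the gluon API:

```python
def as_list(value, single_type):
    """Normalize an argument that may be one object, a list, or None,
    mirroring the pattern used for loss/metrics/trainers above."""
    if isinstance(value, single_type):
        return [value]
    return value or []

class Trainer:  # stand-in for gluon.Trainer in this sketch
    pass

t = Trainer()
print(as_list(t, Trainer))       # single trainer wrapped in a list
print(as_list([t, t], Trainer))  # e.g. a multi-task setup, kept as-is
print(as_list(None, Trainer))    # [] -> would trigger the default-SGD warning
```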



[GitHub] [incubator-mxnet] dbsxdbsx edited a comment on issue #14385: OpenCV 4.0 is currently not support on win10

2019-03-10 Thread GitBox
dbsxdbsx edited a comment on issue #14385: OpenCV 4.0 is currently not support 
on win10
URL: 
https://github.com/apache/incubator-mxnet/issues/14385#issuecomment-471390025
 
 
   @wkcn, capturing the full context of the error message is not feasible at 
present; I will try to rebuild it and get it soon. 
   For the code of `cvstd_wrapper.hpp` in OpenCV 4.0:
   ```
   // This file is part of OpenCV project.
   // It is subject to the license terms in the LICENSE file found in the top-level directory
   // of this distribution and at http://opencv.org/license.html.
   
   #ifndef OPENCV_CORE_CVSTD_WRAPPER_HPP
   #define OPENCV_CORE_CVSTD_WRAPPER_HPP
   
   #include "opencv2/core/cvdef.h"
   
   #include <string>
   #include <memory>  // std::shared_ptr
   #include <type_traits>  // std::enable_if
   
   namespace cv {
   
   using std::nullptr_t;
   
   //! @addtogroup core_basic
   //! @{
   
   #ifdef CV_DOXYGEN
   
   template <typename _Tp> using Ptr = std::shared_ptr<_Tp>;  // In ideal world 
it should look like this, but we need some compatibility workarounds below
   
   template<typename _Tp, typename ... A1> static inline
   Ptr<_Tp> makePtr(const A1&... a1) { return std::make_shared<_Tp>(a1...); }
   
   #else  // cv::Ptr with compatibility workarounds
   
   // It should be defined for C-API types only.
   // C++ types should use regular "delete" operator.
   template<typename Y> struct DefaultDeleter;
   #if 0
   {
       void operator()(Y* p) const;
   };
   #endif
   
   namespace sfinae {
   template<typename C, typename Ret, typename... Args>
   struct has_parenthesis_operator
   {
   private:
       template<typename T>
       static CV_CONSTEXPR std::true_type check(typename std::is_same<typename 
std::decay<decltype(std::declval<T>().operator()(std::declval<Args>()...))>::type,
 Ret>::type*);
   
       template<typename> static CV_CONSTEXPR std::false_type check(...);
   
       typedef decltype(check<C>(0)) type;
   
   public:
       static CV_CONSTEXPR bool value = type::value;
   };
   } // namespace sfinae
   
   template <typename T, typename = void>
   struct has_custom_delete
           : public std::false_type {};
   
   template <typename T>
   struct has_custom_delete<T, typename std::enable_if< 
sfinae::has_parenthesis_operator<DefaultDeleter<T>, void, T*>::value >::type >
           : public std::true_type {};
   
   
   template<typename T>
   struct Ptr : public std::shared_ptr<T>
   {
   #if 0
       using std::shared_ptr<T>::shared_ptr;  // GCC 5.x can't handle this
   #else
       inline Ptr() CV_NOEXCEPT : std::shared_ptr<T>() {}
       inline Ptr(nullptr_t) CV_NOEXCEPT : std::shared_ptr<T>(nullptr) {}
       template<typename Y, typename D> inline Ptr(Y* p, D d) : 
std::shared_ptr<T>(p, d) {}
       template<typename D> inline Ptr(nullptr_t, D d) : 
std::shared_ptr<T>(nullptr, d) {}
   
       template<typename Y> inline Ptr(const Ptr<Y>& r, T* ptr) CV_NOEXCEPT : 
std::shared_ptr<T>(r, ptr) {}
   
       inline Ptr(const Ptr<T>& o) CV_NOEXCEPT : std::shared_ptr<T>(o) {}
       inline Ptr(Ptr<T>&& o) CV_NOEXCEPT : std::shared_ptr<T>(std::move(o)) {}
   
       template<typename Y> inline Ptr(const Ptr<Y>& o) CV_NOEXCEPT : 
std::shared_ptr<T>(o) {}
       template<typename Y> inline Ptr(Ptr<Y>&& o) CV_NOEXCEPT : 
std::shared_ptr<T>(std::move(o)) {}
   #endif
       inline Ptr(const std::shared_ptr<T>& o) CV_NOEXCEPT : 
std::shared_ptr<T>(o) {}
       inline Ptr(std::shared_ptr<T>&& o) CV_NOEXCEPT : 
std::shared_ptr<T>(std::move(o)) {}
   
       // Overload with custom DefaultDeleter: Ptr<IplImage>(...)
       template<typename Y>
       inline Ptr(const std::true_type&, Y* ptr) : std::shared_ptr<T>(ptr, 
DefaultDeleter<Y>()) {}
   
       // Overload without custom deleter: Ptr<std::string>(...);
       template<typename Y>
       inline Ptr(const std::false_type&, Y* ptr) : std::shared_ptr<T>(ptr) {}
   
       template<typename Y = T>
       inline Ptr(Y* ptr) : Ptr(has_custom_delete<Y>(), ptr) {}
   
       // Overload with custom DefaultDeleter: Ptr<IplImage>(...)
       template<typename Y>
       inline void reset(const std::true_type&, Y* ptr) { 
std::shared_ptr<T>::reset(ptr, DefaultDeleter<Y>()); }
   
       // Overload without custom deleter: Ptr<std::string>(...);
       template<typename Y>
       inline void reset(const std::false_type&, Y* ptr) { 
std::shared_ptr<T>::reset(ptr); }
   
       template<typename Y>
       inline void reset(Y* ptr) { Ptr<T>::reset(has_custom_delete<Y>(), ptr); }
   
       template<typename Y, typename Deleter>
       void reset(Y* ptr, Deleter d) { std::shared_ptr<T>::reset(ptr, d); }
   
       void reset() CV_NOEXCEPT { std::shared_ptr<T>::reset(); }
   
       Ptr& operator=(const Ptr& o) { std::shared_ptr<T>::operator =(o); return 
*this; }
       template<typename Y> inline Ptr& operator=(const Ptr<Y>& o) { 
std::shared_ptr<T>::operator =(o); return *this; }
   
       T* operator->() const CV_NOEXCEPT { return std::shared_ptr<T>::get(); }
       typename std::add_lvalue_reference<T>::type operator*() const 
CV_NOEXCEPT { return *std::shared_ptr<T>::get(); }
   
       // OpenCV 3.x methods (not a part of standart C++ library)
       inline void release() { std::shared_ptr<T>::reset(); }
       inline operator T* () const { return std::shared_ptr<T>::get(); }
       inline bool empty() const { return std::shared_ptr<T>::get() == nullptr; 
}
   
       template<typename Y> inline
       Ptr<Y> staticCast() const CV_NOEXCEPT { return 
std::static_pointer_cast<Y>(*this); }
   
       template<typename Y> inline
       Ptr<Y> constCast() const CV_NOEXCEPT { return 
std::const_pointer_cast<Y>(*this); }
   
       template<typename Y> inline
       Ptr<Y> dynamicCast() const 


[GitHub] [incubator-mxnet] arcadiaphy commented on issue #14058: add backgroud class in box_nms

2019-03-10 Thread GitBox
arcadiaphy commented on issue #14058: add backgroud class in box_nms
URL: https://github.com/apache/incubator-mxnet/pull/14058#issuecomment-471387717
 
 
   @zhreshold OK to go now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn commented on issue #14385: OpenCV 4.0 is currently not support on win10

2019-03-10 Thread GitBox
wkcn commented on issue #14385: OpenCV 4.0 is currently not support on win10
URL: 
https://github.com/apache/incubator-mxnet/issues/14385#issuecomment-471386554
 
 
   Could you please provide the full context of the error message, and the code at 
c:\opencv\build\include\opencv2\core\cvstd_wrapper.hpp, line 45?




[GitHub] [incubator-mxnet] hengdos edited a comment on issue #13870: [C++] Linking static library (error or bug?) to load python trained model

2019-03-10 Thread GitBox
hengdos edited a comment on issue #13870: [C++] Linking static library (error 
or bug?) to load python trained model
URL: 
https://github.com/apache/incubator-mxnet/issues/13870#issuecomment-471385323
 
 
   > @hengdos I was trying to reproduce this issue on Mac and Ubuntu. But I was 
not able to invoke 'make' while building the 'transformer' with statically 
linked libraries. Can you please ensure that CMakelist.txt for static library 
is correct?
   > I will try it again as well.
   
   @leleamol hi, I have tested the following CMakeLists.txt for the static library. 
It works well on my MacBook (macOS Mojave 10.14.3). 
   
   ```
   cmake_minimum_required(VERSION 3.9)  
   
   set (CMAKE_CXX_STANDARD 11)
   
   add_executable(transformer
   main.cc
   )
   
   target_link_libraries(transformer
   ${PROJECT_SOURCE_DIR}/lib/libmxnet.a
   ${PROJECT_SOURCE_DIR}/lib/libdmlc.a
   ${PROJECT_SOURCE_DIR}/lib/libnnvm.a
   )
   
   target_include_directories(transformer PUBLIC
   ${PROJECT_SOURCE_DIR}/include
   )
   ```
   
   The output is like this:
   ```shell
   $ ./transformer 
   net-symbol.json ... 954 bytes
   net-.params ... 164 bytes
   Assertion failed: (pred_hnd), function main, file 
/Users/heng/Projects/lrcplus/main.cc, line 101.
   [1]30126 abort  ./transformer
   ```
   
   Thanks for your reply.




[GitHub] [incubator-mxnet] hengdos commented on issue #13870: [C++] Linking static library (error or bug?) to load python trained model

2019-03-10 Thread GitBox
hengdos commented on issue #13870: [C++] Linking static library (error or bug?) 
to load python trained model
URL: 
https://github.com/apache/incubator-mxnet/issues/13870#issuecomment-471385323
 
 
   > @hengdos I was trying to reproduce this issue on Mac and Ubuntu. But I was 
not able to invoke 'make' while building the 'transformer' with statically 
linked libraries. Can you please ensure that CMakelist.txt for static library 
is correct?
   > I will try it again as well.
   
   @leleamol hi, I have tested the following CMakeLists.txt for the static library 
on macOS Mojave 10.14.3.
   
   ```
   cmake_minimum_required(VERSION 3.9)  
   
   set (CMAKE_CXX_STANDARD 11)
   
   add_executable(transformer
   main.cc
   )
   
   target_link_libraries(transformer
   ${PROJECT_SOURCE_DIR}/lib/libmxnet.a
   ${PROJECT_SOURCE_DIR}/lib/libdmlc.a
   ${PROJECT_SOURCE_DIR}/lib/libnnvm.a
   )
   
   target_include_directories(transformer PUBLIC
   ${PROJECT_SOURCE_DIR}/include
   )
   ```
   
   The output is like this:
   ```shell
   $ ./transformer 
   net-symbol.json ... 954 bytes
   net-.params ... 164 bytes
   Assertion failed: (pred_hnd), function main, file 
/Users/heng/Projects/lrcplus/main.cc, line 101.
   [1]30126 abort  ./transformer
   ```
   
   Thanks for your reply.




[GitHub] [incubator-mxnet] dbsxdbsx commented on issue #14313: compatibility with opencv4

2019-03-10 Thread GitBox
dbsxdbsx commented on issue #14313: compatibility with opencv4
URL: https://github.com/apache/incubator-mxnet/pull/14313#issuecomment-471385029
 
 
   @wkcn, thanks. I posted it as #14385.




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #14385: OpenCV 4.0 is currently not support on win10

2019-03-10 Thread GitBox
mxnet-label-bot commented on issue #14385: OpenCV 4.0 is currently not support 
on win10
URL: 
https://github.com/apache/incubator-mxnet/issues/14385#issuecomment-471384938
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Build




[GitHub] [incubator-mxnet] dbsxdbsx opened a new issue #14385: OpenCV 4.0 is currently not support on win10

2019-03-10 Thread GitBox
dbsxdbsx opened a new issue #14385: OpenCV 4.0 is currently not support on win10
URL: https://github.com/apache/incubator-mxnet/issues/14385
 
 
   I tried to build mxnet on win10_64 with the latest mxnet source code and 
vs2017 (toolset changed to v14), using cuda+cudnn+mkl+mkldnn+opencv4 (prebuilt). 
It raises an error after compiling for about 15 minutes:
   ```
   incomplete type is not allowed   mxnet   
c:\opencv\build\include\opencv2\core\cvstd_wrapper.hpp  45
   ```  
   But everything is ok with opencv3.4.4 (prebuilt).




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #14376: I failed to install the mxnet-mkl version on Windows 10

2019-03-10 Thread GitBox
pengzhao-intel commented on issue #14376: I  failed to install the mxnet-mkl 
version on Windows 10
URL: 
https://github.com/apache/incubator-mxnet/issues/14376#issuecomment-471383996
 
 
   Trying to reproduce the problem on Windows.




[GitHub] [incubator-mxnet] wufengqian commented on issue #14376: I failed to install the mxnet-mkl version on Windows 10

2019-03-10 Thread GitBox
wufengqian commented on issue #14376: I  failed to install the mxnet-mkl 
version on Windows 10
URL: 
https://github.com/apache/incubator-mxnet/issues/14376#issuecomment-471382917
 
 
   Building the Windows system internally? I'm sorry, but I am curious about that. 
What is the purpose?




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #14376: I failed to install the mxnet-mkl version on Windows 10

2019-03-10 Thread GitBox
pengzhao-intel commented on issue #14376: I  failed to install the mxnet-mkl 
version on Windows 10
URL: 
https://github.com/apache/incubator-mxnet/issues/14376#issuecomment-471382333
 
 
   Sorry, we don't check the forum frequently :(
   BTW, please wait a moment because we are building the Windows system internally 
now. 
   




[GitHub] [incubator-mxnet] wufengqian commented on issue #14376: I failed to install the mxnet-mkl version on Windows 10

2019-03-10 Thread GitBox
wufengqian commented on issue #14376: I  failed to install the mxnet-mkl 
version on Windows 10
URL: 
https://github.com/apache/incubator-mxnet/issues/14376#issuecomment-471381106
 
 
   Hi,
   I also asked the same question on discuss.gluon.ai, but it seems 
that no one knows how to solve the problem in detail. 




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #14375: 新版官网没有验证安装过程 (the new official website does not verify the installation process)

2019-03-10 Thread GitBox
pengzhao-intel commented on issue #14375: 新版官网没有验证安装过程 (the new official website does not verify the installation process)
URL: 
https://github.com/apache/incubator-mxnet/issues/14375#issuecomment-471380094
 
 
   @juliusshufan 




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #14375: 新版官网没有验证安装过程 (the new official website does not verify the installation process)

2019-03-10 Thread GitBox
pengzhao-intel commented on issue #14375: 新版官网没有验证安装过程 (the new official website does not verify the installation process)
URL: 
https://github.com/apache/incubator-mxnet/issues/14375#issuecomment-471380005
 
 
   @wufengqian thanks for the great suggestions.
   @TaoLv is working on the MKL documentation 
https://github.com/apache/incubator-mxnet/pull/14202
   
   @mxnet-label-bot add [doc, questions] 




[GitHub] [incubator-mxnet] wkcn commented on issue #14313: compatibility with opencv4

2019-03-10 Thread GitBox
wkcn commented on issue #14313: compatibility with opencv4
URL: https://github.com/apache/incubator-mxnet/pull/14313#issuecomment-471378485
 
 
   @dbsxdbsx Could you please open an issue and provide more of the error message?
   I only fixed the compatibility on Linux in this PR.




[GitHub] [incubator-mxnet] dbsxdbsx commented on issue #14313: compatibility with opencv4

2019-03-10 Thread GitBox
dbsxdbsx commented on issue #14313: compatibility with opencv4
URL: https://github.com/apache/incubator-mxnet/pull/14313#issuecomment-471375630
 
 
   When I tried to build mxnet on win10 with cuda 10.1, it failed with a 
compile error:
   ```
   incomplete type is not allowed   mxnet   
c:\opencv\build\include\opencv2\core\cvstd_wrapper.hpp  45
   ```




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-10 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d650bd1  Bump the publish timestamp.
d650bd1 is described below

commit d650bd1181a77025932ddff20620326846a63fa7
Author: mxnet-ci 
AuthorDate: Mon Mar 11 01:18:19 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..3fb55ea
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Mar 11 01:18:19 UTC 2019



[GitHub] [incubator-mxnet] zboldyga edited a comment on issue #14360: supporting matrix inversion and determinant

2019-03-10 Thread GitBox
zboldyga edited a comment on issue #14360: supporting matrix inversion and 
determinant 
URL: 
https://github.com/apache/incubator-mxnet/issues/14360#issuecomment-471338317
 
 
   @ketranm 
   
   Looks like inversion via Cholesky factorization is supported, and there's 
also an API handle for getting the inversion using that factorization:
   
   https://mxnet.apache.org/api/python/ndarray/linalg.html#linear-algebra
   
   potrf - get the Cholesky factorization (triangular matrix)
   potri - calculate inversion (edit: using the Cholesky factorization from 
potrf)
   sumlogdiag - *may* be useful for calculating logdeterminant (my linear 
algebra is a little rusty)
   
   There's no shortcut for getting the determinant or log determinant, but 
these are simple ops on top of the Cholesky factorization. 
   
   It seems to me that all of this should be clarified in the documentation at 
a minimum, and we should probably add API calls for det and logdet. I've also 
made a request to have a single 'inverse' operation as in Torch. I opened a 
JIRA ticket and will start implementing these as soon as someone more internal 
to the MXNet project signs off!
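
   The identity being described can be sketched in plain Python (a hedged illustration of the math only, not the MXNet `linalg` API): for a symmetric positive-definite A with Cholesky factor L (A = L·Lᵀ), det(A) = prod(diag(L))² and logdet(A) = 2·Σ log(diag(L)), which is what summing the log-diagonal of the `potrf` output would compute. Function names below are illustrative.

```python
import math

def cholesky(a):
    """Plain-Python Cholesky factorization of a symmetric
    positive-definite matrix a (list of lists); returns lower-triangular L."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

def logdet_from_cholesky(L):
    # log det(A) = 2 * sum(log(diag(L)))  for A = L L^T
    return 2.0 * sum(math.log(L[i][i]) for i in range(len(L)))

A = [[4.0, 2.0], [2.0, 3.0]]   # det(A) = 4*3 - 2*2 = 8
L = cholesky(A)
print(round(math.exp(logdet_from_cholesky(L)), 6))  # determinant recovered from logdet: 8.0
```

The same two lines of post-processing are all a `det`/`logdet` API call would need once `potrf` has been run.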
   




[GitHub] [incubator-mxnet] gigasquid commented on a change in pull request #14305: [Clojure] Helper function for n-dim vector to ndarray

2019-03-10 Thread GitBox
gigasquid commented on a change in pull request #14305: [Clojure] Helper 
function for n-dim vector to ndarray
URL: https://github.com/apache/incubator-mxnet/pull/14305#discussion_r264014076
 
 

 ##
 File path: contrib/clojure-package/src/org/apache/clojure_mxnet/util.clj
 ##
 @@ -218,15 +218,25 @@
 (throw (ex-info error-msg
 (s/explain-data spec value)
 
-(s/def ::non-empty-seq sequential?)
+(s/def ::non-empty-seq (s/and sequential? not-empty))
 (defn to-array-nd
   "Converts any N-D sequential structure to an array
with the same dimensions."
-  [s]
-  (validate! ::non-empty-seq s "Invalid N-D sequence")
-  (if (sequential? (first s))
-(to-array (mapv to-array-nd s))
-(to-array s)))
+  [nd-seq]
+  (validate! ::non-empty-seq nd-seq "Invalid N-D sequence")
+  (if (sequential? (first nd-seq))
+(to-array (mapv to-array-nd nd-seq))
+(to-array nd-seq)))
+
+(defn nd-seq-shape
 
 Review comment:
   Nice!




[incubator-mxnet] branch master updated: [clojure-package][wip] add `->nd-vec` function in `ndarray.clj` (#14308)

2019-03-10 Thread cmeier
This is an automated email from the ASF dual-hosted git repository.

cmeier pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8be97d7  [clojure-package][wip] add `->nd-vec` function in 
`ndarray.clj` (#14308)
8be97d7 is described below

commit 8be97d7a79f9ea9815e41956e5f15ddcf25026b6
Author: Arthur Caillau 
AuthorDate: Mon Mar 11 00:46:50 2019 +0100

[clojure-package][wip] add `->nd-vec` function in `ndarray.clj` (#14308)

* [clojure-package][wip] add `->nd-vec` function in `ndarray.clj`

* WIP
* Unit tests need to be added

* [clojure-package][ndarray] add unit tests for `->nd-vec` util fn
---
 .../src/org/apache/clojure_mxnet/ndarray.clj   | 58 +++---
 .../test/org/apache/clojure_mxnet/ndarray_test.clj | 12 +
 2 files changed, 64 insertions(+), 6 deletions(-)

diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/ndarray.clj 
b/contrib/clojure-package/src/org/apache/clojure_mxnet/ndarray.clj
index 651bdcb..151e18b 100644
--- a/contrib/clojure-package/src/org/apache/clojure_mxnet/ndarray.clj
+++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/ndarray.clj
@@ -16,15 +16,18 @@
 ;;
 
 (ns org.apache.clojure-mxnet.ndarray
+  "NDArray API for Clojure package."
   (:refer-clojure :exclude [* - + > >= < <= / cast concat flatten identity 
load max
 min repeat reverse set sort take to-array empty 
shuffle
 ref])
-  (:require [org.apache.clojure-mxnet.base :as base]
-[org.apache.clojure-mxnet.context :as mx-context]
-[org.apache.clojure-mxnet.shape :as mx-shape]
-[org.apache.clojure-mxnet.util :as util]
-[clojure.reflect :as r]
-[t6.from-scala.core :refer [$] :as $])
+  (:require
+[clojure.spec.alpha :as s]
+
+[org.apache.clojure-mxnet.base :as base]
+[org.apache.clojure-mxnet.context :as mx-context]
+[org.apache.clojure-mxnet.shape :as mx-shape]
+[org.apache.clojure-mxnet.util :as util]
+[t6.from-scala.core :refer [$] :as $])
   (:import (org.apache.mxnet NDArray)))
 
 ;; loads the generated functions into the namespace
@@ -167,3 +170,46 @@
 
 (defn shape-vec [ndarray]
   (mx-shape/->vec (shape ndarray)))
+
+(s/def ::ndarray #(instance? NDArray %))
+(s/def ::vector vector?)
+(s/def ::sequential sequential?)
+(s/def ::shape-vec-match-vec
+  (fn [[v vec-shape]] (= (count v) (reduce clojure.core/* 1 vec-shape))))
+
+(s/fdef vec->nd-vec
+:args (s/cat :v ::sequential :shape-vec ::sequential)
+:ret ::vector)
+
+(defn- vec->nd-vec
+  "Convert a vector `v` into a n-dimensional vector given the `shape-vec`
+   Ex:
+(vec->nd-vec [1 2 3] [1 1 3])   ;[[[1 2 3]]]
+(vec->nd-vec [1 2 3 4 5 6] [2 3 1]) ;[[[1] [2] [3]] [[4] [5] [6]]]
+(vec->nd-vec [1 2 3 4 5 6] [1 2 3]) ;[[[1 2 3] [4 5 6]]]
+(vec->nd-vec [1 2 3 4 5 6] [3 1 2]) ;[[[1 2]] [[3 4]] [[5 6]]]
+(vec->nd-vec [1 2 3 4 5 6] [3 2])   ;[[1 2] [3 4] [5 6]]"
+  [v [s1 & ss :as shape-vec]]
+  (util/validate! ::sequential v "Invalid input vector `v`")
+  (util/validate! ::sequential shape-vec "Invalid input vector `shape-vec`")
+  (util/validate! ::shape-vec-match-vec
+  [v shape-vec]
+  "Mismatch between vector `v` and vector `shape-vec`")
+  (if-not (seq ss)
+(vec v)
+(->> v
+ (partition (clojure.core// (count v) s1))
+ vec
+ (mapv #(vec->nd-vec % ss)))))
+
+(s/fdef ->nd-vec :args (s/cat :ndarray ::ndarray) :ret ::vector)
+
+(defn ->nd-vec
+  "Convert an ndarray `ndarray` into a n-dimensional Clojure vector.
+  Ex:
+(->nd-vec (array [1] [1 1 1]))   ;[[[1.0]]]
+(->nd-vec (array [1 2 3] [3 1 1]))   ;[[[1.0]] [[2.0]] [[3.0]]]
+(->nd-vec (array [1 2 3 4 5 6] [3 1 2])) ;[[[1.0 2.0]] [[3.0 4.0]] [[5.0 
6.0]]]"
+  [ndarray]
+  (util/validate! ::ndarray ndarray "Invalid input array")
+  (vec->nd-vec (->vec ndarray) (shape-vec ndarray)))
diff --git 
a/contrib/clojure-package/test/org/apache/clojure_mxnet/ndarray_test.clj 
b/contrib/clojure-package/test/org/apache/clojure_mxnet/ndarray_test.clj
index 9ffd3ab..a9ae296 100644
--- a/contrib/clojure-package/test/org/apache/clojure_mxnet/ndarray_test.clj
+++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/ndarray_test.clj
@@ -473,3 +473,15 @@
 (is (= [2 2] (ndarray/->int-vec nda)))
 (is (= [2.0 2.0] (ndarray/->double-vec nda)))
 (is (= [(byte 2) (byte 2)] (ndarray/->byte-vec nda)))))
+
+(deftest test->nd-vec
+  (is (= [[[1.0]]]
+         (ndarray/->nd-vec (ndarray/array [1] [1 1 1]))))
+  (is (= [[[1.0]] [[2.0]] [[3.0]]]
+         (ndarray/->nd-vec (ndarray/array [1 2 3] [3 1 1]))))
+  (is (= [[[1.0 2.0]] [[3.0 4.0]] [[5.0 6.0]]]
+         (ndarray/->nd-vec (ndarray/array [1 2 3 4 5 6] [3 1 2]))))
+  (is (= [[[1.0] [2.0]] [[3.0] [4.0]] [[5.0] [6.0]]]
+         (ndarray/->nd-vec (ndarray/array [1 2 3 4 5 6] [3 2 1])))))
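
For readers following along in another language, the `vec->nd-vec` recursion in the diff above (split the flat vector by the leading shape dimension, then recurse on the rest of the shape) can be sketched in plain Python; `nd_vec` and `_prod` are hypothetical names for illustration only:

```python
def _prod(xs):
    # Product of the shape dimensions (total element count).
    out = 1
    for x in xs:
        out *= x
    return out

def nd_vec(flat, shape):
    """Nest a flat list into the given shape by recursively splitting
    on the leading dimension (mirrors vec->nd-vec in ndarray.clj)."""
    assert len(flat) == _prod(shape), "shape does not match element count"
    if len(shape) == 1:
        return list(flat)
    head, *rest = shape
    step = len(flat) // head  # elements per slice along the leading axis
    return [nd_vec(flat[i * step:(i + 1) * step], rest) for i in range(head)]

print(nd_vec([1, 2, 3, 4, 5, 6], [3, 2]))  # [[1, 2], [3, 4], [5, 6]]
```

The partition size at each level is `len(flat) // head`, exactly the `(partition (/ (count v) s1))` step in the Clojure version.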

[GitHub] [incubator-mxnet] gigasquid merged pull request #14308: [clojure-package] add `->nd-vec` function in `ndarray.clj`

2019-03-10 Thread GitBox
gigasquid merged pull request #14308: [clojure-package] add `->nd-vec` function 
in `ndarray.clj`
URL: https://github.com/apache/incubator-mxnet/pull/14308
 
 
   




[GitHub] [incubator-mxnet] gigasquid commented on issue #14308: [clojure-package] add `->nd-vec` function in `ndarray.clj`

2019-03-10 Thread GitBox
gigasquid commented on issue #14308: [clojure-package] add `->nd-vec` function 
in `ndarray.clj`
URL: https://github.com/apache/incubator-mxnet/pull/14308#issuecomment-471366487
 
 
   Thanks for your contribution @Chouffe  




[GitHub] [incubator-mxnet] perdasilva commented on issue #14336: CI Changes for Codified Windows AMIs

2019-03-10 Thread GitBox
perdasilva commented on issue #14336: CI Changes for Codified Windows AMIs
URL: https://github.com/apache/incubator-mxnet/pull/14336#issuecomment-471356456
 
 
   @marcoabreu please review and merge if it's ok. Thank you!




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-10 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a645d3b  Bump the publish timestamp.
a645d3b is described below

commit a645d3b810cb5f80dbd6b402694fe70fde3831ef
Author: mxnet-ci 
AuthorDate: Sun Mar 10 20:50:41 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..3bc06f9
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Mar 10 20:50:41 UTC 2019



[GitHub] [incubator-mxnet] zboldyga edited a comment on issue #14360: supporting matrix inversion and determinant

2019-03-10 Thread GitBox
zboldyga edited a comment on issue #14360: supporting matrix inversion and 
determinant 
URL: 
https://github.com/apache/incubator-mxnet/issues/14360#issuecomment-471338317
 
 
   @ketranm 
   
   Looks like inversion via Cholesky factorization is supported, and there's 
also an API handle for getting the inversion using that factorization:
   
   https://mxnet.apache.org/api/python/ndarray/linalg.html#linear-algebra
   
   potrf - get the Cholesky factorization (triangular matrix)
   potri - calculate inversion 
   sumlogdiag - *may* be useful for calculating logdeterminant (my linear 
algebra is a little rusty)
   
   There's no shortcut for getting the determinant or log determinant, but 
these are simple ops on top of the Cholesky factorization. 
   
   It seems to me that all of this should be clarified in the documentation at 
a minimum, and we should probably add API calls for det and logdet. I've also 
made a request to have a single 'inverse' operation as in Torch. I opened a 
JIRA ticket and will start implementing these as soon as someone more internal 
to the MXNet project signs off!
   




[GitHub] [incubator-mxnet] zboldyga commented on issue #14360: supporting matrix inversion and determinant

2019-03-10 Thread GitBox
zboldyga commented on issue #14360: supporting matrix inversion and determinant 
URL: 
https://github.com/apache/incubator-mxnet/issues/14360#issuecomment-471338317
 
 
   @ketranm 
   
   Looks like inversion via Cholesky factorization is supported, and there's 
also an API handle for getting the inversion using that factorization:
   
   https://mxnet.apache.org/api/python/ndarray/linalg.html#linear-algebra
   
   potrf - get the Cholesky factorization (triangular matrix)
   potri - calculate inversion 
   sumlogdiag - *may* be useful for calculating logdeterminant (my linear 
algebra is a little rusty)
   
   There's no shortcut for getting the determinant or log determinant, but 
these are simple ops on top of the Cholesky factorization. 
   
   It seems to me that all of this should be clarified in the documentation at 
a minimum, and we should probably add API calls for det and logdet. I opened 
a JIRA ticket and will start implementing these as soon as someone more 
internal to the MXNet project signs off!
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-10 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d21eb5b  Bump the publish timestamp.
d21eb5b is described below

commit d21eb5bf919345697544e399dc31e55c73d5ad48
Author: mxnet-ci 
AuthorDate: Sun Mar 10 19:18:15 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..31a66cb
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Mar 10 19:18:15 UTC 2019



[GitHub] [incubator-mxnet] Chouffe commented on issue #14308: [clojure-package][wip] add `->nd-vec` function in `ndarray.clj`

2019-03-10 Thread GitBox
Chouffe commented on issue #14308: [clojure-package][wip] add `->nd-vec` 
function in `ndarray.clj`
URL: https://github.com/apache/incubator-mxnet/pull/14308#issuecomment-471333656
 
 
   Unit tests added and all checks have passed!




[GitHub] [incubator-mxnet] turtleizzy edited a comment on issue #13949: Error: shape inconsistent while converting PyTorch model to mxnet model with onnx

2019-03-10 Thread GitBox
turtleizzy edited a comment on issue #13949: Error: shape inconsistent while 
converting PyTorch model to mxnet model with onnx
URL: 
https://github.com/apache/incubator-mxnet/issues/13949#issuecomment-471307657
 
 
   > @wangliye00 @Con-Mi However, I do see the error that you facing with MXNet 
v1.3.1. For fixing this, could you try pulling in the commit #13413 and 
checking if you are able to proceed further?
   
   I am facing a similar issue when loading a pytorch-densenet onnx model into 
mxnet. The error message reads:
   
   ```
   
/usr/local/lib/python3.6/site-packages/mxnet/contrib/onnx/onnx2mx/import_onnx.py
 in _convert_operator(self, node_name, op_name, attrs, inputs)
59 """
60 if op_name in convert_map:
   ---> 61 op_name, new_attrs, inputs = convert_map[op_name](attrs, 
inputs, self)
62 else:
63 raise NotImplementedError("Operator {} not 
implemented.".format(op_name))
   
   
/usr/local/lib/python3.6/site-packages/mxnet/contrib/onnx/onnx2mx/_op_translations.py
 in reshape(attrs, inputs, proto_obj)
   432 if len(inputs) == 1:
   433 return 'reshape', attrs, inputs[0]
   --> 434 reshape_shape = list(proto_obj._params[inputs[1].name].asnumpy())
   435 reshape_shape = [int(i) for i in reshape_shape]
   436 new_attrs = {'shape': reshape_shape}
   
   KeyError: 'concat51'
   ```
   
I tried mxnet 1.3.1 (after patching `import_onnx.py` following your 
suggestion) and 1.4.0 with no luck; both raised a similar exception.
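
   For context, the KeyError comes from an unconditional dictionary lookup 
(`proto_obj._params[inputs[1].name]`): the reshape target `concat51` is 
produced at runtime by a Concat node rather than stored as a constant 
initializer, so it is absent from `_params`. A hedged pure-Python sketch of 
that failure mode and a clearer lookup (names are illustrative, not the actual 
mxnet internals):

```python
def reshape_target(params, input_name):
    """Look up a reshape shape among the graph's constant initializers; raise
    a descriptive error when the shape is computed at runtime (e.g. by a
    Concat node) instead of being a stored constant."""
    if input_name not in params:
        raise NotImplementedError(
            "reshape target '%s' is not a constant initializer; "
            "dynamically computed shapes are not supported by this importer"
            % input_name)
    return [int(i) for i in params[input_name]]

params = {"shape0": [2.0, 3.0]}  # hypothetical initializer table
print(reshape_target(params, "shape0"))  # [2, 3]
```

A guard like this would turn the opaque `KeyError: 'concat51'` into an error that names the unsupported dynamic-reshape pattern.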








[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-10 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new eae7471  Bump the publish timestamp.
eae7471 is described below

commit eae74713a499093dffe39eaad39cf557d2efb583
Author: mxnet-ci 
AuthorDate: Sun Mar 10 13:18:37 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..0110165
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Mar 10 13:18:37 UTC 2019



[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #14335: [MKLDNN] Question on installation and use of MKLDNN

2019-03-10 Thread GitBox
pengzhao-intel commented on issue #14335: [MKLDNN] Question on installation and 
use of MKLDNN
URL: 
https://github.com/apache/incubator-mxnet/issues/14335#issuecomment-471278086
 
 
   + @juliusshufan to track the Windows-related build issues
   
   @mxnet-label-bot Add [Windows]
   




[GitHub] [incubator-mxnet] stereomatchingkiss commented on issue #14380: fix type mismatch bugs

2019-03-10 Thread GitBox
stereomatchingkiss commented on issue #14380: fix type mismatch bugs
URL: https://github.com/apache/incubator-mxnet/pull/14380#issuecomment-471269805
 
 
   > And I think you could merge the PR into the master branch, thank you!
   
   There are two problems stopping me from doing that:
   
   1. Since MXNet 1.4.0, there are even more bugs when you try to compile it 
on Windows. I can't tell whether those bugs are caused by this pull request, 
because new bugs exist in the master branch (please check #14203).
   2. Could you tell me why Jenkins complains?
   
   Thanks




[GitHub] [incubator-mxnet] stereomatchingkiss commented on issue #14370: Add travis build for different platforms

2019-03-10 Thread GitBox
stereomatchingkiss commented on issue #14370: Add travis build for different 
platforms
URL: 
https://github.com/apache/incubator-mxnet/issues/14370#issuecomment-471269478
 
 
   > Or are the Jenkins CI testing out various platform sufficient for now ?
   
   I hope it is enough too, but it is not: there are many bugs when you try to 
build MXNet on Windows, especially when you try to build with Intel MKL. 
Please check issues #14343 and #14364.




[GitHub] [incubator-mxnet] peterpaniff opened a new issue #14384: try to train ssdlite mobilenetv2, encounter the error.

2019-03-10 Thread GitBox
peterpaniff opened a new issue #14384: try to train ssdlite mobilenetv2, 
encounter the error.
URL: https://github.com/apache/incubator-mxnet/issues/14384
 
 
   I followed the instructions. The code below was added to 
`example/ssd/symbol/symbol_factory.py`:
   
   ```
   elif network == 'mobilenet_v2':
       image_shape = '3,224,224'
       network = 'mobilenet_v2'
       from_layers = ['relu6_1_expand', 'relu6_4', '', '', '', '']
       num_filters = [-1, -1, 512, 256, 256, 128]
       strides = [-1, -1, 2, 2, 2, 2]
       pads = [-1, -1, 1, 1, 1, 1]
       sizes = [[.1, .141], [.2, .272], [.37, .447], [.54, .619], [.71, .79], [.88, .961]]
       ratios = [[1, 2, .5], [1, 2, .5, 3, 1. / 3], [1, 2, .5, 3, 1. / 3],
                 [1, 2, .5, 3, 1. / 3], [1, 2, .5], [1, 2, .5]]
       normalizations = -1
       steps = []
       return locals()
   ```
   
   But when I train the model, I encounter the error below:
   
   ```
   TypeError: get_symbol() got an unexpected keyword argument 'image_shape'
   ```
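   That error generally means the network's `get_symbol()` signature does not 
accept a keyword argument that the factory forwards. A minimal, self-contained 
sketch of the mismatch and one common workaround (hypothetical function names, 
not the actual `symbol/mobilenet_v2.py` code), accepting and ignoring extra 
keyword arguments via `**kwargs`:
   
   ```python
   def get_symbol_strict(num_classes=1000):
       # A symbol builder that does not expect factory kwargs such as
       # image_shape -> calling it with one raises TypeError.
       return num_classes
   
   def get_symbol_tolerant(num_classes=1000, **kwargs):
       # One common workaround: accept and ignore extra factory kwargs.
       return num_classes
   
   try:
       get_symbol_strict(num_classes=20, image_shape='3,224,224')
   except TypeError as err:
       print(err)  # ...got an unexpected keyword argument 'image_shape'
   
   assert get_symbol_tolerant(num_classes=20, image_shape='3,224,224') == 20
   ```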

