[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #17735: Fix OpPerf in Master

2020-03-03 Thread GitBox
apeforest commented on a change in pull request #17735: Fix OpPerf in Master
URL: https://github.com/apache/incubator-mxnet/pull/17735#discussion_r387498647
 
 

 ##
 File path: benchmark/opperf/rules/default_params.py
 ##
 @@ -431,7 +433,7 @@
"delta" : DEFAULT_DELTA,
"lr" : DEFAULT_LR,
"lrs" : DEFAULT_LRS,
-   "wds" : DEFAULT_LRS,
+   "wd" : DEFAULT_LR,
 
 Review comment:
  Yes, I think it's better to create a new var even if its value is the same 
as another var. It improves readability.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #16535: CI Travis time out

2020-03-03 Thread GitBox
leezu commented on issue #16535: CI Travis time out 
URL: 
https://github.com/apache/incubator-mxnet/issues/16535#issuecomment-594371552
 
 
   I enabled the unittests in the osx github actions build in my fork. Runs 
mostly fine: 
   
https://github.com/leezu/mxnet/commit/e32e82a6551da0bd5a8ec2847ae10556039c8bda/checks/484044519/logs
   
   It does expose the problem that in the staticbuild `libcustomop_lib.so` and 
`libsubgraph_lib.so` are not found, failing the respective tests.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17043: Segmentation fault: 11

2020-03-03 Thread GitBox
leezu commented on issue #17043: Segmentation fault: 11
URL: 
https://github.com/apache/incubator-mxnet/issues/17043#issuecomment-594367715
 
 
   Seeing the issue again in 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-17751/4/pipeline
   
   It's the same pipeline @szha reported as failing above.
   
   That pipeline runs the following build
   
   
https://github.com/apache/incubator-mxnet/blob/5cffa744859658d8192041eafcdcfcf176d27482/ci/docker/runtime_functions.sh#L762-L779
   
   The build log associated with the build used for above failing pipeline is 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-17751/4/pipeline/51,
 specifically 
http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/mxnet-validation/pipelines/unix-gpu/branches/PR-17751/runs/4/nodes/51/steps/294/log/?start=0
   
   There are a couple of interesting points about this build and failure:
   1) The build is unrelated to LLVM OpenMP, since our Makefile build does not support LLVM OpenMP.
   2) The build does not use jemalloc.
   
   So I think we can conclude that the issue is not with jemalloc; rather, there is an underlying MXNet bug, and building with jemalloc and OpenMP just makes the bug much easier to reproduce.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ciyongch commented on issue #17707: [MKLDNN] Remove overhead of sg_mkldnn_fullyconnected op

2020-03-03 Thread GitBox
ciyongch commented on issue #17707: [MKLDNN] Remove overhead of 
sg_mkldnn_fullyconnected op
URL: https://github.com/apache/incubator-mxnet/pull/17707#issuecomment-594355586
 
 
   @TaoLv @eric-haibin-lin please take a look. CI is not that stable...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on issue #17751: Fix MKL static link & default to static link on unix

2020-03-03 Thread GitBox
TaoLv commented on issue #17751: Fix MKL static link & default to static link 
on unix
URL: https://github.com/apache/incubator-mxnet/pull/17751#issuecomment-594354616
 
 
   @leezu We're working on a proposal for the build logic of DNNL/MKL related stuff. You can decide to merge this change now if it's urgent, or wait until the proposal is ready. Thanks! @pengzhao-intel @zixuanweeei 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 46d23d8  Bump the publish timestamp.
46d23d8 is described below

commit 46d23d835c7d4f97805f3e4b267c43eb971491f8
Author: mxnet-ci 
AuthorDate: Wed Mar 4 06:43:01 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..570f57b
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Mar  4 06:43:01 UTC 2020



[GitHub] [incubator-mxnet] Yiyan66 opened a new pull request #17756: [numpy] FFI random.exponential, logistic, gumbel, rayleigh

2020-03-03 Thread GitBox
Yiyan66 opened a new pull request #17756: [numpy] FFI random.exponential, 
logistic, gumbel, rayleigh
URL: https://github.com/apache/incubator-mxnet/pull/17756
 
 
   ## Description ##
FFI random.exponential, random.logistic, random.gumbel, random.rayleigh
   
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16858: Cannot load trainer with AMP

2020-03-03 Thread GitBox
eric-haibin-lin commented on issue #16858: Cannot load trainer with AMP
URL: 
https://github.com/apache/incubator-mxnet/issues/16858#issuecomment-594353119
 
 
   @ptrendx 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (21b7fc0 -> 5cffa74)

2020-03-03 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 21b7fc0  Add 1.6.0 to the list of signed download links (#17753)
 add 5cffa74  [Large Tensor] Fixed RNN op (#17632)

No new revisions were added by this update.

Summary of changes:
 src/operator/rnn-inl.h|  40 ++--
 src/operator/rnn_impl.h   | 408 +++---
 tests/nightly/test_large_array.py |  36 +++-
 3 files changed, 259 insertions(+), 225 deletions(-)



[GitHub] [incubator-mxnet] apeforest merged pull request #17632: [Large Tensor] Fixed RNN op

2020-03-03 Thread GitBox
apeforest merged pull request #17632: [Large Tensor] Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (af322dc -> 21b7fc0)

2020-03-03 Thread ptrendx
This is an automated email from the ASF dual-hosted git repository.

ptrendx pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from af322dc  [CI] Test gcc8 -WError build CI (#17752)
 add 21b7fc0  Add 1.6.0 to the list of signed download links (#17753)

No new revisions were added by this update.

Summary of changes:
 docs/static_site/src/pages/get_started/download.md | 1 +
 1 file changed, 1 insertion(+)



[GitHub] [incubator-mxnet] ptrendx merged pull request #17753: Add 1.6.0 to the list of signed download links

2020-03-03 Thread GitBox
ptrendx merged pull request #17753: Add 1.6.0 to the list of signed download 
links
URL: https://github.com/apache/incubator-mxnet/pull/17753
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on issue #17741: bump up 1.x branch to 1.7.0

2020-03-03 Thread GitBox
szha commented on issue #17741: bump up 1.x branch to 1.7.0
URL: https://github.com/apache/incubator-mxnet/pull/17741#issuecomment-594331329
 
 
   @leezu I realized it afterwards and cherry-picked the change to 1.x. I think the problem here is that there's no new Scala MXNet release yet. @lanking520 could you look into the status of the build?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu opened a new issue #17755: Java and Scala nightly tests broken

2020-03-03 Thread GitBox
leezu opened a new issue #17755: Java and Scala nightly tests broken 
URL: https://github.com/apache/incubator-mxnet/issues/17755
 
 
   ```
   [2020-03-03T19:53:00.983Z] [ERROR] Failed to execute goal on project 
mxnet-scala-demo: Could not resolve dependencies for project 
Demo:mxnet-scala-demo:pom:1.0-SNAPSHOT: Failed to collect dependencies at 
org.apache.mxnet:mxnet-full_2.11-linux-x86_64-cpu:jar:[1.7.0-SNAPSHOT,): No 
versions available for 
org.apache.mxnet:mxnet-full_2.11-linux-x86_64-cpu:jar:[1.7.0-SNAPSHOT,) within 
specified range -> [Help 1]
   ```
   
   https://github.com/apache/incubator-mxnet/pull/17741


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17741: bump up 1.x branch to 1.7.0

2020-03-03 Thread GitBox
leezu commented on issue #17741: bump up 1.x branch to 1.7.0
URL: https://github.com/apache/incubator-mxnet/pull/17741#issuecomment-594329103
 
 
   @szha Seems this was merged to the wrong branch and broke nightly tests
   
   ```
   [2020-03-03T19:53:00.983Z] [ERROR] Failed to execute goal on project 
mxnet-scala-demo: Could not resolve dependencies for project 
Demo:mxnet-scala-demo:pom:1.0-SNAPSHOT: Failed to collect dependencies at 
org.apache.mxnet:mxnet-full_2.11-linux-x86_64-cpu:jar:[1.7.0-SNAPSHOT,): No 
versions available for 
org.apache.mxnet:mxnet-full_2.11-linux-x86_64-cpu:jar:[1.7.0-SNAPSHOT,) within 
specified range -> [Help 1]
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (2abd225 -> af322dc)

2020-03-03 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 2abd225  Update website, README and NEWS with 1.6.0 (#17658)
 add af322dc  [CI] Test gcc8 -WError build CI (#17752)

No new revisions were added by this update.

Summary of changes:
 3rdparty/dmlc-core |  2 +-
 3rdparty/mshadow/mshadow/tensor_cpu-inl.h  |  5 
 CMakeLists.txt |  4 +++
 ci/docker/runtime_functions.sh | 31 ++
 ci/jenkins/Jenkins_steps.groovy| 14 ++
 ci/jenkins/Jenkinsfile_miscellaneous   |  5 ++--
 src/common/utils.h |  5 
 src/operator/contrib/boolean_mask-inl.h|  5 
 src/operator/contrib/boolean_mask.cc   |  5 
 .../contrib/deformable_psroi_pooling-inl.h |  6 ++---
 src/operator/contrib/index_copy.cc | 10 +++
 src/operator/numpy/np_ediff1d_op-inl.h |  5 
 src/operator/numpy/np_unique_op.cc |  5 
 src/operator/random/shuffle_op.cc  |  5 
 src/operator/rnn_impl.h|  7 -
 src/operator/tensor/indexing_op.cc |  5 
 16 files changed, 95 insertions(+), 24 deletions(-)



[GitHub] [incubator-mxnet] leezu edited a comment on issue #17708: Silence all compiler warnings when build

2020-03-03 Thread GitBox
leezu edited a comment on issue #17708: Silence all compiler warnings when build
URL: 
https://github.com/apache/incubator-mxnet/issues/17708#issuecomment-594127873
 
 
   See https://github.com/apache/incubator-mxnet/pull/17752 for fixing the CPU 
build.
   There are still remaining warnings affecting the cuda build


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu merged pull request #17752: Test gcc8 -WError build on CI

2020-03-03 Thread GitBox
leezu merged pull request #17752: Test gcc8 -WError build on CI
URL: https://github.com/apache/incubator-mxnet/pull/17752
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] konioyxgq commented on issue #13219: using C++ interface on CPU to extract feature vector, delete NDArray::WaitAll() generate incorrect result

2020-03-03 Thread GitBox
konioyxgq commented on issue #13219: using C++  interface on CPU to extract 
feature vector, delete NDArray::WaitAll() generate incorrect result
URL: 
https://github.com/apache/incubator-mxnet/issues/13219#issuecomment-594319306
 
 
   @WANG-MengJiao Please


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387423786
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -1102,7 +1183,12 @@ extern "C" {
   const int64_t** outshapes, int* outdims, void** outdata, 
int* outtypes,
   size_t* outIDs, const char** outdev_type, int* outdev_id, 
int num_out,
   xpu_malloc_t cpu_malloc, void* cpu_alloc,
-  xpu_malloc_t gpu_malloc, void* gpu_alloc, void* cuda_stream) 
{
+  xpu_malloc_t gpu_malloc, void* gpu_alloc, void* cuda_stream,
+  void** in_indices, void** in_indptr, 
+  int64_t* in_indices_shapes, int64_t* in_indptr_shapes,
+  std::vector>& tmp_data,
 
 Review comment:
   cannot pass objects across the ABI boundary.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387423786
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -1102,7 +1183,12 @@ extern "C" {
   const int64_t** outshapes, int* outdims, void** outdata, 
int* outtypes,
   size_t* outIDs, const char** outdev_type, int* outdev_id, 
int num_out,
   xpu_malloc_t cpu_malloc, void* cpu_alloc,
-  xpu_malloc_t gpu_malloc, void* gpu_alloc, void* cuda_stream) 
{
+  xpu_malloc_t gpu_malloc, void* gpu_alloc, void* cuda_stream,
+  void** in_indices, void** in_indptr, 
+  int64_t* in_indices_shapes, int64_t* in_indptr_shapes,
+  std::vector>& tmp_data,
 
 Review comment:
   cannot pass objects across the ABI boundary. Otherwise it would require the same version of the standard library in both MXNet and the custom library.
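
   As a rough illustration of that principle (a sketch only, not the actual lib_api.h signature; the function and parameter names below are made up), an `extern "C"` entry point crossing the library boundary takes plain pointers and lengths, while `std::vector` stays on one side:
   
   ```
   #include <cstdint>
   #include <vector>
   
   // Not ABI-safe: a std::vector's layout depends on the standard library and
   // compiler used on each side of the boundary.
   //   extern "C" void compute(std::vector<int64_t>& shapes);   // avoid
   
   // ABI-safe sketch: raw pointer plus explicit length.
   extern "C" int64_t sum_shapes(const int64_t* shapes, int64_t num_shapes) {
     int64_t total = 0;
     for (int64_t i = 0; i < num_shapes; ++i)
       total += shapes[i];
     return total;
   }
   
   // Inside either library, C++ containers can still be used and exposed as pointers.
   int64_t caller_example() {
     std::vector<int64_t> shapes = {2, 3, 4};
     return sum_shapes(shapes.data(), static_cast<int64_t>(shapes.size()));
   }
   ```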


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387423284
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -222,9 +257,27 @@ void CustomFComputeDispatcher(const std::string op_name,
 out_shapes.data(), out_dims.data(), 
out_data.data(), out_types.data(),
 out_verIDs.data(), out_dev_type.data(), 
out_dev_id.data(),
 out_data.size(),
-cpu_malloc, _alloc, gpu_malloc, _alloc, 
cuda_stream))
+cpu_malloc, _alloc, gpu_malloc, _alloc, 
cuda_stream,
+in_indices.data(), in_indptr.data(),
+in_indices_shapes.data(), in_indptr_shapes.data(),
+tmp_data, col_index, row_ptr))
   << "Error calling FStatefulCompute for custom operator '" << op_name << 
"'";
   }
+  
+  // Alloc space for sparse output and copy data to saprse NDArray.
 
 Review comment:
   saprse ==> sparse


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on issue #17557: test_batchnorm is failing for cu101/cu101mkl/cu102 builds in CD

2020-03-03 Thread GitBox
TaoLv commented on issue #17557: test_batchnorm is failing for 
cu101/cu101mkl/cu102 builds in CD
URL: 
https://github.com/apache/incubator-mxnet/issues/17557#issuecomment-594288987
 
 
   Seems it's also failing in CI:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-gpu/detail/PR-17313/18/pipeline


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387416093
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -229,20 +241,75 @@ enum MXReturnValue {
   MX_SUCCESS = 1,
 };
 
+// For sparse input, read/write the data from NDarray via pointers.
+struct MXInSparse {
+  // Pointer to data.
+  void *data{nullptr};
+  // length of (non-zero) data.
+  int64_t data_len;
+
+  // To store aux data for sparse.
+  // For CSR, indices stores the col index of non-zero values.
+  // For row sparse, indices store row index of rows which have non-zero 
values.
+  int64_t* indices;
+  int64_t indices_len;
+
+  // For CSR, indptr gives the start and end index of data for each row.
+  // For row sparse, indptr is empty. 
+  int64_t* indptr;
+  int64_t indptr_len;
+
+  void set(void *data_ptr, const int64_t* dims, int ndims, void *idx,
+  int64_t num_idx, void *idx_ptr = nullptr, int64_t num_idx_ptr = 0) {
+data = data_ptr;
+// If CSR, num of non-zero value is num_idx,
+// If row sparse, num of value is num_idx * width.
+data_len = idx_ptr ? num_idx : num_idx * dims[1];
 
 Review comment:
   Does `num_idx * dims[1];` only work for 2D arrays? According to the MXNet 
docs:
   > `indices` array stores the row index for each row slice with non-zero 
elements.
   
   `indices` points to the first dimensions that have non-zero elements. Wouldn't we need to multiply together the `dims` from dimension[1] to dimension[end] and multiply that by `num_idx`?
   
https://github.com/deepinsight/mxnet/blob/master/docs/tutorials/sparse/row_sparse.md#row-sparse-format
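
   A minimal sketch of the N-dimensional generalization being asked about (not code from the PR; `dims`, `ndims`, and `num_idx` mirror the parameters of the `set()` function quoted above):
   
   ```
   #include <cstdint>
   
   // Number of stored elements in a row-sparse tensor: num_idx stored row slices,
   // each holding the product of all trailing dimensions (dims[1] * ... * dims[ndims-1]).
   inline int64_t RowSparseDataLen(const int64_t* dims, int ndims, int64_t num_idx) {
     int64_t slice_size = 1;
     for (int d = 1; d < ndims; ++d)
       slice_size *= dims[d];
     return num_idx * slice_size;  // for a 2-D tensor this reduces to num_idx * dims[1]
   }
   ```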


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] tobecontinued opened a new pull request #17754: [MXNET-978] Higher Order Gradient Support power, elemwise_mul and elemwise_sub and add unit test function check_nth_order_binary

2020-03-03 Thread GitBox
tobecontinued opened a new pull request #17754: [MXNET-978] Higher Order 
Gradient Support power, elemwise_mul and elemwise_sub and add unit test 
function check_nth_order_binary
URL: https://github.com/apache/incubator-mxnet/pull/17754
 
 
   ## Description ##
   * add unittest function check_nth_order_binary
   * support higher order gradient for power, elemwise_mul and elemwise_sub.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] add unittest function check_nth_order_binary
   - [x] support higher order gradient for power,  elemwise_mul and 
elemwise_sub.
   - [x] unittest of higher order gradient of them.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387410144
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -229,20 +241,75 @@ enum MXReturnValue {
   MX_SUCCESS = 1,
 };
 
+// For sparse input, read/write the data from NDarray via pointers.
+struct MXInSparse {
+  // Pointer to data.
+  void *data{nullptr};
+  // length of (non-zero) data.
+  int64_t data_len;
+
+  // To store aux data for sparse.
+  // For CSR, indices stores the col index of non-zero values.
+  // For row sparse, indices store row index of rows which have non-zero 
values.
+  int64_t* indices;
+  int64_t indices_len;
+
+  // For CSR, indptr gives the start and end index of data for each row.
+  // For row sparse, indptr is empty. 
+  int64_t* indptr;
+  int64_t indptr_len;
+
+  void set(void *data_ptr, const int64_t* dims, int ndims, void *idx,
+  int64_t num_idx, void *idx_ptr = nullptr, int64_t num_idx_ptr = 0) {
+data = data_ptr;
+// If CSR, num of non-zero value is num_idx,
+// If row sparse, num of value is num_idx * width.
 
 Review comment:
   value ==> elements


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387409871
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -,16 +1197,49 @@ extern "C" {
 
 // create a vector of tensors for inputs
 std::vector inputs(num_in);
+// create a vector for sparse inputs
+std::vector in_sparse(num_in);
+
 for (int i = 0; i < num_in; i++) {
-  inputs[i].setTensor(indata[i], (MXDType)intypes[i], inshapes[i], 
indims[i],
-  inIDs[i], {indev_type[i], indev_id[i]});
+  // Dense representation.
+  if(!in_indices_shapes) {
 
 Review comment:
   dont we need to check this on each input? there could be a mix of 
sparse/dense


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387408282
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -114,13 +114,19 @@ void CustomFComputeDispatcher(const std::string op_name,
   const std::vector& req,
   const std::vector& outputs) {
   std::vector in_data, out_data;
-  std::vector in_shapes, out_shapes;
+  std::vector in_shapes, out_shapes;
   std::vector in_dims, out_dims;
   std::vector in_types, out_types;
   std::vector in_verIDs, out_verIDs;
   std::vector in_dev_type, out_dev_type;
   std::vector in_dev_id, out_dev_id;
 
+  // Extra data for sparse inputs.
+  std::vector in_indices;
+  std::vector in_indptr;
+  std::vector in_indices_shapes;
+  std::vector in_indptr_shapes;
 
 Review comment:
   let's just initialize these vectors to inputs.size(). Then it'll be easier to index into them with the same index as the tensor input.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387405373
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -,16 +1197,49 @@ extern "C" {
 
 // create a vector of tensors for inputs
 std::vector inputs(num_in);
+// create a vector for sparse inputs
+std::vector in_sparse(num_in);
 
 Review comment:
   This assumes we should allocate a sparse tensor for every input, before even 
checking if the input is sparse. Maybe theres a better way to do it


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387401876
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -,16 +1197,49 @@ extern "C" {
 
 // create a vector of tensors for inputs
 std::vector inputs(num_in);
+// create a vector for sparse inputs
+std::vector in_sparse(num_in);
+
 for (int i = 0; i < num_in; i++) {
-  inputs[i].setTensor(indata[i], (MXDType)intypes[i], inshapes[i], 
indims[i],
-  inIDs[i], {indev_type[i], indev_id[i]});
+  // Dense representation.
+  if(!in_indices_shapes) {
+inputs[i].setTensor(indata[i], (MXDType)intypes[i], inshapes[i], 
indims[i],
+inIDs[i], {indev_type[i], indev_id[i]}, 
kDefaultStorage);
+  }
+  // Sparse representation.
+  else {
+MXStorageType type;
+if(!in_indptr_shapes) {
+  type = kRowSparseStorage;
+  in_sparse[i].set(indata[i], inshapes[i], indims[i], in_indices[i], 
in_indices_shapes[i]);
+}
+else {
+  type = kCSRStorage;
+  in_sparse[i].set(indata[i], inshapes[i], indims[i], in_indices[i],
+   in_indices_shapes[i], in_indptr[i], 
in_indptr_shapes[i]);
+}
+inputs[i].setTensor((void*)(_sparse[i]), (MXDType)intypes[i], 
inshapes[i], indims[i],
+inIDs[i], {indev_type[i], indev_id[i]}, type);
+  }
 }
 
 // create a vector of tensors for outputs
 std::vector outputs(num_out);
+// create a vector for sparse outputs
+std::vector out_sparse;
 
 Review comment:
   This assumes we should allocate a sparse tensor for every input, before even 
checking if the input is sparse. Maybe theres a better way to do it 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387405083
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -229,20 +241,75 @@ enum MXReturnValue {
   MX_SUCCESS = 1,
 };
 
+// For sparse input, read/write the data from NDarray via pointers.
+struct MXInSparse {
+  // Pointer to data.
+  void *data{nullptr};
+  // length of (non-zero) data.
+  int64_t data_len;
+
+  // To store aux data for sparse.
+  // For CSR, indices stores the col index of non-zero values.
+  // For row sparse, indices store row index of rows which have non-zero 
values.
+  int64_t* indices;
+  int64_t indices_len;
+
+  // For CSR, indptr gives the start and end index of data for each row.
+  // For row sparse, indptr is empty. 
+  int64_t* indptr;
+  int64_t indptr_len;
+
+  void set(void *data_ptr, const int64_t* dims, int ndims, void *idx,
+  int64_t num_idx, void *idx_ptr = nullptr, int64_t num_idx_ptr = 0) {
+data = data_ptr;
+// If CSR, num of non-zero value is num_idx,
+// If row sparse, num of value is num_idx * width.
+data_len = idx_ptr ? num_idx : num_idx * dims[1];
+
+indices = (int64_t*)idx;
+indices_len = num_idx;
+
+if(idx_ptr) {
+  indptr = (int64_t*)idx_ptr;
 
 Review comment:
   let's initialize `indptr` to nullptr so that it's set to a well-defined value for row sparse
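
   A minimal sketch of that suggestion, assuming the struct keeps the fields quoted above (the name `MXSparseSketch` is made up): default member initializers leave `indptr` as nullptr and the lengths as 0 whenever `set()` does not assign them, which is the row-sparse case:
   
   ```
   #include <cstdint>
   
   struct MXSparseSketch {
     void*    data{nullptr};
     int64_t  data_len{0};
     int64_t* indices{nullptr};
     int64_t  indices_len{0};
     int64_t* indptr{nullptr};   // stays nullptr for row sparse; set() fills it only for CSR
     int64_t  indptr_len{0};
   };
   ```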


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387404776
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -229,20 +241,75 @@ enum MXReturnValue {
   MX_SUCCESS = 1,
 };
 
+// For sparse input, read/write the data from NDarray via pointers.
+struct MXInSparse {
+  // Pointer to data.
+  void *data{nullptr};
+  // length of (non-zero) data.
+  int64_t data_len;
+
+  // To store aux data for sparse.
+  // For CSR, indices stores the col index of non-zero values.
+  // For row sparse, indices store row index of rows which have non-zero 
values.
+  int64_t* indices;
+  int64_t indices_len;
+
+  // For CSR, indptr gives the start and end index of data for each row.
+  // For row sparse, indptr is empty. 
 
 Review comment:
   "indptr is empty" ==> "indptr is not used"


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support 
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387402708
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -1193,19 +1312,57 @@ extern "C" {
   const int64_t** outshapes, int* outdims, void** 
outdata, int* outtypes,
   size_t* outIDs, const char** outdev_type, int* 
outdev_id, int num_out,
   xpu_malloc_t cpu_malloc, void* cpu_alloc,
-  xpu_malloc_t gpu_malloc, void* gpu_alloc, void* 
stream) {
+  xpu_malloc_t gpu_malloc, void* gpu_alloc, void* 
stream,
+  void** in_indices, void** in_indptr, 
+  int64_t* in_indices_shapes, int64_t* 
in_indptr_shapes,
+  std::vector>& tmp_data,
+  std::vector>& col_idx,
+  std::vector>& row_ptr) {
 // create a vector of tensors for inputs
 std::vector inputs(num_in);
+// create a vector for sparse inputs
+std::vector in_sparse(num_in);
+
 for (int i = 0; i < num_in; i++) {
-  inputs[i].setTensor(indata[i], (MXDType)intypes[i], inshapes[i], 
indims[i],
-  inIDs[i], {indev_type[i], indev_id[i]});
+  // Dense representation.
+  if(!in_indices_shapes) {
+inputs[i].setTensor(indata[i], (MXDType)intypes[i], inshapes[i], 
indims[i],
+inIDs[i], {indev_type[i], indev_id[i]}, 
kDefaultStorage);
+  }
+  // Sparse representation.
+  else {
+MXStorageType type;
+if(!in_indptr_shapes) {
+  type = kRowSparseStorage;
+  in_sparse[i].set(indata[i], inshapes[i], indims[i], in_indices[i], 
in_indices_shapes[i]);
 
 Review comment:
   In c_api.cc you only push sparse in_indices conditionally:
   ```
   if(inputs[i].storage_type() == mxnet::kRowSparseStorage) {
 in_indices.push_back(inputs[i].aux_data(rowsparse::kIdx).dptr_);
 
in_indices_shapes.push_back(inputs[i].aux_shape(rowsparse::kIdx).Size());
   }
   ```
   But here you assume that the tensor index can be used to pull out entries. For cases where some tensors are sparse and others are dense this won't work, and you may get a segfault by indexing outside of the `in_indices` range. 
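
   One hedged sketch (not the PR's code; the types and names below are hypothetical) of how the aux arrays could stay aligned with the tensor index when dense and sparse inputs are mixed: size them to the number of inputs and keep nullptr/0 entries for dense tensors, so `in_indices[i]` always refers to input `i`:
   
   ```
   #include <cstdint>
   #include <vector>
   
   enum StorageKind { kDense, kRowSparse, kCSR };   // stand-ins for the MXNet storage types
   
   struct InputView {          // hypothetical per-input description
     StorageKind kind;
     int64_t*    indices;      // nullptr unless the input is sparse
     int64_t     indices_len;
   };
   
   // Build aux arrays of size inputs.size() so indexing by i never goes out of range.
   void BuildAuxArrays(const std::vector<InputView>& inputs,
                       std::vector<void*>* in_indices,
                       std::vector<int64_t>* in_indices_shapes) {
     const size_t num_in = inputs.size();
     in_indices->assign(num_in, static_cast<void*>(nullptr));  // dense inputs keep a placeholder
     in_indices_shapes->assign(num_in, 0);
     for (size_t i = 0; i < num_in; ++i) {
       if (inputs[i].kind != kDense) {
         (*in_indices)[i] = inputs[i].indices;
         (*in_indices_shapes)[i] = inputs[i].indices_len;
       }
     }
   }
   ```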


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #16535: CI Travis time out

2020-03-03 Thread GitBox
leezu commented on issue #16535: CI Travis time out 
URL: 
https://github.com/apache/incubator-mxnet/issues/16535#issuecomment-594253593
 
 
   Let's consider which tests to run in GitHub Actions. I see the following commented-out tests in the travis.yml:
   
   1) CoreML: `python2 -m nose --verbose tools/coreml/test --exclude-test=test_mxnet_image` @apeforest is this now tested elsewhere? Should an equivalent test for Python 3 be re-enabled?
   2) Python unittests: `python -m nose --verbose tests/python/unittest/`, though #12706 disabled a number of tests for speed reasons.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on issue #16535: CI Travis time out

2020-03-03 Thread GitBox
roywei commented on issue #16535: CI Travis time out 
URL: 
https://github.com/apache/incubator-mxnet/issues/16535#issuecomment-594248623
 
 
   Yes, I think Github Actions can replace Travis. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new e943862  Bump the publish timestamp.
e943862 is described below

commit e943862a0cfe981c97202659d52b636c2a009b2b
Author: mxnet-ci 
AuthorDate: Wed Mar 4 00:40:14 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..b603766
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Mar  4 00:40:14 UTC 2020



[GitHub] [incubator-mxnet] guanxinq commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
guanxinq commented on a change in pull request #17569: Adding sparse support to 
MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387360997
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -229,20 +241,53 @@ enum MXReturnValue {
   MX_SUCCESS = 1,
 };
 
+struct MXSparse {
+  // Pointer to data.
+  void *data{nullptr};
+  // length of (non-zero) data.
+  int64_t data_len;
+
+  // To store aux data for sparse.
+  // For CSR, indices stores the col index of non-zero values.
+  // For row sparse, indices store row index of rows which have non-zero 
values.
+  int64_t* indices;
+  int64_t indices_len;
+
+  // For CSR, indptr gives the start and end index of data for each row.
+  // For row sparse, indptr is empty. 
+  int64_t* indptr;
+  int64_t indptr_len;
+
+  void set(void *data_ptr, const int64_t* dims, int ndims, void *idx,
+  int64_t num_idx, void *idx_ptr = nullptr, int64_t num_idx_ptr = 0) {
+data = data_ptr;
+data_len = num_idx;
+
+indices = (int64_t*)idx;
+indices_len = num_idx;
+
+if(idx_ptr) {
+  indptr = (int64_t*)idx_ptr;
+  indptr_len = num_idx_ptr;
+}
+  }
 
 Review comment:
   Do you mean moving this function to MXTensor? Is it better to keep it in the MXSparse structure? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] guanxinq commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
guanxinq commented on a change in pull request #17569: Adding sparse support to 
MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387352445
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -,18 +1171,63 @@ extern "C" {
 
 // create a vector of tensors for inputs
 std::vector inputs(num_in);
+
+MXStorageType type;
+void *data = nullptr;
+void *data2 = nullptr;
+MXSparse sparse;
+MXSparse sparse2;
 for (int i = 0; i < num_in; i++) {
-  inputs[i].setTensor(indata[i], (MXDType)intypes[i], inshapes[i], 
indims[i],
-  inIDs[i], {indev_type[i], indev_id[i]});
+  // Dense representation.
+  if(!in_indices_shapes) {
+type = kDefaultStorage;
+   data = indata[i]; 
+  }
+  // Sparse representation.
+  else {
+// To do: remove if else.
+   if(!in_indptr_shapes) {
+  type = kRowSparseStorage;
+ sparse.set(indata[i], inshapes[i], indims[i], in_indices[i], 
in_indices_shapes[i]);
+}
+else {
+  type = kCSRStorage;
+  sparse.set(indata[i], inshapes[i], indims[i], in_indices[i],
+  in_indices_shapes[i], in_indptr[i], in_indptr_shapes[i]);
+}
+   data = (void*)();
 
 Review comment:
   Thanks Sam for the comments.
   1. The complicated initialization sequence is simplified in the latest update. 
   2. If we manage the MXSparse in MXTensor, we need to add extra MXSparse variables to MXTensor, which is similar to my current fix.
   3. I am aware of the overwrite issue, which is fixed now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on issue #16535: CI Travis time out

2020-03-03 Thread GitBox
szha commented on issue #16535: CI Travis time out 
URL: 
https://github.com/apache/incubator-mxnet/issues/16535#issuecomment-594225486
 
 
   @leezu built a Github Actions based solution for mac verification. It seems 
more promising and offers comparable usability. Shall we focus on Github 
Actions instead?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] guanxinq commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-03 Thread GitBox
guanxinq commented on a change in pull request #17569: Adding sparse support to 
MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r387352445
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -,18 +1171,63 @@ extern "C" {
 
 // create a vector of tensors for inputs
 std::vector inputs(num_in);
+
+MXStorageType type;
+void *data = nullptr;
+void *data2 = nullptr;
+MXSparse sparse;
+MXSparse sparse2;
 for (int i = 0; i < num_in; i++) {
-  inputs[i].setTensor(indata[i], (MXDType)intypes[i], inshapes[i], 
indims[i],
-  inIDs[i], {indev_type[i], indev_id[i]});
+  // Dense representation.
+  if(!in_indices_shapes) {
+type = kDefaultStorage;
+   data = indata[i]; 
+  }
+  // Sparse representation.
+  else {
+// To do: remove if else.
+   if(!in_indptr_shapes) {
+  type = kRowSparseStorage;
+ sparse.set(indata[i], inshapes[i], indims[i], in_indices[i], 
in_indices_shapes[i]);
+}
+else {
+  type = kCSRStorage;
+  sparse.set(indata[i], inshapes[i], indims[i], in_indices[i],
+  in_indices_shapes[i], in_indptr[i], in_indptr_shapes[i]);
+}
+   data = (void*)();
 
 Review comment:
   Thanks Sam. I am aware of the overwrite issue, which is fixed now. I am thinking about how to handle MXSparse and the set() function to make it more reasonable. Will update later.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17751: Fix MKL static link & default to static link on unix

2020-03-03 Thread GitBox
leezu commented on a change in pull request #17751: Fix MKL static link & 
default to static link on unix
URL: https://github.com/apache/incubator-mxnet/pull/17751#discussion_r387342850
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -442,8 +440,11 @@ if(USE_OPENMP)
 if(OPENMP_FOUND)
   set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
-  set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} 
${OpenMP_EXE_LINKER_FLAGS}")
-  set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} 
${OpenMP_EXE_LINKER_FLAGS}")
+  if(NOT BLAS STREQUAL "MKL")
+# Linker flags for Intel OMP are already set in case MKL is used. Only 
set if not MKL
+set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} 
${OpenMP_EXE_LINKER_FLAGS}")
+set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} 
${OpenMP_EXE_LINKER_FLAGS}")
+  endif()
 
 Review comment:
   FYI @cjolivier01, based on this, clang won't add omp.
   
   ```
   ubuntu@ip-172-31-24-41:~/mxnet/build$ ldd libmxnet.so | grep omp
   libiomp5.so => /usr/local/lib/libiomp5.so (0x7f982da3d000)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx opened a new pull request #17753: Add 1.6.0 to the list of signed download links

2020-03-03 Thread GitBox
ptrendx opened a new pull request #17753: Add 1.6.0 to the list of signed 
download links
URL: https://github.com/apache/incubator-mxnet/pull/17753
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aGiant removed a comment on issue #6333: Automatic context to select CPU or GPU(s)

2020-03-03 Thread GitBox
aGiant removed a comment on issue #6333: Automatic context to select CPU or 
GPU(s)
URL: 
https://github.com/apache/incubator-mxnet/issues/6333#issuecomment-594112031
 
 
   Almost 3 years later, the error remains the same:

   MXNetError: [19:56:15] src/imperative/./imperative_utils.h:71: Check failed: inputs[i]->ctx().dev_mask() == ctx.dev_mask() (1 vs. 2) : Operator broadcast_sub require all inputs live on the same context. But the first argument is on gpu(0) while the 2-th argument is on cpu(0)
   
   Even with gpu():
   
   ```
    12 with mx.gpu():
    13     half = nd.ones(y_true.shape)*0.5
   ---> 14     right = nd.lesser(nd.abs(y_pred - y_true), half)
    15     wrong = nd.greater(nd.abs(y_pred - y_true), half)
    16     take_opt = nd.greater(nd.abs(y_pred), half)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aGiant removed a comment on issue #6333: Automatic context to select CPU or GPU(s)

2020-03-03 Thread GitBox
aGiant removed a comment on issue #6333: Automatic context to select CPU or 
GPU(s)
URL: 
https://github.com/apache/incubator-mxnet/issues/6333#issuecomment-594130280
 
 
   This is not a feature request but a bug.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aGiant removed a comment on issue #6333: Automatic context to select CPU or GPU(s)

2020-03-03 Thread GitBox
aGiant removed a comment on issue #6333: Automatic context to select CPU or 
GPU(s)
URL: 
https://github.com/apache/incubator-mxnet/issues/6333#issuecomment-594113416
 
 
   Still error with as_in_context():
in performance_measure(y_pred, y_true)
12 with mx.gpu(0):
13 half = nd.ones(y_true.shape)*0.5
   ---> 14 right = nd.lesser(nd.abs(y_pred - y_true), 
half.as_in_context(model_ctx))
15 wrong = nd.greater(nd.abs(y_pred - y_true), 
half.as_in_context(model_ctx))
16 take_opt = nd.greater(nd.abs(y_pred), 
half.as_in_context(model_ctx))
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu edited a comment on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
leezu edited a comment on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594096882
 
 
   @relaxli00 there are no ABI guarantees for C++. Please do not rely on it.
   The C API exposed by MXNet is unaffected by that though.
   
   See for example 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p1863r1.pdf
   
   We'll improve the build further and speed it up, so this will improve the 
build from source experience.
   
   Any Windows specific PRs are very welcome


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
szha commented on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594131670
 
 
   @relaxli00 thanks for sharing your experiences. For continued support of c++ 
in mxnet 2.0, it would be great to continue the discussion on c++ support in 
https://github.com/apache/incubator-mxnet/issues/16167.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aGiant commented on issue #6333: Automatic context to select CPU or GPU(s)

2020-03-03 Thread GitBox
aGiant commented on issue #6333: Automatic context to select CPU or GPU(s)
URL: 
https://github.com/apache/incubator-mxnet/issues/6333#issuecomment-594130280
 
 
   This is not a feature request but a bug.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype

2020-03-03 Thread GitBox
haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default 
dtype
URL: https://github.com/apache/incubator-mxnet/pull/17283#discussion_r387247229
 
 

 ##
 File path: src/operator/tensor/init_op.cc
 ##
 @@ -92,6 +91,16 @@ NNVM_REGISTER_OP(_full)
   .set_attr("FCompute", InitFillWithScalarCompute)
 .add_arguments(InitOpWithScalarParam::__FIELDS__());
 
+NNVM_REGISTER_OP(_npi_full)
 
 Review comment:
   please register this op in `src/operator/numpy/np_init_op.cc/cu`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] relaxli00 commented on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
relaxli00 commented on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594128909
 
 
   @leezu, thanks for the information. I'll keep an eye on it.
   
   I originally adopted MXNet mainly because of its C++ support. In many cases, 
Python is not a good solution for production deployment.
   
   But everything is changing quickly. From time to time I run into inconsistent 
results between the Python and C++ implementations, and I'm struggling to 
balance the use of both.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu opened a new pull request #17752: Test gcc8 -WError build on CI

2020-03-03 Thread GitBox
leezu opened a new pull request #17752: Test gcc8 -WError build on CI
URL: https://github.com/apache/incubator-mxnet/pull/17752
 
 
   ## Description ##
   Fix gcc8 -WError build and enable testing it on CI
   
   Fixes https://github.com/apache/incubator-mxnet/issues/17708


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17708: Silence all compiler warnings when build

2020-03-03 Thread GitBox
leezu commented on issue #17708: Silence all compiler warnings when build
URL: 
https://github.com/apache/incubator-mxnet/issues/17708#issuecomment-594127873
 
 
   See https://github.com/apache/incubator-mxnet/pull/17752


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] connorgoggins commented on issue #17632: [Large Tensor] Fixed RNN op

2020-03-03 Thread GitBox
connorgoggins commented on issue #17632: [Large Tensor] Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#issuecomment-594120815
 
 
   @apeforest @zixuanweeei I believe my latest commit incorporates all of the 
changes we have discussed. The `b_size` variable and `GetRnnParamSize` function 
are now of type `index_t`, and the changes have been successfully tested on 
both small and large tensor input.
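
   For reference, a rough sketch of the kind of large-tensor smoke test meant 
here (shapes are assumptions rather than the exact test; it needs an 
int64-enabled build and a machine with a very large amount of memory):
   
   ```
   import mxnet as mx
   from mxnet import autograd, gluon
   
   layer = gluon.rnn.RNN(hidden_size=4, num_layers=1)  # plain RNN (ReLU cell)
   layer.initialize()
   
   # seq_len * batch_size * input_size > 2**31 elements exercises the index_t paths
   data = mx.nd.ones((2**16, 2**15, 2))
   with autograd.record():
       out = layer(data)
   out.backward()  # the backward pass is the path discussed above
   mx.nd.waitall()
   print(out.shape)
   ```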


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aGiant commented on issue #6333: Automatic context to select CPU or GPU(s)

2020-03-03 Thread GitBox
aGiant commented on issue #6333: Automatic context to select CPU or GPU(s)
URL: 
https://github.com/apache/incubator-mxnet/issues/6333#issuecomment-594113416
 
 
   Still error with as_in_context():
in performance_measure(y_pred, y_true)
12 with mx.gpu(0):
13 half = nd.ones(y_true.shape)*0.5
   ---> 14 right = nd.lesser(nd.abs(y_pred - y_true), 
half.as_in_context(model_ctx))
15 wrong = nd.greater(nd.abs(y_pred - y_true), 
half.as_in_context(model_ctx))
16 take_opt = nd.greater(nd.abs(y_pred), 
half.as_in_context(model_ctx))
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aGiant commented on issue #6333: Automatic context to select CPU or GPU(s)

2020-03-03 Thread GitBox
aGiant commented on issue #6333: Automatic context to select CPU or GPU(s)
URL: 
https://github.com/apache/incubator-mxnet/issues/6333#issuecomment-594112031
 
 
   Almost 3 years later, the error remains the same:

   MXNetError: [19:56:15] src/imperative/./imperative_utils.h:71: Check failed: 
inputs[i]->ctx().dev_mask() == ctx.dev_mask() (1 vs. 2) : Operator 
broadcast_sub require all inputs live on the same context. But the first 
argument is on gpu(0) while the 2-th argument is on cpu(0)
   
   Even with gpu():
   
12 with mx.gpu():
13 half = nd.ones(y_true.shape)*0.5
   ---> 14 right = nd.lesser(nd.abs(y_pred - y_true), half)
15 wrong = nd.greater(nd.abs(y_pred - y_true), half)
16 take_opt = nd.greater(nd.abs(y_pred), half)
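
   For what it's worth, a minimal sketch of one way to avoid this error 
(variable names assumed from the snippet above): create the constant on the 
same device as the predictions instead of relying on the default context:
   
   ```
   import mxnet as mx
   from mxnet import nd
   
   ctx = mx.gpu(0)                                  # assumes a GPU is available
   y_pred = nd.random.uniform(shape=(4,), ctx=ctx)  # stand-ins for the real arrays
   y_true = nd.random.uniform(shape=(4,), ctx=ctx)
   
   half = nd.ones(y_true.shape, ctx=y_pred.context) * 0.5  # same device as y_pred
   right = nd.lesser(nd.abs(y_pred - y_true), half)        # no cross-device error
   wrong = nd.greater(nd.abs(y_pred - y_true), half)
   ```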


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu opened a new pull request #17751: Fix MKL static link & default to static link on unix

2020-03-03 Thread GitBox
leezu opened a new pull request #17751: Fix MKL static link & default to static 
link on unix
URL: https://github.com/apache/incubator-mxnet/pull/17751
 
 
   Fixes https://github.com/apache/incubator-mxnet/issues/17641
   
   Changes
   - Fix MKL Static linkage and default to it on Unix
   - Don't build llvm openmp when linking to MKL. If MKL is present, intel omp 
is also present.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new f4cf7e1  Bump the publish timestamp.
f4cf7e1 is described below

commit f4cf7e1eb621842075d3f5538205de3b582b6d37
Author: mxnet-ci 
AuthorDate: Tue Mar 3 18:52:21 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..ce5189e
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Mar  3 18:52:21 UTC 2020



[GitHub] [incubator-mxnet] cjolivier01 commented on issue #17641: OpenMP Error

2020-03-03 Thread GitBox
cjolivier01 commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-594100964
 
 
   Yes, just submit the static linkage.
   
   On Tue, Mar 3, 2020 at 10:18 AM Leonard Lausen 
   wrote:
   
   > @cjolivier01  please prioritize the PR,
   > as this affects other users. For example #17733
   > 
   >
   > Let me know if I may resubmit the MKL static linkage commit earlier
   > included in #17645 .
   >
   > —
   > You are receiving this because you were mentioned.
   > Reply to this email directly, view it on GitHub
   > 
,
   > or unsubscribe
   > 

   > .
   >
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17658: Update website, README and NEWS with 1.6.0

2020-03-03 Thread GitBox
leezu commented on issue #17658: Update website, README and NEWS with 1.6.0
URL: https://github.com/apache/incubator-mxnet/pull/17658#issuecomment-594099603
 
 
   Seems 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fwebsite/detail/master/1697/pipeline
 builds the website


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17658: Update website, README and NEWS with 1.6.0

2020-03-03 Thread GitBox
leezu commented on issue #17658: Update website, README and NEWS with 1.6.0
URL: https://github.com/apache/incubator-mxnet/pull/17658#issuecomment-594099416
 
 
   I haven't verified it, but my assumption is that you can check the status of 
the jenkins jobs at https://github.com/apache/incubator-mxnet/commits/master
   One of them should publish the website?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on issue #17658: Update website, README and NEWS with 1.6.0

2020-03-03 Thread GitBox
ptrendx commented on issue #17658: Update website, README and NEWS with 1.6.0
URL: https://github.com/apache/incubator-mxnet/pull/17658#issuecomment-594098672
 
 
   @leezu Do you know when the website will refresh? I want to send the 
announcement email once that happens.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17739: bump up master branch to 2.0

2020-03-03 Thread GitBox
leezu commented on issue #17739: bump up master branch to 2.0
URL: https://github.com/apache/incubator-mxnet/pull/17739#issuecomment-594098098
 
 
   retriggered unix-gpu


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu edited a comment on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
leezu edited a comment on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594096882
 
 
   @relaxli00 there are no ABI guarantees for C++. Please do not rely on it.
   
   See for example 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p1863r1.pdf
   
   We'll improve the build further and speed it up, so this will improve the 
build from source experience.
   
   Any Windows specific PRs are very welcome


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu edited a comment on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
leezu edited a comment on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594096882
 
 
   @relaxli00 there are no ABI guarantees for C++. Please do not rely on it.
   
   See for example 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p1863r1.pdf
   
   We'll improve the build further and speed it up, so this will improve the 
build from source experience.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
leezu commented on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594096882
 
 
   @relaxli00 there are no ABI guarantees for C++. Please do not rely on it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu edited a comment on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
leezu edited a comment on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594096882
 
 
   @relaxli00 there are no ABI guarantees for C++. Please do not rely on it.
   
   See for example 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p1863r1.pdf


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17681: [CD] update pypi description, setup.py

2020-03-03 Thread GitBox
leezu commented on a change in pull request #17681: [CD] update pypi 
description, setup.py
URL: https://github.com/apache/incubator-mxnet/pull/17681#discussion_r387208011
 
 

 ##
 File path: tools/pip/doc/CPU_ADDITIONAL.md
 ##
 @@ -40,3 +40,11 @@ To install, use:
 ```bash
 pip install mxnet
 ```
+
+Nightly Builds
+--
+The nightly builds for this package can be found at: 
https://dist.mxnet.io/python/cpu
+To install the latest nightly build, use:
+```bash
+pip install --pre mxnet -f https://dist.mxnet.io/python/cpu
 
 Review comment:
   Let's use https://dist.mxnet.io/python/all here to make it easier for users 
to install other variants of the mxnet pip package.
   ```suggestion
   pip install --pre mxnet -f https://dist.mxnet.io/python/all
   ```
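
   As a quick sanity check after installing (a sketch, not part of the suggested 
doc change), printing the version shows whether a nightly wheel was actually 
picked up, since nightlies carry a date-stamped pre-release version:
   
   ```
   import mxnet as mx
   print(mx.__version__)  # pre-release nightlies typically end in a bYYYYMMDD tag
   ```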


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17681: [CD] update pypi description, setup.py

2020-03-03 Thread GitBox
leezu commented on a change in pull request #17681: [CD] update pypi 
description, setup.py
URL: https://github.com/apache/incubator-mxnet/pull/17681#discussion_r387207404
 
 

 ##
 File path: tools/pip/setup.py
 ##
 @@ -152,12 +159,12 @@ def has_ext_modules(self):
 if variant.endswith('MKL'):
 if platform.system() == 'Darwin':
 shutil.copytree(os.path.join(CURRENT_DIR, 
'mxnet-build/3rdparty/mkldnn/build/install/include'),
-os.path.join(CURRENT_DIR, 'mxnet/include/mkldnn'))
+os.path.join(CURRENT_DIR, 'mxnet/include/mkldnn'))
 if platform.system() == 'Linux':
 libdir, mxdir = os.path.dirname(LIB_PATH[0]), os.path.join(CURRENT_DIR, 
'mxnet')
 if os.path.exists(os.path.join(libdir, 'libgfortran.so.3')):
 shutil.copy(os.path.join(libdir, 'libgfortran.so.3'), mxdir)
-package_data['mxnet'].append('mxnet/libgfortran.so.4')
+package_data['mxnet'].append('mxnet/libgfortran.so.3')
 
 Review comment:
   Good catch, thanks


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17681: [CD] update pypi description, setup.py

2020-03-03 Thread GitBox
leezu commented on issue #17681: [CD] update pypi description, setup.py
URL: https://github.com/apache/incubator-mxnet/pull/17681#issuecomment-594095171
 
 
   retriggered unix gpu


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] relaxli00 commented on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
relaxli00 commented on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594094923
 
 
   Thanks @szha. I found that the DLL shipped in the wheels can also be used for 
C++ programming after running the op wrapper generator. Not sure if it will 
work all the time. Compiling the whole system from source on Windows is painful 
for individual users.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17749: Fix races in block scope and deferred_init

2020-03-03 Thread GitBox
leezu commented on a change in pull request #17749: Fix races in block scope 
and deferred_init
URL: https://github.com/apache/incubator-mxnet/pull/17749#discussion_r387206282
 
 

 ##
 File path: python/mxnet/gluon/block.py
 ##
 @@ -48,8 +48,9 @@ class _BlockScope(object):
 def __init__(self, block):
 self._block = block
 self._counter = {}
-self._old_scope = None
-self._name_scope = None
+self._local = threading.local()
 
 Review comment:
   How about using contextvars here? This will ensure the code is compatible 
with python coroutines.
   https://docs.python.org/3/library/contextvars.html
   
   You need to include https://pypi.org/project/contextvars/ as a dependency on 
Python 3.5 and 3.6.
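
   A minimal sketch of what that could look like for `_BlockScope` (illustrative 
only, not the actual MXNet code; attribute names are assumptions):
   
   ```
   import contextvars
   
   # one ContextVar tracks the active scope per thread *and* per coroutine
   _current_block_scope = contextvars.ContextVar('block_scope', default=None)
   
   class _BlockScope(object):
       def __init__(self, block):
           self._block = block
           self._counter = {}
           self._token = None
   
       def __enter__(self):
           # remember the previous scope so nesting and concurrency both work
           self._token = _current_block_scope.set(self)
           return self
   
       def __exit__(self, exc_type, exc_value, traceback):
           # restore whatever scope was active before entering this one
           _current_block_scope.reset(self._token)
   ```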


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17641: OpenMP Error

2020-03-03 Thread GitBox
leezu commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-594093568
 
 
   @cjolivier01 please prioritize the PR, as this affects other users. For 
example https://github.com/apache/incubator-mxnet/issues/17733
   
   Let me know if I may resubmit the MKL static linkage commit earlier included 
in #17645.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17750: [CI] How to restart a single stage on MXNet CI?

2020-03-03 Thread GitBox
leezu commented on issue #17750: [CI] How to restart a single stage on MXNet CI?
URL: 
https://github.com/apache/incubator-mxnet/issues/17750#issuecomment-594093877
 
 
   @ChaiBapchya is working to provide this feature for the github bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (9b7132b -> 2abd225)

2020-03-03 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 9b7132b  change error tolerance for bf16 bn (#17673)
 add 2abd225  Update website, README and NEWS with 1.6.0 (#17658)

No new revisions were added by this update.

Summary of changes:
 NEWS.md| 943 ++---
 README.md  |   1 +
 .../src/_includes/get_started/get_started.html |   7 +-
 .../_includes/get_started/linux/python/cpu/pip.md  |  17 +-
 .../_includes/get_started/linux/python/gpu/pip.md  |   9 +-
 .../_includes/get_started/macos/python/cpu/pip.md  |   8 +-
 .../src/_includes/get_started/pip_snippet.md   |   2 +-
 .../get_started/windows/python/cpu/pip.md  |   7 +-
 .../get_started/windows/python/gpu/pip.md  |   7 +-
 9 files changed, 875 insertions(+), 126 deletions(-)



[GitHub] [incubator-mxnet] leezu merged pull request #17658: Update website, README and NEWS with 1.6.0

2020-03-03 Thread GitBox
leezu merged pull request #17658: Update website, README and NEWS with 1.6.0
URL: https://github.com/apache/incubator-mxnet/pull/17658
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
szha commented on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-594078514
 
 
   Hi @smpawlowski @relaxli00. @yajiedesign is still working on the windows 
wheels for GPU. The Windows compilation takes particularly long, hence the 
delay.
   
   In the meantime, we will look into how to improve the development experience 
and smoothness for releases in the next few weeks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on issue #17658: Update website, README and NEWS with 1.6.0

2020-03-03 Thread GitBox
ptrendx commented on issue #17658: Update website, README and NEWS with 1.6.0
URL: https://github.com/apache/incubator-mxnet/pull/17658#issuecomment-594056658
 
 
   @leezu Could you review again?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 commented on issue #17665: No speedup from using FP16 (4 times slower than PyTorch)

2020-03-03 Thread GitBox
DickJC123 commented on issue #17665: No speedup from using FP16 (4 times slower 
than PyTorch)
URL: 
https://github.com/apache/incubator-mxnet/issues/17665#issuecomment-594025008
 
 
   Will take a look at it today. 
   
   > On Mar 2, 2020, at 10:31 PM, Xingjian Shi  wrote:
   > 
   > @DickJC123 @ptrendx Are there any update for this issue? Would it also 
affect batched_dot which widely used in attention layers?
   > 
   > —
   > You are receiving this because you were mentioned.
   > Reply to this email directly, view it on GitHub, or unsubscribe.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] relaxli00 commented on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
relaxli00 commented on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-593981964
 
 
   Yes, please release the 1.6.0 cuda support versions also, e.g. 
mxnet-1.6.0cu101mkl. Thanks. 
   
   @szha 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] smpawlowski edited a comment on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
smpawlowski edited a comment on issue #17719: 1.6.0 Windows pip installation 
fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-593861905
 
 
   Thanks for looking into it. pip install mxnet==1.6.0 works great. But I have 
trouble installing any version with cuda support like mxnet-cu101 / 
mxnet-cu101mkl. In [pypi 
mxnet-cu101mkl](https://pypi.org/project/mxnet-cu101mkl/#files) I can only find 
the manylinux wheel. 
   
   CC @szha Is it possible to also release 1.6.0 with cuda / Windows?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] kpuatamazon commented on issue #17559: [MXNET-1446] Quantization: intgemm matrix multiply wrappers

2020-03-03 Thread GitBox
kpuatamazon commented on issue #17559: [MXNET-1446] Quantization: intgemm 
matrix multiply wrappers 
URL: https://github.com/apache/incubator-mxnet/pull/17559#issuecomment-593952925
 
 
   The quantization operator is now parallelized with OpenMP and supports an 
arbitrary number of arguments. It is substantially faster than the current 
MXNet implementation on both 1 and 24 cores (see below for benchmarks) 
@pengzhao-intel .  
   
   Maybe this is too big of a pull request.  Would you be happy with a smaller 
pull request that takes the faster quantization code and replaces the 
implementation of the existing quantize and quantize_v2 operators, so it also 
appears in the quantization flow?  
   
   Then we can carry on with matrix multiply next.  
   
   @ciyongch I'm calling operators manually because we're using gluon and the 
quantization workflow doesn't work for us anyway.  But if you're game to have 
operators optimized, they'll automatically be in the workflow too.  
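
   For context, a minimal sketch of the kind of timing harness behind numbers 
like the ones below (an assumption, not the actual quant_bench.py; it only 
exercises the stock `quantize_v2` operator, since the intgemm operators come 
from this PR):
   
   ```
   import time
   import mxnet as mx
   
   def bench(shape, repeat=100):
       data = mx.nd.random.uniform(-1, 1, shape=shape)
       mx.nd.waitall()                      # make sure the input is materialized
       start = time.time()
       for _ in range(repeat):
           out = mx.nd.contrib.quantize_v2(data, min_calib_range=-1.0,
                                           max_calib_range=1.0)
       mx.nd.waitall()                      # wait for the async engine to finish
       print('%s: %.7f seconds for quantize_v2' % (shape, (time.time() - start) / repeat))
   
   for shape in [(1, 1), (128, 128), (512, 512), (2048, 2048)]:
       bench(shape)
   ```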
   
   ```
   OMP_NUM_THREADS=24 ./quant_bench.py
   Shape (1, 1)
   0.0001304 seconds for quantize
   0.0001076 seconds for quantize_v2
    0.0000310 seconds for intgemm
    0.0001114 seconds for quantize_v2_fit
    0.0000479 seconds for intgemm_fit
   intgemm is 3.5x faster with calibration
   intgemm is 2.3x faster without calibration
   Shape (128, 128)
   0.0001649 seconds for quantize
   0.0001399 seconds for quantize_v2
    0.0000329 seconds for intgemm
    0.0001533 seconds for quantize_v2_fit
    0.0000502 seconds for intgemm_fit
   intgemm is 4.2x faster with calibration
   intgemm is 3.1x faster without calibration
   Shape (256, 256)
   0.0001660 seconds for quantize
   0.0001404 seconds for quantize_v2
    0.0000335 seconds for intgemm
    0.0001599 seconds for quantize_v2_fit
    0.0000505 seconds for intgemm_fit
   intgemm is 4.2x faster with calibration
   intgemm is 3.2x faster without calibration
   Shape (512, 512)
   0.0001691 seconds for quantize
   0.0001434 seconds for quantize_v2
    0.0000342 seconds for intgemm
    0.0001813 seconds for quantize_v2_fit
    0.0000540 seconds for intgemm_fit
   intgemm is 4.2x faster with calibration
   intgemm is 3.4x faster without calibration
   Shape (1024, 1024)
   0.0001920 seconds for quantize
   0.0001538 seconds for quantize_v2
    0.0000511 seconds for intgemm
    0.0002390 seconds for quantize_v2_fit
    0.0000827 seconds for intgemm_fit
   intgemm is 3.0x faster with calibration
   intgemm is 2.9x faster without calibration
   Shape (2048, 2048)
   0.0002364 seconds for quantize
   0.0001989 seconds for quantize_v2
    0.0000875 seconds for intgemm
   0.0004747 seconds for quantize_v2_fit
   0.0001531 seconds for intgemm_fit
   intgemm is 2.3x faster with calibration
   intgemm is 3.1x faster without calibration
   Shape (20971520,)
   0.0011446 seconds for quantize
   0.0010902 seconds for quantize_v2
   0.0008950 seconds for intgemm
   0.0023337 seconds for quantize_v2_fit
   0.0015005 seconds for intgemm_fit
   intgemm is 1.2x faster with calibration
   intgemm is 1.6x faster without calibration
   Shape (8, 4096)
   0.0001636 seconds for quantize
   0.0001392 seconds for quantize_v2
    0.0000364 seconds for intgemm
    0.0001508 seconds for quantize_v2_fit
    0.0000651 seconds for intgemm_fit
   intgemm is 3.8x faster with calibration
   intgemm is 2.3x faster without calibration
   Shape (4096, 8)
   0.0001642 seconds for quantize
   0.0001392 seconds for quantize_v2
    0.0000370 seconds for intgemm
    0.0001515 seconds for quantize_v2_fit
    0.0000654 seconds for intgemm_fit
   intgemm is 3.8x faster with calibration
   intgemm is 2.3x faster without calibration
   ```
   ```
   OMP_NUM_THREADS=1 ./quant_bench.py
   Shape (1, 1)
    0.0000630 seconds for quantize
    0.0000706 seconds for quantize_v2
    0.0000294 seconds for intgemm
    0.0000632 seconds for quantize_v2_fit
    0.0000475 seconds for intgemm_fit
   intgemm is 2.1x faster with calibration
   intgemm is 1.3x faster without calibration
   Shape (128, 128)
    0.0000860 seconds for quantize
    0.0000898 seconds for quantize_v2
    0.0000324 seconds for intgemm
    0.0000996 seconds for quantize_v2_fit
    0.0000464 seconds for intgemm_fit
   intgemm is 2.6x faster with calibration
   intgemm is 2.1x faster without calibration
   Shape (256, 256)
    0.0000976 seconds for quantize
    0.0001028 seconds for quantize_v2
    0.0000339 seconds for intgemm
    0.0001513 seconds for quantize_v2_fit
    0.0000521 seconds for intgemm_fit
   intgemm is 2.9x faster with calibration
   intgemm is 2.9x faster without calibration
   Shape (512, 512)
   0.0001724 seconds for quantize
   0.0001693 seconds for quantize_v2
    0.0000839 seconds for intgemm
   0.0004351 seconds for quantize_v2_fit
   0.0001420 seconds for intgemm_fit
   intgemm is 2.0x faster with calibration
   intgemm is 3.1x faster without calibration
   Shape (1024, 1024)
   0.0003559 seconds for quantize
   0.0003481 seconds for quantize_v2
   0.0002384 seconds for intgemm
   

[GitHub] [incubator-mxnet] HolyTerra commented on issue #17733: "Segmentation fault: 11" after "Running: OpWrapperGenerator.py"

2020-03-03 Thread GitBox
HolyTerra commented on issue #17733:  "Segmentation fault: 11" after "Running: 
OpWrapperGenerator.py"
URL: 
https://github.com/apache/incubator-mxnet/issues/17733#issuecomment-593946597
 
 
   
   
   
   > After switching to MKL, you may run into #17641
   > 
   > Can you try deleting `3rdparty/openmp` to see if the segfault is due to 
conflict between llvm openmp and intel openmp?
   
   You're right. The build finished after `3rdparty/openmp` was removed. Thank 
you for your support!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] HolyTerra closed issue #17733: "Segmentation fault: 11" after "Running: OpWrapperGenerator.py"

2020-03-03 Thread GitBox
HolyTerra closed issue #17733:  "Segmentation fault: 11" after "Running: 
OpWrapperGenerator.py"
URL: https://github.com/apache/incubator-mxnet/issues/17733
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 150c342  Bump the publish timestamp.
150c342 is described below

commit 150c342dfb68aa572f1d3ea0c09fab7db29f3508
Author: mxnet-ci 
AuthorDate: Tue Mar 3 12:40:00 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..16254b9
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Mar  3 12:40:00 UTC 2020



[GitHub] [incubator-mxnet] zixuanweeei commented on issue #17632: [Large Tensor] Fixed RNN op

2020-03-03 Thread GitBox
zixuanweeei commented on issue #17632: [Large Tensor] Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#issuecomment-593920112
 
 
   > @zixuanweeei you're absolutely right - the segfault was generated on line 
2032 of `rnn_impl.h` during the backward pass when I ran the op in ReLU mode. 
This line lies within the iteration section of the omp loop and, as @apeforest 
astutely pointed out, the omp loop requires a signed index, which led to errors 
when the `size_t` changes were implemented.
   
   Thanks for trying out the `size_t` type. Let's keep the signed `index_t`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn opened a new issue #17750: [CI] How to restart a single stage on MXNet CI?

2020-03-03 Thread GitBox
wkcn opened a new issue #17750: [CI] How to restart a single stage on MXNet CI?
URL: https://github.com/apache/incubator-mxnet/issues/17750
 
 
   Hi, there.
   
   Sometimes a CI test fails for strange reasons that are not relevant to the PR.
   
   There are many PRs failing only a single check for such strange reasons. As we 
know, it costs a lot of money and takes a long time to restart all stages :(
   
   It would be economical and convenient if it were possible to restart a single 
stage on MXNet CI.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] connorgoggins commented on issue #17632: [Large Tensor] Fixed RNN op

2020-03-03 Thread GitBox
connorgoggins commented on issue #17632: [Large Tensor] Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#issuecomment-593884226
 
 
   @zixuanweeei you're absolutely right - the segfault was generated on line 
2032 of `rnn_impl.h` during the backward pass when I ran the op in ReLU mode. 
This line lies within the iteration section of the omp loop and, as @apeforest 
astutely pointed out, the omp loop requires a signed index, which led to errors 
when the `size_t` changes were implemented.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-03-03 Thread GitBox
connorgoggins commented on a change in pull request #17632: [Large Tensor] 
Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r386910492
 
 

 ##
 File path: src/operator/rnn_impl.h
 ##
 @@ -146,15 +146,15 @@ void LstmForwardTraining(DType* ws,
   Tensor hx(hx_ptr, Shape3(total_layers, N, H));
   Tensor cx(cx_ptr, Shape3(total_layers, N, H));
   const int b_size = 2 * H * 4;
 
 Review comment:
   @zixuanweeei thanks for your feedback. I’m happy to bump b_size up to 
index_t here if there are overflow concerns.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-03-03 Thread GitBox
connorgoggins commented on a change in pull request #17632: [Large Tensor] 
Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r386908504
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -361,9 +361,9 @@ void RNNBackward(DType* ws,
  DType* rs,
  const int num_layers,
  const int direction,
- const int seq_length,
- const int batch_size,
- const int input_size,
+ const index_t seq_length,
+ const index_t batch_size,
+ const index_t input_size,
 
 Review comment:
   I agree @apeforest! I believe the omp loop’s required signed index was the 
root cause of the segfault when I made the size_t changes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] smpawlowski commented on issue #17719: 1.6.0 Windows pip installation fails

2020-03-03 Thread GitBox
smpawlowski commented on issue #17719: 1.6.0 Windows pip installation fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17719#issuecomment-593861905
 
 
   Thanks for looking into it. pip install mxnet==1.6.0 works great. But I have 
trouble installing any version with cuda support like mxnet-cu101 / 
mxnet-cu101mkl. In [pypi 
mxnet-cu101mkl](https://pypi.org/project/mxnet-cu101mkl/#files) I can only find 
the manylinux wheel. 
   
   Is it possible to also release 1.6.0 with cuda / Windows?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Bumblebee269 commented on issue #16167: [RFC] Apache MXNet 2.0 Roadmap

2020-03-03 Thread GitBox
Bumblebee269 commented on issue #16167: [RFC] Apache MXNet 2.0 Roadmap
URL: 
https://github.com/apache/incubator-mxnet/issues/16167#issuecomment-593859752
 
 
   We use the C++ interface for inference on a sorting machine. We would also 
like to provide the users of our machines with an easy, integrated user 
interface for training new sorting recipes. Right now we use Python or 
Mathematica scripts, which is far from user friendly for non-programmers. So we 
want to use the C++ API (shielded behind a C# wrapper) to provide a custom 
training environment for non-programmers.
   
   Unfortunately, building the mxnet library with C++ support on a Windows 
machine with MKL / CUDA is an ongoing nightmare. But we really like MXNet.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hzfan edited a comment on issue #17749: Fix races in block scope and deferred_init

2020-03-03 Thread GitBox
hzfan edited a comment on issue #17749: Fix races in block scope and 
deferred_init
URL: https://github.com/apache/incubator-mxnet/pull/17749#issuecomment-593838223
 
 
   @eric-haibin-lin Tests added.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zixuanweeei commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-03-03 Thread GitBox
zixuanweeei commented on a change in pull request #17632: [Large Tensor] Fixed 
RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r386880227
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -361,9 +361,9 @@ void RNNBackward(DType* ws,
  DType* rs,
  const int num_layers,
  const int direction,
- const int seq_length,
- const int batch_size,
- const int input_size,
+ const index_t seq_length,
+ const index_t batch_size,
+ const index_t input_size,
 
 Review comment:
   Agree with keeping the signed `index_t` :100:


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zixuanweeei commented on issue #17632: [Large Tensor] Fixed RNN op

2020-03-03 Thread GitBox
zixuanweeei commented on issue #17632: [Large Tensor] Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#issuecomment-593839158
 
 
   @connorgoggins Just curious about the reason for the segfault. I don't have 
much knowledge about that. But I guess it may be caused by `for (size_t t = T - 1; 
t >= 0; --t) {}` in the backward pass (with an unsigned `size_t`, `t >= 0` is 
always true, so the loop wraps around instead of terminating).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hzfan commented on issue #17749: Fix races in block scope and deferred_init

2020-03-03 Thread GitBox
hzfan commented on issue #17749: Fix races in block scope and deferred_init
URL: https://github.com/apache/incubator-mxnet/pull/17749#issuecomment-593838223
 
 
   @eric-haibin-lin Tests added. It seems that ctx is not relevant to this issue, 
so I just tested it on cpu.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

