[GitHub] sxjscience commented on issue #10000: [MXNET-80] Fix average pooling kernel size assignment error

2018-03-13 Thread GitBox
sxjscience commented on issue #10000: [MXNET-80] Fix average pooling kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/10000#issuecomment-372566536
 
 
   We may also need to revise the shape assignment logic: 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling.cc#L103-L215
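For context, the kind of shape-assignment logic being pointed at can be sketched in a few lines of Python. This is a hypothetical simplification for discussion, not the actual pooling.cc code; names and conventions are illustrative:

```python
def pool_out_dim(in_dim, kernel, pad, stride):
    # 1-D "valid"-style pooling output size
    return (in_dim + 2 * pad - kernel) // stride + 1

def infer_pool_shape(data_shape, kernel=(), pad=(), stride=(), global_pool=False):
    """Infer an NC(D)HW pooling output shape.

    With global_pool=True the kernel is ignored (it may even be empty)
    and every spatial dimension collapses to 1.
    """
    head, spatial = list(data_shape[:2]), data_shape[2:]
    if global_pool:
        return tuple(head + [1] * len(spatial))
    if len(kernel) != len(spatial):
        raise ValueError("kernel rank must match spatial rank")
    out = [pool_out_dim(d, k, p, s)
           for d, k, p, s in zip(spatial, kernel, pad, stride)]
    return tuple(head + out)
```

The question in this thread is essentially whether the global_pool branch is taken before the kernel is inspected; if not, a kernel of ndim 0 trips the rank check.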


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-03-13 Thread GitBox
reminisce commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO 
NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r174032235
 
 

 ##
 File path: tests/ci_build/Dockerfile.build_cuda8_cudnn7
 ##
 @@ -0,0 +1,26 @@
+FROM nvidia/cuda:8.0-cudnn7-devel
+# cuda8.0 has to be used because this is the first ubuntu16.04 container
+# which is required due to OpenBLAS being incompatible with ubuntu14.04
+# the reason we used a gpu base container because we are going to test MKLDNN
+# operator implementation against GPU implementation
 
 Review comment:
   @marcoabreu It seems the required version of cuDNN is not installed properly, 
so the tests failed. Do we still need to put this Dockerfile under 
`ci/docker/` to get the required cuDNN from NVIDIA's Docker Hub?


[GitHub] KellenSunderland commented on issue #10075: Fix CMake build issue with MKL.

2018-03-13 Thread GitBox
KellenSunderland commented on issue #10075: Fix CMake build issue with MKL.
URL: https://github.com/apache/incubator-mxnet/pull/10075#issuecomment-372569505
 
 
   @pengzhao-intel Would also be great if you could review the PR. Does it look 
ok from your POV?


[GitHub] chinakook commented on issue #10085: Ordering operators do not support kNullOp

2018-03-13 Thread GitBox
chinakook commented on issue #10085: Ordering operators do not support kNullOp
URL: 
https://github.com/apache/incubator-mxnet/issues/10085#issuecomment-372569521
 
 
   @sxjscience thanks.


[GitHub] CoinCheung commented on issue #10000: [MXNET-80] Fix average pooling kernel size assignment error

2018-03-13 Thread GitBox
CoinCheung commented on issue #10000: [MXNET-80] Fix average pooling kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/10000#issuecomment-372572675
 
 
   @sxjscience 
   But I did not see a logic error in this function. From the printed error 
message, I see no CHECK() failure triggered, and in every scenario with 
global_pool the output shape is set to [-1,1], [-1,1,1], or [-1,1,1,1].


[GitHub] sxjscience commented on issue #10000: [MXNET-80] Fix average pooling kernel size assignment error

2018-03-13 Thread GitBox
sxjscience commented on issue #10000: [MXNET-80] Fix average pooling kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/10000#issuecomment-372573233
 
 
   I think it has not handled the case when kernel.ndim() == 0.
   


[GitHub] jinhuang415 commented on issue #9993: cmake cannot build mxnet

2018-03-13 Thread GitBox
jinhuang415 commented on issue #9993: cmake cannot build mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/9993#issuecomment-372580279
 
 
   @jacky4323 Your issue appears to be the same as 
https://github.com/apache/incubator-mxnet/issues/10072. Would you try applying 
the related PR (https://github.com/apache/incubator-mxnet/pull/10075) and 
see if the issue is gone?


[GitHub] AndreGuerra123 commented on issue #9967: Error in output.shape[[output.names]]

2018-03-13 Thread GitBox
AndreGuerra123 commented on issue #9967: Error in output.shape[[output.names]]
URL: 
https://github.com/apache/incubator-mxnet/issues/9967#issuecomment-372597352
 
 
   The number of observations is the same (edited above). What do you mean by a 
label passed as the y argument? I have a vector of type factor with the 
labels "I", "II", "III", etc. Could you please look into the dataset I shared?
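If the labels are factors like "I", "II", "III", they generally need to be converted to 0-based integer class indices before being passed as the y argument. A minimal sketch of that encoding (shown in Python, purely illustrative of the idea rather than the R interface):

```python
def encode_labels(labels):
    """Map arbitrary label values to 0-based integer class indices.

    Classes are ordered deterministically (sorted) so the mapping
    is reproducible across runs.
    """
    classes = sorted(set(labels))
    index = {c: i for i, c in enumerate(classes)}
    return [index[c] for c in labels], classes
```

The returned `classes` list lets you map predictions back to the original label names afterwards.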


[GitHub] marcoabreu commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-03-13 Thread GitBox
marcoabreu commented on a change in pull request #9552: [REQUEST FOR REVIEW | 
DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r174076331
 
 

 ##
 File path: tests/ci_build/Dockerfile.build_cuda8_cudnn7
 ##
 @@ -0,0 +1,26 @@
+FROM nvidia/cuda:8.0-cudnn7-devel
+# cuda8.0 has to be used because this is the first ubuntu16.04 container
+# which is required due to OpenBLAS being incompatible with ubuntu14.04
+# the reason we used a gpu base container because we are going to test MKLDNN
+# operator implementation against GPU implementation
 
 Review comment:
   Yes. But how about we just upgrade all tests to cuDNN 7? Could it cause any issues? In that case, you could just change ubuntu_gpu to cuDNN 7.


[GitHub] piiswrong commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
piiswrong commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174083179
 
 

 ##
 File path: src/operator/l2_normalization.cc
 ##
 @@ -26,13 +26,22 @@
 namespace mxnet {
 namespace op {
 template<>
-Operator* CreateOp(L2NormalizationParam param) {
-  return new L2NormalizationOp(param);
+Operator* CreateOp(L2NormalizationParam param, int dtype) {
+  Operator* op = NULL;
+  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
+    op = new L2NormalizationOp<DType>(param);
+  });
+  return op;
 }
 
 // DO_BIND_DISPATCH comes from static_operator_common.h
-Operator* L2NormalizationProp::CreateOperator(Context ctx) const {
-  DO_BIND_DISPATCH(CreateOp, param_);
+Operator* L2NormalizationProp::CreateOperatorEx(Context ctx, std::vector<TShape> *in_shape,
+                                                std::vector<int> *in_type) const {
+  std::vector<TShape> out_shape, aux_shape;
 
 Review comment:
   these checks are not necessary
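The MSHADOW_REAL_TYPE_SWITCH pattern in the diff above dispatches operator construction on a runtime dtype. An analogous dispatch in Python terms (names are illustrative, not MXNet API) looks like:

```python
import numpy as np

# Toy stand-in for the dtype-specialized operator; in the C++ diff this is
# L2NormalizationOp<DType>, instantiated inside MSHADOW_REAL_TYPE_SWITCH.
class L2NormOp:
    def __init__(self, dtype):
        self.dtype = dtype

    def __call__(self, x):
        x = np.asarray(x, dtype=self.dtype)
        norm = np.sqrt(np.sum(x * x))   # L2 norm over all elements
        return x / norm

# float16 support is what PR #10078 is adding to the switch
_SUPPORTED = (np.float16, np.float32, np.float64)

def create_op(dtype):
    """Runtime-dtype dispatch, analogous to the macro switch in the diff."""
    if dtype not in _SUPPORTED:
        raise TypeError(f"unsupported dtype: {dtype}")
    return L2NormOp(dtype)
```

The macro expands to a switch over dtype codes and instantiates the template for the matching DType, which is why the C++ version builds the op inside the switch body.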


[GitHub] piiswrong commented on issue #10074: Add vocabulary and embedding

2018-03-13 Thread GitBox
piiswrong commented on issue #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#issuecomment-372623377
 
 
   This should be in gluon.
   If we put vocab and embedding in mxnet.text and textdataset in gluon, it's 
going to be really confusing.


[GitHub] piiswrong closed pull request #10058: Adding back comments to index.md that cause nightly test to fail

2018-03-13 Thread GitBox
piiswrong closed pull request #10058: Adding back comments to index.md that 
cause nightly test to fail
URL: https://github.com/apache/incubator-mxnet/pull/10058
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/install/index.md b/docs/install/index.md
index ae8cc698b21..e4767618e65 100644
--- a/docs/install/index.md
+++ b/docs/install/index.md
@@ -269,6 +269,7 @@ pip install graphviz
 
 
 
+
 
 
 
@@ -492,6 +493,7 @@ pip install graphviz
  
  
  
+-
 
 
 


 


[incubator-mxnet] branch master updated: Adding back comments to index.md that cause nightly test to fail (#10058)

2018-03-13 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 0bcf334  Adding back comments to index.md that cause nightly test to 
fail (#10058)
0bcf334 is described below

commit 0bcf3346bfbec903270434ff2dd4e02b3c9a214c
Author: mbaijal <30911248+mbai...@users.noreply.github.com>
AuthorDate: Tue Mar 13 03:47:58 2018 -0700

Adding back comments to index.md that cause nightly test to fail (#10058)
---
 docs/install/index.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/install/index.md b/docs/install/index.md
index ae8cc69..e476761 100644
--- a/docs/install/index.md
+++ b/docs/install/index.md
@@ -269,6 +269,7 @@ pip install graphviz
 
 
 
+
 
 
 
@@ -492,6 +493,7 @@ pip install graphviz
  
  
  
+-
 
 
 

-- 
To stop receiving notification emails like this one, please contact
j...@apache.org.


[GitHub] CoinCheung commented on issue #10000: [MXNET-80] Fix average pooling kernel size assignment error

2018-03-13 Thread GitBox
CoinCheung commented on issue #10000: [MXNET-80] Fix average pooling kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/10000#issuecomment-372624274
 
 
   I tried but failed. So what should be the correct behavior when 
kernel.ndim() is 0? @sxjscience 
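To make the question concrete: the convention quoted in this thread is that with global_pool every spatial dimension collapses to 1 regardless of the kernel, so a kernel of ndim 0 should still yield a valid shape. A toy statement of that convention (illustrative only, not the real operator):

```python
def expected_global_pool_shape(data_shape):
    # N and C pass through; each spatial dim collapses to 1,
    # so an unspecified (ndim-0) kernel is irrelevant in this branch.
    return tuple(list(data_shape[:2]) + [1] * (len(data_shape) - 2))
```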


[GitHub] pengzhao-intel commented on issue #8532: mxnet-mkl (v0.12.0) crash when using (conda-installed) numpy with MKL

2018-03-13 Thread GitBox
pengzhao-intel commented on issue #8532: mxnet-mkl (v0.12.0) crash when using 
(conda-installed) numpy with MKL
URL: 
https://github.com/apache/incubator-mxnet/issues/8532#issuecomment-372671112
 
 
   @fhieber we have tried the latest release and the problem is gone. 
   Would you mind trying again?
   
   ```
   
   [chenxiny@mlt-ace Downloads]$ conda activate
   (base) [chenxiny@mlt-ace Downloads]$ pip install 
mxnet_cu80mkl-1.1.0-py2.py3-none-manylinux1_x86_64.whl
   Processing ./mxnet_cu80mkl-1.1.0-py2.py3-none-manylinux1_x86_64.whl
   Requirement already satisfied: graphviz==0.8.1 in 
/home/chenxiny/anaconda3/lib/python3.6/site-packages (from mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: requests==2.18.4 in 
/home/chenxiny/anaconda3/lib/python3.6/site-packages (from mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: numpy<=1.13.3 in 
/home/chenxiny/anaconda3/lib/python3.6/site-packages (from mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: chardet<3.1.0,>=3.0.2 in 
/home/chenxiny/anaconda3/lib/python3.6/site-packages (from 
requests==2.18.4->mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: idna<2.7,>=2.5 in 
/home/chenxiny/anaconda3/lib/python3.6/site-packages (from 
requests==2.18.4->mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: urllib3<1.23,>=1.21.1 in 
/home/chenxiny/anaconda3/lib/python3.6/site-packages (from 
requests==2.18.4->mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: certifi>=2017.4.17 in 
/home/chenxiny/anaconda3/lib/python3.6/site-packages (from 
requests==2.18.4->mxnet-cu80mkl==1.1.0)
   Installing collected packages: mxnet-cu80mkl
   Successfully installed mxnet-cu80mkl-1.1.0
   (base) [chenxiny@mlt-ace Downloads]$ python
   Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49)
   [GCC 7.2.0] on linux
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet as mx
   
   Intel python distribution
   [chenxiny@mlt-ace Downloads]$ conda activate idp
   (idp) [chenxiny@mlt-ace Downloads]$ pip install 
mxnet_cu80mkl-1.1.0-py2.py3-none-manylinux1_x86_64.whl
   Processing ./mxnet_cu80mkl-1.1.0-py2.py3-none-manylinux1_x86_64.whl
   Requirement already satisfied: numpy<=1.13.3 in 
/home/chenxiny/anaconda3/envs/idp/lib/python3.6/site-packages (from 
mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: requests==2.18.4 in 
/home/chenxiny/anaconda3/envs/idp/lib/python3.6/site-packages (from 
mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: graphviz==0.8.1 in 
/home/chenxiny/anaconda3/envs/idp/lib/python3.6/site-packages (from 
mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: chardet<3.1.0,>=3.0.2 in 
/home/chenxiny/anaconda3/envs/idp/lib/python3.6/site-packages (from 
requests==2.18.4->mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: idna<2.7,>=2.5 in 
/home/chenxiny/anaconda3/envs/idp/lib/python3.6/site-packages (from 
requests==2.18.4->mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: urllib3<1.23,>=1.21.1 in 
/home/chenxiny/anaconda3/envs/idp/lib/python3.6/site-packages (from 
requests==2.18.4->mxnet-cu80mkl==1.1.0)
   Requirement already satisfied: certifi>=2017.4.17 in 
/home/chenxiny/anaconda3/envs/idp/lib/python3.6/site-packages/certifi-2017.7.27.1-py3.6.egg
 (from requests==2.18.4->mxnet-cu80mkl==1.1.0)
   Installing collected packages: mxnet-cu80mkl
   Successfully installed mxnet-cu80mkl-1.1.0
   (idp) [chenxiny@mlt-ace Downloads]$ python
   Python 3.6.3 |Intel Corporation| (default, Oct 16 2017, 15:28:36)
   [GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
   Type "help", "copyright", "credits" or "license" for more information.
   Intel(R) Distribution for Python is brought to you by Intel Corporation.
   Please check out: https://software.intel.com/en-us/python-distribution
   >>> import mxnet as mx
   
   
   ```


[GitHub] kobenaxie commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-13 Thread GitBox
kobenaxie commented on a change in pull request #10074: Add vocabulary and 
embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174160853
 
 

 ##
 File path: python/mxnet/text/embedding.py
 ##
 @@ -38,8 +38,12 @@
 
 def register(embedding_cls):
 """Registers a new token embedding.
+
+
 Once an embedding is registered, we can create an instance of this 
embedding with
-:func:`~mxnet.contrib.text.embedding.create`.
+:func:`~mxnet.text.embedding.create`.
+
+
 Examples
 
 >>> @mxnet.contrib.text.embedding.register
 
 Review comment:
   `>>> @mxnet.text.embedding.register`
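The register/create pair being renamed in this diff follows a standard registry pattern. A minimal sketch of how such a pair typically works (illustrative, not the mxnet.text source):

```python
_REGISTRY = {}

def register(embedding_cls):
    """Decorator: register an embedding class under its lowercase name."""
    _REGISTRY[embedding_cls.__name__.lower()] = embedding_cls
    return embedding_cls

def create(name, **kwargs):
    """Instantiate a previously registered embedding by name."""
    if name not in _REGISTRY:
        raise KeyError(f"unknown embedding {name!r}; known: {sorted(_REGISTRY)}")
    return _REGISTRY[name](**kwargs)

@register
class GloVe:
    def __init__(self, dim=50):
        self.dim = dim
```

With this pattern, `create('glove', dim=100)` resolves the class by name, which is why the docstring example must reference the module where `register` actually lives.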


[GitHub] marcoabreu opened a new issue #10086: test_autograd.test_unary_func @ Python3: MKLDNN-CPU

2018-03-13 Thread GitBox
marcoabreu opened a new issue #10086: test_autograd.test_unary_func @ Python3: 
MKLDNN-CPU
URL: https://github.com/apache/incubator-mxnet/issues/10086
 
 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/474/pipeline/485/
   
   ```
   [INFO] Setting module np/mx/python random seeds, use 
MXNET_MODULE_SEED=56561434 to reproduce.
   
   test_autograd.test_unary_func ... /work/runtime_functions.sh: line 312: 
6 Segmentation fault  (core dumped) nosetests-3.4 --verbose 
tests/python/unittest
   
   build.py: 2018-03-11 11:21:48,801 Running of command in container failed: 
docker run --rm -v 
/home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v 
/home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 
1001:1001 mxnet/build.ubuntu_cpu /work/runtime_functions.sh 
unittest_ubuntu_python3_cpu
   
   build.py: 2018-03-11 11:21:48,801 You can try to get into the container by 
using the following command: docker run --rm -v 
/home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v 
/home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 
1001:1001 -ti --entrypoint bash mxnet/build.ubuntu_cpu 
/work/runtime_functions.sh unittest_ubuntu_python3_cpu
   
   Traceback (most recent call last):
   
 File "ci/build.py", line 179, in <module>
   
   sys.exit(main())
   
 File "ci/build.py", line 159, in main
   
   container_run(platform, docker_binary, command)
   
 File "ci/build.py", line 110, in container_run
   
   raise subprocess.CalledProcessError(ret, cmd)
   
   subprocess.CalledProcessError: Command 'docker run --rm -v 
/home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v 
/home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 
1001:1001 mxnet/build.ubuntu_cpu /work/runtime_functions.sh 
unittest_ubuntu_python3_cpu' returned non-zero exit status 139
   
   script returned exit code 1
   ```


[GitHub] tqchen commented on issue #10083: [TENSOR] Fix DLTensor conversion for int64

2018-03-13 Thread GitBox
tqchen commented on issue #10083: [TENSOR] Fix DLTensor conversion for int64
URL: https://github.com/apache/incubator-mxnet/pull/10083#issuecomment-372718542
 
 
   The layer norm test case is not related to the changes introduced here.


[GitHub] tqchen commented on issue #10083: [TENSOR] Fix DLTensor conversion for int64

2018-03-13 Thread GitBox
tqchen commented on issue #10083: [TENSOR] Fix DLTensor conversion for int64
URL: https://github.com/apache/incubator-mxnet/pull/10083#issuecomment-372719428
 
 
   cc @sxjscience, who introduced the test; here is the old error log: 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-10083/1/pipeline


[GitHub] marcoabreu commented on a change in pull request #9963: [MXNET-34] Onnx Module to import onnx models into mxnet

2018-03-13 Thread GitBox
marcoabreu commented on a change in pull request #9963: [MXNET-34] Onnx Module 
to import onnx models into mxnet
URL: https://github.com/apache/incubator-mxnet/pull/9963#discussion_r174191440
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_test.py
 ##
 @@ -0,0 +1,138 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Tests for individual operators
+This module contains operator tests which currently do not exist on
+ONNX backend test framework. Once we have PRs on the ONNX repo and get
+those PRs merged, this file will get EOL'ed.
+"""
+from __future__ import absolute_import
+import sys
+import os
+import unittest
+import logging
+import numpy as np
+import numpy.testing as npt
+from onnx import helper
+import backend as mxnet_backend
+CURR_PATH = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
+sys.path.insert(0, os.path.join(CURR_PATH, '../../python/unittest'))
+from common import with_seed
+
+# set up logger
+logging.basicConfig()
+LOGGER = logging.getLogger()
+LOGGER.setLevel(logging.INFO)
 
 Review comment:
   Please remove this
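The block under review configures the root logger at import time, which leaks into every other test in the run. A common, less intrusive alternative (a sketch of the general idiom, not a prescribed fix for this PR) is a module-scoped logger that leaves configuration to the test runner:

```python
import logging

# Obtain a module-scoped logger without reconfiguring the root logger at
# import time; the test runner (or application) decides handlers and levels.
LOGGER = logging.getLogger(__name__)

def noisy_helper(x):
    # Silent unless the runner enables DEBUG for this module
    LOGGER.debug("processing %r", x)
    return x * 2
```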


[GitHub] marcoabreu commented on a change in pull request #9963: [MXNET-34] Onnx Module to import onnx models into mxnet

2018-03-13 Thread GitBox
marcoabreu commented on a change in pull request #9963: [MXNET-34] Onnx Module 
to import onnx models into mxnet
URL: https://github.com/apache/incubator-mxnet/pull/9963#discussion_r174191587
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_test.py
 ##
 @@ -0,0 +1,138 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Tests for individual operators
+This module contains operator tests which currently do not exist on
+ONNX backend test framework. Once we have PRs on the ONNX repo and get
+those PRs merged, this file will get EOL'ed.
+"""
+from __future__ import absolute_import
+import sys
+import os
+import unittest
+import logging
+import numpy as np
+import numpy.testing as npt
+from onnx import helper
+import backend as mxnet_backend
+CURR_PATH = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
+sys.path.insert(0, os.path.join(CURR_PATH, '../../python/unittest'))
+from common import with_seed
+
+# set up logger
+logging.basicConfig()
+LOGGER = logging.getLogger()
+LOGGER.setLevel(logging.INFO)
+
+@with_seed()
+def test_reduce_max():
+    """Test for ReduceMax operator"""
+    node_def = helper.make_node("ReduceMax", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.max(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op)
+
+@with_seed()
+def test_reduce_mean():
+    """Test for ReduceMean operator"""
+    node_def = helper.make_node("ReduceMean", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.mean(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_reduce_min():
+    """Test for ReduceMin operator"""
+    node_def = helper.make_node("ReduceMin", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.min(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op)
+
+@with_seed()
+def test_reduce_sum():
+    """Test for ReduceSum operator"""
+    node_def = helper.make_node("ReduceSum", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.sum(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_reduce_prod():
+    """Test for ReduceProd operator"""
+    node_def = helper.make_node("ReduceProd", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.prod(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_squeeze():
+    """Test for Squeeze operator"""
+    node_def = helper.make_node("Squeeze", ["input1"], ["output"], axes=[1, 3])
+    input1 = np.random.ranf([3, 1, 2, 1, 4]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    npt.assert_almost_equal(output, np.squeeze(input1, axis=[1, 3]))
+
+def test_super_resolution():
 
 Review comment:
   test_super_resolution_example


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
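The reduce tests quoted above compare an ONNX node run through the mxnet backend against a NumPy reference computed with `keepdims`. The NumPy side of that check can be sketched standalone (no mxnet or onnx needed); `np.random.random` stands in for the `ranf` call in the quoted tests:

```python
import numpy as np

# Reference computation mirroring the quoted tests: reducing over both
# axes with keepdims=True keeps the result rank-2 with shape (1, 1).
input1 = np.random.random((3, 10)).astype("float32")

reduced_max = np.max(input1, axis=(1, 0), keepdims=True)
reduced_sum = np.sum(input1, axis=(1, 0), keepdims=True)
```

A backend output with any other shape or value would fail the `npt.assert_almost_equal` calls in the quoted tests.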


[GitHub] sxjscience commented on issue #10083: [TENSOR] Fix DLTensor conversion for int64

2018-03-13 Thread GitBox
sxjscience commented on issue #10083: [TENSOR] Fix DLTensor conversion for int64
URL: https://github.com/apache/incubator-mxnet/pull/10083#issuecomment-372725480
 
 
   I need to revise the gradient test of layer norm


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
haojin2 commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174202873
 
 

 ##
 File path: src/operator/l2_normalization.cc
 ##
 @@ -26,13 +26,22 @@
 namespace mxnet {
 namespace op {
 template<>
-Operator* CreateOp<cpu>(L2NormalizationParam param) {
-  return new L2NormalizationOp<cpu>(param);
+Operator* CreateOp<cpu>(L2NormalizationParam param, int dtype) {
+  Operator* op = NULL;
+  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
+    op = new L2NormalizationOp<cpu, DType>(param);
+  });
+  return op;
 }
 
 // DO_BIND_DISPATCH comes from static_operator_common.h
-Operator* L2NormalizationProp::CreateOperator(Context ctx) const {
-  DO_BIND_DISPATCH(CreateOp, param_);
+Operator* L2NormalizationProp::CreateOperatorEx(Context ctx, std::vector<TShape> *in_shape,
+                                                std::vector<int> *in_type) const {
+  std::vector<TShape> out_shape, aux_shape;
 
 Review comment:
   Do you mean the checks for InferType and InferShape?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
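The quoted C++ uses `MSHADOW_REAL_TYPE_SWITCH` to pick a concrete `DType` for the runtime flag and instantiate the operator with it. A plain-Python sketch of the same flag-keyed dispatch follows; `L2NormOp` and `create_op` are hypothetical stand-ins, not the MXNet API, and only the real-type flags (0/1/2 for float32/float64/float16, per the flag table elsewhere in this thread) are mapped:

```python
# Sketch of dtype-keyed operator creation, mirroring the quoted CreateOp:
# a runtime type flag selects the concrete dtype the operator is built for.

class L2NormOp:
    def __init__(self, dtype):
        self.dtype = dtype

# mshadow real-type flags -> dtype names
REAL_TYPES = {0: "float32", 1: "float64", 2: "float16"}

def create_op(param, dtype_flag):
    # Unsupported flags fail loudly, like the macro's default branch.
    if dtype_flag not in REAL_TYPES:
        raise TypeError("unsupported dtype flag: %d" % dtype_flag)
    return L2NormOp(REAL_TYPES[dtype_flag])
```

This is why the PR adds float16 support by routing creation through the switch rather than hard-coding a single dtype.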


[GitHub] marcoabreu commented on a change in pull request #9963: [MXNET-34] Onnx Module to import onnx models into mxnet

2018-03-13 Thread GitBox
marcoabreu commented on a change in pull request #9963: [MXNET-34] Onnx Module 
to import onnx models into mxnet
URL: https://github.com/apache/incubator-mxnet/pull/9963#discussion_r174206407
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_test.py
 ##
 @@ -0,0 +1,133 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Tests for individual operators
+This module contains operator tests which currently do not exist on
+ONNX backend test framework. Once we have PRs on the ONNX repo and get
+those PRs merged, this file will get EOL'ed.
+"""
+from __future__ import absolute_import
+import sys
+import os
+import unittest
+import logging
+import numpy as np
+import numpy.testing as npt
+from onnx import helper
+import backend as mxnet_backend
+CURR_PATH = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
+sys.path.insert(0, os.path.join(CURR_PATH, '../../python/unittest'))
+from common import with_seed
+
+@with_seed()
+def test_reduce_max():
+    """Test for ReduceMax operator"""
+    node_def = helper.make_node("ReduceMax", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.max(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op)
+
+@with_seed()
+def test_reduce_mean():
+    """Test for ReduceMean operator"""
+    node_def = helper.make_node("ReduceMean", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.mean(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_reduce_min():
+    """Test for ReduceMin operator"""
+    node_def = helper.make_node("ReduceMin", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.min(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op)
+
+@with_seed()
+def test_reduce_sum():
+    """Test for ReduceSum operator"""
+    node_def = helper.make_node("ReduceSum", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.sum(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_reduce_prod():
+    """Test for ReduceProd operator"""
+    node_def = helper.make_node("ReduceProd", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.prod(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_squeeze():
+    """Test for Squeeze operator"""
+    node_def = helper.make_node("Squeeze", ["input1"], ["output"], axes=[1, 3])
+    input1 = np.random.ranf([3, 1, 2, 1, 4]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    npt.assert_almost_equal(output, np.squeeze(input1, axis=[1, 3]))
+
+def test_super_resolution_example():
+    """Test the super resolution example in the example/onnx folder"""
+    sys.path.insert(0, os.path.join(CURR_PATH, '../../../example/onnx/'))
+    import super_resolution
+
+    sym, params = super_resolution.import_onnx()
+    assert sym is not None
+    assert params is not None
+
+    inputs = sym.list_inputs()
+    assert len(inputs) == 9
+    for i, input_param in enumerate(['param_7', 'param_5', 'param_3', 'param_1',
+                                     'input_0', 'param_0', 'param_2', 'param_4', 'param_6']):
+        assert inputs[i] == input_param
+
+    assert len(sym.list_outputs()) == 1
+    assert sym.list_outputs()[0] == 'reshape5_output'
+
+    attrs_keys = sym.attr_dict().keys()
+    assert len(attrs_keys) == 19
+    for i, key_item in enumerate(['reshape4', 'param_5', 'param_4', 'param_7',
+                                  'param_6', 'param_1', '

[GitHub] marcoabreu commented on a change in pull request #9963: [MXNET-34] Onnx Module to import onnx models into mxnet

2018-03-13 Thread GitBox
marcoabreu commented on a change in pull request #9963: [MXNET-34] Onnx Module 
to import onnx models into mxnet
URL: https://github.com/apache/incubator-mxnet/pull/9963#discussion_r174206700
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_test.py
 ##
 @@ -0,0 +1,133 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Tests for individual operators
+This module contains operator tests which currently do not exist on
+ONNX backend test framework. Once we have PRs on the ONNX repo and get
+those PRs merged, this file will get EOL'ed.
+"""
+from __future__ import absolute_import
+import sys
+import os
+import unittest
+import logging
+import numpy as np
+import numpy.testing as npt
+from onnx import helper
+import backend as mxnet_backend
+CURR_PATH = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
+sys.path.insert(0, os.path.join(CURR_PATH, '../../python/unittest'))
+from common import with_seed
+
+@with_seed()
+def test_reduce_max():
+    """Test for ReduceMax operator"""
+    node_def = helper.make_node("ReduceMax", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.max(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op)
+
+@with_seed()
+def test_reduce_mean():
+    """Test for ReduceMean operator"""
+    node_def = helper.make_node("ReduceMean", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.mean(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_reduce_min():
+    """Test for ReduceMin operator"""
+    node_def = helper.make_node("ReduceMin", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.min(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op)
+
+@with_seed()
+def test_reduce_sum():
+    """Test for ReduceSum operator"""
+    node_def = helper.make_node("ReduceSum", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.sum(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_reduce_prod():
+    """Test for ReduceProd operator"""
+    node_def = helper.make_node("ReduceProd", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+    input1 = np.random.ranf([3, 10]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    numpy_op = np.prod(input1, axis=(1, 0), keepdims=True)
+    npt.assert_almost_equal(output, numpy_op, decimal=5)
+
+@with_seed()
+def test_squeeze():
+    """Test for Squeeze operator"""
+    node_def = helper.make_node("Squeeze", ["input1"], ["output"], axes=[1, 3])
+    input1 = np.random.ranf([3, 1, 2, 1, 4]).astype("float32")
+    output = mxnet_backend.run_node(node_def, [input1])[0]
+    npt.assert_almost_equal(output, np.squeeze(input1, axis=[1, 3]))
+
+def test_super_resolution_example():
+    """Test the super resolution example in the example/onnx folder"""
+    sys.path.insert(0, os.path.join(CURR_PATH, '../../../example/onnx/'))
+    import super_resolution
+
+    sym, params = super_resolution.import_onnx()
+    assert sym is not None
+    assert params is not None
+
+    inputs = sym.list_inputs()
+    assert len(inputs) == 9
+    for i, input_param in enumerate(['param_7', 'param_5', 'param_3', 'param_1',
+                                     'input_0', 'param_0', 'param_2', 'param_4', 'param_6']):
+        assert inputs[i] == input_param
+
+    assert len(sym.list_outputs()) == 1
+    assert sym.list_outputs()[0] == 'reshape5_output'
+
+    attrs_keys = sym.attr_dict().keys()
+    assert len(attrs_keys) == 19
+    for i, key_item in enumerate(['reshape4', 'param_5', 'param_4', 'param_7',
+                                  'param_6', 'param_1', '

[incubator-mxnet] branch master updated: [TENSOR] Fix DLTensor conversion for int64 (#10083)

2018-03-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new a9c1717  [TENSOR] Fix DLTensor conversion for int64 (#10083)
a9c1717 is described below

commit a9c1717f6673a2194a1f82ba7d74fb276f0ef24e
Author: Tianqi Chen 
AuthorDate: Tue Mar 13 10:21:53 2018 -0700

[TENSOR] Fix DLTensor conversion for int64 (#10083)

* [TENSOR] Fix DLTensor conversion for int64

* trigger build
---
 include/mxnet/tensor_blob.h | 23 +--
 tests/python/gpu/test_tvm_bridge.py | 21 +++--
 2 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/include/mxnet/tensor_blob.h b/include/mxnet/tensor_blob.h
index 59c1eac..6f604a5 100755
--- a/include/mxnet/tensor_blob.h
+++ b/include/mxnet/tensor_blob.h
@@ -322,16 +322,19 @@ class TBlob {
 
  private:
   static DLDataType DTypeTransform(int type_flag) {
-    static std::unordered_map<int, DLDataType>
-      MSHADOW_DTYPE_TO_DLPACK_DTYPE = {
-        {0, {2, 32, 1}},  // Float32
-        {1, {2, 64, 1}},  // Float64
-        {2, {2, 16, 1}},  // Float16
-        {3, {1,  8, 1}},  // UInt8
-        {4, {0, 32, 1}},  // Int32
-        {5, {0,  8, 1}}   // Int8
-      };
-    return MSHADOW_DTYPE_TO_DLPACK_DTYPE[type_flag];
+    switch (type_flag) {
+      case mshadow::kFloat32: return DLDataType{kDLFloat, 32, 1};
+      case mshadow::kFloat64: return DLDataType{kDLFloat, 64, 1};
+      case mshadow::kFloat16: return DLDataType{kDLFloat, 16, 1};
+      case mshadow::kUint8: return DLDataType{kDLUInt, 8, 1};
+      case mshadow::kInt32: return DLDataType{kDLInt, 32, 1};
+      case mshadow::kInt8: return DLDataType{kDLInt, 8, 1};
+      case mshadow::kInt64: return DLDataType{kDLInt, 64, 1};
+      default: {
+        LOG(FATAL) << "Unknown type_flag=" << type_flag;
+        return DLDataType();
+      }
+    }
   }
 
   inline void SetDLTensor(int dev_mask, int dev_id) {
diff --git a/tests/python/gpu/test_tvm_bridge.py b/tests/python/gpu/test_tvm_bridge.py
index 292b9d9..69a713d 100644
--- a/tests/python/gpu/test_tvm_bridge.py
+++ b/tests/python/gpu/test_tvm_bridge.py
@@ -30,13 +30,13 @@ def test_tvm_bridge():
         logging.warn("TVM bridge test skipped because TVM is missing...")
         return
 
-    def check(target):
+    def check(target, dtype):
         shape = (20,)
         scale = tvm.var("scale", dtype="float32")
-        x = tvm.placeholder(shape)
-        y = tvm.placeholder(shape)
+        x = tvm.placeholder(shape, dtype=dtype)
+        y = tvm.placeholder(shape, dtype=dtype)
         z = tvm.compute(shape, lambda i: x[i] + y[i])
-        zz = tvm.compute(shape, lambda *i: z(*i) * scale)
+        zz = tvm.compute(shape, lambda *i: z(*i) * scale.astype(dtype))
         ctx = mx.gpu(0) if target == "cuda" else mx.cpu(0)
         target = tvm.target.create(target)
 
@@ -47,17 +47,18 @@ def test_tvm_bridge():
 
         # get a mxnet version
         mxf = tvm.contrib.mxnet.to_mxnet_func(f, const_loc=[0, 1])
-        xx = mx.nd.uniform(shape=shape, ctx=ctx)
-        yy = mx.nd.uniform(shape=shape, ctx=ctx)
-        zz = mx.nd.empty(shape=shape, ctx=ctx)
+        xx = mx.nd.uniform(shape=shape, ctx=ctx).astype(dtype)
+        yy = mx.nd.uniform(shape=shape, ctx=ctx).astype(dtype)
+        zz = mx.nd.empty(shape=shape, ctx=ctx).astype(dtype)
         # invoke myf: this runs in mxnet engine
         mxf(xx, yy, zz, 10.0)
         np.testing.assert_allclose(
             zz.asnumpy(), (xx.asnumpy() + yy.asnumpy()) * 10)
 
-    check("llvm")
-    check("cuda")
-
+    for tgt in ["llvm", "cuda"]:
+        for dtype in ["int8", "uint8", "int64",
+                      "float32", "float64"]:
+            check(tgt, dtype)
 
 
 if __name__ == "__main__":

-- 
To stop receiving notification emails like this one, please contact
zhash...@apache.org.
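The commit above replaces a static lookup map with a switch. The mapping itself (mshadow type flag to a DLPack (type code, bits, lanes) triple) can be sketched in Python; the flag values 0-5 and the DLPack codes kDLInt=0, kDLUInt=1, kDLFloat=2 come straight from the removed map, while the value 6 for kInt64 is an assumption for illustration:

```python
# mshadow type flag -> DLPack DLDataType (code, bits, lanes), following
# the table in the quoted commit.
KDL_INT, KDL_UINT, KDL_FLOAT = 0, 1, 2

DTYPE_TO_DLPACK = {
    0: (KDL_FLOAT, 32, 1),  # kFloat32
    1: (KDL_FLOAT, 64, 1),  # kFloat64
    2: (KDL_FLOAT, 16, 1),  # kFloat16
    3: (KDL_UINT, 8, 1),    # kUint8
    4: (KDL_INT, 32, 1),    # kInt32
    5: (KDL_INT, 8, 1),     # kInt8
    6: (KDL_INT, 64, 1),    # kInt64 -- the case this commit adds (flag value assumed)
}

def dtype_transform(type_flag):
    # Unknown flags fail loudly, matching the LOG(FATAL) default branch.
    if type_flag not in DTYPE_TO_DLPACK:
        raise ValueError("Unknown type_flag=%d" % type_flag)
    return DTYPE_TO_DLPACK[type_flag]
```

The bug being fixed is visible in the sketch: the old map simply had no entry for int64, so int64 tensors could not be converted.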


[GitHub] szha closed pull request #10083: [TENSOR] Fix DLTensor conversion for int64

2018-03-13 Thread GitBox
szha closed pull request #10083: [TENSOR] Fix DLTensor conversion for int64
URL: https://github.com/apache/incubator-mxnet/pull/10083
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/include/mxnet/tensor_blob.h b/include/mxnet/tensor_blob.h
index 59c1eacb2c5..6f604a5bb8d 100755
--- a/include/mxnet/tensor_blob.h
+++ b/include/mxnet/tensor_blob.h
@@ -322,16 +322,19 @@ class TBlob {
 
  private:
   static DLDataType DTypeTransform(int type_flag) {
-    static std::unordered_map<int, DLDataType>
-      MSHADOW_DTYPE_TO_DLPACK_DTYPE = {
-        {0, {2, 32, 1}},  // Float32
-        {1, {2, 64, 1}},  // Float64
-        {2, {2, 16, 1}},  // Float16
-        {3, {1,  8, 1}},  // UInt8
-        {4, {0, 32, 1}},  // Int32
-        {5, {0,  8, 1}}   // Int8
-      };
-    return MSHADOW_DTYPE_TO_DLPACK_DTYPE[type_flag];
+    switch (type_flag) {
+      case mshadow::kFloat32: return DLDataType{kDLFloat, 32, 1};
+      case mshadow::kFloat64: return DLDataType{kDLFloat, 64, 1};
+      case mshadow::kFloat16: return DLDataType{kDLFloat, 16, 1};
+      case mshadow::kUint8: return DLDataType{kDLUInt, 8, 1};
+      case mshadow::kInt32: return DLDataType{kDLInt, 32, 1};
+      case mshadow::kInt8: return DLDataType{kDLInt, 8, 1};
+      case mshadow::kInt64: return DLDataType{kDLInt, 64, 1};
+      default: {
+        LOG(FATAL) << "Unknown type_flag=" << type_flag;
+        return DLDataType();
+      }
+    }
   }
 
   inline void SetDLTensor(int dev_mask, int dev_id) {
diff --git a/tests/python/gpu/test_tvm_bridge.py b/tests/python/gpu/test_tvm_bridge.py
index 292b9d91e5f..69a713d6a28 100644
--- a/tests/python/gpu/test_tvm_bridge.py
+++ b/tests/python/gpu/test_tvm_bridge.py
@@ -30,13 +30,13 @@ def test_tvm_bridge():
         logging.warn("TVM bridge test skipped because TVM is missing...")
         return
 
-    def check(target):
+    def check(target, dtype):
         shape = (20,)
         scale = tvm.var("scale", dtype="float32")
-        x = tvm.placeholder(shape)
-        y = tvm.placeholder(shape)
+        x = tvm.placeholder(shape, dtype=dtype)
+        y = tvm.placeholder(shape, dtype=dtype)
         z = tvm.compute(shape, lambda i: x[i] + y[i])
-        zz = tvm.compute(shape, lambda *i: z(*i) * scale)
+        zz = tvm.compute(shape, lambda *i: z(*i) * scale.astype(dtype))
         ctx = mx.gpu(0) if target == "cuda" else mx.cpu(0)
         target = tvm.target.create(target)
 
@@ -47,17 +47,18 @@ def check(target):
 
         # get a mxnet version
         mxf = tvm.contrib.mxnet.to_mxnet_func(f, const_loc=[0, 1])
-        xx = mx.nd.uniform(shape=shape, ctx=ctx)
-        yy = mx.nd.uniform(shape=shape, ctx=ctx)
-        zz = mx.nd.empty(shape=shape, ctx=ctx)
+        xx = mx.nd.uniform(shape=shape, ctx=ctx).astype(dtype)
+        yy = mx.nd.uniform(shape=shape, ctx=ctx).astype(dtype)
+        zz = mx.nd.empty(shape=shape, ctx=ctx).astype(dtype)
         # invoke myf: this runs in mxnet engine
         mxf(xx, yy, zz, 10.0)
         np.testing.assert_allclose(
             zz.asnumpy(), (xx.asnumpy() + yy.asnumpy()) * 10)
 
-    check("llvm")
-    check("cuda")
-
+    for tgt in ["llvm", "cuda"]:
+        for dtype in ["int8", "uint8", "int64",
+                      "float32", "float64"]:
+            check(tgt, dtype)
 
 
 if __name__ == "__main__":


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
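The fix above also widens `check` to take a dtype and sweeps target/dtype pairs instead of making two hard-coded calls. The sweep pattern on its own is plain Python; the `check` body here is a stand-in that only records the combinations, not the real TVM/mxnet test:

```python
# Parametrization pattern from the quoted commit: run one check function
# over the cross product of targets and dtypes.
results = []

def check(target, dtype):
    # Stand-in for the real bridge check; records what would be tested.
    results.append((target, dtype))

for tgt in ["llvm", "cuda"]:
    for dtype in ["int8", "uint8", "int64", "float32", "float64"]:
        check(tgt, dtype)
```

Each new dtype added to the list is then exercised on every target with no further changes to the test body.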


[GitHub] yzhliu commented on issue #8971: Clojure Library for mxnet

2018-03-13 Thread GitBox
yzhliu commented on issue #8971: Clojure Library for mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/8971#issuecomment-372748901
 
 
   @gigasquid I do not have a mac with GPU, would be good if you could check 
why it doesn't work - not an urgent issue.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on issue #10042: Gluon dataloader crash on speech recognition training

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10042: Gluon dataloader crash on speech 
recognition training
URL: 
https://github.com/apache/incubator-mxnet/issues/10042#issuecomment-372750505
 
 
   Created JIRA work item: https://issues.apache.org/jira/browse/MXNET-86


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya opened a new issue #10087: Flaky test on Ubuntu: test_operator_gpu.test_batchnorm_with_type

2018-03-13 Thread GitBox
anirudhacharya opened a new issue #10087: Flaky test on Ubuntu: 
test_operator_gpu.test_batchnorm_with_type
URL: https://github.com/apache/incubator-mxnet/issues/10087
 
 
   ## Description
   Flaky test on ubuntu_gpu for the test 
test_operator_gpu.test_batchnorm_with_type. It is a precision error.
   
   ## Environment info (Required)
   
   Package used (Python/R/Scala/Julia):
   Python
   
   MXNet commit hash:
   8bf1ff1ad3e3d3f07e62043560e933848b440f57
   
   Link to the CI run log:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/incubator-mxnet/branches/PR-9963/runs/47/nodes/486/log/?start=0
   
   ## Error Message:
   
   > FAIL: test_operator_gpu.test_batchnorm_with_type
   > --
   > Traceback (most recent call last):
   >   File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in 
runTest
   > self.test(*self.arg)
   >   File "/work/mxnet/tests/python/gpu/../unittest/common.py", line 157, in 
test_new
   > orig_test(*args, **kwargs)
   >   File "/work/mxnet/tests/python/gpu/test_operator_gpu.py", line 320, in 
test_batchnorm_with_type
   > check_consistency(sym, ctx_list_v2_2D)
   >   File "/work/mxnet/python/mxnet/test_utils.py", line 1346, in 
check_consistency
   > raise e
   >   File "/work/mxnet/python/mxnet/test_utils.py", line 1341, in 
check_consistency
   > equal_nan=equal_nan)
   >   File "/work/mxnet/python/mxnet/test_utils.py", line 493, in 
assert_almost_equal
   > raise AssertionError(msg)
   > AssertionError: 
   > Items are not equal:
   > Error 1.588932 exceeds tolerance rtol=0.10, atol=0.10.  Location 
of maximum error:(1,), a=-0.082301, b=0.091003
   >  a: array([-1308.3785,-0.08230112], dtype=float32)
   >  b: array([-1310.   , 0.091], dtype=float16)
   
   ## Steps to reproduce
   1. Not able to reproduce locally


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #9412: Flaky Tests Tracking Issue

2018-03-13 Thread GitBox
anirudhacharya commented on issue #9412: Flaky Tests Tracking Issue
URL: 
https://github.com/apache/incubator-mxnet/issues/9412#issuecomment-372752027
 
 
   One more issue to be tracked - 
https://github.com/apache/incubator-mxnet/issues/10087


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] gigasquid commented on issue #8971: Clojure Library for mxnet

2018-03-13 Thread GitBox
gigasquid commented on issue #8971: Clojure Library for mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/8971#issuecomment-366543008
 
 
   An update - I've ported over the MnistModule example to clojure - yay! 
   
https://github.com/gigasquid/incubator-mxnet/tree/clojure-package/clojure-package/examples
   
   
   @yzhliu I'm currently developing on a Mac and would like to get GPU going 
there if possible so I can test that out too. Can I help with a PR to add this 
to the Scala build? https://github.com/apache/incubator-mxnet/issues/4469? 
   
   EDIT - taking a quick look at it and seeing if I can help out in this area. 
If not, I can test the GPU on the Clojure package on AWS.
   
   - Followup: Wasn't able to get gpu build for the base on my mac, but was 
able to verify that it runs fine on AWS linux gpu


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on issue #10087: Flaky test on Ubuntu: test_operator_gpu.test_batchnorm_with_type

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10087: Flaky test on Ubuntu: 
test_operator_gpu.test_batchnorm_with_type
URL: 
https://github.com/apache/incubator-mxnet/issues/10087#issuecomment-372759257
 
 
   Did this ever fail before the MKL changes?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 closed issue #8923: GTEST tests are neither built nor run by CI

2018-03-13 Thread GitBox
cjolivier01 closed issue #8923: GTEST tests are neither built nor run by CI
URL: https://github.com/apache/incubator-mxnet/issues/8923
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on issue #10078: Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10078: Support float16 in L2Normalization 
operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#issuecomment-372761495
 
 
   Please add a JIRA ticket


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jmacglashan commented on issue #9822: gluon HybridBlock wrapper of constant nd.array, is it possible?

2018-03-13 Thread GitBox
jmacglashan commented on issue #9822: gluon HybridBlock wrapper of constant 
nd.array, is it possible?
URL: 
https://github.com/apache/incubator-mxnet/issues/9822#issuecomment-372761565
 
 
   @feevos I haven't tried your code yet, but I think the issue is that your 
`hybrid_forward` does not have `**kwargs` or an argument named the same as the 
parameter.
   
   Mxnet hybrid blocks will push parameters as inputs to the hybrid forward 
method (I believe because this is how it resolves passing them in as variable 
symbols when it compiles a graph). So you should add that argument and get the 
"constant" parameter from the function arguments.
   
   For example, consider the `hybrid_forward` definition of the `Dense` block 
in Mxnet:
   ```
   def hybrid_forward(self, F, x, weight, bias=None):
       if bias is None:
           act = F.FullyConnected(x, weight, no_bias=True, num_hidden=self._units,
                                  name='fwd')
       else:
           act = F.FullyConnected(x, weight, bias, num_hidden=self._units,
                                  name='fwd')
       if self.act is not None:
           act = self.act(act)
       return act
   ```
   
   Note that the method receives `weight` and `bias` as arguments. These are 
defined as parameters inside the Block's ParameterDict and the forward 
operation of the HybridBlock will automatically push all parameters to 
`hybrid_forward`.
   
   So you should change your code to be:
   
   ```
   def hybrid_forward(self,F,_x, Bijkl):
   ```
   
   Then you don't need to pull it from the parameter dict inside 
`hybrid_forward`; just use the argument.
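   
   The pattern can be sketched in plain Python without mxnet. Everything 
below (`MockHybridBlock`, `Scale`, the parameter plumbing) is an illustrative 
stand-in for what Gluon does internally, not the real implementation:

```python
class Parameter:
    """Minimal stand-in for gluon.Parameter."""
    def __init__(self, name, value):
        self.name = name
        self._value = value

    def data(self):
        return self._value


class MockHybridBlock:
    """Pushes every registered parameter into hybrid_forward by name,
    the way Gluon's HybridBlock does during forward()."""
    def __init__(self):
        self._params = {}

    def register_param(self, name, value):
        self._params[name] = Parameter(name, value)

    def forward(self, x):
        # Collect parameter values and pass them as named arguments.
        kwargs = {k: p.data() for k, p in self._params.items()}
        return self.hybrid_forward(None, x, **kwargs)


class Scale(MockHybridBlock):
    """A block whose 'constant' c arrives as a hybrid_forward argument."""
    def __init__(self, c):
        super().__init__()
        self.register_param('c', c)

    def hybrid_forward(self, F, x, c):  # c is pushed in, not read from self
        return x * c


print(Scale(3).forward(7))  # -> 21
```

   The point of the mock is only that `c` reaches `hybrid_forward` through 
the argument list, which is why a missing argument (or missing `**kwargs`) 
raises a TypeError in the real Gluon code.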






[GitHub] anirudhacharya commented on issue #10087: Flaky test on Ubuntu: test_operator_gpu.test_batchnorm_with_type

2018-03-13 Thread GitBox
anirudhacharya commented on issue #10087: Flaky test on Ubuntu: 
test_operator_gpu.test_batchnorm_with_type
URL: 
https://github.com/apache/incubator-mxnet/issues/10087#issuecomment-372761638
 
 
   @cjolivier01 This failure occurred yesterday. I am not sure which MKL 
changes you are referring to; if you mean 
https://github.com/apache/incubator-mxnet/pull/9862, then yes, it happened 
after the MKL changes.




[GitHub] cjolivier01 commented on issue #10087: Flaky test on Ubuntu: test_operator_gpu.test_batchnorm_with_type

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10087: Flaky test on Ubuntu: 
test_operator_gpu.test_batchnorm_with_type
URL: 
https://github.com/apache/incubator-mxnet/issues/10087#issuecomment-372762525
 
 
   @marcoabreu do you know whether this test used to fail intermittently 
before the MKL changes?




[GitHub] zheng-da commented on issue #10087: Flaky test on Ubuntu: test_operator_gpu.test_batchnorm_with_type

2018-03-13 Thread GitBox
zheng-da commented on issue #10087: Flaky test on Ubuntu: 
test_operator_gpu.test_batchnorm_with_type
URL: 
https://github.com/apache/incubator-mxnet/issues/10087#issuecomment-372762672
 
 
   I have observed the problem before. 
https://github.com/apache/incubator-mxnet/issues/9916
   It's some precision problem in batchnorm.




[GitHub] reminisce commented on issue #10087: Flaky test on Ubuntu: test_operator_gpu.test_batchnorm_with_type

2018-03-13 Thread GitBox
reminisce commented on issue #10087: Flaky test on Ubuntu: 
test_operator_gpu.test_batchnorm_with_type
URL: 
https://github.com/apache/incubator-mxnet/issues/10087#issuecomment-372763627
 
 
   We are going to make the unit test stable. See here for comments and action 
items.
   https://github.com/apache/incubator-mxnet/issues/9916#issuecomment-371736378




[GitHub] cjolivier01 commented on issue #10087: Flaky test on Ubuntu: test_operator_gpu.test_batchnorm_with_type

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10087: Flaky test on Ubuntu: 
test_operator_gpu.test_batchnorm_with_type
URL: 
https://github.com/apache/incubator-mxnet/issues/10087#issuecomment-372764215
 
 
   Great! Thanks!




[GitHub] marcoabreu commented on issue #9996: add softsign operator to v1.1.0

2018-03-13 Thread GitBox
marcoabreu commented on issue #9996: add softsign operator to v1.1.0
URL: https://github.com/apache/incubator-mxnet/pull/9996#issuecomment-372764271
 
 
   Are we backporting this operator now?
   
   




[GitHub] reminisce commented on issue #10048: [MXNET-68] Random shuffle implementation

2018-03-13 Thread GitBox
reminisce commented on issue #10048: [MXNET-68] Random shuffle implementation
URL: https://github.com/apache/incubator-mxnet/pull/10048#issuecomment-372764995
 
 
   Agreed, the multi-threaded copy is not trivial to optimize because of 
hardware and data-size differences. This PR provides a functional shuffle op 
and unit tests with sufficient coverage.
   @piiswrong Do you mind taking a look and deciding whether to merge it?
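   
   For reference, the op's first-axis shuffle semantics can be sketched in 
Python (illustrative only; the actual implementation shuffles NDArray rows in 
C++):

```python
import random

def shuffle_rows(rows, rng=None):
    """Fisher-Yates shuffle along the first axis: rows are permuted,
    each row's contents are left untouched."""
    rng = rng or random.Random()
    rows = list(rows)
    for i in range(len(rows) - 1, 0, -1):
        j = rng.randint(0, i)          # pick a slot from the unshuffled prefix
        rows[i], rows[j] = rows[j], rows[i]
    return rows

data = [[0, 0], [1, 1], [2, 2], [3, 3]]
out = shuffle_rows(data, random.Random(0))
print(sorted(out) == sorted(data))  # -> True: a permutation of the rows
```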




[GitHub] sxjscience commented on a change in pull request #10000: [MXNET-80] Fix average pooling kernel size assignment error

2018-03-13 Thread GitBox
sxjscience commented on a change in pull request #1: [MXNET-80] Fix average 
pooling kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/1#discussion_r174233854
 
 

 ##
 File path: src/operator/nn/pooling.cc
 ##
 @@ -98,9 +103,24 @@ static bool PoolingShape(const nnvm::NodeAttrs &attrs,
   << "Pooling: Input data should be  3D in (batch, channel, x)"
   << " Or 4D in (batch, channel, y, x) "
   << " Or 5D in (batch, channel, d, y, x)";
+  CHECK_LE(dshape.ndim(), 5U)
+  << "Pooling: Input data should be  3D in (batch, channel, x)"
+  << " Or 4D in (batch, channel, y, x) "
+  << " Or 5D in (batch, channel, d, y, x)";
   TShape oshape = dshape;
   if (dshape.ndim() == 0) return false;
-  if (param.kernel.ndim() == 1) {
+  if (param.global_pool) {
+  if (dshape.ndim() == 3) {
+  oshape[2] = 1;
+  } else if (dshape.ndim() == 4) {
+  oshape[2] = 1;
+  oshape[3] = 1;
+  } else if (dshape.ndim() == 5) {
+  oshape[2] = 1;
+  oshape[3] = 1;
+  oshape[4] = 1;
+  }
 
 Review comment:
   Need to push the oshape to `out_shape`. See 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling.cc#L163-L168.




[GitHub] sxjscience commented on a change in pull request #10000: [MXNET-80] Fix average pooling kernel size assignment error

2018-03-13 Thread GitBox
sxjscience commented on a change in pull request #1: [MXNET-80] Fix average 
pooling kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/1#discussion_r174234503
 
 

 ##
 File path: src/operator/nn/pooling.cc
 ##
 @@ -98,9 +103,24 @@ static bool PoolingShape(const nnvm::NodeAttrs &attrs,
   << "Pooling: Input data should be  3D in (batch, channel, x)"
   << " Or 4D in (batch, channel, y, x) "
   << " Or 5D in (batch, channel, d, y, x)";
+  CHECK_LE(dshape.ndim(), 5U)
+  << "Pooling: Input data should be  3D in (batch, channel, x)"
+  << " Or 4D in (batch, channel, y, x) "
+  << " Or 5D in (batch, channel, d, y, x)";
   TShape oshape = dshape;
   if (dshape.ndim() == 0) return false;
-  if (param.kernel.ndim() == 1) {
+  if (param.global_pool) {
+  if (dshape.ndim() == 3) {
+  oshape[2] = 1;
+  } else if (dshape.ndim() == 4) {
+  oshape[2] = 1;
+  oshape[3] = 1;
+  } else if (dshape.ndim() == 5) {
+  oshape[2] = 1;
+  oshape[3] = 1;
+  oshape[4] = 1;
+  }
 
 Review comment:
   Also, you can use a for-loop instead of if.
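   
   The loop version can be sketched in Python (the real code is the C++ in 
`pooling.cc`; this only illustrates replacing the per-ndim `if` branches with 
one loop):

```python
def global_pool_oshape(dshape):
    """Output shape for global pooling: every spatial axis collapses to 1.
    dshape is (batch, channel, spatial...) with ndim in {3, 4, 5}."""
    assert 3 <= len(dshape) <= 5, "Pooling input must be 3D, 4D, or 5D"
    oshape = list(dshape)
    for i in range(2, len(dshape)):  # one loop instead of three branches
        oshape[i] = 1
    return tuple(oshape)

print(global_pool_oshape((8, 16, 28, 28)))  # -> (8, 16, 1, 1)
```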




[GitHub] sxjscience commented on a change in pull request #10000: [MXNET-80] Fix average pooling kernel size assignment error

2018-03-13 Thread GitBox
sxjscience commented on a change in pull request #1: [MXNET-80] Fix average 
pooling kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/1#discussion_r174234577
 
 

 ##
 File path: src/operator/nn/pooling.cc
 ##
 @@ -98,9 +103,24 @@ static bool PoolingShape(const nnvm::NodeAttrs &attrs,
   << "Pooling: Input data should be  3D in (batch, channel, x)"
   << " Or 4D in (batch, channel, y, x) "
   << " Or 5D in (batch, channel, d, y, x)";
+  CHECK_LE(dshape.ndim(), 5U)
+  << "Pooling: Input data should be  3D in (batch, channel, x)"
+  << " Or 4D in (batch, channel, y, x) "
+  << " Or 5D in (batch, channel, d, y, x)";
   TShape oshape = dshape;
   if (dshape.ndim() == 0) return false;
-  if (param.kernel.ndim() == 1) {
+  if (param.global_pool) {
+  if (dshape.ndim() == 3) {
+  oshape[2] = 1;
+  } else if (dshape.ndim() == 4) {
+  oshape[2] = 1;
+  oshape[3] = 1;
+  } else if (dshape.ndim() == 5) {
+  oshape[2] = 1;
+  oshape[3] = 1;
+  oshape[4] = 1;
+  }
+  } else if (param.kernel.ndim() == 1) {
 CHECK_EQ(dshape.ndim(), 3U)
 << "Pooling: Input data should be 3D in (batch, channel, x)";
 if (param.global_pool) {
 
 Review comment:
   These `if` can be removed because the global_pool case is handled before.




[GitHub] reminisce commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
reminisce commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174235637
 
 

 ##
 File path: src/operator/l2_normalization-inl.h
 ##
 @@ -235,6 +247,19 @@ class L2NormalizationProp : public OperatorProperty {
 return param_.__DICT__();
   }
 
+  bool InferType(std::vector *in_type,
+ std::vector *out_type,
+ std::vector *aux_type) const override {
+CHECK_EQ(in_type->size(), 1U);
+int dtype = (*in_type)[0];
+CHECK_NE(dtype, -1) << "Input must have specified type";
 
 Review comment:
   Please use mutual inference instead of terminating the program.
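   
   A Python sketch of the mutual-inference idea (illustrative only; the real 
operator code is C++, where `-1` means "unknown dtype" and assignment goes 
through the `TYPE_ASSIGN_CHECK` macro):

```python
def type_assign(types, idx, dtype):
    """Try to assign dtype to types[idx]; -1 means 'unknown'.
    Returns False only on a genuine conflict, so inference can
    propagate a known type in either direction instead of aborting
    whenever the input type happens to be unspecified."""
    if dtype == -1:          # nothing known to propagate
        return True
    if types[idx] == -1:     # fill in the unknown slot
        types[idx] = dtype
        return True
    return types[idx] == dtype

in_types, out_types = [-1], [0]               # input unknown, output known
assert type_assign(in_types, 0, out_types[0])  # infer input from output
print(in_types)  # -> [0]
```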




[GitHub] robbig2871 opened a new issue #10088: can not install

2018-03-13 Thread GitBox
robbig2871 opened a new issue #10088: can not install
URL: https://github.com/apache/incubator-mxnet/issues/10088
 
 
   > cran <- getOption("repos")
   > cran["dmlc"] <- 
"https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/";
   > options(repos = cran)
   > install.packages("mxnet")
   Installing package into ‘/usr/local/lib/R/3.4/site-library’
   (as ‘lib’ is unspecified)
   trying URL 
'https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/src/contrib/mxnet_0.10.1.tar.gz'
   Warning in install.packages :
 cannot open URL 
'https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/src/contrib/mxnet_0.10.1.tar.gz':
 HTTP status was '404 Not Found'
   Error in download.file(url, destfile, method, mode = "wb", ...) : 
 cannot open URL 
'https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/CRAN/src/contrib/mxnet_0.10.1.tar.gz'
   Warning in install.packages :
  download of package ‘mxnet’ failed




[GitHub] cjolivier01 commented on issue #10078: [MXNET-92] Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10078: [MXNET-92] Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#issuecomment-372773464
 
 
   JIRA: https://issues.apache.org/jira/browse/MXNET-92




[GitHub] cjolivier01 commented on issue #9996: add softsign operator to v1.1.0

2018-03-13 Thread GitBox
cjolivier01 commented on issue #9996: add softsign operator to v1.1.0
URL: https://github.com/apache/incubator-mxnet/pull/9996#issuecomment-372775775
 
 
   I have no idea. I just see an open PR without a JIRA.




[GitHub] szha closed pull request #9930: Support single array input for metric

2018-03-13 Thread GitBox
szha closed pull request #9930: Support single array input for metric
URL: https://github.com/apache/incubator-mxnet/pull/9930
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/metric.py b/python/mxnet/metric.py
index ddffc01bd23..ff4cce944e0 100644
--- a/python/mxnet/metric.py
+++ b/python/mxnet/metric.py
@@ -30,8 +30,25 @@
 from . import registry
 
 
-def check_label_shapes(labels, preds, shape=0):
-if shape == 0:
+def check_label_shapes(labels, preds, wrap=False, shape=False):
+"""Helper function for checking shape of label and prediction
+
+Parameters
+--
+labels : list of `NDArray`
+The labels of the data.
+
+preds : list of `NDArray`
+Predicted values.
+
+wrap : boolean
+If True, wrap labels/preds in a list if they are single NDArray
+
+shape : boolean
+If True, check the shape of labels and preds;
+Otherwise only check their length.
+"""
+if not shape:
 label_shape, pred_shape = len(labels), len(preds)
 else:
 label_shape, pred_shape = labels.shape, preds.shape
@@ -40,6 +57,13 @@ def check_label_shapes(labels, preds, shape=0):
 raise ValueError("Shape of labels {} does not match shape of "
  "predictions {}".format(label_shape, pred_shape))
 
+if wrap:
+if isinstance(labels, ndarray.ndarray.NDArray):
+labels = [labels]
+if isinstance(preds, ndarray.ndarray.NDArray):
+preds = [preds]
+
+return labels, preds
 
 class EvalMetric(object):
 """Base class for all evaluation metrics.
@@ -386,7 +410,7 @@ def update(self, labels, preds):
 Prediction values for samples. Each prediction value can either be 
the class index,
 or a vector of likelihoods for all classes.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred_label in zip(labels, preds):
 if pred_label.shape != label.shape:
@@ -394,7 +418,7 @@ def update(self, labels, preds):
 pred_label = pred_label.asnumpy().astype('int32')
 label = label.asnumpy().astype('int32')
 
-check_label_shapes(label, pred_label)
+labels, preds = check_label_shapes(label, pred_label)
 
 self.sum_metric += (pred_label.flat == label.flat).sum()
 self.num_inst += len(pred_label.flat)
@@ -456,7 +480,7 @@ def update(self, labels, preds):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred_label in zip(labels, preds):
 assert(len(pred_label.shape) <= 2), 'Predictions should be no more 
than 2 dims'
@@ -614,7 +638,7 @@ def update(self, labels, preds):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred in zip(labels, preds):
 self.metrics.update_binary_stats(label, pred)
@@ -785,7 +809,7 @@ def update(self, labels, preds):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred in zip(labels, preds):
 label = label.asnumpy()
@@ -793,6 +817,8 @@ def update(self, labels, preds):
 
 if len(label.shape) == 1:
 label = label.reshape(label.shape[0], 1)
+if len(pred.shape) == 1:
+pred = pred.reshape(pred.shape[0], 1)
 
 self.sum_metric += numpy.abs(label - pred).mean()
 self.num_inst += 1 # numpy.prod(label.shape)
@@ -843,7 +869,7 @@ def update(self, labels, preds):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred in zip(labels, preds):
 label = label.asnumpy()
@@ -851,6 +877,8 @@ def update(self, labels, preds):
 
 if len(label.shape) == 1:
 label = label.reshape(label.shape[0], 1)
+if len(pred.shape) == 1:
+pred = pred.reshape(pred.shape[0], 1)
 
 self.sum_metric += ((label - pred)**2.0).mean()
 self.num_inst += 1 # numpy.prod(label.shape)
@@ -901,7 +929,7 @@ def update(self, labels, preds):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
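
The `wrap=True` behaviour this patch adds can be sketched standalone (a tuple 
stands in for `NDArray` here; the real check uses 
`isinstance(..., ndarray.ndarray.NDArray)`):

```python
def wrap_if_single(labels, preds, array_type=tuple):
    """Promote a bare array to a one-element list so the metric's
    zip(labels, preds) loops handle single-array input uniformly."""
    if isinstance(labels, array_type):
        labels = [labels]
    if isinstance(preds, array_type):
        preds = [preds]
    return labels, preds

labels, preds = wrap_if_single((0, 1, 1), (0, 1, 0))
print(labels, preds)  # -> [(0, 1, 1)] [(0, 1, 0)]
```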

[GitHub] yzhliu commented on a change in pull request #9678: [MXNET-50] Scala Inference APIs

2018-03-13 Thread GitBox
yzhliu commented on a change in pull request #9678: [MXNET-50] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r174241847
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/Classifier.scala
 ##
 @@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.{DataDesc, NDArray}
+import java.io.File
+
+import org.slf4j.LoggerFactory
+
+import scala.io
+import scala.collection.mutable.ListBuffer
+
+trait ClassifierBase {
+
+  /**
+* Takes an Array of Floats and returns corresponding labels, score tuples.
+* @param input: IndexedSequence one-dimensional array of Floats.
+* @param topK: (Optional) How many top_k(sorting will be based on the last 
axis)
+* elements to return, if not passed returns unsorted output.
+* @return IndexedSequence of (Label, Score) tuples.
+*/
+  def classify(input: IndexedSeq[Array[Float]],
+   topK: Option[Int] = None): List[(String, Float)]
+
+  /**
+* Takes a Sequence of NDArrays and returns Label, Score tuples.
+* @param input: Indexed Sequence of NDArrays
+* @param topK: (Optional) How many top_k(sorting will be based on the last 
axis)
+* elements to return, if not passed returns unsorted output.
+* @return Traversable Sequence of (Label, Score) tuple, Score will be in 
the form of NDArray
+*/
+  def classifyWithNDArray(input: IndexedSeq[NDArray],
+  topK: Option[Int] = None): IndexedSeq[List[(String, 
Float)]]
+}
+
+/**
+  * A class for classifier tasks
+  * @param modelPathPrefix PathPrefix from where to load the symbol, 
parameters and synset.txt
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json
+  *file://model-dir/synset.txt
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  */
+class Classifier(modelPathPrefix: String, protected val inputDescriptors: 
IndexedSeq[DataDesc])
+  extends ClassifierBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Classifier])
+
+  val predictor: PredictBase = getPredictor(modelPathPrefix, inputDescriptors)
+
+  val synsetFilePath = getSynsetFilePath(modelPathPrefix)
+
+  val synset = readSynsetFile(synsetFilePath)
 
 Review comment:
   I think it is better to make synset optional




[incubator-mxnet] branch master updated: Support single array input for metric (#9930)

2018-03-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new dd1f21b  Support single array input for metric (#9930)
dd1f21b is described below

commit dd1f21b4369371f4d20fc8a88c1d10834f8cf53b
Author: Tong He 
AuthorDate: Tue Mar 13 11:47:26 2018 -0700

Support single array input for metric (#9930)

* fix #9865

* add unittest

* fix format

* fix format

* fix superfluous loop in metric

* fix lint
---
 python/mxnet/metric.py   | 59 +++-
 tests/python/unittest/test_metric.py | 21 +
 2 files changed, 66 insertions(+), 14 deletions(-)

diff --git a/python/mxnet/metric.py b/python/mxnet/metric.py
index ddffc01..ff4cce9 100644
--- a/python/mxnet/metric.py
+++ b/python/mxnet/metric.py
@@ -30,8 +30,25 @@ from . import ndarray
 from . import registry
 
 
-def check_label_shapes(labels, preds, shape=0):
-if shape == 0:
+def check_label_shapes(labels, preds, wrap=False, shape=False):
+"""Helper function for checking shape of label and prediction
+
+Parameters
+--
+labels : list of `NDArray`
+The labels of the data.
+
+preds : list of `NDArray`
+Predicted values.
+
+wrap : boolean
+If True, wrap labels/preds in a list if they are single NDArray
+
+shape : boolean
+If True, check the shape of labels and preds;
+Otherwise only check their length.
+"""
+if not shape:
 label_shape, pred_shape = len(labels), len(preds)
 else:
 label_shape, pred_shape = labels.shape, preds.shape
@@ -40,6 +57,13 @@ def check_label_shapes(labels, preds, shape=0):
 raise ValueError("Shape of labels {} does not match shape of "
  "predictions {}".format(label_shape, pred_shape))
 
+if wrap:
+if isinstance(labels, ndarray.ndarray.NDArray):
+labels = [labels]
+if isinstance(preds, ndarray.ndarray.NDArray):
+preds = [preds]
+
+return labels, preds
 
 class EvalMetric(object):
 """Base class for all evaluation metrics.
@@ -386,7 +410,7 @@ class Accuracy(EvalMetric):
 Prediction values for samples. Each prediction value can either be 
the class index,
 or a vector of likelihoods for all classes.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred_label in zip(labels, preds):
 if pred_label.shape != label.shape:
@@ -394,7 +418,7 @@ class Accuracy(EvalMetric):
 pred_label = pred_label.asnumpy().astype('int32')
 label = label.asnumpy().astype('int32')
 
-check_label_shapes(label, pred_label)
+labels, preds = check_label_shapes(label, pred_label)
 
 self.sum_metric += (pred_label.flat == label.flat).sum()
 self.num_inst += len(pred_label.flat)
@@ -456,7 +480,7 @@ class TopKAccuracy(EvalMetric):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred_label in zip(labels, preds):
 assert(len(pred_label.shape) <= 2), 'Predictions should be no more 
than 2 dims'
@@ -614,7 +638,7 @@ class F1(EvalMetric):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred in zip(labels, preds):
 self.metrics.update_binary_stats(label, pred)
@@ -785,7 +809,7 @@ class MAE(EvalMetric):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred in zip(labels, preds):
 label = label.asnumpy()
@@ -793,6 +817,8 @@ class MAE(EvalMetric):
 
 if len(label.shape) == 1:
 label = label.reshape(label.shape[0], 1)
+if len(pred.shape) == 1:
+pred = pred.reshape(pred.shape[0], 1)
 
 self.sum_metric += numpy.abs(label - pred).mean()
 self.num_inst += 1 # numpy.prod(label.shape)
@@ -843,7 +869,7 @@ class MSE(EvalMetric):
 preds : list of `NDArray`
 Predicted values.
 """
-check_label_shapes(labels, preds)
+labels, preds = check_label_shapes(labels, preds, True)
 
 for label, pred in zip(labels, preds):
 label = label.asnumpy()
@@ -851,6 +877,8 @@ class MSE(EvalMetric):
 
 if len(label.shape) == 1:
 label = label.reshape(label.shap

[GitHub] yzhliu commented on a change in pull request #9678: [MXNET-50] Scala Inference APIs

2018-03-13 Thread GitBox
yzhliu commented on a change in pull request #9678: [MXNET-50] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r174218503
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/Classifier.scala
 ##
 @@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.{DataDesc, NDArray}
+import java.io.File
+
+import org.slf4j.LoggerFactory
+
+import scala.io
+import scala.collection.mutable.ListBuffer
+
+trait ClassifierBase {
+
+  /**
+* Takes an Array of Floats and returns corresponding labels, score tuples.
+* @param input: IndexedSequence one-dimensional array of Floats.
+* @param topK: (Optional) How many top_k(sorting will be based on the last 
axis)
+* elements to return, if not passed returns unsorted output.
+* @return IndexedSequence of (Label, Score) tuples.
+*/
+  def classify(input: IndexedSeq[Array[Float]],
+   topK: Option[Int] = None): List[(String, Float)]
+
+  /**
+* Takes a Sequence of NDArrays and returns Label, Score tuples.
+* @param input: Indexed Sequence of NDArrays
+* @param topK: (Optional) How many top_k(sorting will be based on the last 
axis)
+* elements to return, if not passed returns unsorted output.
+* @return Traversable Sequence of (Label, Score) tuple, Score will be in 
the form of NDArray
+*/
+  def classifyWithNDArray(input: IndexedSeq[NDArray],
+  topK: Option[Int] = None): IndexedSeq[List[(String, 
Float)]]
+}
+
+/**
+  * A class for classifier tasks
+  * @param modelPathPrefix PathPrefix from where to load the symbol, 
parameters and synset.txt
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json
+  *file://model-dir/synset.txt
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  */
+class Classifier(modelPathPrefix: String, protected val inputDescriptors: 
IndexedSeq[DataDesc])
+  extends ClassifierBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Classifier])
+
+  val predictor: PredictBase = getPredictor(modelPathPrefix, inputDescriptors)
+
+  val synsetFilePath = getSynsetFilePath(modelPathPrefix)
+
+  val synset = readSynsetFile(synsetFilePath)
+
+  val handler = MXNetHandler()
 
 Review comment:
   can any of the above variables be 'private' or 'private[mxnet]'?




[GitHub] yzhliu commented on a change in pull request #9678: [MXNET-50] Scala Inference APIs

2018-03-13 Thread GitBox
yzhliu commented on a change in pull request #9678: [MXNET-50] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r174244994
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/Predictor.scala
 ##
 @@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+ * Base Trait for MXNet Predictor classes.
+ */
+private[mxnet] trait PredictBase {
+
+  /**
+   * This method takes input as an IndexedSeq of one-dimensional arrays and 
+   * creates the NDArrays needed for inference. The arrays will be reshaped 
+   * based on the input descriptors.
+   * @param input: An IndexedSequence of Scala one-dimensional arrays; an 
+   * IndexedSequence is needed when the model has more than one input/output
+   * @return IndexedSequence array of outputs.
+   */
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+   * Predict using NDArray as input. This method is useful when the input is a 
batch of data
+   * or when multiple operations on the input/output have to be performed.
+   * Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+   * @param input: IndexedSequence NDArrays.
+   * @return output of Predictions as NDArrays.
+   */
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+ * Implementation of predict routines.
+ *
+ * @param modelPathPrefix PathPrefix from where to load the model.
+ *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+ * @param inputDescriptors Descriptors defining the input node names, shape,
+ * layout and Type parameters.
+ * Note: If the input Descriptors is missing batchSize('N' in layout),
+ * a batchSize of 1 is assumed for the model.
+ * 
+ */
+class Predictor(modelPathPrefix: String, protected val inputDescriptors: 
IndexedSeq[DataDesc])
+  extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// Note: this is assuming that the input needs a batch
+logger.warn("InputDescriptor does not have batchSize, using 1 as the 
default batchSize")
+iDescriptors = inputDescriptors.map((f: DataDesc) => new DataDesc(f.name,
+  Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout))
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+   * This method takes input as an IndexedSeq of one-dimensional arrays and 
+   * creates the NDArrays needed for inference. The arrays will be reshaped 
+   * based on the input descriptors.
+   *
+   * @param input : An IndexedSequence of Scala one-dimensional arrays; an 
+   *  IndexedSequence is needed when the model has more than one input/output
+   * @return IndexedSequence array of outputs.
+   */
+  override def predict(input: IndexedSeq[Array[Float]])
+  : IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " does not match number of inputs in inputDescriptors: 
%d".format(input.length,
+inputDescriptors.length))
+
+for((i, d) <- input.zip(inputDescriptors)) {
+  require (i.length == d.shape.product/batchSize, "number of elements:" +
+" %d in the input does not match the shape:%s".format( i.length, 
d.shape.toString()))
+}
+var inputND: ListBuffer[NDAr
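The batch-dimension handling in the Predictor constructor quoted above (prepend a batch axis when the layout lacks 'N') can be sketched in plain Python — a hypothetical illustration of the same logic, not MXNet API:

```python
def normalize_descriptor(shape, layout):
    """Prepend a batch axis of size 1 when the layout lacks 'N', mirroring
    the Scala fallback that assumes the input still needs a batch."""
    if 'N' not in layout:
        # no batch dimension declared: default to batch size 1
        return (1,) + tuple(shape), 'N' + layout
    return tuple(shape), layout
```

With this convention every input descriptor must place 'N' at the same index, which is what the `require` checks in the Scala code enforce.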

[GitHub] szha closed issue #9865: Confusing behavior of some evaluation metrics

2018-03-13 Thread GitBox
szha closed issue #9865: Confusing behavior of some evaluation metrics
URL: https://github.com/apache/incubator-mxnet/issues/9865
 
 
   




[GitHub] yzhliu commented on a change in pull request #9678: [MXNET-50] Scala Inference APIs

2018-03-13 Thread GitBox
yzhliu commented on a change in pull request #9678: [MXNET-50] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r174218993
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/Classifier.scala
 ##
 @@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.{DataDesc, NDArray}
+import java.io.File
+
+import org.slf4j.LoggerFactory
+
+import scala.io
+import scala.collection.mutable.ListBuffer
+
+trait ClassifierBase {
+
+  /**
+* Takes an Array of Floats and returns corresponding (label, score) tuples.
+* @param input: IndexedSequence of one-dimensional arrays of Floats.
+* @param topK: (Optional) How many top-k elements to return (sorting is 
+* based on the last axis); if not passed, returns unsorted output.
+* @return IndexedSequence of (Label, Score) tuples.
+*/
+  def classify(input: IndexedSeq[Array[Float]],
+   topK: Option[Int] = None): List[(String, Float)]
 
 Review comment:
   Is `IndexedSeq` better than `List`? (the return type)
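As context for the reviewer's question, the contract being discussed — pair each score with its synset label, and keep only the sorted top-k when `topK` is given — can be sketched in Python (a hypothetical helper mirroring the Scala signature, not the actual implementation):

```python
def classify(scores, synset, top_k=None):
    """Pair each score with its label. If top_k is given, sort by score
    (descending) and keep the first top_k pairs; else return unsorted."""
    pairs = list(zip(synset, scores))
    if top_k is not None:
        pairs = sorted(pairs, key=lambda p: p[1], reverse=True)[:top_k]
    return pairs
```

Returning an indexable sequence (here a Python list; `IndexedSeq` in Scala) gives callers O(1) positional access, which is the crux of the `IndexedSeq` vs `List` question.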




[GitHub] nswamy commented on a change in pull request #9678: [MXNET-50] Scala Inference APIs

2018-03-13 Thread GitBox
nswamy commented on a change in pull request #9678: [MXNET-50] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r174247041
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/Classifier.scala
 ##
 @@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.{DataDesc, NDArray}
+import java.io.File
+
+import org.slf4j.LoggerFactory
+
+import scala.io
+import scala.collection.mutable.ListBuffer
+
+trait ClassifierBase {
+
+  /**
+* Takes an Array of Floats and returns corresponding (label, score) tuples.
+* @param input: IndexedSequence of one-dimensional arrays of Floats.
+* @param topK: (Optional) How many top-k elements to return (sorting is 
+* based on the last axis); if not passed, returns unsorted output.
+* @return IndexedSequence of (Label, Score) tuples.
+*/
+  def classify(input: IndexedSeq[Array[Float]],
+   topK: Option[Int] = None): List[(String, Float)]
+
+  /**
+* Takes a Sequence of NDArrays and returns (Label, Score) tuples.
+* @param input: Indexed Sequence of NDArrays
+* @param topK: (Optional) How many top-k elements to return (sorting is 
+* based on the last axis); if not passed, returns unsorted output.
+* @return Traversable Sequence of (Label, Score) tuples; Score will be in 
+* the form of an NDArray
+*/
+  def classifyWithNDArray(input: IndexedSeq[NDArray],
+  topK: Option[Int] = None): IndexedSeq[List[(String, 
Float)]]
+}
+
+/**
+  * A class for classifier tasks
+  * @param modelPathPrefix PathPrefix from where to load the symbol, 
parameters and synset.txt
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json
+  *file://model-dir/synset.txt
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  */
+class Classifier(modelPathPrefix: String, protected val inputDescriptors: 
IndexedSeq[DataDesc])
+  extends ClassifierBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Classifier])
+
+  val predictor: PredictBase = getPredictor(modelPathPrefix, inputDescriptors)
+
+  val synsetFilePath = getSynsetFilePath(modelPathPrefix)
+
+  val synset = readSynsetFile(synsetFilePath)
 
 Review comment:
  Classifier just takes the predictor and uses the synset to map the labels; 
this is the only difference between the predictor and the classifier.




[GitHub] anirudh2290 commented on a change in pull request #10078: [MXNET-92] Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
anirudh2290 commented on a change in pull request #10078: [MXNET-92] Support 
float16 in L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174246995
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -2396,21 +2396,22 @@ def check_l2_normalization(in_shape, mode, 
norm_eps=1e-10):
 exe = out.simple_bind(ctx=ctx, data=in_data.shape)
 output = exe.forward(is_train=True, data=in_data)
 # compare numpy + mxnet
-assert_almost_equal(exe.outputs[0].asnumpy(), np_out, rtol=1e-5)
+assert_almost_equal(exe.outputs[0].asnumpy(), np_out, rtol=1e-2 if dtype 
is 'float16' else 1e-5)
 
 Review comment:
   Can you also pass atol here? The default is 1e-20, which may result in the 
test becoming flaky if the numbers are small.
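The flakiness concern can be illustrated with `numpy.allclose`, which passes when `|a - b| <= atol + rtol * |b|` — a combined-tolerance rule similar to the one used by MXNet's test helper (the values below are purely illustrative):

```python
import numpy as np

# Two small float32 values that differ only at rounding-noise scale.
a, b = np.float32(1.0e-6), np.float32(1.1e-6)

# With a near-zero atol, the check rests entirely on rtol * |b|,
# which is tiny for small values, so the comparison fails.
strict = bool(np.allclose(a, b, rtol=1e-2, atol=1e-20))
# An explicit absolute tolerance keeps small-value comparisons stable.
relaxed = bool(np.allclose(a, b, rtol=1e-2, atol=1e-7))
```

For float16 outputs, where representable values are coarse near zero, an explicit `atol` is what prevents intermittent failures.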




[GitHub] cjolivier01 commented on a change in pull request #10078: [MXNET-92] Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
cjolivier01 commented on a change in pull request #10078: [MXNET-92] Support 
float16 in L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174248453
 
 

 ##
 File path: src/operator/l2_normalization.cc
 ##
 @@ -26,13 +26,22 @@
 namespace mxnet {
 namespace op {
 template<>
-Operator* CreateOp(L2NormalizationParam param) {
-  return new L2NormalizationOp(param);
+Operator* CreateOp(L2NormalizationParam param, int dtype) {
 
 Review comment:
   ok




[GitHub] anirudh2290 commented on a change in pull request #10081: [MXNET-82] [WIP] Sparse op tutorial for developers

2018-03-13 Thread GitBox
anirudh2290 commented on a change in pull request #10081: [MXNET-82] [WIP] 
Sparse op tutorial for developers
URL: https://github.com/apache/incubator-mxnet/pull/10081#discussion_r174248908
 
 

 ##
 File path: docs/how_to/add_sparse_op_in_backend.md
 ##
 @@ -0,0 +1,457 @@
+# TODO param c == 0.0
+# A Guide to Implementing Sparse Operators in MXNet Backend
+
+## Prerequisites
+- Basic knowledge of [how to implement a dense operator in MXNet 
backend](https://mxnet.incubator.apache.org/versions/master/how_to/add_op_in_backend.html)
+- Basic knowledge of 
[CSRNDArray](http://mxnet.incubator.apache.org/tutorials/sparse/csr.html) and 
[RowSparseNDArray](http://mxnet.incubator.apache.org/tutorials/sparse/row_sparse.html)
 in MXNet
+
+## Introduction
+In the [previous 
tutorial](https://mxnet.incubator.apache.org/versions/master/how_to/add_op_in_backend.html),
+we went through the steps of implementing an operator using C++ in the MXNet 
backend.
+In this tutorial, we will cover how sparse operators are implemented
+in the backend. Specifically, we will practice adding CSRNDArray support to 
the forward function of the `quadratic` operator.
+
+## Implementation
+### A Sparse Operator Example
+
+Let's consider the quadratic function `f(x) = ax^2+bx+c` when x is a 
CSRNDArray. 
+Notice that if the input x is sparse and c is 0.0, the output is also sparse.
+If c is non-zero, the output is dense. In MXNet frontend, the operator works 
like this:
+
+```python
+>>> x = mx.nd.array([[0,1],[2,0]]).tostype('csr')
+>>> x
+<CSRNDArray 2x2 @cpu(0)>
+>>> y = mx.nd.sparse.quadratic(x, a=1, b=2, c=0)
+>>> y
+<CSRNDArray 2x2 @cpu(0)>
+>>> z = mx.nd.quadratic(x, a=1, b=2, c=3)
+>>> z
+[[  3.   6.]
+ [ 11.   3.]]
+<NDArray 2x2 @cpu(0)>
+```
+
+The statement `z = mx.nd.quadratic(x, a=1, b=2, c=3)` generates a warning 
message which says
+the sparse input is converted to dense storage, and the dense operator is used 
to compute the dense output.
+This is the "storage fallback" mechanism in MXNet, where a dense operator is 
automatically used for
+inputs that a sparse operator doesn't have special kernels for.
+
+In this tutorial, we will implement the forward function of the sparse 
quadratic operator.
+The storage type of the output depends on the inputs:
+- quadratic('csr', a, b, 0.0) outputs 'csr'
+- otherwise, outputs 'default'
+
+To implement this, we first register the storage type inference property of 
the operator, from which the operator
+infers the output storage type based on operator arguments and inputs types. 
Then we implement the forward
+function for the case where c is 0.0 and x is a CSRNDArray.
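The storage-type rule above can be written out as a tiny Python sketch (a hypothetical helper for illustration; the real registration goes through the C++ `FInferStorageType` interface):

```python
def infer_quadratic_stype(x_stype, c):
    """Output storage rule for quadratic: a csr input with c == 0.0 yields
    a csr output; every other combination falls back to dense ('default')."""
    if x_stype == 'csr' and c == 0.0:
        return 'csr'
    return 'default'
```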
+
+Next, we are going to
+
+- Understand the FComputeEx and relevant NDArray interfaces in backend.
+- Define storage type inference functions in quadratic_op-inl.h.
+- Define the forward function in quadratic_op-inl.h.
+- Register the sparse operator using nnvm in quadratic_op.cc and 
quadratic_op.cu for CPU and GPU computing, respectively.
+
+Now let's walk through the process step by step.
+
+### The FComputeEx and Relevant NDArray Interfaces in Backend
+
+Before we dive into the details of relevant interfaces, here are two 
differences between
+dense and sparse operators:
+- Dense operators only handle dense inputs and outputs. Sparse operators 
support various combinations of
+storage types.
+- Memory for inputs and outputs is pre-allocated based on their shapes for 
dense operators. However, with sparse representations, the memory needed for 
sparse inputs and outputs depends on the number of non-zero elements they have,
+which is only known at runtime.
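To make the runtime-allocation point concrete, here is a minimal CSR encoder in Python (an illustration only, not MXNet's internal representation): the lengths of `data` and `indices` are only known after scanning the input.

```python
def to_csr(mat):
    """Encode a dense 2-D list in CSR form as (data, indices, indptr).
    data/indices grow with the number of non-zeros, discovered at runtime;
    only indptr's length (rows + 1) is known from the shape alone."""
    data, indices, indptr = [], [], [0]
    for row in mat:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return data, indices, indptr
```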
+
+With these differences in mind, let's review the `FCompute` interface 
introduced in the previous operator tutorial:
+```cpp
+void (const nnvm::NodeAttrs& attrs,
+  const OpContext& ctx,
+  const std::vector<TBlob>& inputs,
+  const std::vector<OpReqType>& req,
+  const std::vector<TBlob>& outputs);
+```
+Notice the `FCompute` interface doesn't include data structures that could be 
used to query storage
+types of inputs, nor manipulate auxiliary arrays like `indices` and `indptr`. 
+Therefore, instead of the `FCompute` interface, sparse operators are 
registered with the following `FComputeEx` interface:
+```cpp
+void (const nnvm::NodeAttrs& attrs,
+  const OpContext& ctx,
+  const std::vector<NDArray>& inputs,
+  const std::vector<OpReqType>& req,
+  const std::vector<NDArray>& outputs);
+```
+where the vectors of TBlobs are replaced with vectors of NDArrays. Now, let's 
go through a few important methods in the NDArray class.
+
+In the python frontend, there are three types of NDArrays, namely 
`mx.nd.NDArray`, `mx.nd.sparse.RowSparseNDArray` and `mx.nd.sparse.CSRNDArray`. 
In the C++ backend, however, all of them are represented by the 
`mxnet::NDArray` class.
+The `storage_type()` method indicates the storage type of the NDArray:
+```cpp
+enum NDArrayStorageType {
+  kUndefinedStorage = -1,  // undefined storage
+  kDefaultStorage, // dense
+  kRowSparseStorage,   // row sparse
+  kCSRStorage, // csr
+};
+
+// return the 

[GitHub] haojin2 commented on a change in pull request #10078: [MXNET-92] Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
haojin2 commented on a change in pull request #10078: [MXNET-92] Support 
float16 in L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174252529
 
 

 ##
 File path: src/operator/l2_normalization-inl.h
 ##
 @@ -235,6 +247,19 @@ class L2NormalizationProp : public OperatorProperty {
 return param_.__DICT__();
   }
 
+  bool InferType(std::vector<int> *in_type,
+ std::vector<int> *out_type,
+ std::vector<int> *aux_type) const override {
+CHECK_EQ(in_type->size(), 1U);
+int dtype = (*in_type)[0];
+CHECK_NE(dtype, -1) << "Input must have specified type";
 
 Review comment:
   Done.




[GitHub] haojin2 commented on a change in pull request #10078: [MXNET-92] Support float16 in L2Normalization operator

2018-03-13 Thread GitBox
haojin2 commented on a change in pull request #10078: [MXNET-92] Support 
float16 in L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174252581
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -2396,21 +2396,22 @@ def check_l2_normalization(in_shape, mode, 
norm_eps=1e-10):
 exe = out.simple_bind(ctx=ctx, data=in_data.shape)
 output = exe.forward(is_train=True, data=in_data)
 # compare numpy + mxnet
-assert_almost_equal(exe.outputs[0].asnumpy(), np_out, rtol=1e-5)
+assert_almost_equal(exe.outputs[0].asnumpy(), np_out, rtol=1e-2 if dtype 
is 'float16' else 1e-5)
 
 Review comment:
   Done




[GitHub] anirudh2290 commented on issue #10075: [MXNET-83] Fix CMake build issue with MKL.

2018-03-13 Thread GitBox
anirudh2290 commented on issue #10075: [MXNET-83] Fix CMake build issue with 
MKL.
URL: https://github.com/apache/incubator-mxnet/pull/10075#issuecomment-372787414
 
 
   @pengzhao-intel thank you for the explanation. Is this information 
documented somewhere in the Intel or mxnet docs?




[GitHub] cjolivier01 commented on a change in pull request #10081: [MXNET-82] [WIP] Sparse op tutorial for developers

2018-03-13 Thread GitBox
cjolivier01 commented on a change in pull request #10081: [MXNET-82] [WIP] 
Sparse op tutorial for developers
URL: https://github.com/apache/incubator-mxnet/pull/10081#discussion_r174262825
 
 

 ##
 File path: docs/how_to/add_sparse_op_in_backend.md
 ##
 @@ -0,0 +1,457 @@
+# TODO param c == 0.0
+# A Guide to Implementing Sparse Operators in MXNet Backend
+
+## Prerequisites
+- Basic knowledge of [how to implement a dense operator in MXNet 
backend](https://mxnet.incubator.apache.org/versions/master/how_to/add_op_in_backend.html)
+- Basic knowledge of 
[CSRNDArray](http://mxnet.incubator.apache.org/tutorials/sparse/csr.html) and 
[RowSparseNDArray](http://mxnet.incubator.apache.org/tutorials/sparse/row_sparse.html)
 in MXNet
+
+## Introduction
+In the [previous 
tutorial](https://mxnet.incubator.apache.org/versions/master/how_to/add_op_in_backend.html),
+we went through the steps of implementing an operator using C++ in the MXNet 
backend.
+In this tutorial, we will cover how sparse operators are implemented
+in the backend. Specifically, we will practice adding CSRNDArray support to 
the forward function of the `quadratic` operator.
+
+## Implementation
+### A Sparse Operator Example
+
+Let's consider the quadratic function `f(x) = ax^2+bx+c` when x is a 
CSRNDArray. 
+Notice that if the input x is sparse and c is 0.0, the output is also sparse.
+If c is non-zero, the output is dense. In MXNet frontend, the operator works 
like this:
+
+```python
+>>> x = mx.nd.array([[0,1],[2,0]]).tostype('csr')
+>>> x
+<CSRNDArray 2x2 @cpu(0)>
+>>> y = mx.nd.sparse.quadratic(x, a=1, b=2, c=0)
+>>> y
+<CSRNDArray 2x2 @cpu(0)>
+>>> z = mx.nd.quadratic(x, a=1, b=2, c=3)
+>>> z
+[[  3.   6.]
+ [ 11.   3.]]
+<NDArray 2x2 @cpu(0)>
+```
+
+The statement `z = mx.nd.quadratic(x, a=1, b=2, c=3)` generates a warning 
message which says
+the sparse input is converted to dense storage, and the dense operator is used 
to compute the dense output.
+This is the "storage fallback" mechanism in MXNet, where a dense operator is 
automatically used for
+inputs that a sparse operator doesn't have special kernels for.
+
+In this tutorial, we will implement the forward function of the sparse 
quadratic operator.
+The storage type of the output depends on the inputs:
+- quadratic('csr', a, b, 0.0) outputs 'csr'
+- otherwise, outputs 'default'
+
+To implement this, we first register the storage type inference property of 
the operator, from which the operator
+infers the output storage type based on operator arguments and inputs types. 
Then we implement the forward
+function for the case where c is 0.0 and x is a CSRNDArray.
+
+Next, we are going to
+
+- Understand the FComputeEx and relevant NDArray interfaces in backend.
+- Define storage type inference functions in quadratic_op-inl.h.
+- Define the forward function in quadratic_op-inl.h.
+- Register the sparse operator using nnvm in quadratic_op.cc and 
quadratic_op.cu for CPU and GPU computing, respectively.
+
+Now let's walk through the process step by step.
+
+### The FComputeEx and Relevant NDArray Interfaces in Backend
+
+Before we dive into the details of relevant interfaces, here are two 
differences between
+dense and sparse operators:
+- Dense operators only handle dense inputs and outputs. Sparse operators 
support various combinations of
+storage types.
+- Memory for inputs and outputs is pre-allocated based on their shapes for 
dense operators. However, with sparse representations, the memory needed for 
sparse inputs and outputs depends on the number of non-zero elements they have,
+which is only known at runtime.
+
+With these differences in mind, let's review the `FCompute` interface 
introduced in the previous operator tutorial:
+```cpp
+void (const nnvm::NodeAttrs& attrs,
+  const OpContext& ctx,
+  const std::vector<TBlob>& inputs,
+  const std::vector<OpReqType>& req,
+  const std::vector<TBlob>& outputs);
+```
+Notice the `FCompute` interface doesn't include data structures that could be 
used to query storage
+types of inputs, nor manipulate auxiliary arrays like `indices` and `indptr`. 
+Therefore, instead of the `FCompute` interface, sparse operators are 
registered with the following `FComputeEx` interface:
+```cpp
+void (const nnvm::NodeAttrs& attrs,
+  const OpContext& ctx,
+  const std::vector<NDArray>& inputs,
+  const std::vector<OpReqType>& req,
+  const std::vector<NDArray>& outputs);
+```
+where the vectors of TBlobs are replaced with vectors of NDArrays. Now, let's 
go through a few important methods in the NDArray class.
+
+In the python frontend, there are three types of NDArrays, namely 
`mx.nd.NDArray`, `mx.nd.sparse.RowSparseNDArray` and `mx.nd.sparse.CSRNDArray`. 
In the C++ backend, however, all of them are represented by the 
`mxnet::NDArray` class.
+The `storage_type()` method indicates the storage type of the NDArray:
+```cpp
+enum NDArrayStorageType {
+  kUndefinedStorage = -1,  // undefined storage
+  kDefaultStorage, // dense
+  kRowSparseStorage,   // row sparse
+  kCSRStorage, // csr
+};
+
+// return the 

[GitHub] cjolivier01 commented on issue #10075: [MXNET-83] Fix CMake build issue with MKL.

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10075: [MXNET-83] Fix CMake build issue with 
MKL.
URL: https://github.com/apache/incubator-mxnet/pull/10075#issuecomment-372797327
 
 
   I am not sure if the mkl flags in cmake still follow those, but I don't 
think these changes affect that behavior (allowable overlap of mkl libraries).
   At some point, someone will have to check the combinations with cmake 
configs (possibly renaming config items, changing their descriptions) and make 
it all work again. Maybe it works now, but I kind of doubt it based upon what I 
have seen changed before. Maybe I am wrong, however.




[GitHub] cjolivier01 commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech recognition training

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10042: [MXNET-86] Gluon dataloader crash on 
speech recognition training
URL: 
https://github.com/apache/incubator-mxnet/issues/10042#issuecomment-372801771
 
 
   This script just freezes for me without doing anything.  In 
Connection._recv, it seems. What is it doing?




[GitHub] cjolivier01 commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech recognition training

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10042: [MXNET-86] Gluon dataloader crash on 
speech recognition training
URL: 
https://github.com/apache/incubator-mxnet/issues/10042#issuecomment-372804704
 
 
   On which epoch/batch does it occur?




[GitHub] cjolivier01 commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech recognition training

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10042: [MXNET-86] Gluon dataloader crash on 
speech recognition training
URL: 
https://github.com/apache/incubator-mxnet/issues/10042#issuecomment-372801771
 
 
   This script just freezes for me without doing anything.  In 
Connection._recv, it seems. What is it doing?
   I don't get to the error output described.






[GitHub] eric-haibin-lin commented on a change in pull request #9928: host doc on s3

2018-03-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #9928: host doc on s3
URL: https://github.com/apache/incubator-mxnet/pull/9928#discussion_r174279994
 
 

 ##
 File path: tests/ci_build/deploy/ci_deploy_doc.sh
 ##
 @@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#
+# Execute command outside a docker container
+#
+# Usage: ci_deploy_doc.sh  
+#
+# PR_ID: the PR number
+#
+# BUILD_ID: the current build ID for the specified PR
+#
+
+# TODO szha@: installation of awscli here should be removed once slave hosts 
have them during
 
 Review comment:
    Ideally we should leave no TODOs in the code. Maybe file a GitHub issue to track it? 




[GitHub] eric-haibin-lin commented on issue #9919: Update PR Template

2018-03-13 Thread GitBox
eric-haibin-lin commented on issue #9919: Update PR Template
URL: https://github.com/apache/incubator-mxnet/pull/9919#issuecomment-372815970
 
 
   Done




[GitHub] zheng-da opened a new pull request #10089: enable all activations in MKLDNN.

2018-03-13 Thread GitBox
zheng-da opened a new pull request #10089: enable all activations in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/10089
 
 
   ## Description ##
   Previously, some activation types in MKLDNN weren't used because there was a precision problem.
   This PR enables all activations in MKLDNN.
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   




[GitHub] Jerryzcn commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech recognition training

2018-03-13 Thread GitBox
Jerryzcn commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech 
recognition training
URL: 
https://github.com/apache/incubator-mxnet/issues/10042#issuecomment-372816606
 
 
   The error output only happens when you train on the actual data. When you use the test script, it will freeze. If you revert back to 1.1.0 the problem is resolved.




[GitHub] Jerryzcn commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech recognition training

2018-03-13 Thread GitBox
Jerryzcn commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech 
recognition training
URL: 
https://github.com/apache/incubator-mxnet/issues/10042#issuecomment-372817360
 
 
   There seem to be other issues as well: after training for a day or so I got a segfault. This does not happen with a small dataset. The segfault was seen with 1.2.0; I will try a previous version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech recognition training

2018-03-13 Thread GitBox
sxjscience commented on issue #10042: [MXNET-86] Gluon dataloader crash on 
speech recognition training
URL: 
https://github.com/apache/incubator-mxnet/issues/10042#issuecomment-372817252
 
 
   @cjolivier01 Before the commit it will not freeze and will print the data instead.




[GitHub] nswamy commented on a change in pull request #9678: [MXNET-50] Scala Inference APIs

2018-03-13 Thread GitBox
nswamy commented on a change in pull request #9678: [MXNET-50] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r174285487
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/Predictor.scala
 ##
 @@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+ * Base Trait for MXNet Predictor classes.
+ */
+private[mxnet] trait PredictBase {
+
+  /**
+   * This method takes input as an IndexedSeq of one-dimensional arrays and creates
+   * the NDArrays needed for inference. The arrays will be reshaped based on the
+   * input descriptors.
+   * @param input: An IndexedSeq of Scala one-dimensional arrays. An IndexedSeq
+   * is needed when the model has more than one input/output.
+   * @return IndexedSeq of output arrays.
+   */
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+   * Predict using NDArray as input. This method is useful when the input is a 
batch of data
+   * or when multiple operations on the input/output have to be performed.
+   * Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+   * @param input: IndexedSequence NDArrays.
+   * @return output of Predictions as NDArrays.
+   */
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+ * Implementation of predict routines.
+ *
+ * @param modelPathPrefix PathPrefix from where to load the model.
+ *Example: file://model-dir/resnet-152 (containing 
resnet-152-symbol.json,
+ * @param inputDescriptors Descriptors defining the input node names, shape,
+ * layout and Type parameters.
+ * Note: If the input Descriptors is missing batchSize('N' in layout),
+ * a batchSize of 1 is assumed for the model.
+ * 
+ */
+class Predictor(modelPathPrefix: String, protected val inputDescriptors: 
IndexedSeq[DataDesc])
+  extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// Note: this is assuming that the input needs a batch
+logger.warn("InputDescriptor does not have batchSize, using 1 as the 
default batchSize")
+iDescriptors = inputDescriptors.map((f: DataDesc) => new DataDesc(f.name,
+  Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout))
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+   * This method takes input as an IndexedSeq of one-dimensional arrays and creates
+   * the NDArrays needed for inference. The arrays will be reshaped based on the
+   * input descriptors.
+   *
+   * @param input : An IndexedSeq of Scala one-dimensional arrays. An IndexedSeq
+   *  is needed when the model has more than one input/output.
+   * @return IndexedSeq of output arrays.
+   */
+  override def predict(input: IndexedSeq[Array[Float]])
+  : IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " does not match number of inputs in inputDescriptors: 
%d".format(input.length,
+inputDescriptors.length))
+
+for((i, d) <- input.zip(inputDescriptors)) {
+  require (i.length == d.shape.product/batchSize, "number of elements:" +
+" %d in the input does not match the shape:%s".format( i.length, 
d.shape.toString()))
+}
+var inputND: ListBuffer[NDAr
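The batch-handling logic quoted above (assume a batch size of 1 and prepend an 'N' axis when the input descriptor's layout lacks one) can be sketched in Python. `with_batch_dim` is a hypothetical helper for illustration only, not part of the Scala API:

```python
def with_batch_dim(shape, layout):
    # If the layout already names a batch axis 'N', keep the descriptor as-is.
    # Otherwise assume a batch size of 1 and prepend the axis, mirroring the
    # warning branch in the Predictor constructor above.
    if 'N' in layout:
        return tuple(shape), layout
    return (1,) + tuple(shape), 'N' + layout
```

For example, `with_batch_dim((3, 224, 224), 'CHW')` yields `((1, 3, 224, 224), 'NCHW')`.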

[GitHub] marcoabreu commented on a change in pull request #9928: host doc on s3

2018-03-13 Thread GitBox
marcoabreu commented on a change in pull request #9928: host doc on s3
URL: https://github.com/apache/incubator-mxnet/pull/9928#discussion_r174286591
 
 

 ##
 File path: tests/ci_build/deploy/ci_deploy_doc.sh
 ##
 @@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#
+# Execute command outside a docker container
+#
+# Usage: ci_deploy_doc.sh <PR_ID> <BUILD_ID>
+#
+# PR_ID: the PR number
+#
+# BUILD_ID: the current build ID for the specified PR
+#
+
+# TODO szha@: installation of awscli here should be removed once slave hosts 
have them during
 
 Review comment:
   This is already implemented in the test environment and just waits for 
redeployment




[GitHub] cjolivier01 opened a new pull request #10090: Revert to pre-profile-changes copy code

2018-03-13 Thread GitBox
cjolivier01 opened a new pull request #10090: Revert to pre-profile-changes 
copy code
URL: https://github.com/apache/incubator-mxnet/pull/10090
 
 
   ## Description ##
   
   Fix for: https://github.com/apache/incubator-mxnet/issues/10042
   
   For whatever reason, the new approach isn't fork-friendly.  Just reverted to 
the previous code for this function from before my PR: 
106f97f1881e6bb1a00c56a0ae55200e27297733
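One common way to make native thread state fork-friendly is to re-create it lazily per process, keyed on the PID, so a forked child never reuses its parent's threads. The sketch below only illustrates that general pattern (the `LazyEngine` name is made up); it is not the code this PR touches:

```python
import os

class LazyEngine:
    """Sketch of a fork-safe lazy singleton: the expensive object
    (standing in for a thread pool or execution engine) is re-created
    in any process whose PID differs from the creator's, so a forked
    child never reuses the parent's threads."""
    _instance = None
    _owner_pid = None

    @classmethod
    def get(cls):
        pid = os.getpid()
        if cls._instance is None or cls._owner_pid != pid:
            cls._instance = object()  # stand-in for building a thread pool
            cls._owner_pid = pid
        return cls._instance
```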
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] marcoabreu commented on issue #9919: Update PR Template

2018-03-13 Thread GitBox
marcoabreu commented on issue #9919: Update PR Template
URL: https://github.com/apache/incubator-mxnet/pull/9919#issuecomment-372826823
 
 
   Could we also add an item to remind people linking the JIRA issue?




[GitHub] marcoabreu commented on issue #10090: [MXNET-86] Revert to pre-profile-changes copy code

2018-03-13 Thread GitBox
marcoabreu commented on issue #10090: [MXNET-86] Revert to pre-profile-changes 
copy code
URL: https://github.com/apache/incubator-mxnet/pull/10090#issuecomment-372827316
 
 
   Would you mind creating a test with the MWE the user provided to ensure this 
does not happen again?




[GitHub] cjolivier01 commented on issue #10042: [MXNET-86] Gluon dataloader crash on speech recognition training

2018-03-13 Thread GitBox
cjolivier01 commented on issue #10042: [MXNET-86] Gluon dataloader crash on 
speech recognition training
URL: 
https://github.com/apache/incubator-mxnet/issues/10042#issuecomment-372827323
 
 
   I think that would be a separate issue. This one so far is just the "stuck" fix.




[GitHub] marcoabreu commented on a change in pull request #10089: enable all activations in MKLDNN.

2018-03-13 Thread GitBox
marcoabreu commented on a change in pull request #10089: enable all activations 
in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/10089#discussion_r174293369
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_act.cc
 ##
 @@ -45,11 +45,9 @@ namespace op {
 bool SupportMKLDNNAct(const ActivationParam& param) {
   // We only enable ReLU for now. It seems other activations have some 
precision
 
 Review comment:
   Remove comment




[GitHub] marcoabreu commented on issue #10089: enable all activations in MKLDNN.

2018-03-13 Thread GitBox
marcoabreu commented on issue #10089: enable all activations in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/10089#issuecomment-372827762
 
 
   How have the precision problems been resolved? Is there a test?
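A backend-consistency test of the kind being asked about usually compares each activation against a reference implementation within a tolerance. A minimal stdlib-only sketch of that pattern (names and tolerances are illustrative, not MXNet's actual test suite):

```python
import math
import struct

def to_f32(v):
    # Round a Python float to float32 precision, emulating a float32 backend.
    return struct.unpack('f', struct.pack('f', v))[0]

def sigmoid_ref(xs):
    # Double-precision reference implementation.
    return [1.0 / (1.0 + math.exp(-v)) for v in xs]

def sigmoid_f32(xs):
    # Candidate implementation that rounds every intermediate to float32,
    # standing in for an accelerated backend.
    return [to_f32(1.0 / (1.0 + to_f32(math.exp(-to_f32(v))))) for v in xs]

def approx_equal(a, b, rtol=1e-3, atol=1e-5):
    # Elementwise tolerance check, the usual way backend-consistency
    # tests catch precision drift between implementations.
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))
```

For moderate inputs, `approx_equal(sigmoid_f32(xs), sigmoid_ref(xs))` should hold.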




[GitHub] cjolivier01 commented on a change in pull request #10089: enable all activations in MKLDNN.

2018-03-13 Thread GitBox
cjolivier01 commented on a change in pull request #10089: enable all 
activations in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/10089#discussion_r174294065
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_act.cc
 ##
 @@ -45,11 +45,9 @@ namespace op {
 bool SupportMKLDNNAct(const ActivationParam& param) {
   // We only enable ReLU for now. It seems other activations have some 
precision
 
 Review comment:
   Why remove the comment?




[GitHub] sxjscience commented on a change in pull request #10090: [MXNET-86] Revert to pre-profile-changes copy code

2018-03-13 Thread GitBox
sxjscience commented on a change in pull request #10090: [MXNET-86] Revert to 
pre-profile-changes copy code
URL: https://github.com/apache/incubator-mxnet/pull/10090#discussion_r174294274
 
 

 ##
 File path: src/ndarray/ndarray_function.cc
 ##
 @@ -26,35 +26,24 @@
 #include "./ndarray_function.h"
 #include "./ndarray_function-inl.h"
 #include "../common/utils.h"
-#include "../operator/mxnet_op.h"
 
 namespace mxnet {
 namespace ndarray {
 template<>
 void Copy(const TBlob &from, TBlob *to,
 Context from_ctx, Context to_ctx,
 RunContext ctx) {
-  using namespace mxnet::op;
   MSHADOW_TYPE_SWITCH(to->type_flag_, DType, {
 if (to->type_flag_ == from.type_flag_) {
-  TBlob dest = to->FlatTo1D();
-  TBlob src = from.FlatTo1D();
-  const size_t size = src.Size();
-  if (dest.CheckContiguous() && src.CheckContiguous() && size >= 2 /* 
non-trivial size */) {
-CHECK_EQ(dest.shape_, src.shape_)
-  << "Copy:shape mismatch:" << dest.shape_ << " vs " << src.shape_;
-  mxnet_op::Kernel, cpu>::Launch(
-ctx.get_stream(), src.Size(), dest.dptr(), 
src.dptr());
 
 Review comment:
   So we cannot use kernel::launch together with shared memory?




[GitHub] cjolivier01 commented on a change in pull request #10090: [MXNET-86] Revert to pre-profile-changes copy code

2018-03-13 Thread GitBox
cjolivier01 commented on a change in pull request #10090: [MXNET-86] Revert to 
pre-profile-changes copy code
URL: https://github.com/apache/incubator-mxnet/pull/10090#discussion_r174295660
 
 

 ##
 File path: src/ndarray/ndarray_function.cc
 ##
 @@ -26,35 +26,24 @@
 #include "./ndarray_function.h"
 #include "./ndarray_function-inl.h"
 #include "../common/utils.h"
-#include "../operator/mxnet_op.h"
 
 namespace mxnet {
 namespace ndarray {
 template<>
 void Copy(const TBlob &from, TBlob *to,
 Context from_ctx, Context to_ctx,
 RunContext ctx) {
-  using namespace mxnet::op;
   MSHADOW_TYPE_SWITCH(to->type_flag_, DType, {
 if (to->type_flag_ == from.type_flag_) {
-  TBlob dest = to->FlatTo1D();
-  TBlob src = from.FlatTo1D();
-  const size_t size = src.Size();
-  if (dest.CheckContiguous() && src.CheckContiguous() && size >= 2 /* 
non-trivial size */) {
-CHECK_EQ(dest.shape_, src.shape_)
-  << "Copy:shape mismatch:" << dest.shape_ << " vs " << src.shape_;
-  mxnet_op::Kernel, cpu>::Launch(
-ctx.get_stream(), src.Size(), dest.dptr(), 
src.dptr());
 
 Review comment:
   I don't see why not, but I am just going to revert now because the change 
wasn't worth the trouble it caused.




[GitHub] cjolivier01 commented on a change in pull request #10090: [MXNET-86] Revert to pre-profile-changes copy code

2018-03-13 Thread GitBox
cjolivier01 commented on a change in pull request #10090: [MXNET-86] Revert to 
pre-profile-changes copy code
URL: https://github.com/apache/incubator-mxnet/pull/10090#discussion_r174295804
 
 

 ##
 File path: src/ndarray/ndarray_function.cc
 ##
 @@ -26,35 +26,24 @@
 #include "./ndarray_function.h"
 #include "./ndarray_function-inl.h"
 #include "../common/utils.h"
-#include "../operator/mxnet_op.h"
 
 namespace mxnet {
 namespace ndarray {
 template<>
 void Copy(const TBlob &from, TBlob *to,
 Context from_ctx, Context to_ctx,
 RunContext ctx) {
-  using namespace mxnet::op;
   MSHADOW_TYPE_SWITCH(to->type_flag_, DType, {
 if (to->type_flag_ == from.type_flag_) {
-  TBlob dest = to->FlatTo1D();
-  TBlob src = from.FlatTo1D();
-  const size_t size = src.Size();
-  if (dest.CheckContiguous() && src.CheckContiguous() && size >= 2 /* 
non-trivial size */) {
-CHECK_EQ(dest.shape_, src.shape_)
-  << "Copy:shape mismatch:" << dest.shape_ << " vs " << src.shape_;
-  mxnet_op::Kernel, cpu>::Launch(
-ctx.get_stream(), src.Size(), dest.dptr(), 
src.dptr());
 
 Review comment:
   It does get stuck in the kernel launch code for whatever reason.




[GitHub] sxjscience commented on a change in pull request #10090: [MXNET-86] Revert to pre-profile-changes copy code

2018-03-13 Thread GitBox
sxjscience commented on a change in pull request #10090: [MXNET-86] Revert to 
pre-profile-changes copy code
URL: https://github.com/apache/incubator-mxnet/pull/10090#discussion_r174296439
 
 

 ##
 File path: src/ndarray/ndarray_function.cc
 ##
 @@ -26,35 +26,24 @@
 #include "./ndarray_function.h"
 #include "./ndarray_function-inl.h"
 #include "../common/utils.h"
-#include "../operator/mxnet_op.h"
 
 namespace mxnet {
 namespace ndarray {
 template<>
 void Copy(const TBlob &from, TBlob *to,
 Context from_ctx, Context to_ctx,
 RunContext ctx) {
-  using namespace mxnet::op;
   MSHADOW_TYPE_SWITCH(to->type_flag_, DType, {
 if (to->type_flag_ == from.type_flag_) {
-  TBlob dest = to->FlatTo1D();
-  TBlob src = from.FlatTo1D();
-  const size_t size = src.Size();
-  if (dest.CheckContiguous() && src.CheckContiguous() && size >= 2 /* 
non-trivial size */) {
-CHECK_EQ(dest.shape_, src.shape_)
-  << "Copy:shape mismatch:" << dest.shape_ << " vs " << src.shape_;
-  mxnet_op::Kernel, cpu>::Launch(
-ctx.get_stream(), src.Size(), dest.dptr(), 
src.dptr());
 
 Review comment:
   Yes, I think we can revert first and find the cause later. What do you think, @marcoabreu? This error is really critical, as the DataLoader is really common among users.



