[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-20 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 7d40fc1  Bump the publish timestamp.
7d40fc1 is described below

commit 7d40fc16522c54c643b0a19ccb40d8d98263ada5
Author: mxnet-ci 
AuthorDate: Sat Sep 21 06:40:56 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..abd7ab5
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Sep 21 06:40:56 UTC 2019



[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #15855: [Numpy] cumprod

2019-09-20 Thread GitBox
hzfan commented on a change in pull request #15855: [Numpy] cumprod
URL: https://github.com/apache/incubator-mxnet/pull/15855#discussion_r326845669
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -367,6 +367,7 @@ build_ubuntu_cpu_openblas() {
 set -ex
 export CC="gcc"
 export CXX="g++"
+pip3 install --user psutil
 
 Review comment:
   > How about you add Ubuntu tvm to the CPU dockerfile?
   
   Yes, but I will also need a new file, `ubuntu_tvm_cpu.sh`, because 
`ubuntu_tvm.sh` installs TVM with CUDA and cannot be run on a CPU-only host.
   
   > I mean if we want to run tvm cpu tests, we should also install it, right?
   
   No, it is not installed separately in CI. TVM is a 3rdparty dependency of 
MXNet, so it is installed as part of the CMake build.
   
   Would a new file like `ubuntu_tvm_cpu.sh` be acceptable?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] JiaoPaner edited a comment on issue #15632: Building MxNet with CPP_PACKAGE on Windows10 (2019-07-23)

2019-09-20 Thread GitBox
JiaoPaner edited a comment on issue #15632: Building MxNet with CPP_PACKAGE on 
Windows10 (2019-07-23)
URL: 
https://github.com/apache/incubator-mxnet/issues/15632#issuecomment-533443667
 
 
   @QueensGambit 
   I read #15144 and modified line 744 of `include/mxnet/tuple.h` from 
`#if !defined(_MSC_VER)` to `#if !(defined(_MSC_VER) && _MSC_VER < 1900)`. 
Then I compiled again with a config that includes USE_CPP_PACKAGE; my 
operating system is Windows 10 x64 and the IDE is Visual Studio 2015.
   After that, the build completed without any errors, including the errors 
mentioned above, and the examples run without error, so the solution does work. 




[GitHub] [incubator-mxnet] WenmuZhou opened a new issue #16232: how can I load a pre-trained model with different shape and same name of last layer

2019-09-20 Thread GitBox
WenmuZhou opened a new issue #16232: how can I load a pre-trained model with 
different shape and same name of last layer
URL: https://github.com/apache/incubator-mxnet/issues/16232
 
 
   




[GitHub] [incubator-mxnet] WenmuZhou commented on issue #15797: how can I concat two dataset with a sepcial ratio?

2019-09-20 Thread GitBox
WenmuZhou commented on issue #15797: how can I concat two dataset with a 
sepcial ratio?
URL: 
https://github.com/apache/incubator-mxnet/issues/15797#issuecomment-533758392
 
 
   I have written a dataset like this and it works: 
https://github.com/WenmuZhou/crnn.gluon/blob/master/data_loader/dataset.py#L140-L195
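For reference, a minimal sketch of such a ratio-based concatenation dataset (a hypothetical `RatioConcatDataset`, not the code linked above) could look like this; it follows the same `__len__`/`__getitem__` protocol that `gluon.data.Dataset` uses, so it works with any indexable inputs:

```python
class RatioConcatDataset:
    """Concatenate two indexable datasets, interleaving them with a fixed
    ratio: `ratio` samples from `a`, then one from `b`, until both are
    exhausted. Same protocol as mxnet.gluon.data.Dataset."""

    def __init__(self, a, b, ratio=1):
        self.a, self.b = a, b
        # Precompute a deterministic interleaving order of (source, index).
        order = []
        ia = ib = 0
        while ia < len(a) or ib < len(b):
            for _ in range(ratio):
                if ia < len(a):
                    order.append(("a", ia))
                    ia += 1
            if ib < len(b):
                order.append(("b", ib))
                ib += 1
        self._order = order

    def __len__(self):
        return len(self._order)

    def __getitem__(self, idx):
        src, i = self._order[idx]
        return self.a[i] if src == "a" else self.b[i]
```

With `ratio=2`, every third sample is drawn from the second dataset until one side runs out, after which the remainder comes from the other side.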




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #16025: Numpy add numpy op left_shift and right_shift

2019-09-20 Thread GitBox
gyshi commented on a change in pull request #16025: Numpy add numpy op 
left_shift and right_shift
URL: https://github.com/apache/incubator-mxnet/pull/16025#discussion_r326840977
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cc
 ##
 @@ -144,5 +165,21 @@ 
MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_backward_npi_rcopysign_scalar)
 .set_attr("FCompute",
 BinaryScalarOp::Backward);
 
+MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_left_shift_scalar)
+.set_attr("FCompute", BitScalarCompute)
+.set_attr("FGradient", MakeZeroGradNodes);
 
 Review comment:
   You're right, I will add the backward pass.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-20 Thread aaronmarkham

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new f748aec  Bump the publish timestamp.
f748aec is described below

commit f748aec79f50d800c76824ef060f9f9e1c14695d
Author: mxnet-ci 
AuthorDate: Sat Sep 21 00:41:08 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..55e4300
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Sep 21 00:41:08 UTC 2019



[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16174: How to build new operators?

2019-09-20 Thread GitBox
ChaiBapchya commented on issue #16174: How to build new operators?
URL: 
https://github.com/apache/incubator-mxnet/issues/16174#issuecomment-533746591
 
 
   With today's updated website launch, 
https://mxnet.incubator.apache.org/api/faq/new_op should be the link. @zachgk 




[incubator-mxnet] branch master updated (618c481 -> be7296b)

2019-09-20 Thread apeforest

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 618c481  [MXNET-1422] Fix wrong results of min([inf, inf]) and 
max([-inf,-inf]) (#16226)
 add be7296b  removing MXNDArrayLoadFromBuffer64 and MXNDArrayLoad64 
(#16203)

No new revisions were added by this update.

Summary of changes:
 include/mxnet/c_api.h | 13 -
 1 file changed, 13 deletions(-)



[GitHub] [incubator-mxnet] apeforest merged pull request #16203: removing 64 bit un-used APIs MXNDArrayLoadFromBuffer64 and MXNDArrayLoad64

2019-09-20 Thread GitBox
apeforest merged pull request #16203: removing 64 bit un-used APIs 
MXNDArrayLoadFromBuffer64 and MXNDArrayLoad64
URL: https://github.com/apache/incubator-mxnet/pull/16203
 
 
   




[GitHub] [incubator-mxnet] ptrendx commented on issue #15657: Eliminate common expressions

2019-09-20 Thread GitBox
ptrendx commented on issue #15657: Eliminate common expressions
URL: https://github.com/apache/incubator-mxnet/pull/15657#issuecomment-533742273
 
 
   @DickJC123 Ok, answering your questions:
   1. This definition of Node equality is quite tied to what I want to do with 
them, so I believe that, at least right now, it belongs in the CSE code.
   2. While I agree it would be beneficial, it would also be pretty hard to do: 
the parameter structure is kept without the type information as the 
`shared_ptr`. Comparing memory (which could be done if `any` exposed the 
size of the type held) is also not really feasible, since `dict` is actually 
part of the `Parameter` structure, so that would be different.
   3. This is the purpose of the `THasDeterministicOutput` property.
   4. I will think about that.
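As a toy illustration of the kind of structural Node equality being discussed (hypothetical names, not the actual nnvm/CSE code), common-subexpression elimination typically treats two nodes as equal when they run the same op with the same attributes on already-merged inputs:

```python
class Node:
    """Minimal stand-in for a graph node: an op name, input nodes, attrs."""
    def __init__(self, op, inputs=(), attrs=None):
        self.op = op
        self.inputs = tuple(inputs)
        self.attrs = dict(attrs or {})


def nodes_equal(a, b):
    # Two nodes compute the same value when they run the same op with the
    # same attribute dict on pairwise-identical inputs. Inputs are compared
    # by object identity, assuming equal subexpressions have already been
    # merged bottom-up as CSE walks the graph.
    return (a.op == b.op and a.attrs == b.attrs
            and len(a.inputs) == len(b.inputs)
            and all(x is y for x, y in zip(a.inputs, b.inputs)))
```

This also shows why a deterministic-output property matters: the merge is only sound for ops whose output depends solely on op, attrs, and inputs.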




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16231: [Website] MXNet Version selection Drop-down makes page jump to top of page

2019-09-20 Thread GitBox
ChaiBapchya commented on issue #16231: [Website] MXNet Version selection 
Drop-down makes page jump to top of page
URL: 
https://github.com/apache/incubator-mxnet/issues/16231#issuecomment-533742086
 
 
   @mxnet-label-bot add [website-beta]
   
   @aaronmarkham What do you think?




[GitHub] [incubator-mxnet] ChaiBapchya opened a new issue #16231: [Website] MXNet Version selection Drop-down makes page jump to top of page

2019-09-20 Thread GitBox
ChaiBapchya opened a new issue #16231: [Website] MXNet Version selection 
Drop-down makes page jump to top of page
URL: https://github.com/apache/incubator-mxnet/issues/16231
 
 
   Upon selecting the version from the drop-down, the screen jumps to the 
top of the page - not the desired outcome.
   
   ![Install 
page](https://user-images.githubusercontent.com/10992635/65363728-548a5100-dbc2-11e9-80a5-a3f051b3e766.png)
   
   
   However, upon selecting any of the other options - OS Platform, Language 
binding, GPU/CPU, etc. - the screen remains in its original position (instead 
of moving to the top of the page), which is the desired outcome.
   
   I'm guessing it involves adjusting the button's on-click handler and calling 
`$e.preventDefault();` somewhere around showContent() after populating the new 
options.
   
   
   
https://stackoverflow.com/questions/43611173/button-onclick-makes-page-jump-to-top-of-page
   




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16231: [Website] MXNet Version selection Drop-down makes page jump to top of page

2019-09-20 Thread GitBox
mxnet-label-bot commented on issue #16231: [Website] MXNet Version selection 
Drop-down makes page jump to top of page
URL: 
https://github.com/apache/incubator-mxnet/issues/16231#issuecomment-533741984
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Doc




[GitHub] [incubator-mxnet] salmanmashayekh opened a new issue #16230: Loading Sagemaker NTM Artifacts

2019-09-20 Thread GitBox
salmanmashayekh opened a new issue #16230: Loading Sagemaker NTM Artifacts
URL: https://github.com/apache/incubator-mxnet/issues/16230
 
 
   I have trained a Neural Topic Model with SageMaker and now I am trying to 
load/deploy the model locally. The artifacts include a `symbol` file and a 
`parameters` file. 
   
   I am using the following to load the model:
   ```
   sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, iteration)
   module_model = mx.mod.Module(symbol=sym, label_names=None, context=mx.cpu())
   ```
   
   But when I try to `bind` the model:
   ```
   module_model.bind(
   for_training = False,
   data_shapes = [('data', (1, VOCAB_SIZE))]
   )
   ```
   
   It fails with the following error:
   ```
   ---
   MXNetErrorTraceback (most recent call last)
   ~/anaconda3/envs/python3/lib/python3.6/site-packages/mxnet/symbol/symbol.py 
in simple_bind(self, ctx, grad_req, type_dict, stype_dict, group2ctx, 
shared_arg_names, shared_exec, shared_buffer, **kwargs)
  1622
shared_exec_handle,
   -> 1623
ctypes.byref(exe_handle)))
  1624 except MXNetError as e:
   
   ~/anaconda3/envs/python3/lib/python3.6/site-packages/mxnet/base.py in 
check_call(ret)
   252 if ret != 0:
   --> 253 raise MXNetError(py_str(_LIB.MXGetLastError()))
   254 
   
   MXNetError: Error in operator sample_normal0: vector::_M_range_insert
   
   During handling of the above exception, another exception occurred:
   
   RuntimeError  Traceback (most recent call last)
in ()
 8 for_training = True,
 9 data_shapes = [('data', (1,VOCAB_SIZE))],
   ---> 10 force_rebind = True,
11 )
12 
   
   ~/anaconda3/envs/python3/lib/python3.6/site-packages/mxnet/module/module.py 
in bind(self, data_shapes, label_shapes, for_training, inputs_need_grad, 
force_rebind, shared_module, grad_req)
   427  
fixed_param_names=self._fixed_param_names,
   428  
grad_req=grad_req, group2ctxs=self._group2ctxs,
   --> 429  
state_names=self._state_names)
   430 self._total_exec_bytes = self._exec_group._total_exec_bytes
   431 if shared_module is not None:
   
   
~/anaconda3/envs/python3/lib/python3.6/site-packages/mxnet/module/executor_group.py
 in __init__(self, symbol, contexts, workload, data_shapes, label_shapes, 
param_names, for_training, inputs_need_grad, shared_group, logger, 
fixed_param_names, grad_req, state_names, group2ctxs)
   277 self.num_outputs = len(self.symbol.list_outputs())
   278 
   --> 279 self.bind_exec(data_shapes, label_shapes, shared_group)
   280 
   281 def decide_slices(self, data_shapes):
   
   
~/anaconda3/envs/python3/lib/python3.6/site-packages/mxnet/module/executor_group.py
 in bind_exec(self, data_shapes, label_shapes, shared_group, reshape)
   373 else:
   374 self.execs.append(self._bind_ith_exec(i, 
data_shapes_i, label_shapes_i,
   --> 375   shared_group))
   376 
   377 self.data_shapes = data_shapes
   
   
~/anaconda3/envs/python3/lib/python3.6/site-packages/mxnet/module/executor_group.py
 in _bind_ith_exec(self, i, data_shapes, label_shapes, shared_group)
   660type_dict=input_types, 
shared_arg_names=self.param_names,
   661shared_exec=shared_exec, 
group2ctx=group2ctx,
   --> 662
shared_buffer=shared_data_arrays, **input_shapes)
   663 self._total_exec_bytes += 
int(executor.debug_str().split('\n')[-3].split()[1])
   664 return executor
   
   ~/anaconda3/envs/python3/lib/python3.6/site-packages/mxnet/symbol/symbol.py 
in simple_bind(self, ctx, grad_req, type_dict, stype_dict, group2ctx, 
shared_arg_names, shared_exec, shared_buffer, **kwargs)
  1627 error_msg += "%s: %s\n" % (k, v)
  1628 error_msg += "%s" % e
   -> 1629 raise RuntimeError(error_msg)
  1630 
  1631 # update shared_buffer
   
   RuntimeError: simple_bind error. Arguments:
   data: (1, 52908)
   Error in operator sample_normal0: vector::_M_range_insert
   ```
   
   From the model architecture (https://arxiv.org/pdf/1809.02687.pdf), I know 
that the input is a vector of length `VOCAB_SIZE`. 
   
   Any ideas what I am doing wrong?



[GitHub] [incubator-mxnet] rondogency opened a new pull request #16228: [DO NOT MERGE] Debugging UNIX_GPU CI time regression since 15164

2019-09-20 Thread GitBox
rondogency opened a new pull request #16228: [DO NOT MERGE] Debugging UNIX_GPU 
CI time regression since 15164
URL: https://github.com/apache/incubator-mxnet/pull/16228
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] dtracz opened a new pull request #16229: Pr transpose

2019-09-20 Thread GitBox
dtracz opened a new pull request #16229: Pr transpose
URL: https://github.com/apache/incubator-mxnet/pull/16229
 
 
   Fast pseudo-2D transpose kernel.
   Supports only transposes whose permutation satisfies:
   there exist n and m such that
   params = (0, ..., n-1, n+m, ..., params.size-1, n, ..., n+m-1)
   Example: (0, 2, 3, 1) or (0, 3, 1, 2), but not (0, 2, 1, 3).
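The supported permutations move one contiguous block of axes past another while preserving order inside each block, which is why they reduce to a single batched 2-D transpose. A hedged NumPy sketch (hypothetical helpers, not the PR's kernel) of both the check and the equivalence:

```python
import numpy as np


def is_pseudo2d(perm):
    """Return (n, m) if `perm` has the supported form
    (0, ..., n-1, n+m, ..., len(perm)-1, n, ..., n+m-1), else None."""
    k = len(perm)
    for n in range(k):
        for m in range(1, k - n):
            cand = (tuple(range(n)) + tuple(range(n + m, k))
                    + tuple(range(n, n + m)))
            if tuple(perm) == cand:
                return n, m
    return None


def pseudo2d_transpose(x, n, m):
    """Apply such a permutation by collapsing the axes into a
    (batch, P, Q) view and doing a single 2-D transpose per batch."""
    batch = int(np.prod(x.shape[:n], dtype=int))
    p = int(np.prod(x.shape[n:n + m], dtype=int))
    q = int(np.prod(x.shape[n + m:], dtype=int))
    y = x.reshape(batch, p, q).transpose(0, 2, 1)
    return y.reshape(x.shape[:n] + x.shape[n + m:] + x.shape[n:n + m])
```

For example, on a 4-D array, `(0, 2, 3, 1)` corresponds to `n=1, m=1` and `(0, 3, 1, 2)` to `n=1, m=2`, while `(0, 2, 1, 3)` has no such decomposition.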




[GitHub] [incubator-mxnet] wkcn closed issue #16206: Wrong results of min([inf, inf]) and max([-inf, -inf])

2019-09-20 Thread GitBox
wkcn closed issue #16206: Wrong results of min([inf, inf]) and max([-inf,-inf])
URL: https://github.com/apache/incubator-mxnet/issues/16206
 
 
   




[GitHub] [incubator-mxnet] wkcn commented on issue #16206: Wrong results of min([inf, inf]) and max([-inf, -inf])

2019-09-20 Thread GitBox
wkcn commented on issue #16206: Wrong results of min([inf, inf]) and 
max([-inf,-inf])
URL: 
https://github.com/apache/incubator-mxnet/issues/16206#issuecomment-533734755
 
 
   Closing this since the issue has been addressed in PR #16226.




[incubator-mxnet] branch master updated (d61ed3f -> 618c481)

2019-09-20 Thread marcoabreu

marcoabreu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d61ed3f  Solve #14116, #15143 (#15144)
 add 618c481  [MXNET-1422] Fix wrong results of min([inf, inf]) and 
max([-inf,-inf]) (#16226)

No new revisions were added by this update.

Summary of changes:
 3rdparty/mshadow/mshadow/base.h| 43 --
 tests/python/unittest/test_operator.py | 17 ++
 2 files changed, 58 insertions(+), 2 deletions(-)
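The invariant this fix restores matches NumPy semantics: `min([inf, inf])` must be `inf` and `max([-inf, -inf])` must be `-inf`. A plausible illustration (an assumption about the failure mode, not the actual mshadow patch) is that a reduction seeded with the largest *finite* value of the type breaks exactly these inputs, while seeding with the true identity element does not:

```python
import math
from functools import reduce


def safe_min(xs):
    # The identity element for a running minimum is +infinity. Seeding the
    # reduction with the largest finite value instead is the classic bug
    # that makes min([inf, inf]) return that finite limit.
    return reduce(min, xs, math.inf)


def safe_max(xs):
    # Symmetrically, the identity for a running maximum is -infinity.
    return reduce(max, xs, -math.inf)
```

With the proper identities, all-infinite inputs fall through unchanged and finite inputs behave as before.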



[GitHub] [incubator-mxnet] marcoabreu commented on issue #16154: Deprecated MXNET website is still online ?

2019-09-20 Thread GitBox
marcoabreu commented on issue #16154: Deprecated MXNET website is still online ?
URL: 
https://github.com/apache/incubator-mxnet/issues/16154#issuecomment-533731855
 
 
   @ThomasDelteil 




[GitHub] [incubator-mxnet] marcoabreu merged pull request #16226: [MXNET-1422] Fix wrong results of min([inf, inf]) and max([-inf, -inf])

2019-09-20 Thread GitBox
marcoabreu merged pull request #16226: [MXNET-1422] Fix wrong results of 
min([inf, inf]) and max([-inf,-inf])
URL: https://github.com/apache/incubator-mxnet/pull/16226
 
 
   




[GitHub] [incubator-mxnet] anirudhacharya commented on issue #10051: Nondeterministic order of data in a data batch from NDArrayIter

2019-09-20 Thread GitBox
anirudhacharya commented on issue #10051: Nondeterministic order of data in a 
data batch from NDArrayIter
URL: 
https://github.com/apache/incubator-mxnet/issues/10051#issuecomment-533728736
 
 
   @eric-haibin-lin can this issue be closed?




[GitHub] [incubator-mxnet] zhreshold commented on issue #13593: Low CPU usage of MXNet in subprocesses

2019-09-20 Thread GitBox
zhreshold commented on issue #13593: Low CPU usage of MXNet in subprocesses
URL: 
https://github.com/apache/incubator-mxnet/issues/13593#issuecomment-533722252
 
 
   @YutingZhang 
   I just tested the master version; the environment variable `OMP_NUM_THREADS` 
can now effectively control the number of OMP threads each worker is allowed 
to use.
   
   For example, `OMP_NUM_THREADS=32 python3 mxnet_cpu_test.py --num-workers=2` 
gives
   
   
![image](https://user-images.githubusercontent.com/3307514/65360902-88f81000-dbb6-11e9-85fb-c91fc6c1f9f4.png)
   




[incubator-mxnet] branch master updated (986cecd -> d61ed3f)

2019-09-20 Thread zachgk

zachgk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 986cecd  Update MKL-DNN dependency (#16073)
 add d61ed3f  Solve #14116, #15143 (#15144)

No new revisions were added by this update.

Summary of changes:
 include/mxnet/tuple.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-mxnet] zachgk closed issue #15143: dmlc::type_name_helper specialization of mxnet::tuple should not be disabled for MSVC

2019-09-20 Thread GitBox
zachgk closed issue #15143: dmlc::type_name_helper specialization of 
mxnet::tuple should not be disabled for MSVC
URL: https://github.com/apache/incubator-mxnet/issues/15143
 
 
   




[GitHub] [incubator-mxnet] zachgk closed issue #14116: Failure in generated op.h in version 1.3.1

2019-09-20 Thread GitBox
zachgk closed issue #14116: Failure in generated op.h in version 1.3.1
URL: https://github.com/apache/incubator-mxnet/issues/14116
 
 
   




[GitHub] [incubator-mxnet] zachgk merged pull request #15144: Relax Visual Studio version constraint in the specialization of `dmlc::type_name_helper` for `DT=mxnet::Tuple`

2019-09-20 Thread GitBox
zachgk merged pull request #15144: Relax Visual Studio version constraint in 
the specialization of `dmlc::type_name_helper` for `DT=mxnet::Tuple`
URL: https://github.com/apache/incubator-mxnet/pull/15144
 
 
   




[GitHub] [incubator-mxnet] zachgk commented on issue #15797: how can I concat two dataset with a sepcial ratio?

2019-09-20 Thread GitBox
zachgk commented on issue #15797: how can I concat two dataset with a sepcial 
ratio?
URL: 
https://github.com/apache/incubator-mxnet/issues/15797#issuecomment-533698381
 
 
   @crossLi Were you able to get the custom data loader working?




[GitHub] [incubator-mxnet] Jerryzcn commented on a change in pull request #16215: New ops for RCNN + old ops improvements for RCNN

2019-09-20 Thread GitBox
Jerryzcn commented on a change in pull request #16215: New ops for RCNN + old 
ops improvements for RCNN
URL: https://github.com/apache/incubator-mxnet/pull/16215#discussion_r326790569
 
 

 ##
 File path: src/operator/contrib/bounding_box-inl.h
 ##
 @@ -787,6 +787,284 @@ void BipartiteMatchingBackward(const nnvm::NodeAttrs& 
attrs,
   });
 }
 
+
+inline bool BoxEncodeShape(const nnvm::NodeAttrs& attrs,
+   mxnet::ShapeVector *in_attrs,
+   mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 6U);
+  CHECK_EQ(out_attrs->size(), 2U);
+  mxnet::TShape& sshape = (*in_attrs)[0];
+  mxnet::TShape& mshape = (*in_attrs)[1];
+  mxnet::TShape& ashape = (*in_attrs)[2];
+  mxnet::TShape& rshape = (*in_attrs)[3];
+
+  CHECK_EQ(sshape.ndim(), 2)
+<< "samples shape must have dim == 2, "
+<< sshape.ndim() << " provided";
+
+  CHECK_GE(mshape.ndim(), 2)
+<< "matches shape must have dim == 2, "
+<< mshape.ndim() << " provided";
+
+  CHECK_GE(ashape.ndim(), 3)
+<< "matches shape must have dim == 3, "
+<< ashape.ndim() << " provided";
+  int ldim = ashape[ashape.ndim() - 1];
+  CHECK_EQ(ldim, 4)
+<< "last dimension of anchors must be 4, "
+<< ldim << " provided";
+
+  CHECK_GE(rshape.ndim(), 3)
+<< "refs shape must have dim == 3, "
+<< ashape.ndim() << " provided";
+  ldim = rshape[rshape.ndim() - 1];
+  CHECK_EQ(ldim, 4)
+<< "last dimension of anchors must be 4, "
+<< ldim << " provided";
+
+  // asign input shape
+  SHAPE_ASSIGN_CHECK(*in_attrs, 4, mshadow::Shape1(4));
+  SHAPE_ASSIGN_CHECK(*in_attrs, 5, mshadow::Shape1(4));
+
+  // assign output shape
+  mxnet::TShape oshape = ashape;
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+  SHAPE_ASSIGN_CHECK(*out_attrs, 1, oshape);
+  return shape_is_known(oshape);
+}
+
+struct box_encode {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, DType *out_targets, DType *out_masks,
 
 Review comment:
   understood.
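For readers following the review, the kernel quoted above encodes each anchor box against its matched reference box into normalized (dx, dy, dw, dh) regression targets, masked to positive samples. A NumPy sketch of the same computation (an illustration of the math, not the PR's code; corner format xmin, ymin, xmax, ymax is assumed):

```python
import numpy as np


def box_encode(samples, matches, anchors, refs, means, stds):
    """samples, matches: (B, N); anchors: (B, N, 4); refs: (B, M, 4).
    Returns per-anchor targets and masks, both shaped (B, N, 4)."""
    idx = matches.astype(int)                      # matched ref per anchor
    ind = idx[..., None].repeat(4, -1)             # (B, N, 4) gather index
    ref = np.take_along_axis(refs, ind, axis=1)    # matched ref boxes
    # Corner -> center/size for anchors and matched refs.
    aw = anchors[..., 2] - anchors[..., 0]
    ah = anchors[..., 3] - anchors[..., 1]
    ax = anchors[..., 0] + 0.5 * aw
    ay = anchors[..., 1] + 0.5 * ah
    rw = ref[..., 2] - ref[..., 0]
    rh = ref[..., 3] - ref[..., 1]
    rx = ref[..., 0] + 0.5 * rw
    ry = ref[..., 1] + 0.5 * rh
    # Offsets normalized by the per-coordinate means and stds.
    raw = np.stack([(rx - ax) / aw, (ry - ay) / ah,
                    np.log(rw / aw), np.log(rh / ah)], axis=-1)
    targets = (raw - means) / stds
    valid = (samples > 0.5)[..., None]             # positive samples only
    masks = valid.astype(anchors.dtype).repeat(4, -1)
    return np.where(valid, targets, 0.0), masks
```

A perfectly matched anchor (identical to its reference box, zero means, unit stds) encodes to all-zero targets with an all-one mask.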




[GitHub] [incubator-mxnet] Jerryzcn commented on a change in pull request #16215: New ops for RCNN + old ops improvements for RCNN

2019-09-20 Thread GitBox
Jerryzcn commented on a change in pull request #16215: New ops for RCNN + old 
ops improvements for RCNN
URL: https://github.com/apache/incubator-mxnet/pull/16215#discussion_r326790631
 
 

 ##
 File path: src/operator/contrib/bounding_box-inl.h
 ##
 @@ -787,6 +787,284 @@ void BipartiteMatchingBackward(const nnvm::NodeAttrs& 
attrs,
   });
 }
 
+
+inline bool BoxEncodeShape(const nnvm::NodeAttrs& attrs,
+   mxnet::ShapeVector *in_attrs,
+   mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 6U);
+  CHECK_EQ(out_attrs->size(), 2U);
+  mxnet::TShape& sshape = (*in_attrs)[0];
+  mxnet::TShape& mshape = (*in_attrs)[1];
+  mxnet::TShape& ashape = (*in_attrs)[2];
+  mxnet::TShape& rshape = (*in_attrs)[3];
+
+  CHECK_EQ(sshape.ndim(), 2)
+<< "samples shape must have dim == 2, "
+<< sshape.ndim() << " provided";
+
+  CHECK_GE(mshape.ndim(), 2)
+<< "matches shape must have dim == 2, "
+<< mshape.ndim() << " provided";
+
+  CHECK_GE(ashape.ndim(), 3)
+<< "matches shape must have dim == 3, "
+<< ashape.ndim() << " provided";
+  int ldim = ashape[ashape.ndim() - 1];
+  CHECK_EQ(ldim, 4)
+<< "last dimension of anchors must be 4, "
+<< ldim << " provided";
+
+  CHECK_GE(rshape.ndim(), 3)
+<< "refs shape must have dim == 3, "
+<< ashape.ndim() << " provided";
+  ldim = rshape[rshape.ndim() - 1];
+  CHECK_EQ(ldim, 4)
+<< "last dimension of anchors must be 4, "
+<< ldim << " provided";
+
+  // asign input shape
+  SHAPE_ASSIGN_CHECK(*in_attrs, 4, mshadow::Shape1(4));
+  SHAPE_ASSIGN_CHECK(*in_attrs, 5, mshadow::Shape1(4));
+
+  // assign output shape
+  mxnet::TShape oshape = ashape;
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+  SHAPE_ASSIGN_CHECK(*out_attrs, 1, oshape);
+  return shape_is_known(oshape);
+}
+
+struct box_encode {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, DType *out_targets, DType *out_masks,
+  const DType *samples, const DType *matches,
+  const DType *anchors, const DType *refs,
+  const DType *means, const DType *stds,
+  const int m, const int n) {
+int j = i / n;
+int match = matches[i];
+// xmin: 0, ymin:1, xmax: 2, ymax: 3
+// x:0, y:1, w:2, h:3
+int ref_index = (j * m + match) * 4;
+DType ref_xmin = refs[ref_index + 0];
+DType ref_ymin = refs[ref_index + 1];
+DType ref_width = refs[ref_index + 2] - ref_xmin;
+DType ref_height = refs[ref_index + 3] - ref_ymin;
+DType ref_x = ref_xmin + ref_width * 0.5;
+DType ref_y = ref_ymin + ref_height * 0.5;
+int a_index = i * 4;
+DType a_xmin = anchors[a_index + 0];
+DType a_ymin = anchors[a_index + 1];
+DType a_width = anchors[a_index + 2] - a_xmin;
+DType a_height = anchors[a_index + 3] - a_ymin;
+DType a_x = a_xmin + a_width * 0.5;
+DType a_y = a_ymin + a_height * 0.5;
+DType valid = samples[i] > 0.5 ? 1.0 : 0.0;
+out_masks[a_index + 0] = valid;
+out_masks[a_index + 1] = valid;
+out_masks[a_index + 2] = valid;
+out_masks[a_index + 3] = valid;
+out_targets[a_index + 0] = valid > static_cast<DType>(0.5) ?
+((ref_x - a_x) / a_width - static_cast<DType>(means[0])) /
+static_cast<DType>(stds[0]) : static_cast<DType>(0.0);
+out_targets[a_index + 1] = valid > static_cast<DType>(0.5) ?
+((ref_y - a_y) / a_height - static_cast<DType>(means[1])) /
+static_cast<DType>(stds[1]) : static_cast<DType>(0.0);
+out_targets[a_index + 2] = valid > static_cast<DType>(0.5) ?
+(log(ref_width / a_width) - static_cast<DType>(means[2])) /
+static_cast<DType>(stds[2]) : static_cast<DType>(0.0);
+out_targets[a_index + 3] = valid > static_cast<DType>(0.5) ?
+(log(ref_height / a_height) - static_cast<DType>(means[3])) /
+static_cast<DType>(stds[3]) : static_cast<DType>(0.0);
+  }
+};
+
+template<typename xpu>
+void BoxEncodeForward(const nnvm::NodeAttrs& attrs,
+  const OpContext& ctx,
+  const std::vector& inputs,
+  const std::vector& req,
+  const std::vector& outputs) {
+  using namespace mshadow;
+  using namespace mshadow::expr;
+  using namespace mxnet_op;
+  CHECK_EQ(inputs.size(), 6U);
+  CHECK_EQ(outputs.size(), 2U);
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+  // samples, matches, anchors, refs, means, stds
+  mxnet::TShape anchor_shape = inputs[2].shape_;
+  int loop_size = anchor_shape.ProdShape(0, 2);
+  int b = anchor_shape[0];
+  int n = anchor_shape[1];
+  int m = inputs[3].shape_[1];
+  MSHADOW_REAL_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+Tensor<xpu, 2, DType> samples = inputs[0]
+ .get_with_shape<xpu, 2, DType>(Shape2(b, n), s);
+Tensor<xpu, 2, DType> matches = inputs[1]
+ .get_with_shape<xpu, 2, DType>(Shape2(b, n), s);
+Tensor<xpu, 3, DType> anchors = inputs[2]
+ .get_with_shape<xpu, 3, DType>(Shape3(b, n, 4), s);
+Tensor<xpu, 3, DType> refs = inputs[3]
+ .get_with_shape<xpu, 3, DType>(Shape3(b, m, 4), s);
+Tensor<xpu, 1, DType> means = inputs[4]
+ .get_with_shape<xpu, 1, DType>(Shape1(4), s);
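For readers following the thread, the target computation in the kernel quoted above can be sketched in NumPy. This is an illustrative re-derivation, not the PR's code; the mean/std defaults below are common placeholder choices, not values taken from the patch:

```python
import numpy as np

def box_encode(anchors, refs, means=(0.0, 0.0, 0.0, 0.0),
               stds=(0.1, 0.1, 0.2, 0.2)):
    """Corner boxes -> normalized (dx, dy, dw, dh) regression targets.

    `anchors` and `refs` are (N, 4) arrays of (xmin, ymin, xmax, ymax).
    """
    def to_center(b):
        w = b[:, 2] - b[:, 0]
        h = b[:, 3] - b[:, 1]
        return b[:, 0] + 0.5 * w, b[:, 1] + 0.5 * h, w, h

    ax, ay, aw, ah = to_center(np.asarray(anchors, dtype=np.float64))
    rx, ry, rw, rh = to_center(np.asarray(refs, dtype=np.float64))
    return np.stack([
        ((rx - ax) / aw - means[0]) / stds[0],   # x offset, scaled by width
        ((ry - ay) / ah - means[1]) / stds[1],   # y offset, scaled by height
        (np.log(rw / aw) - means[2]) / stds[2],  # log width ratio
        (np.log(rh / ah) - means[3]) / stds[3],  # log height ratio
    ], axis=1)

boxes = np.array([[0.0, 0.0, 2.0, 2.0], [1.0, 1.0, 3.0, 5.0]])
print(box_encode(boxes, boxes))  # identical boxes -> all-zero targets
```

Encoding a box against itself yields all-zero targets (with zero means), which is a quick sanity check on any implementation of this scheme.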

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-20 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 2c2fb24  Bump the publish timestamp.
2c2fb24 is described below

commit 2c2fb2401c0b0b7e85e1a90d807f7cf1784f91cb
Author: mxnet-ci 
AuthorDate: Fri Sep 20 18:41:41 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..98174a3
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Sep 20 18:41:41 UTC 2019



[GitHub] [incubator-mxnet] zachgk commented on issue #16224: mxnet.base.MXNetError: [13:43:35] C:\Jenkins\workspace\mxnet-tag\mxnet\src\ndarray\ndarray.cc:1755: Check failed: fi->Read(data) Invalid ND

2019-09-20 Thread GitBox
zachgk commented on issue #16224: mxnet.base.MXNetError: [13:43:35] 
C:\Jenkins\workspace\mxnet-tag\mxnet\src\ndarray\ndarray.cc:1755: Check failed: 
fi->Read(data) Invalid NDArray file format while loading resnet-18
URL: 
https://github.com/apache/incubator-mxnet/issues/16224#issuecomment-533666873
 
 
   I ran this and it seemed to work fine.
   
   The mx.test_utils.download call tries to download the files to the 
local directory. Can you try navigating to an empty directory before running 
this?
   
   
   Can you also share your environment info:
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zachgk commented on issue #16227: How to use a pre-trained model in model zoo as a sub-block in my own model?

2019-09-20 Thread GitBox
zachgk commented on issue #16227: How to use a pre-trained model in model zoo 
as a sub-block in my own model? 
URL: 
https://github.com/apache/incubator-mxnet/issues/16227#issuecomment-533663914
 
 
   You should use Mymodel.name_scope() when defining the child blocks. Take a 
look at the documentation in 
http://beta.mxnet.io/api/gluon/mxnet.gluon.nn.Block.html. Also, use 
model.save_parameters as save_params is deprecated.




[GitHub] [incubator-mxnet] zachgk commented on issue #9656: Sparse Tensor support in scala api

2019-09-20 Thread GitBox
zachgk commented on issue #9656: Sparse Tensor support in scala api
URL: 
https://github.com/apache/incubator-mxnet/issues/9656#issuecomment-533658182
 
 
   Sparse Support was added in 
https://github.com/apache/incubator-mxnet/pull/15378




[GitHub] [incubator-mxnet] zachgk closed issue #9656: Sparse Tensor support in scala api

2019-09-20 Thread GitBox
zachgk closed issue #9656: Sparse Tensor support in scala api
URL: https://github.com/apache/incubator-mxnet/issues/9656
 
 
   




[GitHub] [incubator-mxnet] zachgk commented on issue #15144: Relax Visual Studio version constraint in the specialization of `dmlc::type_name_helper` for `DT=mxnet::Tuple`

2019-09-20 Thread GitBox
zachgk commented on issue #15144: Relax Visual Studio version constraint in the 
specialization of `dmlc::type_name_helper` for `DT=mxnet::Tuple`
URL: https://github.com/apache/incubator-mxnet/pull/15144#issuecomment-533651460
 
 
   @QueensGambit Thanks for reviewing. You can actually use the official 
GitHub approve button under Files Changed -> Review Changes -> Approve 
even if you are not a committer.
   
   @anirudh2290 @junrushao1994 @eric-haibin-lin Can any of you help review this 
PR?




[GitHub] [incubator-mxnet] zachgk commented on issue #16144: added -DCMAKE_BUILD_TYPE=Release to docs for building from source

2019-09-20 Thread GitBox
zachgk commented on issue #16144: added -DCMAKE_BUILD_TYPE=Release to docs for 
building from source
URL: https://github.com/apache/incubator-mxnet/pull/16144#issuecomment-533647577
 
 
   @QueensGambit There are merge conflicts since we released our new website 
(https://github.com/apache/incubator-mxnet/pull/15883). Can you rebase?




[GitHub] [incubator-mxnet] drivanov commented on a change in pull request #16218: Improving performance of argmax operator

2019-09-20 Thread GitBox
drivanov commented on a change in pull request #16218: Improving performance of 
argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326727093
 
 

 ##
 File path: src/operator/tensor/broadcast_reduce_op.h
 ##
 @@ -556,6 +556,162 @@ inline bool ReduceAxesOpForwardStorage(const 
nnvm::NodeAttrs& attrs,
   return dispatched;
 }
 
+struct argmax {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, const int nWorkers, const DType 
*in_data, DType *out_data,
 
 Review comment:
   OK, I will do that.




[GitHub] [incubator-mxnet] drivanov edited a comment on issue #16178: [WIP]improving argmax perf

2019-09-20 Thread GitBox
drivanov edited a comment on issue #16178: [WIP]improving argmax perf
URL: https://github.com/apache/incubator-mxnet/pull/16178#issuecomment-533637642
 
 
   @access2rohit: It seems that to pass the following test from 
/work/mxnet/julia/test/unittest/ndarray.jl, lines 1511-1531: 
   ```
   function test_argmax()
   . . .
 @info "NDArray::argmax"
  @info "NDArray::argmax::NaN"
 let
   A = [1.  5 3;
NaN 2 6]
   x = NDArray(A)
   
   @test copy(argmax(x, dims = 1)) == [1 1 2]
   @test copy(argmax(x, dims = 2)) == reshape([2, 3], :, 1)
 end
   end
   ```
   We have to skip `NaN`s. As far as I understand, in that case the outputs of 
`nd.argmax` and `np.nanargmax` should be identical and we will not need 
`nd.nanargmax`. Is that correct?
   
   BTW, what would be the correct output of `nd.argmax` when all elements of a 
vector are `NaN`s? 
   ```
   A = [1.  5 3;
NaN NaN NaN]
   ```
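For reference, NumPy's own behavior on the examples above can be checked directly (NumPy uses zero-based indices, whereas the Julia test above is one-based):

```python
import numpy as np

A = np.array([[1.0, 5.0, 3.0],
              [np.nan, 2.0, 6.0]])

# np.argmax propagates NaN: a NaN element is reported as the maximum.
print(np.argmax(A, axis=0))
# np.nanargmax skips NaN elements entirely.
print(np.nanargmax(A, axis=0))

# For an all-NaN slice, nanargmax raises instead of returning an index.
B = np.array([[1.0, 5.0, 3.0],
              [np.nan, np.nan, np.nan]])
try:
    np.nanargmax(B, axis=1)
except ValueError as err:
    print('all-NaN slice:', err)
```

So in NumPy the all-NaN case is an error rather than any particular index.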

   




[GitHub] [incubator-mxnet] drivanov commented on issue #16178: [WIP]improving argmax perf

2019-09-20 Thread GitBox
drivanov commented on issue #16178: [WIP]improving argmax perf
URL: https://github.com/apache/incubator-mxnet/pull/16178#issuecomment-533637642
 
 
   @access2rohit: It seems that to pass the following test from 
/work/mxnet/julia/test/unittest/ndarray.jl, lines 1511-1531: 
   ```
   function test_argmax()
   . . .
 @info "NDArray::argmax"
  @info "NDArray::argmax::NaN"
 let
   A = [1.  5 3;
NaN 2 6]
   x = NDArray(A)
   
   @test copy(argmax(x, dims = 1)) == [1 1 2]
   @test copy(argmax(x, dims = 2)) == reshape([2, 3], :, 1)
 end
   end
   ```
   We have to skip `NaN`s. As far as I understand, in that case the outputs of 
`nd.argmax` and `np.nanargmax` should be identical and we will not need 
`nd.nanargmax`. Is that correct? 
   




[GitHub] [incubator-mxnet] kshitij12345 commented on issue #15331: [fix] missing input log higher order.

2019-09-20 Thread GitBox
kshitij12345 commented on issue #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-533627735
 
 
   @sxjscience I am not sure how to test this. I was expecting that this 
missing input would cause a problem when computing higher-order gradients. I 
tried the 3rd order, which was computed successfully. For the fourth order, I got 
   `Operator _backward_mul is non-differentiable because it didn't register 
FGradient attribute` (a different problem). So I am not sure how to 
write a test case.




[GitHub] [incubator-mxnet] Caenorst commented on issue #16122: Add fast implementation of LARS

2019-09-20 Thread GitBox
Caenorst commented on issue #16122: Add fast implementation of LARS
URL: https://github.com/apache/incubator-mxnet/pull/16122#issuecomment-533627265
 
 
   I don't understand the errors in the CI; can somebody help me?




[GitHub] [incubator-mxnet] Caenorst commented on a change in pull request #16122: Add fast implementation of LARS

2019-09-20 Thread GitBox
Caenorst commented on a change in pull request #16122: Add fast implementation 
of LARS
URL: https://github.com/apache/incubator-mxnet/pull/16122#discussion_r326710971
 
 

 ##
 File path: tests/python/gpu/test_optimizer.py
 ##
 @@ -0,0 +1,93 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import mxnet as mx
+
+import sys
+import os
+curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
+sys.path.insert(0, os.path.join(curr_path, '../unittest'))
+from common import setup_module, with_seed
+
+# This script is testing the efficiency of LARS
+# We are training LeNet-5 at batch-size 8000 in 10 epochs above 98% accuracy
+# Which is not doable with simple SGD + momentum (from what have been tested 
so far)
+
+def lenet5():
+"""LeNet-5 Symbol"""
+#pylint: disable=no-member
+data = mx.sym.Variable('data')
+conv1 = mx.sym.Convolution(data=data, kernel=(5, 5), num_filter=20)
+tanh1 = mx.sym.Activation(data=conv1, act_type="tanh")
+pool1 = mx.sym.Pooling(data=tanh1, pool_type="max",
+   kernel=(2, 2), stride=(2, 2))
+# second conv
+conv2 = mx.sym.Convolution(data=pool1, kernel=(5, 5), num_filter=50)
+tanh2 = mx.sym.Activation(data=conv2, act_type="tanh")
+pool2 = mx.sym.Pooling(data=tanh2, pool_type="max",
+   kernel=(2, 2), stride=(2, 2))
+# first fullc
+flatten = mx.sym.Flatten(data=pool2)
+fc1 = mx.sym.FullyConnected(data=flatten, num_hidden=500)
+tanh3 = mx.sym.Activation(data=fc1, act_type="tanh")
+# second fullc
+fc2 = mx.sym.FullyConnected(data=tanh3, num_hidden=10)
+# loss
+lenet = mx.sym.SoftmaxOutput(data=fc2, name='softmax')
+#pylint: enable=no-member
+return lenet
+
+@with_seed()
+def test_lars():
 
 Review comment:
   So, I'm already testing the MXNet ops in 
https://github.com/apache/incubator-mxnet/pull/16122/files#diff-4758fb9329d438de2836db2634a8f5f7R270-R422,
 which are the only non-Python parts of the optimizer. What my test adds is 
showing that the optimizer behaves properly, i.e. that it allows you to train a 
network with a bigger batch. Making a fully Python version of it wouldn't help 
me test that (the Python could be wrong as well, say, if I misunderstood the 
publication).
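For context, the layer-wise scaling that LARS adds on top of SGD with momentum can be sketched as follows. This is a paraphrase of the published formula, not this PR's kernels; `eta`, `wd`, and `eps` are placeholder values:

```python
import numpy as np

def lars_local_lr(weight, grad, eta=0.001, wd=0.0, eps=1e-9):
    # Trust ratio from "Large Batch Training of Convolutional Networks"
    # (You et al., 2017): scale each layer's step by ||w|| / (||g|| + wd*||w||).
    w_norm = float(np.linalg.norm(weight))
    g_norm = float(np.linalg.norm(grad))
    if w_norm == 0.0 or g_norm == 0.0:
        return 1.0
    return eta * w_norm / (g_norm + wd * w_norm + eps)

w = np.array([3.0, 4.0])    # ||w|| = 5
g = np.array([0.3, 0.4])    # ||g|| = 0.5
print(lars_local_lr(w, g))  # close to eta * 10
```

Because the ratio is computed per layer, layers with small gradients relative to their weights take proportionally larger steps, which is what makes very large batches trainable.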




[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #15855: [Numpy] cumprod

2019-09-20 Thread GitBox
marcoabreu commented on a change in pull request #15855: [Numpy] cumprod
URL: https://github.com/apache/incubator-mxnet/pull/15855#discussion_r326710965
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -367,6 +367,7 @@ build_ubuntu_cpu_openblas() {
 set -ex
 export CC="gcc"
 export CXX="g++"
+pip3 install --user psutil
 
 Review comment:
   How about you add Ubuntu tvm to the CPU dockerfile? I mean if we want to run 
tvm cpu tests, we should also install it, right?
   
   No, please don't separate it.




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #15855: [Numpy] cumprod

2019-09-20 Thread GitBox
hzfan commented on a change in pull request #15855: [Numpy] cumprod
URL: https://github.com/apache/incubator-mxnet/pull/15855#discussion_r326710119
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -367,6 +367,7 @@ build_ubuntu_cpu_openblas() {
 set -ex
 export CC="gcc"
 export CXX="g++"
+pip3 install --user psutil
 
 Review comment:
   Thank you for the suggestion. I tried it and found that 
`Dockerfile.build.ubuntu_gpu_cu*` runs `ubuntu_tvm.sh`, but 
`Dockerfile.build.ubuntu_cpu` does not run it. So I think doing it in 
`ubuntu_tvm.sh` may not fix the error in unix-cpu.
   
   Can I separate `ubuntu_python.sh` into two files, one for ARM devices and the 
other for common x86-64 devices? This worked for me.




[GitHub] [incubator-mxnet] drivanov commented on issue #16218: Improving performance of argmax operator

2019-09-20 Thread GitBox
drivanov commented on issue #16218: Improving performance of argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#issuecomment-533624848
 
 
   > It seems this PR is working on the same operator as #16178. Can you run a 
profiling using 
https://github.com/apache/incubator-mxnet/tree/master/benchmark/opperf and show 
the performance gain of your change?
   
   I will do it and will collect performance data for all three version: 
existing one, #16178 and #16218




[GitHub] [incubator-mxnet] drivanov commented on a change in pull request #16218: Improving performance of argmax operator

2019-09-20 Thread GitBox
drivanov commented on a change in pull request #16218: Improving performance of 
argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326707791
 
 

 ##
 File path: src/operator/tensor/broadcast_reduce_op.h
 ##
 @@ -556,6 +556,162 @@ inline bool ReduceAxesOpForwardStorage(const 
nnvm::NodeAttrs& attrs,
   return dispatched;
 }
 
+struct argmax {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, const int nWorkers, const DType 
*in_data, DType *out_data,
+   size_t nSteps, size_t step, size_t shift, void *pIdxStorage, 
bool use_uint16) {
+// i - index of launched thread
+// nWorkers - number of threads, assigned to work on one row/column
+// iw - index of current thread among workers assigned to the same vector
+int iw = 0;
+const DType *pCurr = in_data;
+if (nWorkers > 1) {
+  // in - the vector number which current thread is assigned to
+  const auto in = i / nWorkers;
+  iw = i % nWorkers;
+  pCurr += in % step + shift * (in / step) + iw * step;
+  nSteps = (nSteps + nWorkers - 1 - iw) / nWorkers;
+  step *= nWorkers;
+} else {
+  pCurr += i % step + shift * (i / step);
+}
+
+int maxIdx = 0;
+DType maxVal = *pCurr;
+for (size_t j = 1; j < nSteps; ++j) {
+  if (maxVal < *(pCurr += step)) {
+maxVal = *pCurr;
+maxIdx = j;
+  }
+}
+
+if (nWorkers > 1) {
+  // saving index of best element found by current thread
+  if (use_uint16) {
+*(static_cast<uint16_t *>(pIdxStorage) + i) = maxIdx * nWorkers + iw;
+  } else {
+*(static_cast<uint32_t *>(pIdxStorage) + i) = maxIdx * nWorkers + iw;
+  }
+} else {
+  out_data[i] = maxIdx;// output of argmax
+}
+  }
+};
+
+struct argmax_reduce {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, const int nWorkers, const DType 
*in_data, DType *out_data,
+   const size_t step, const size_t shift, void *pIdx, const bool 
use_uin16) {
+const DType *pCurr = in_data + i % step + shift * (i / step);
+int maxIdx;
+if (use_uin16) {
+  const auto *pIdxStorage = static_cast<uint16_t *>(pIdx);
+  maxIdx = *(pIdxStorage += i * nWorkers);
+  DType maxVal = *(pCurr + step * maxIdx);
+  for (int j = 1; j < nWorkers; ++j) {
+const auto val = *(pCurr + step * pIdxStorage[j]);
+if (maxVal < val) {
+  maxVal = val;
+  maxIdx = pIdxStorage[j];
+}
+  }
+} else {
+  const auto *pIdxStorage = static_cast<uint32_t *>(pIdx);
+  maxIdx = *(pIdxStorage += i * nWorkers);
+  DType maxVal = *(pCurr + step * maxIdx);
+  for (int j = 1; j < nWorkers; ++j) {
+const auto val = *(pCurr + step * pIdxStorage[j]);
+if (maxVal < val) {
+  maxVal = val;
+  maxIdx = pIdxStorage[j];
+}
+  }
+}
+
+out_data[i] = maxIdx;
+  }
+};
+
+template<typename xpu, typename DType>
+DType *AllocateDTypeMemory(const OpContext& ctx, const size_t num_items) {
+  const size_t memSize = num_items * sizeof(DType);
+  mshadow::Tensor<xpu, 1, char> workspace =
+ctx.requested[0].get_space_typed<xpu, 1, char>(
+  mshadow::Shape1(memSize), ctx.get_stream<xpu>());
+  return reinterpret_cast<DType *>(workspace.dptr_);
+}
+
+template<typename xpu, typename DType>
+void ArgMax(const OpContext& ctx, const TShape &shape, int axis, size_t step, 
size_t shift,
+const int nWorkers, void *pIdxStorage, bool use_uin16,
+const TBlob& input, const TBlob& output) {
+  using namespace mxnet_op;
+  const auto pIn = input.dptr<DType>();
+  auto *pOut = output.dptr<DType>();
+  const auto nSize = shape[axis];
+  const auto num_threads = shape.Size() / nSize;
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  Kernel<argmax, xpu>::Launch(s, num_threads * nWorkers, nWorkers, pIn, pOut, 
nSize,
+  step, shift, pIdxStorage, use_uin16);
+  if (nWorkers > 1) {
+Kernel<argmax_reduce, xpu>::Launch(s, num_threads, nWorkers, pIn, pOut,
+   step, shift, pIdxStorage, use_uin16);
+  }
+}
+
+template<typename xpu>
+int DefineNumbWorkers(const TShape &shape, int axis);
+
+template<typename xpu>
+void ArgMax(const nnvm::NodeAttrs& attrs,
+const OpContext& ctx,
+const std::vector& inputs,
+const std::vector& req,
+const std::vector& outputs) {
+  const ReduceAxisParam& param = nnvm::get<ReduceAxisParam>(attrs.parsed);
+  if (!param.axis) LOG(FATAL) << "Global reduction not supported yet";
+
+  auto shape = inputs[0].shape_;
+  auto axis = CheckAxis(param.axis.value(), shape.ndim());
+  if (shape.ndim() == 1)
+shape = AxisShapeCompact(shape, &axis, true);
+
+  void *pIdxMemory = nullptr;
+  auto nWorkers = DefineNumbWorkers<xpu>(shape, axis);
+  bool use_uint16 = false;
+  while (nWorkers > 1) {
+const size_t num_items = nWorkers * shape.Size() / shape[axis];
+pIdxMemory = AllocateDTypeMemory<xpu, uint32_t>(ctx, num_items);
+if (pIdxMemory)
+  break;
+
+// Check if indexes can be stored in uint16 format
+if (shape[axis] <= UINT16_MAX) {
+  pIdxMemory = AllocateDTypeMemory<xpu, uint16_t>(ctx, num_items);
+ 
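The worker-splitting scheme in the kernels above (each worker scans a strided slice of the vector, then a second pass reduces the per-worker winners) can be illustrated in NumPy. This is a sketch of the idea, not the PR's code:

```python
import numpy as np

def two_phase_argmax(vec, n_workers):
    # Phase 1: worker iw scans elements iw, iw + n_workers, iw + 2*n_workers, ...
    # and records the index of the best element it saw.
    winners = []
    for iw in range(n_workers):
        idx = np.arange(iw, len(vec), n_workers)
        winners.append(int(idx[np.argmax(vec[idx])]))
    # Phase 2: reduce the per-worker winners to the global argmax.
    return winners[int(np.argmax([vec[i] for i in winners]))]

vec = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
print(two_phase_argmax(vec, 3))  # matches np.argmax(vec)
```

The point of the split is that phase 1 parallelizes over long reduction axes, which is exactly the case the single-thread-per-vector kernel handles poorly.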

[GitHub] [incubator-mxnet] drivanov commented on a change in pull request #16218: Improving performance of argmax operator

2019-09-20 Thread GitBox
drivanov commented on a change in pull request #16218: Improving performance of 
argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326707461
 
 

 ##
 File path: src/operator/tensor/broadcast_reduce_op.h
 ##
 @@ -556,6 +556,162 @@ inline bool ReduceAxesOpForwardStorage(const 
nnvm::NodeAttrs& attrs,
   return dispatched;
 }
 
+struct argmax {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, const int nWorkers, const DType 
*in_data, DType *out_data,
 
 Review comment:
   OK, I will do that.




[GitHub] [incubator-mxnet] drivanov commented on a change in pull request #16218: Improving performance of argmax operator

2019-09-20 Thread GitBox
drivanov commented on a change in pull request #16218: Improving performance of 
argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326704192
 
 

 ##
 File path: src/operator/tensor/broadcast_reduce_op.h
 ##
 @@ -556,6 +556,162 @@ inline bool ReduceAxesOpForwardStorage(const 
nnvm::NodeAttrs& attrs,
   return dispatched;
 }
 
+struct argmax {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, const int nWorkers, const DType 
*in_data, DType *out_data,
+   size_t nSteps, size_t step, size_t shift, void *pIdxStorage, 
bool use_uint16) {
+// i - index of launched thread
+// nWorkers - number of threads, assigned to work on one row/column
+// iw - index of current thread among workers assigned to the same vector
+int iw = 0;
+const DType *pCurr = in_data;
+if (nWorkers > 1) {
+  // in - the vector number which current thread is assigned to
+  const auto in = i / nWorkers;
+  iw = i % nWorkers;
+  pCurr += in % step + shift * (in / step) + iw * step;
+  nSteps = (nSteps + nWorkers - 1 - iw) / nWorkers;
+  step *= nWorkers;
+} else {
+  pCurr += i % step + shift * (i / step);
+}
+
+int maxIdx = 0;
+DType maxVal = *pCurr;
+for (size_t j = 1; j < nSteps; ++j) {
+  if (maxVal < *(pCurr += step)) {
+maxVal = *pCurr;
+maxIdx = j;
+  }
+}
+
+if (nWorkers > 1) {
+  // saving index of best element found by current thread
+  if (use_uint16) {
+*(static_cast<uint16_t *>(pIdxStorage) + i) = maxIdx * nWorkers + iw;
+  } else {
+*(static_cast<uint32_t *>(pIdxStorage) + i) = maxIdx * nWorkers + iw;
+  }
+} else {
+  out_data[i] = maxIdx;// output of argmax
+}
+  }
+};
+
+struct argmax_reduce {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, const int nWorkers, const DType 
*in_data, DType *out_data,
+   const size_t step, const size_t shift, void *pIdx, const bool 
use_uin16) {
+const DType *pCurr = in_data + i % step + shift * (i / step);
+int maxIdx;
+if (use_uin16) {
 
 Review comment:
   Yes, I also thought about it, and I even wrote one. Unfortunately, I could 
not compile this template. Perhaps, I will try one more time.




[GitHub] [incubator-mxnet] drivanov commented on a change in pull request #16218: Improving performance of argmax operator

2019-09-20 Thread GitBox
drivanov commented on a change in pull request #16218: Improving performance of 
argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326699897
 
 

 ##
 File path: src/operator/tensor/broadcast_reduce_op_index.cu
 ##
 @@ -43,5 +43,18 @@ NNVM_REGISTER_OP(pick)
 NNVM_REGISTER_OP(_backward_pick)
.set_attr<FCompute>("FCompute", PickOpBackward<gpu>);
 
+template<>
+int DefineNumbWorkers<gpu>(const TShape &shape, int axis) {
+  const auto nSteps = static_cast<size_t>(shape[axis]);
+  const auto nThreads = shape.Size()/nSteps;
+  if (nThreads > nSteps)
+return 1;
+
+  const auto a = static_cast<float>(nSteps)/nThreads;
+  const auto b = log2f(a);
+  const auto numbWorkers = pow(2, (b * 5 + 28)/11);
 
 Review comment:
   This is just a heuristic. Experimenting with different values of `(shape, axis, 
numbWorkers)`, I accumulated a lot of data, and it turned out that the data almost 
perfectly correspond to this formula.
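In Python form the heuristic reads roughly as below; this is a transcription for readers of the thread, with the constants coming from the curve fit described above:

```python
import math

def numb_workers(n_steps, n_threads):
    # One worker per vector when there are already more vectors (n_threads)
    # than elements per vector (n_steps); otherwise grow the worker count
    # with the fitted exponent (b * 5 + 28) / 11, where b = log2(ratio).
    if n_threads > n_steps:
        return 1
    b = math.log2(n_steps / n_threads)
    return 2 ** ((b * 5 + 28) / 11)

print(numb_workers(128, 1024))   # many short vectors -> 1 worker each
print(numb_workers(1 << 20, 8))  # few long vectors -> many workers each
```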
   




[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16207: Bump numpy version >=1.17.0

2019-09-20 Thread GitBox
leezu commented on a change in pull request #16207: Bump numpy version >=1.17.0
URL: https://github.com/apache/incubator-mxnet/pull/16207#discussion_r326661047
 
 

 ##
 File path: python/setup.py
 ##
 @@ -30,7 +30,11 @@
 else:
 from setuptools import setup
 from setuptools.extension import Extension
-kwargs = {'install_requires': ['numpy>=1.17.0,<2.0.0', 
'requests>=2.20.0,<3', 'graphviz<0.9.0,>=0.8.1'], 'zip_safe': False}
+kwargs = {
+'install_requires': ['requests>=2.20.0,<3', 'graphviz<0.9.0,>=0.8.1']
+.append('numpy>=1.17.0,<2.0.0' if sys.version_info[0] > 2 else 
'numpy>1.16.0,<2.0.0'),
 
 Review comment:
   Why not use https://www.python.org/dev/peps/pep-0508/#environment-markers ?
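   For concreteness, the environment-marker form suggested here could look like the following sketch (version pins copied from the diff; the exact style is of course up to the PR author):
   
   ```python
   # PEP 508 environment markers: the condition after ';' is evaluated by the
   # installer at install time, so one static list covers Python 2 and 3.
   install_requires = [
       'requests>=2.20.0,<3',
       'graphviz<0.9.0,>=0.8.1',
       'numpy>=1.17.0,<2.0.0; python_version >= "3"',
       'numpy>1.16.0,<2.0.0; python_version < "3"',
   ]
   ```
   
   This also sidesteps the bug in the diff above: `list.append` returns `None`, so the `.append(...)` expression would set `install_requires` to `None` rather than to a list.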




[GitHub] [incubator-mxnet] TaoLv commented on issue #16143: Failure of MKL-DNN Convolution from C API

2019-09-20 Thread GitBox
TaoLv commented on issue #16143: Failure of MKL-DNN Convolution from C API
URL: 
https://github.com/apache/incubator-mxnet/issues/16143#issuecomment-533572996
 
 
   Sorry for the delay @matteosal . I got trapped by other stuff this week. 
Will look into the python script and get back to you next week. Thanks for your 
patience.




[incubator-mxnet] branch mkldnn-v1.0 updated: [mkldnn-v1.0] Add MKL-DNN activation (#16195)

2019-09-20 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/mkldnn-v1.0 by this push:
 new f930baa  [mkldnn-v1.0] Add MKL-DNN activation (#16195)
f930baa is described below

commit f930baa533d24497188e33c163dbb1f36707c336
Author: rongzha1 
AuthorDate: Fri Sep 20 22:09:19 2019 +0800

[mkldnn-v1.0] Add MKL-DNN activation (#16195)

* add mkldnn act; pass lint; pass mnist training

* make bwd as private member
---
 src/operator/nn/activation.cc   |  18 ++---
 src/operator/nn/mkldnn/mkldnn_act-inl.h |  40 +++---
 src/operator/nn/mkldnn/mkldnn_act.cc| 130 
 src/operator/nn/mkldnn/mkldnn_ops-inl.h |  15 ++--
 4 files changed, 75 insertions(+), 128 deletions(-)

diff --git a/src/operator/nn/activation.cc b/src/operator/nn/activation.cc
index 5abb667..f238e8f 100644
--- a/src/operator/nn/activation.cc
+++ b/src/operator/nn/activation.cc
@@ -27,10 +27,10 @@
 #include "./activation-inl.h"
 #include "../mshadow_op.h"
 #include "../tensor/elemwise_unary_op.h"
-#if MXNET_USE_MKLDNN == 1
+#if MXNET_USE_MKLDNN == 100
 #include "./mkldnn/mkldnn_base-inl.h"
 #include "./mkldnn/mkldnn_ops-inl.h"
-#endif  // MXNET_USE_MKLDNN == 1
+#endif  // MXNET_USE_MKLDNN == 100
 #include "../operator_common.h"
 #include "../../common/utils.h"
 
@@ -91,7 +91,7 @@ struct ActivationGrad {
   }
 };
 
-#if MXNET_USE_MKLDNN == 1
+#if MXNET_USE_MKLDNN == 100
 static void ActivationComputeExCPU(const nnvm::NodeAttrs& attrs,
const OpContext& ctx,
const std::vector& inputs,
@@ -150,7 +150,7 @@ inline static bool BackwardActStorageType(const 
nnvm::NodeAttrs& attrs,
   return MKLDNNStorageType(attrs, dev_mask, SupportMKLDNNAct(param),
dispatch_mode, in_attrs, out_attrs);
 }
-#endif  // MXNET_USE_MKLDNN == 1
+#endif  // MXNET_USE_MKLDNN == 100
 
 
 MXNET_OPERATOR_REGISTER_UNARY(Activation)
@@ -167,7 +167,7 @@ The following activation functions are supported:
 
 )code" ADD_FILELINE)
 .set_attr_parser(ParamParser)
-#if MXNET_USE_MKLDNN == 1
+#if MXNET_USE_MKLDNN == 100
 .set_attr("FInferStorageType", ActivationStorageType)
 #endif
 .set_attr("FListOutputNames",
@@ -175,7 +175,7 @@ The following activation functions are supported:
 return std::vector{"output"};
 })
 .set_attr("FCompute", ActivationCompute)
-#if MXNET_USE_MKLDNN == 1
+#if MXNET_USE_MKLDNN == 100
 .set_attr("TIsMKLDNN", true)
 .set_attr("FComputeEx", ActivationComputeExCPU)
 #endif
@@ -189,7 +189,7 @@ NNVM_REGISTER_OP(_backward_Activation)
 })
 .set_num_outputs(1)
 .set_attr("TIsBackward", true)
-#if MXNET_USE_MKLDNN == 1
+#if MXNET_USE_MKLDNN == 100
 .set_attr("FInferStorageType", BackwardActStorageType)
 #endif
 .set_attr("FInferShape", ElemwiseShape<-1, 1>)
@@ -197,13 +197,13 @@ NNVM_REGISTER_OP(_backward_Activation)
 .set_attr("FInplaceOption", [](const NodeAttrs& attrs){
   return std::vector >{{0, 0}};
 })
-#if MXNET_USE_MKLDNN == 1
+#if MXNET_USE_MKLDNN == 100
 .set_attr("FResourceRequest", [](const NodeAttrs& n) {
   return std::vector{ResourceRequest::kTempSpace};
 })
 #endif
 .set_attr_parser(ParamParser)
-#if MXNET_USE_MKLDNN == 1
+#if MXNET_USE_MKLDNN == 100
 .set_attr("TIsMKLDNN", true)
 .set_attr("FComputeEx", ActivationGradComputeExCPU)
 #endif
diff --git a/src/operator/nn/mkldnn/mkldnn_act-inl.h 
b/src/operator/nn/mkldnn/mkldnn_act-inl.h
index 6bf30e3..57507a5 100644
--- a/src/operator/nn/mkldnn/mkldnn_act-inl.h
+++ b/src/operator/nn/mkldnn/mkldnn_act-inl.h
@@ -20,7 +20,7 @@
 /*!
  * Copyright (c) 2019 by Contributors
  * \file mkldnn_act-inl.h
- * \brief MKLDNN(Quantized) Activation operator based on subgraph
+ * \brief MKLDNN Activation operator
  * /author Zhiyuan Huang
 */
 
@@ -28,20 +28,17 @@
 #define MXNET_OPERATOR_NN_MKLDNN_MKLDNN_ACT_INL_H_
 
 
-#if MXNET_USE_MKLDNN == 1
+#if MXNET_USE_MKLDNN == 100
 #include 
 #include 
 #include "../activation-inl.h"
-#include "./mkldnn_ops-inl.h"
-#include "./mkldnn_base-inl.h"
 
 namespace mxnet {
 namespace op {
 
 mkldnn::algorithm GetMKLDNNActAlgo(const ActivationParam& param);
 mkldnn::eltwise_forward::primitive_desc GetActFwdDescImpl(
-const ActivationParam& param, bool is_train,
-const mkldnn::memory &input_mem, int dtype);
+const ActivationParam& param, bool is_train, const mkldnn::memory 
&input_mem);
 
 class MKLDNNActForward {
  public:
@@ -49,14 +46,13 @@ class MKLDNNActForward {
 
   MKLDNNActForward(const ActivationParam& param, bool is_train,
const NDArray &data, const mkldnn::memory &mem): fwd_pd(
-   GetActFwdDescImpl(param, is_train, mem, data.dtype())) 
{}
-  void SetNewMem(const mkldnn::memory &data, const mkldnn::memory &output);
-  const mkldnn::eltwise_forward &GetFwd() const;
+   GetActFwdD

[GitHub] [incubator-mxnet] TaoLv merged pull request #16195: [mkldnn-v1.0] Add MKL-DNN activation

2019-09-20 Thread GitBox
TaoLv merged pull request #16195: [mkldnn-v1.0] Add MKL-DNN activation
URL: https://github.com/apache/incubator-mxnet/pull/16195
 
 
   




[GitHub] [incubator-mxnet] TaoLv closed pull request #16213: [mkldnn-v1.0][Don't merge] Trigger CI after merging the master branch

2019-09-20 Thread GitBox
TaoLv closed pull request #16213: [mkldnn-v1.0][Don't merge] Trigger CI after 
merging the master branch
URL: https://github.com/apache/incubator-mxnet/pull/16213
 
 
   




[GitHub] [incubator-mxnet] ybz79 opened a new issue #16227: How to use a pre-trained model in model zoo as a sub-block in my own model?

2019-09-20 Thread GitBox
ybz79 opened a new issue #16227: How to use a pre-trained model in model zoo as 
a sub-block in my own model? 
URL: https://github.com/apache/incubator-mxnet/issues/16227
 
 
   ## Description
   I want to use a pretrained model in the model zoo (such as BERT) as a layer 
in my own model. 
   ```python
   class Mymodel(nn.Block):
       def __init__(self, bert_model):
           super(Mymodel, self).__init__()
           self.bert = bert_model
           # ...

   bert_model = get_model()
   model = Mymodel(bert_model)
   ```
   
   But I have a small problem with naming: bert_model and the other parameters 
in my own model don't share the same prefix, which causes an error in 
model.save_params(). How can I solve this problem?




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-20 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ca711dc  Bump the publish timestamp.
ca711dc is described below

commit ca711dc7bd0251ecddf672b4a8a9215b8d34d79f
Author: mxnet-ci 
AuthorDate: Fri Sep 20 13:13:44 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..6719902
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Sep 20 13:13:44 UTC 2019



[GitHub] [incubator-mxnet] wkcn commented on issue #15678: [MXNET-1418]Add contrib op into cpp package

2019-09-20 Thread GitBox
wkcn commented on issue #15678: [MXNET-1418]Add contrib op into cpp package
URL: https://github.com/apache/incubator-mxnet/pull/15678#issuecomment-533546216
 
 
   I don't know how to solve the issue with the Scala binding.
   Could anyone help me address the problem? Thank you!




[GitHub] [incubator-mxnet] wkcn opened a new pull request #16226: Fix wrong results of min([inf, inf]) and max([-inf, -inf])

2019-09-20 Thread GitBox
wkcn opened a new pull request #16226: Fix wrong results of min([inf, inf]) and 
max([-inf,-inf])
URL: https://github.com/apache/incubator-mxnet/pull/16226
 
 
   ## Description ##
   Hi, there.
   In the latest version of MXNet, the results of min([inf, inf]) and 
max([-inf, -inf]) are wrong (#16206).
   The reason is that the initial values of the minimum and maximum reducers 
are MaxValue and MinValue; they should be +InfValue and -InfValue, respectively.
   
   In this PR, I add `PosInfValue` and `NegInfValue` in mshadow. Then I fixed 
the bug.
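   The underlying identity-element issue can be illustrated in plain Python (a sketch independent of mshadow; `FINITE_MAX` here stands in for the old `MaxValue` initializer, `POS_INF` for the proposed `PosInfValue`):
   
   ```python
   import sys

   FINITE_MAX = sys.float_info.max   # analogue of the old MaxValue init
   POS_INF = float("inf")            # analogue of the proposed PosInfValue

   def reduce_min(values, init):
       # Fold `min` over `values`, starting the accumulator at `init`.
       acc = init
       for v in values:
           acc = min(acc, v)
       return acc

   # Initializing with a finite maximum clamps the result: min([inf, inf])
   # wrongly comes out as FINITE_MAX, since min(FINITE_MAX, inf) == FINITE_MAX.
   # Initializing with +inf, the identity of min, preserves the correct inf.
   ```
   
   The symmetric argument applies to max([-inf, -inf]) with MinValue versus -InfValue.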
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, the expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16195: [mkldnn-v1.0] Add MKL-DNN activation

2019-09-20 Thread GitBox
pengzhao-intel commented on issue #16195: [mkldnn-v1.0] Add MKL-DNN activation
URL: https://github.com/apache/incubator-mxnet/pull/16195#issuecomment-533495711
 
 
   @TaoLv please merge the related 1.0 PR when the branch is good.




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16075: Integrate MKL-DNN leakyrelu

2019-09-20 Thread GitBox
pengzhao-intel commented on issue #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#issuecomment-533495090
 
 
   + @ZhennanQin 




[GitHub] [incubator-mxnet] xinyu-intel commented on issue #16075: Integrate MKL-DNN leakyrelu

2019-09-20 Thread GitBox
xinyu-intel commented on issue #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#issuecomment-533489831
 
 
   @pengzhao-intel I'll start another PR to enable gelu after this PR merged:)




[GitHub] [incubator-mxnet] matteosal edited a comment on issue #16143: Failure of MKL-DNN Convolution from C API

2019-09-20 Thread GitBox
matteosal edited a comment on issue #16143: Failure of MKL-DNN Convolution from 
C API
URL: 
https://github.com/apache/incubator-mxnet/issues/16143#issuecomment-533484928
 
 
   Any news about this? 
   Not sure it should be labeled C API anymore, because 1 example out of 2 
(the second one above) can also be reproduced from Python.




[GitHub] [incubator-mxnet] matteosal commented on issue #16143: Failure of MKL-DNN Convolution from C API

2019-09-20 Thread GitBox
matteosal commented on issue #16143: Failure of MKL-DNN Convolution from C API
URL: 
https://github.com/apache/incubator-mxnet/issues/16143#issuecomment-533484928
 
 
   Any news about this? 
   Not sure it should be labeled C API anymore, because 1 example out of 2 
(the one above) can also be reproduced from Python.




[GitHub] [incubator-mxnet] gasgallo opened a new pull request #16225: Add support for UpSampling op in mx2onnx

2019-09-20 Thread GitBox
gasgallo opened a new pull request #16225: Add support for UpSampling op in 
mx2onnx
URL: https://github.com/apache/incubator-mxnet/pull/16225
 
 
   




[GitHub] [incubator-mxnet] Sumanth-393 opened a new issue #16224: mxnet.base.MXNetError: [13:43:35] C:\Jenkins\workspace\mxnet-tag\mxnet\src\ndarray\ndarray.cc:1755: Check failed: fi->Read(data) Inval

2019-09-20 Thread GitBox
Sumanth-393 opened a new issue #16224: mxnet.base.MXNetError: [13:43:35] 
C:\Jenkins\workspace\mxnet-tag\mxnet\src\ndarray\ndarray.cc:1755: Check failed: 
fi->Read(data) Invalid NDArray file format while loading resnet-18
URL: https://github.com/apache/incubator-mxnet/issues/16224
 
 
   Got an error while loading a checkpoint via model.load_checkpoint.
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   path='http://data.mxnet.io/models/imagenet/'
   [
   mx.test_utils.download(path+'resnet/18-layers/resnet-18-.params'),
mx.test_utils.download(path+'resnet/18-layers/resnet-18-symbol.json'),
mx.test_utils.download(path+'synset.txt')]
   ctx = mx.cpu()
   sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)
   
   error is as below:
   
   E:\my_projects\mxnet_work>python Mxnet_imageClaify.py
   [13:43:35] 
C:\Jenkins\workspace\mxnet-tag\mxnet\src\nnvm\legacy_json_util.cc:209: Loading 
symbol saved by previous version v0.8.0. Attempting to upgrade...
   [13:43:35] 
C:\Jenkins\workspace\mxnet-tag\mxnet\src\nnvm\legacy_json_util.cc:217: Symbol 
successfully upgraded!
   Traceback (most recent call last):
 File "Mxnet_imageClaify.py", line 14, in 
   sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)
 File 
"C:\Users\bunny\AppData\Local\Programs\Python\Python36\lib\site-packages\mxnet\model.py",
 line 439, in load_checkpoint
   save_dict = nd.load('%s-%04d.params' % (prefix, epoch))
 File 
"C:\Users\bunny\AppData\Local\Programs\Python\Python36\lib\site-packages\mxnet\ndarray\utils.py",
 line 175, in load
   ctypes.byref(names)))
 File 
"C:\Users\bunny\AppData\Local\Programs\Python\Python36\lib\site-packages\mxnet\base.py",
 line 252, in check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: [13:43:35] 
C:\Jenkins\workspace\mxnet-tag\mxnet\src\ndarray\ndarray.cc:1755: Check failed: 
fi->Read(data) Invalid NDArray file format
   
   
   Kindly help; I'm new to MXNet.
   




[GitHub] [incubator-mxnet] JiaoPaner commented on issue #15632: Building MxNet with CPP_PACKAGE on Windows10 (2019-07-23)

2019-09-20 Thread GitBox
JiaoPaner commented on issue #15632: Building MxNet with CPP_PACKAGE on 
Windows10 (2019-07-23)
URL: 
https://github.com/apache/incubator-mxnet/issues/15632#issuecomment-533443667
 
 
   @QueensGambit 
   I read #15144 and modified line 744, `#if !defined(_MSC_VER)`, to 
`#if !(defined(_MSC_VER) && _MSC_VER < 1900)`. Then I compiled again with a 
config that includes USE_CPP_PACKAGE; my operating system is Windows 10 x64 
and my IDE is Visual Studio 2015.
   After that, the build completed with no errors, including none of the errors 
reported above.
   The examples also run without error, so the solution does work.




[GitHub] [incubator-mxnet] JiaoPaner edited a comment on issue #15143: dmlc::type_name_helper specialization of mxnet::tuple should not be disabled for MSVC

2019-09-20 Thread GitBox
JiaoPaner edited a comment on issue #15143: dmlc::type_name_helper 
specialization of mxnet::tuple should not be disabled for MSVC
URL: 
https://github.com/apache/incubator-mxnet/issues/15143#issuecomment-533442504
 
 
   I read #15144 and modified line 744, `#if !defined(_MSC_VER)`, to 
`#if !(defined(_MSC_VER) && _MSC_VER < 1900)`. Then I compiled again with a 
config that includes USE_CPP_PACKAGE; my operating system is Windows 10 x64 
and my IDE is Visual Studio 2015.
   After that, the build completed with no errors, including none of the errors 
reported above.
   The examples also run without error, so the solution does work.




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16075: Integrate MKL-DNN leakyrelu

2019-09-20 Thread GitBox
pengzhao-intel commented on issue #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#issuecomment-533443566
 
 
   @xinyu-intel the MKL-DNN is updated to 0.21 now.




[incubator-mxnet] branch master updated (7126438 -> 986cecd)

2019-09-20 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 7126438  New Website: New Pipeline [3/3] (#15883)
 add 986cecd  Update MKL-DNN dependency (#16073)

No new revisions were added by this update.

Summary of changes:
 3rdparty/mkldnn   | 2 +-
 ci/docker/install/ubuntu_mklml.sh | 2 +-
 cmake/DownloadMKLML.cmake | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)



[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16073: Update MKL-DNN dependency

2019-09-20 Thread GitBox
pengzhao-intel merged pull request #16073: Update MKL-DNN dependency
URL: https://github.com/apache/incubator-mxnet/pull/16073
 
 
   




[GitHub] [incubator-mxnet] JiaoPaner commented on issue #15143: dmlc::type_name_helper specialization of mxnet::tuple should not be disabled for MSVC

2019-09-20 Thread GitBox
JiaoPaner commented on issue #15143: dmlc::type_name_helper specialization 
of mxnet::tuple should not be disabled for MSVC
URL: 
https://github.com/apache/incubator-mxnet/issues/15143#issuecomment-533442504
 
 
   I read #15144 and modified line 744, `#if !defined(_MSC_VER)`, to 
`#if !(defined(_MSC_VER) && _MSC_VER < 1900)`. Then I compiled again with a 
config that includes USE_CPP_PACKAGE. After that, the build completed with no 
errors, including none of the errors reported above.
   The examples also run without error, so the solution does work.




[GitHub] [incubator-mxnet] rongzha1 opened a new pull request #16223: [mkldnn-v1.0] Add MKL-DNN LRN

2019-09-20 Thread GitBox
rongzha1 opened a new pull request #16223: [mkldnn-v1.0] Add MKL-DNN LRN
URL: https://github.com/apache/incubator-mxnet/pull/16223
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, the expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   add mkldnn lrn
   
   @TaoLv @ciyongch @ZhennanQin @PatricZhao 




[GitHub] [incubator-mxnet] xidulu edited a comment on issue #15857: [Numpy] Added operator logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-09-20 Thread GitBox
xidulu edited a comment on issue #15857: [Numpy] Added operator logaddexp; 
added support for zero-size tensor in BinaryBroadcastBackwardUseIn 
URL: https://github.com/apache/incubator-mxnet/pull/15857#issuecomment-533434271
 
 
   @szha Conflicts resolved (for now)




[GitHub] [incubator-mxnet] xidulu commented on issue #15857: [Numpy] Added operator logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-09-20 Thread GitBox
xidulu commented on issue #15857: [Numpy] Added operator logaddexp; added 
support for zero-size tensor in BinaryBroadcastBackwardUseIn 
URL: https://github.com/apache/incubator-mxnet/pull/15857#issuecomment-533434271
 
 
   @szha Conflicts solved (for now)

