[GitHub] ZiyueHuang commented on issue #11182: fix propagation of cpu shared context, issue #11160

2018-06-06 Thread GitBox
ZiyueHuang commented on issue #11182: fix propagation of cpu shared context, 
issue #11160
URL: https://github.com/apache/incubator-mxnet/pull/11182#issuecomment-395301375
 
 
   cc @piiswrong @szha 




[GitHub] larroy commented on issue #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-06-06 Thread GitBox
larroy commented on issue #11094: [MXNET-115] USE_LAPACK is forced on all 
platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#issuecomment-395301232
 
 
   ping




[GitHub] ZiyueHuang opened a new pull request #11182: fix propagation of cpu shared context, issue #11160

2018-06-06 Thread GitBox
ZiyueHuang opened a new pull request #11182: fix propagation of cpu shared 
context, issue #11160
URL: https://github.com/apache/incubator-mxnet/pull/11182
 
 
   ## Description ##
   fix https://github.com/apache/incubator-mxnet/issues/11160
   
   ```
   >>> import mxnet as mx
   >>> a = mx.nd.zeros((1,2,3), ctx=mx.Context('cpu_shared', 0))
   >>> a.context
   cpu_shared(0)
   ```
   
   Where should I add unittest?
   
   cc @zhreshold 
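
   A regression test along these lines could cover it (a rough sketch; the test
   location and the exact ops exercised are assumptions, not part of this PR):
   
   ```
   import mxnet as mx

   def test_cpu_shared_ctx_propagation():
       ctx = mx.Context('cpu_shared', 0)
       a = mx.nd.zeros((1, 2, 3), ctx=ctx)
       assert a.context == ctx
       # after the fix, results of ops on a cpu_shared array should stay in cpu_shared
       b = a + 1
       assert b.context == ctx
   ```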
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, the expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] larroy commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
larroy commented on a change in pull request #11055: Added support android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193629330
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -15,9 +15,9 @@ mxnet_option(USE_NCCL "Use NVidia NCCL with CUDA" OFF)
 mxnet_option(USE_OPENCV   "Build with OpenCV support" ON)
 mxnet_option(USE_OPENMP   "Build with Openmp support" ON)
 mxnet_option(USE_CUDNN    "Build with cudnn support"  ON) # one could set CUDNN_ROOT for search path
-mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON)
+mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON IF NOT ARM)
 mxnet_option(USE_F16C "Build with x86 F16C instruction support" ON) # autodetects support if ON
-mxnet_option(USE_LAPACK   "Build with lapack support" ON IF NOT MSVC)
 
 Review comment:
   this logic is buggy, so it always tries to use LAPACK, which, if I recall correctly, is not available at the moment on Android. See also: https://github.com/apache/incubator-mxnet/pull/11094
   




[GitHub] szha commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
szha commented on a change in pull request #11055: Added support android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193628416
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -15,9 +15,9 @@ mxnet_option(USE_NCCL "Use NVidia NCCL with CUDA" OFF)
 mxnet_option(USE_OPENCV   "Build with OpenCV support" ON)
 mxnet_option(USE_OPENMP   "Build with Openmp support" ON)
 mxnet_option(USE_CUDNN    "Build with cudnn support"  ON) # one could set CUDNN_ROOT for search path
-mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON)
+mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON IF NOT ARM)
 mxnet_option(USE_F16C "Build with x86 F16C instruction support" ON) # autodetects support if ON
-mxnet_option(USE_LAPACK   "Build with lapack support" ON IF NOT MSVC)
 
 Review comment:
   why is this relevant to android64?




[GitHub] szha commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
szha commented on a change in pull request #11055: Added support android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193628349
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -15,9 +15,9 @@ mxnet_option(USE_NCCL "Use NVidia NCCL with CUDA" OFF)
 mxnet_option(USE_OPENCV   "Build with OpenCV support" ON)
 mxnet_option(USE_OPENMP   "Build with Openmp support" ON)
 mxnet_option(USE_CUDNN    "Build with cudnn support"  ON) # one could set CUDNN_ROOT for search path
-mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON)
+mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON IF NOT ARM)
 mxnet_option(USE_F16C "Build with x86 F16C instruction support" ON) # autodetects support if ON
 
 Review comment:
   does ARM/android support F16C instruction?




[GitHub] szha commented on issue #11166: Update rnn_cell.py

2018-06-06 Thread GitBox
szha commented on issue #11166: Update rnn_cell.py
URL: https://github.com/apache/incubator-mxnet/pull/11166#issuecomment-395295478
 
 
   how is it triggered?




[GitHub] CodingCat commented on issue #10462: [MXNET-62] add test against spark integration

2018-06-06 Thread GitBox
CodingCat commented on issue #10462: [MXNET-62] add test against spark 
integration
URL: https://github.com/apache/incubator-mxnet/pull/10462#issuecomment-395295022
 
 
   @yzhliu it passed now




[incubator-mxnet] branch master updated: [MXNET-433] Tutorial on saving and loading gluon models (#11002)

2018-06-06 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 6ef7a0f  [MXNET-433] Tutorial on saving and loading gluon models 
(#11002)
6ef7a0f is described below

commit 6ef7a0ff6bdd0677c7ee08ddde4028bd4354f974
Author: Indu Bharathi 
AuthorDate: Wed Jun 6 21:52:48 2018 -0700

[MXNET-433] Tutorial on saving and loading gluon models (#11002)

* Add tutorial to save and load parameters

* Add outputs in markdown

* Add image. Fix some formatting.

* Add tutorial to index. Add to tests.

* Minor language changes

* Add download notebook button

* Absorb suggestions for review

* Add as alternate link

* Use Symbol.load instead of model.load_checkpoint

* Add a note discouraging the use of Block.collect_params().save() if 
parameters need to be loaded with Block.load_params()

* Fix a bug. Also some language corrections.
---
 docs/tutorials/gluon/save_load_params.md | 269 +++
 docs/tutorials/index.md  |   2 +-
 tests/tutorials/test_tutorials.py|   3 +
 3 files changed, 273 insertions(+), 1 deletion(-)

diff --git a/docs/tutorials/gluon/save_load_params.md 
b/docs/tutorials/gluon/save_load_params.md
new file mode 100644
index 000..cd87680
--- /dev/null
+++ b/docs/tutorials/gluon/save_load_params.md
@@ -0,0 +1,269 @@
+# Saving and Loading Gluon Models
+
+Training large models take a lot of time and it is a good idea to save the 
trained models to files to avoid training them again and again. There are a 
number of reasons to do this. For example, you might want to do inference on a 
machine that is different from the one where the model was trained. Sometimes 
model's performance on validation set decreases towards the end of the training 
because of overfitting. If you saved your model parameters after every epoch, 
at the end you can decide  [...]
+
+In this tutorial, we will learn ways to save and load Gluon models. There are 
two ways to save/load Gluon models:
+
+**1. Save/load model parameters only**
+
+Parameters of any Gluon model can be saved using the `save_params` and 
`load_params` method. This does not save model architecture. This method is 
used to save parameters of dynamic (non-hybrid) models. Model architecture 
cannot be saved for dynamic models because model architecture changes during 
execution.
+
+**2. Save/load model parameters AND architecture**
+
+The Model architecture of `Hybrid` models stays static and don't change during 
execution. Therefore both model parameters AND architecture can be saved and 
loaded using `export`, `load_checkpoint` and `load` methods.
+
+Let's look at the above methods in more detail. Let's start by importing the 
modules we'll need.
+
+```python
+from __future__ import print_function
+
+import mxnet as mx
+import mxnet.ndarray as nd
+from mxnet import nd, autograd, gluon
+from mxnet.gluon.data.vision import transforms
+
+import numpy as np
+```
+
+## Setup: build and train a simple model
+
+We need a trained model before we can save it to a file. So let's go ahead and 
build a very simple convolutional network and train it on MNIST data.
+
+Let's define a helper function to build a LeNet model and another helper to 
train LeNet with MNIST.
+
+```python
+# Use GPU if one exists, else use CPU
+ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+
+# MNIST images are 28x28. Total pixels in input layer is 28x28 = 784
+num_inputs = 784
+# Clasify the images into one of the 10 digits
+num_outputs = 10
+# 64 images in a batch
+batch_size = 64
+
+# Load the training data
+train_data = gluon.data.DataLoader(gluon.data.vision.MNIST(train=True).transform_first(transforms.ToTensor()),
+                                   batch_size, shuffle=True)
+
+# Build a simple convolutional network
+def build_lenet(net):
+    with net.name_scope():
+        # First convolution
+        net.add(gluon.nn.Conv2D(channels=20, kernel_size=5, activation='relu'))
+        net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
+        # Second convolution
+        net.add(gluon.nn.Conv2D(channels=50, kernel_size=5, activation='relu'))
+        net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
+        # Flatten the output before the fully connected layers
+        net.add(gluon.nn.Flatten())
+        # First fully connected layers with 512 neurons
+        net.add(gluon.nn.Dense(512, activation="relu"))
+        # Second fully connected layer with as many neurons as the number of classes
+        net.add(gluon.nn.Dense(num_outputs))
+
+    return net
+
+# Train a given model using MNIST data
+def train_model(model):
+    # Initialize the parameters with Xavier initializer
+    model.collect_params().initialize(mx.init.Xavier(), 

[GitHub] sandeep-krishnamurthy closed pull request #11002: [MXNET-433] Tutorial on saving and loading gluon models

2018-06-06 Thread GitBox
sandeep-krishnamurthy closed pull request #11002: [MXNET-433] Tutorial on 
saving and loading gluon models
URL: https://github.com/apache/incubator-mxnet/pull/11002
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/tutorials/gluon/save_load_params.md 
b/docs/tutorials/gluon/save_load_params.md
new file mode 100644
index 000..cd876808a86
--- /dev/null
+++ b/docs/tutorials/gluon/save_load_params.md
@@ -0,0 +1,269 @@
+# Saving and Loading Gluon Models
+
+Training large models take a lot of time and it is a good idea to save the 
trained models to files to avoid training them again and again. There are a 
number of reasons to do this. For example, you might want to do inference on a 
machine that is different from the one where the model was trained. Sometimes 
model's performance on validation set decreases towards the end of the training 
because of overfitting. If you saved your model parameters after every epoch, 
at the end you can decide to use the model that performs best on the validation 
set. Another reason would be to train your model using one language (like 
Python that has a lot of tools for training) and run inference using a 
different language (like Scala probably because your application is built on 
Scala).
+
+In this tutorial, we will learn ways to save and load Gluon models. There are 
two ways to save/load Gluon models:
+
+**1. Save/load model parameters only**
+
+Parameters of any Gluon model can be saved using the `save_params` and 
`load_params` method. This does not save model architecture. This method is 
used to save parameters of dynamic (non-hybrid) models. Model architecture 
cannot be saved for dynamic models because model architecture changes during 
execution.
+
+**2. Save/load model parameters AND architecture**
+
+The Model architecture of `Hybrid` models stays static and don't change during 
execution. Therefore both model parameters AND architecture can be saved and 
loaded using `export`, `load_checkpoint` and `load` methods.
+
+Let's look at the above methods in more detail. Let's start by importing the 
modules we'll need.
+
+```python
+from __future__ import print_function
+
+import mxnet as mx
+import mxnet.ndarray as nd
+from mxnet import nd, autograd, gluon
+from mxnet.gluon.data.vision import transforms
+
+import numpy as np
+```
+
+## Setup: build and train a simple model
+
+We need a trained model before we can save it to a file. So let's go ahead and 
build a very simple convolutional network and train it on MNIST data.
+
+Let's define a helper function to build a LeNet model and another helper to 
train LeNet with MNIST.
+
+```python
+# Use GPU if one exists, else use CPU
+ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+
+# MNIST images are 28x28. Total pixels in input layer is 28x28 = 784
+num_inputs = 784
+# Clasify the images into one of the 10 digits
+num_outputs = 10
+# 64 images in a batch
+batch_size = 64
+
+# Load the training data
+train_data = gluon.data.DataLoader(gluon.data.vision.MNIST(train=True).transform_first(transforms.ToTensor()),
+                                   batch_size, shuffle=True)
+
+# Build a simple convolutional network
+def build_lenet(net):
+    with net.name_scope():
+        # First convolution
+        net.add(gluon.nn.Conv2D(channels=20, kernel_size=5, activation='relu'))
+        net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
+        # Second convolution
+        net.add(gluon.nn.Conv2D(channels=50, kernel_size=5, activation='relu'))
+        net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
+        # Flatten the output before the fully connected layers
+        net.add(gluon.nn.Flatten())
+        # First fully connected layers with 512 neurons
+        net.add(gluon.nn.Dense(512, activation="relu"))
+        # Second fully connected layer with as many neurons as the number of classes
+        net.add(gluon.nn.Dense(num_outputs))
+
+    return net
+
+# Train a given model using MNIST data
+def train_model(model):
+    # Initialize the parameters with Xavier initializer
+    model.collect_params().initialize(mx.init.Xavier(), ctx=ctx)
+    # Use cross entropy loss
+    softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
+    # Use Adam optimizer
+    trainer = gluon.Trainer(model.collect_params(), 'adam', {'learning_rate': .001})
+
+    # Train for one epoch
+    for epoch in range(1):
+        # Iterate through the images and labels in the training data
+        for batch_num, (data, label) in enumerate(train_data):
+            # get the images and labels
+            data = data.as_in_context(ctx)
+            label = label.as_in_context(ctx)
+            # Ask autograd to record the forward pass
+            with 

[GitHub] larroy commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
larroy commented on a change in pull request #11055: Added support android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193625202
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -163,6 +163,30 @@ build_android_arm64() {
 cp dist/*.whl /work/build
 }
 
+build_android_arm64() {
+    set -ex
+    cd /work/build
+    #-DCMAKE_ANDROID_NDK=${CROSS_ROOT} \
+    #-DCMAKE_SYSTEM_VERSION=${ANDROID_NDK_REVISION} \
+    #-DCMAKE_SYSTEM_NAME=Android \
+    cmake\
+        -DANDROID=ON \
+        -DUSE_CUDA=OFF\
+        -DUSE_SSE=OFF\
+        -DUSE_LAPACK=OFF\
+        -DUSE_OPENCV=OFF\
+        -DUSE_OPENMP=OFF\
+        -DUSE_SIGNAL_HANDLER=ON\
+        -DCMAKE_BUILD_TYPE=RelWithDebInfo\
+        -DUSE_MKL_IF_AVAILABLE=OFF\
+        -G Ninja /work/mxnet
+    ninja -v
+    export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
+    #cd /work/mxnet/python
+    #python setup.py bdist_wheel --universal
+    #cp dist/*.whl /work/build
 
 Review comment:
   there will be more work to build a binary, so these comments will go away. Another run of CI will take many hours, and with flaky tests potentially days. I would suggest merging this as it is.




[GitHub] larroy commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
larroy commented on a change in pull request #11055: Added support android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193625241
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -163,6 +163,30 @@ build_android_arm64() {
 cp dist/*.whl /work/build
 }
 
+build_android_arm64() {
+    set -ex
+    cd /work/build
+    #-DCMAKE_ANDROID_NDK=${CROSS_ROOT} \
+    #-DCMAKE_SYSTEM_VERSION=${ANDROID_NDK_REVISION} \
+    #-DCMAKE_SYSTEM_NAME=Android \
+    cmake\
+        -DANDROID=ON \
+        -DUSE_CUDA=OFF\
+        -DUSE_SSE=OFF\
+        -DUSE_LAPACK=OFF\
+        -DUSE_OPENCV=OFF\
+        -DUSE_OPENMP=OFF\
+        -DUSE_SIGNAL_HANDLER=ON\
+        -DCMAKE_BUILD_TYPE=RelWithDebInfo\
 
 Review comment:
   I want symbols to diagnose initial runs and crashes; there's a reason for this.




[GitHub] larroy commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
larroy commented on a change in pull request #11055: Added support android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193625118
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -163,6 +163,30 @@ build_android_arm64() {
 cp dist/*.whl /work/build
 }
 
+build_android_arm64() {
+    set -ex
+    cd /work/build
+    #-DCMAKE_ANDROID_NDK=${CROSS_ROOT} \
+    #-DCMAKE_SYSTEM_VERSION=${ANDROID_NDK_REVISION} \
+    #-DCMAKE_SYSTEM_NAME=Android \
+    cmake\
+        -DANDROID=ON \
+        -DUSE_CUDA=OFF\
+        -DUSE_SSE=OFF\
+        -DUSE_LAPACK=OFF\
+        -DUSE_OPENCV=OFF\
+        -DUSE_OPENMP=OFF\
 
 Review comment:
   first we get a baseline version running; I don't propose to overcomplicate things. Feel free to work on making OpenMP work on Android to use all your cores, if you have time.




[GitHub] gautam1858 commented on issue #11166: Update rnn_cell.py

2018-06-06 Thread GitBox
gautam1858 commented on issue #11166: Update rnn_cell.py
URL: https://github.com/apache/incubator-mxnet/pull/11166#issuecomment-395288322
 
 
   yup, it's a bug




[incubator-mxnet] branch unroll-patch deleted (was b5e4aa2)

2018-06-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch unroll-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 was b5e4aa2  Update rnn_cell.py

This change permanently discards the following revisions:

 discard b5e4aa2  Update rnn_cell.py



[GitHub] szha closed pull request #11105: [MXNET-501] Navbar community fix

2018-06-06 Thread GitBox
szha closed pull request #11105: [MXNET-501] Navbar community fix
URL: https://github.com/apache/incubator-mxnet/pull/11105
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/_static/js/navbar.js b/docs/_static/js/navbar.js
index 27191723f82..2a27c50bbc0 100644
--- a/docs/_static/js/navbar.js
+++ b/docs/_static/js/navbar.js
@@ -1,13 +1,17 @@
 var searchBox = $("#search-input-wrap");
 var TITLE = ['/install/', '/gluon/', '/api/', '/docs/', '/community/' ];
 var DOC_TITLE = ['/faq/', '/tutorials/', '/architecture/', '/model_zoo/'];
-var APISubmenu, versionSubmenu, docSubmenu;
+var APISubmenu, versionSubmenu, docSubmenu, communitySubmenu;
 $("#burgerMenu").children().each(function () {
 if($(this).children().first().html() == 'API') APISubmenu = 
$(this).clone();
 if($(this).children().first().html().startsWith('Versions')) 
versionSubmenu = $(this).clone();
+if($(this).children().first().html().startsWith('Community')) 
communitySubmenu = $(this).clone();
 if($(this).children().first().html() == 'Docs') docSubmenu= 
$(this).clone();
 });
 
+$('.burger-link').on('click', function(e) { e.stopPropagation() });
+$('.burger-link').on('touchstart', function(e) { e.stopPropagation() });
+
 function navbar() {
 var leftOffset = 40;
 var plusMenuList = [];
@@ -50,6 +54,9 @@ function navbar() {
 else if(plusMenuList[i].attr('id') == 
'dropdown-menu-position-anchor-docs') {
 $("#plusMenu").append(docSubmenu);
 }
+else if(plusMenuList[i].attr('id') == 
'dropdown-menu-position-anchor-community') {
+$("#plusMenu").append(communitySubmenu);
+}
 else {
 $("#plusMenu").append("");
 plusMenuList[i].removeClass("main-nav-link");
diff --git a/docs/_static/js/options.js b/docs/_static/js/options.js
index 77ef94074c5..8fe74ee1904 100644
--- a/docs/_static/js/options.js
+++ b/docs/_static/js/options.js
@@ -1,3 +1,6 @@
+//$('.burger-link').on('click', function(e) { e.stopPropagation() });
+//$('.burger-link').on('touchstart', function(e) { e.stopPropagation() });
+
 $(document).ready(function () {
 function label(lbl) {
 return lbl.replace(/[ .]/g, '-').toLowerCase();
diff --git a/docs/_static/mxnet-theme/navbar.html 
b/docs/_static/mxnet-theme/navbar.html
index 8ea2f9f2161..05ec5c42ce3 100644
--- a/docs/_static/mxnet-theme/navbar.html
+++ b/docs/_static/mxnet-theme/navbar.html
@@ -55,17 +55,18 @@ 
   
   Install
   Tutorials
-  
-Community
+  
+Community
 
-  Community
+  http://discuss.mxnet.io;>Forum
+  https://github.com/apache/incubator-mxnet;>Github
   Contribute
   Powered By
 
   
   {% for name in ['API'] %}
   
-{{name}}
+{{name}}
 
   {% for lang in ['Python', 'Scala', 'R', 'Julia', 'C++', 
'Perl'] %}
 {{lang}}
@@ -75,7 +76,7 @@ 
   
   {% endfor %}
   
-Docs
+Docs
 
   Tutorials
   FAQ


 




[incubator-mxnet] branch master updated: [MXNET-501] Navbar community fix (#11105)

2018-06-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 2e10857  [MXNET-501] Navbar community fix (#11105)
2e10857 is described below

commit 2e108571ac431104b31c602be0c57443562b9441
Author: kpmurali <37911926+kpmur...@users.noreply.github.com>
AuthorDate: Wed Jun 6 21:09:45 2018 -0700

[MXNET-501] Navbar community fix (#11105)

* Fixing the dropdown for community in the navbar

* Initial hacky fix to the community drop down issue

* Fixing the burger menu drop-down issue for mobile

* Final fix to the burger dropdowns issue
---
 docs/_static/js/navbar.js|  9 -
 docs/_static/js/options.js   |  3 +++
 docs/_static/mxnet-theme/navbar.html | 11 ++-
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/docs/_static/js/navbar.js b/docs/_static/js/navbar.js
index 2719172..2a27c50 100644
--- a/docs/_static/js/navbar.js
+++ b/docs/_static/js/navbar.js
@@ -1,13 +1,17 @@
 var searchBox = $("#search-input-wrap");
 var TITLE = ['/install/', '/gluon/', '/api/', '/docs/', '/community/' ];
 var DOC_TITLE = ['/faq/', '/tutorials/', '/architecture/', '/model_zoo/'];
-var APISubmenu, versionSubmenu, docSubmenu;
+var APISubmenu, versionSubmenu, docSubmenu, communitySubmenu;
 $("#burgerMenu").children().each(function () {
 if($(this).children().first().html() == 'API') APISubmenu = 
$(this).clone();
 if($(this).children().first().html().startsWith('Versions')) 
versionSubmenu = $(this).clone();
+if($(this).children().first().html().startsWith('Community')) 
communitySubmenu = $(this).clone();
 if($(this).children().first().html() == 'Docs') docSubmenu= 
$(this).clone();
 });
 
+$('.burger-link').on('click', function(e) { e.stopPropagation() });
+$('.burger-link').on('touchstart', function(e) { e.stopPropagation() });
+
 function navbar() {
 var leftOffset = 40;
 var plusMenuList = [];
@@ -50,6 +54,9 @@ function navbar() {
 else if(plusMenuList[i].attr('id') == 
'dropdown-menu-position-anchor-docs') {
 $("#plusMenu").append(docSubmenu);
 }
+else if(plusMenuList[i].attr('id') == 
'dropdown-menu-position-anchor-community') {
+$("#plusMenu").append(communitySubmenu);
+}
 else {
 $("#plusMenu").append("");
 plusMenuList[i].removeClass("main-nav-link");
diff --git a/docs/_static/js/options.js b/docs/_static/js/options.js
index 77ef940..8fe74ee 100644
--- a/docs/_static/js/options.js
+++ b/docs/_static/js/options.js
@@ -1,3 +1,6 @@
+//$('.burger-link').on('click', function(e) { e.stopPropagation() });
+//$('.burger-link').on('touchstart', function(e) { e.stopPropagation() });
+
 $(document).ready(function () {
 function label(lbl) {
 return lbl.replace(/[ .]/g, '-').toLowerCase();
diff --git a/docs/_static/mxnet-theme/navbar.html 
b/docs/_static/mxnet-theme/navbar.html
index 8ea2f9f..05ec5c4 100644
--- a/docs/_static/mxnet-theme/navbar.html
+++ b/docs/_static/mxnet-theme/navbar.html
@@ -55,17 +55,18 @@
   
   Install
   Tutorials
-  
-Community
+  
+Community
 
-  Community
+  http://discuss.mxnet.io;>Forum
+  https://github.com/apache/incubator-mxnet;>Github
   Contribute
   Powered By
 
   
   {% for name in ['API'] %}
   
-{{name}}
+{{name}}
 
   {% for lang in ['Python', 'Scala', 'R', 'Julia', 'C++', 
'Perl'] %}
 {{lang}}
@@ -75,7 +76,7 @@
   
   {% endfor %}
   
-Docs
+Docs
 
   Tutorials
   FAQ



[GitHub] szha commented on issue #11105: [MXNET-501] Navbar community fix

2018-06-06 Thread GitBox
szha commented on issue #11105: [MXNET-501] Navbar community fix
URL: https://github.com/apache/incubator-mxnet/pull/11105#issuecomment-395286716
 
 
   Preview at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-11105/5/index.html.
 Looks good to me.




[GitHub] szha commented on issue #10995: Some mxnet ctc_loss bug~

2018-06-06 Thread GitBox
szha commented on issue #10995: Some mxnet ctc_loss bug~
URL: 
https://github.com/apache/incubator-mxnet/issues/10995#issuecomment-395280807
 
 
   ctc_loss was added so that there's no need to install the WarpCTC plugin. 
I've detailed why cudnn CTC is not usable for us in #7445




[GitHub] szha commented on a change in pull request #11181: [MXNET-525] Add retry logic to download functions to fix flaky tests

2018-06-06 Thread GitBox
szha commented on a change in pull request #11181: [MXNET-525] Add retry logic 
to download functions to fix flaky tests
URL: https://github.com/apache/incubator-mxnet/pull/11181#discussion_r193616509
 
 

 ##
 File path: python/mxnet/gluon/utils.py
 ##
 @@ -200,26 +202,35 @@ def download(url, path=None, overwrite=False, sha1_hash=None):
         fname = os.path.join(path, url.split('/')[-1])
     else:
         fname = path
+    assert retries >= 0, "Number of retries should be at least 0"
 
     if overwrite or not os.path.exists(fname) or (sha1_hash and not check_sha1(fname, sha1_hash)):
         dirname = os.path.dirname(os.path.abspath(os.path.expanduser(fname)))
         if not os.path.exists(dirname):
             os.makedirs(dirname)
-
-        print('Downloading %s from %s...'%(fname, url))
-        r = requests.get(url, stream=True)
-        if r.status_code != 200:
-            raise RuntimeError("Failed downloading url %s"%url)
-        with open(fname, 'wb') as f:
-            for chunk in r.iter_content(chunk_size=1024):
-                if chunk: # filter out keep-alive new chunks
-                    f.write(chunk)
-
-        if sha1_hash and not check_sha1(fname, sha1_hash):
-            raise UserWarning('File {} is downloaded but the content hash does not match. ' \
-                              'The repo may be outdated or download may be incomplete. ' \
-                              'If the "repo_url" is overridden, consider switching to ' \
-                              'the default repo.'.format(fname))
+        while (retries+1 > 0):
+            try:
+                print('Downloading %s from %s...'%(fname, url))
+                r = requests.get(url, stream=True)
+                if r.status_code != 200:
+                    raise RuntimeError("Failed downloading url %s"%url)
+                with open(fname, 'wb') as f:
+                    for chunk in r.iter_content(chunk_size=1024):
+                        if chunk: # filter out keep-alive new chunks
+                            f.write(chunk)
+
+                if sha1_hash and not check_sha1(fname, sha1_hash):
+                    raise UserWarning('File {} is downloaded but the content hash does not match. ' \
+                                      'The repo may be outdated or download may be incomplete. ' \
+                                      'If the "repo_url" is overridden, consider switching to ' \
+                                      'the default repo.'.format(fname))
+                break
+            except Exception as e:
 
 Review comment:
   should backoff be added too?
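
   For example, the except branch could sleep with an exponential backoff before
   retrying (a minimal sketch of the suggestion; the delay values and the use of
   time.sleep are illustrative assumptions, not part of this PR):
   
   ```
   import time

   def download_with_retries(fetch, retries=5, backoff=2.0):
       delay = 1.0
       while retries + 1 > 0:
           try:
               return fetch()
           except Exception:
               retries -= 1
               if retries <= 0:
                   raise
               time.sleep(delay)  # wait before the next attempt
               delay *= backoff   # exponential backoff
   ```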




[incubator-mxnet] branch master updated: Add tqdm to support The Straight Dope CI (#11153)

2018-06-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 6fe9387  Add tqdm to support The Straight Dope CI (#11153)
6fe9387 is described below

commit 6fe9387e83af98f34a8a98effeae1e38584616e8
Author: Sergey Sokolov 
AuthorDate: Wed Jun 6 20:20:52 2018 -0700

Add tqdm to support The Straight Dope CI (#11153)

* Add tqdm to support The Straight Dope CI

* Empty commit to force CI
---
 ci/docker/install/ubuntu_tutorials.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ci/docker/install/ubuntu_tutorials.sh 
b/ci/docker/install/ubuntu_tutorials.sh
index 886ce93..c8d238c 100755
--- a/ci/docker/install/ubuntu_tutorials.sh
+++ b/ci/docker/install/ubuntu_tutorials.sh
@@ -22,5 +22,5 @@
 
 set -ex
 apt-get install graphviz python-opencv
-pip2 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz
-pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz
\ No newline at end of file
+pip2 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm
+pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm



[GitHub] szha closed pull request #11153: Add tqdm to support The Straight Dope CI

2018-06-06 Thread GitBox
szha closed pull request #11153: Add tqdm to support The Straight Dope CI
URL: https://github.com/apache/incubator-mxnet/pull/11153
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/ci/docker/install/ubuntu_tutorials.sh 
b/ci/docker/install/ubuntu_tutorials.sh
index 886ce93c94c..c8d238cbc5b 100755
--- a/ci/docker/install/ubuntu_tutorials.sh
+++ b/ci/docker/install/ubuntu_tutorials.sh
@@ -22,5 +22,5 @@
 
 set -ex
 apt-get install graphviz python-opencv
-pip2 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz
-pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz
\ No newline at end of file
+pip2 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm
+pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm


 




[GitHub] eric-haibin-lin commented on a change in pull request #11181: [MXNET-525] Add retry logic to download functions to fix flaky tests

2018-06-06 Thread GitBox
eric-haibin-lin commented on a change in pull request #11181: [MXNET-525] Add 
retry logic to download functions to fix flaky tests
URL: https://github.com/apache/incubator-mxnet/pull/11181#discussion_r193616273
 
 

 ##
 File path: python/mxnet/test_utils.py
 ##
 @@ -1411,12 +1416,21 @@ def download(url, fname=None, dirname=None, overwrite=False):
         logging.info("%s exists, skipping download", fname)
         return fname
 
-    r = requests.get(url, stream=True)
-    assert r.status_code == 200, "failed to open %s" % url
-    with open(fname, 'wb') as f:
-        for chunk in r.iter_content(chunk_size=1024):
-            if chunk: # filter out keep-alive new chunks
-                f.write(chunk)
+    while (retries+1 > 0):
+        try:
+            r = requests.get(url, stream=True)
+            assert r.status_code == 200, "failed to open %s" % url
+            with open(fname, 'wb') as f:
+                for chunk in r.iter_content(chunk_size=1024):
+                    if chunk: # filter out keep-alive new chunks
+                        f.write(chunk)
+            break
+        except Exception as e:
+            retries -= 1
+            if (retries <= 0):
 
 Review comment:
   no need for parentheses in Python, please remove them
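
   That is, purely a style nit (an illustrative snippet, not code from the PR):
   
   ```
   retries = 0
   if (retries <= 0):    # works, but the parentheses are redundant in Python
       print("giving up")
   if retries <= 0:      # idiomatic
       print("giving up")
   ```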




[GitHub] anirudhacharya closed pull request #11178: [MXNET-379] L1 Norm operator

2018-06-06 Thread GitBox
anirudhacharya closed pull request #11178: [MXNET-379] L1 Norm operator
URL: https://github.com/apache/incubator-mxnet/pull/11178
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/l1_normalization-inl.h 
b/src/operator/l1_normalization-inl.h
new file mode 100644
index 000..8481c05c737
--- /dev/null
+++ b/src/operator/l1_normalization-inl.h
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2016 by Contributors
+ * \file l1_normalization_op-inl.h
+ * \brief instance l1 Normalization op
+*/
+#ifndef MXNET_OPERATOR_L1_NORMALIZATION_INL_H_
+#define MXNET_OPERATOR_L1_NORMALIZATION_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "./operator_common.h"
+#include "./mshadow_op.h"
+
+namespace mxnet {
+namespace op {
+
+namespace l1_normalization {
+enum L1NormalizationOpInputs {kData};
+enum L1NormalizationOpOutputs {kOut, kNorm};
+enum L1NormalizationOpType {kInstance, kChannel, kSpatial};
+enum L1NormalizationBackResource {kTempSpace};
+}  // l1_normalization
+
+struct L1NormalizationParam : public dmlc::Parameter {
+  float eps;
+  int mode;
+  DMLC_DECLARE_PARAMETER(L1NormalizationParam) {
+DMLC_DECLARE_FIELD(eps).set_default(1e-10f)
+.describe("A small constant for numerical stability.");
+DMLC_DECLARE_FIELD(mode)
+.add_enum("instance", l1_normalization::kInstance)
+.add_enum("spatial", l1_normalization::kSpatial)
+.add_enum("channel", l1_normalization::kChannel)
+.set_default(l1_normalization::kInstance)
+.describe("Specify the dimension along which to compute L1 norm.");
+  }
+};
+
+/**
+ * \brief This is the implementation of l1 normalization operator.
+ * \tparam xpu The device that the operator will be executed on.
+ */
+template
+class L1NormalizationOp : public Operator {
+public:
+  explicit L1NormalizationOp(L1NormalizationParam p) {
+this->param_ = p;
+  }
+
+  virtual void Forward(const OpContext ,
+   const std::vector _data,
+   const std::vector ,
+   const std::vector _data,
+   const std::vector _args) {
+using namespace mshadow;
+using namespace mshadow::expr;
+if (req[l1_normalization::kOut] == kNullOp) return;
+CHECK_EQ(req[l1_normalization::kOut], kWriteTo);
+CHECK_EQ(in_data.size(), 1U);
+CHECK_EQ(out_data.size(), 2U);
+Stream *s = ctx.get_stream();
+TShape orig_shape = in_data[l1_normalization::kData].shape_;
+if (param_.mode == l1_normalization::kInstance) {
+  Shape<2> dshape = Shape2(orig_shape[0],
+orig_shape.ProdShape(1, orig_shape.ndim()));
+  Tensor data = in_data[l1_normalization::kData]
+.get_with_shape(dshape, s);
+  Tensor out = out_data[l1_normalization::kOut]
+.get_with_shape(dshape, s);
+  Tensor norm = out_data[l1_normalization::kNorm].get(s);
+  norm = sumall_except_dim<0>(F(data));
+  MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+mxnet_op::Kernel, xpu>::Launch(
+  s, norm.size(0), norm.dptr_, norm.dptr_, DType(param_.eps));
+  });
+  out = data / broadcast<0>(norm, out.shape_);
+} else if (param_.mode == l1_normalization::kChannel) {
+  CHECK_GE(orig_shape.ndim(), 3U);
+  Shape<3> dshape = Shape3(orig_shape[0], orig_shape[1],
+orig_shape.ProdShape(2, orig_shape.ndim()));
+  Tensor data = in_data[l1_normalization::kData]
+.get_with_shape(dshape, s);
+  Tensor out = out_data[l1_normalization::kOut]
+.get_with_shape(dshape, s);
+  Shape<2> norm_shape = Shape2(dshape[0], dshape[2]);
+  Tensor norm = out_data[l1_normalization::kNorm]
+.get_with_shape(norm_shape, s);
+  norm = reduce_with_axis(F(data), 1);
+  MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+mxnet_op::Kernel, xpu>::Launch(
+  s, norm.size(0) * norm.size(1), norm.dptr_, norm.dptr_, 
DType(param_.eps));
+  });
+  

[GitHub] anirudhacharya commented on issue #11178: [MXNET-379] L1 Norm operator

2018-06-06 Thread GitBox
anirudhacharya commented on issue #11178: [MXNET-379] L1 Norm operator
URL: https://github.com/apache/incubator-mxnet/pull/11178#issuecomment-395267912
 
 
   This PR has a legacy implementation of the L1 norm. I will close this and raise another PR with the l1norm operator registered with NNVM.




[incubator-mxnet-site] branch asf-site updated: Nightly build

2018-06-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a15914d  Nightly build
a15914d is described below

commit a15914df62a0a43cc81451b32879404f21ae6173
Author: mxnet-ci 
AuthorDate: Thu Jun 7 01:37:08 2018 +

Nightly build
---
 date.txt | 1 -
 1 file changed, 1 deletion(-)

diff --git a/date.txt b/date.txt
deleted file mode 100644
index debd056..000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Wed Jun  6 01:37:54 UTC 2018



[GitHub] CodingCat commented on issue #10462: [MXNET-62] add test against spark integration

2018-06-06 Thread GitBox
CodingCat commented on issue #10462: [MXNET-62] add test against spark 
integration
URL: https://github.com/apache/incubator-mxnet/pull/10462#issuecomment-395263356
 
 
   the Scala test was killed in the middle... not sure why




[GitHub] kpmurali commented on issue #11069: [MXNET-480] New version select for Install page

2018-06-06 Thread GitBox
kpmurali commented on issue #11069: [MXNET-480] New version select for Install 
page
URL: https://github.com/apache/incubator-mxnet/pull/11069#issuecomment-395260700
 
 
   Closed in favor of https://github.com/apache/incubator-mxnet/pull/11128




[GitHub] kpmurali commented on issue #11069: [MXNET-480] New version select for Install page

2018-06-06 Thread GitBox
kpmurali commented on issue #11069: [MXNET-480] New version select for Install 
page
URL: https://github.com/apache/incubator-mxnet/pull/11069#issuecomment-395260700
 
 
   Closed in favor of https://github.com/apache/incubator-mxnet/pull/11105




[GitHub] kpmurali closed pull request #11069: [MXNET-480] New version select for Install page

2018-06-06 Thread GitBox
kpmurali closed pull request #11069: [MXNET-480] New version select for Install 
page
URL: https://github.com/apache/incubator-mxnet/pull/11069
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/_static/js/options.js b/docs/_static/js/options.js
index 77ef94074c5..d2d44b07848 100644
--- a/docs/_static/js/options.js
+++ b/docs/_static/js/options.js
@@ -1,7 +1,47 @@
+var versionSelect   = defaultVersion = 'v1.2.0';
+var deviceSelect= 'Linux';
+var languageSelect  = 'Python';
+var processorSelect = 'CPU';
+var environSelect   = 'Pip';
+
 $(document).ready(function () {
 function label(lbl) {
 return lbl.replace(/[ .]/g, '-').toLowerCase();
 }
+
+function setSelects(){
+let urlParams = new URLSearchParams(window.location.search);
+if (urlParams.get('version'))
+versionSelect = urlParams.get('version');
+$('li a:contains(' + versionSelect + 
')').parent().siblings().removeClass('active');
+$('li a:contains(' + versionSelect + ')').parent().addClass('active');
+$('.current-version').html( versionSelect + ' ' );
+if (urlParams.get('device'))
+deviceSelect = urlParams.get('device');
+$('button:contains(' + deviceSelect + 
')').siblings().removeClass('active');
+$('button:contains(' + deviceSelect + ')').addClass('active');
+if (urlParams.get('language'))
+languageSelect = urlParams.get('language');
+$('button:contains(' + languageSelect + 
')').siblings().removeClass('active');
+$('button:contains(' + languageSelect + ')').addClass('active');
+if (urlParams.get('processor'))
+processorSelect = urlParams.get('processor');
+$('button:contains(' + processorSelect + 
')').siblings().removeClass('active');
+$('button:contains(' + processorSelect + ')').addClass('active');
+if (urlParams.get('environ'))
+environSelect = urlParams.get('environ');
+$('button:contains(' + environSelect + 
')').siblings().removeClass('active');
+$('button:contains(' + environSelect + ')').addClass('active');
+showContent();
+if (window.location.href.includes("/install/index.html")) {
+if (versionSelect.includes(defaultVersion)) {
+history.pushState(null, null, '/install/index.html?device=' + 
deviceSelect + '=' + languageSelect + '=' + processorSelect);
+} else {
+history.pushState(null, null, '/install/index.html?version=' + 
versionSelect + '=' + deviceSelect + '=' + languageSelect + 
'=' + processorSelect);
+}
+} 
+}
+
 function showContent() {
 $('.opt-group .opt').each(function(){
 $('.'+label($(this).text())).hide();
@@ -13,11 +53,35 @@ $(document).ready(function () {
 });
 }
 showContent();
+setSelects();
 function setContent() {
 var el = $(this);
+let urlParams = new URLSearchParams(window.location.search);
 el.siblings().removeClass('active');
 el.addClass('active');
+if ($(this).hasClass("versions")) {
+$('.current-version').html( $(this).text() + ' ' );
+if (!$(this).text().includes(defaultVersion)) {
+if (!window.location.search.includes("version")) {
+history.pushState(null, null, '/install/index.html' + 
window.location.search.concat( '=' + $(this).text() ));
+} else {
+history.pushState(null, null, '/install/index.html' + 
window.location.search.replace( urlParams.get('version'), $(this).text() ));
+}
+} else if (window.location.search.includes("version")) {
+  history.pushState(null, null, '/install/index.html' + 
window.location.search.replace( 'version', 'prev' ));
+  }
+}
+else if ($(this).hasClass("Devices")) {
+history.pushState(null, null, '/install/index.html' + 
window.location.search.replace( urlParams.get('device'), $(this).text() ));
+}
+else if ($(this).hasClass("languages")) {
+history.pushState(null, null, '/install/index.html' + 
window.location.search.replace( urlParams.get('language'), $(this).text() ));
+}
+else if ($(this).hasClass("processors")) {
+history.pushState(null, null, '/install/index.html' + 
window.location.search.replace( urlParams.get('processor'), $(this).text() ));
+}
 showContent();
+//window.location.search = window.location.search.replace( 
urlParams.get('version'), $(this).text() );
 }
 $('.opt-group').on('click', '.opt', setContent);
 });
diff --git a/docs/install/index.md 

[GitHub] aaronmarkham commented on issue #11105: [MXNET-501] Navbar community fix

2018-06-06 Thread GitBox
aaronmarkham commented on issue #11105: [MXNET-501] Navbar community fix
URL: https://github.com/apache/incubator-mxnet/pull/11105#issuecomment-395260100
 
 
   @szha Please merge.




[GitHub] marcoabreu commented on a change in pull request #11181: [MXNET-525] Add retry logic to download functions to fix flaky tests

2018-06-06 Thread GitBox
marcoabreu commented on a change in pull request #11181: [MXNET-525] Add retry 
logic to download functions to fix flaky tests
URL: https://github.com/apache/incubator-mxnet/pull/11181#discussion_r193600762
 
 

 ##
 File path: tests/python/unittest/test_gluon_utils.py
 ##
 @@ -0,0 +1,34 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import os
+import tempfile
+
+import mxnet as mx
+from nose.tools import *
+
+
+@raises(Exception)
+def test_download_retries():
+    mx.gluon.utils.download("http://doesnotexist.notfound")
+
+def test_download_successful():
+    tmp = tempfile.gettempdir()
+    tmpfile = os.path.join(tmp, 'README.md')
+    mx.gluon.utils.download("https://raw.githubusercontent.com/apache/incubator-mxnet/master/README.md", path=tmpfile)
+    assert os.path.getsize(tmpfile) > 100
+
 
 Review comment:
   Can you use a temp dir instead of the root temp?
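
   For example, with an isolated directory that is cleaned up afterwards (a
   sketch of the suggestion, not the PR's final code; helper usage is an assumption):
   
   ```
   import os
   import shutil
   import tempfile

   import mxnet as mx

   def test_download_successful():
       tmp = tempfile.mkdtemp()  # private temp dir instead of the shared root temp
       try:
           tmpfile = os.path.join(tmp, 'README.md')
           mx.gluon.utils.download(
               "https://raw.githubusercontent.com/apache/incubator-mxnet/master/README.md",
               path=tmpfile)
           assert os.path.getsize(tmpfile) > 100
       finally:
           shutil.rmtree(tmp)  # remove the directory and the downloaded file
   ```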




[GitHub] piiswrong commented on a change in pull request #11041: gpu mem pool strategy

2018-06-06 Thread GitBox
piiswrong commented on a change in pull request #11041: gpu mem pool strategy
URL: https://github.com/apache/incubator-mxnet/pull/11041#discussion_r193599496
 
 

 ##
 File path: amalgamation/amalgamation.py
 ##
 @@ -23,7 +23,7 @@
 import platform
 
 blacklist = [
-'Windows.h', 'cublas_v2.h', 'cuda/tensor_gpu-inl.cuh',
+'Windows.h', 'intrin.h', 'cublas_v2.h', 'cuda/tensor_gpu-inl.cuh',
 
 Review comment:
   revert




[GitHub] ashokei commented on issue #11090: Define build target for mkldnn lib build to fix 'make clean USE_MKMLDNN=1' issue

2018-06-06 Thread GitBox
ashokei commented on issue #11090: Define build target for mkldnn lib build to 
fix 'make clean USE_MKMLDNN=1' issue
URL: https://github.com/apache/incubator-mxnet/pull/11090#issuecomment-395257535
 
 
   @jinhuang415 If a user does not want to build with MKLDNN but still wants to use the mklml lib for BLAS/GEMM APIs (USE_BLAS=mkl), the prepare_mkl.sh script will help.
   
   If USE_BLAS=mkl is not working with the prepare_mkl.sh script in the mxnet repo, we need to fix it.




[GitHub] chinakook commented on issue #10995: Some mxnet ctc_loss bug~

2018-06-06 Thread GitBox
chinakook commented on issue #10995: Some mxnet ctc_loss bug~
URL: 
https://github.com/apache/incubator-mxnet/issues/10995#issuecomment-395256521
 
 
   Yes, that's a good idea. Cudnnctc should be added too.




[GitHub] ThomasDelteil opened a new pull request #11181: [MXNET-525] Add retry logic to download functions to fix flaky tests

2018-06-06 Thread GitBox
ThomasDelteil opened a new pull request #11181: [MXNET-525] Add retry logic to 
download functions to fix flaky tests
URL: https://github.com/apache/incubator-mxnet/pull/11181
 
 
   A lot of "flaky" tests fail because some data or model download is resetting the connection.
   
   This is a crude implementation of retry logic; @KellenSunderland is working on a more involved version, but this should act as a stopgap measure that solves most of the issues we are encountering at the moment.
   
   




[GitHub] xinyu-intel commented on issue #11048: Build MXNet with opencv error on MacOS

2018-06-06 Thread GitBox
xinyu-intel commented on issue #11048: Build MXNet with opencv error on MacOS
URL: 
https://github.com/apache/incubator-mxnet/issues/11048#issuecomment-395255293
 
 
   @lanking520 Hi, here is the build log: `g++-5 -std=c++11 
-DMXNET_USE_OPENCV=1 -I/usr/local/Cellar/opencv/3.4.1_5/include/opencv 
-I/usr/local/Cellar/opencv/3.4.1_5/include`; it seems OpenCV 3.4 comes from 
Homebrew.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #10931: [MXNET-349] Histogram Operator

2018-06-06 Thread GitBox
haojin2 commented on issue #10931: [MXNET-349] Histogram Operator
URL: https://github.com/apache/incubator-mxnet/pull/10931#issuecomment-395253134
 
 
   @piiswrong Should be ready for merge, please take a look when you have time, 
thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11177: mxnet.base.MXNetError

2018-06-06 Thread GitBox
marcoabreu commented on issue #11177: mxnet.base.MXNetError
URL: 
https://github.com/apache/incubator-mxnet/issues/11177#issuecomment-395250346
 
 
   Right at the top you are using 2D pooling but the network expects at least 
3D.
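
   For reference, a small sketch of the kind of shape mismatch being described, 
assuming Gluon's 2D pooling layers (which expect a 4D NCHW input):

   ```python
   import mxnet as mx
   from mxnet.gluon import nn

   pool = nn.GlobalAvgPool2D()          # no parameters, can be called directly
   ok = mx.nd.zeros((1, 3, 224, 224))   # batch x channels x height x width
   print(pool(ok).shape)                # (1, 3, 1, 1)

   bad = mx.nd.zeros((3, 224, 224))     # missing the batch dimension
   # pool(bad)  # would raise mxnet.base.MXNetError (2D pooling needs a 4D input)
   ```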


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] RoacherM commented on issue #11177: mxnet.base.MXNetError

2018-06-06 Thread GitBox
RoacherM commented on issue #11177: mxnet.base.MXNetError
URL: 
https://github.com/apache/incubator-mxnet/issues/11177#issuecomment-395248719
 
 
   Hello @marcoabreu, my network design is here:
   
   class ConcatNet(nn.HybridBlock):
       def __init__(self, net1, net2, **kwargs):
           super(ConcatNet, self).__init__(**kwargs)
           self.net1 = nn.HybridSequential()
           self.net1.add(net1)
           self.net1.add(nn.GlobalAvgPool2D())
           self.net2 = nn.HybridSequential()
           self.net2.add(net2)
           self.net2.add(nn.GlobalAvgPool2D())
       def hybrid_forward(self, F, x1, x2):
           return F.concat(*[self.net1(x1), self.net2(x2)])
   
   class OneNet(nn.HybridBlock):
       def __init__(self, features, output, **kwargs):
           super(OneNet, self).__init__(**kwargs)
           self.features = features
           self.output = output
       def hybrid_forward(self, F, x1, x2):
           return self.output(self.features(x1, x2))
   
   class Net():
       def __init__(self, ctx, nameparams=None):
           # (255,299,299)
           inception = vision.inception_v3(pretrained=True, ctx=ctx).features
           # (255,224,224)
           resnet = vision.densenet121(pretrained=True, ctx=ctx).features
           self.features = ConcatNet(resnet, inception)
           self.output = self.__get_output(ctx, nameparams)
           self.net = OneNet(self.features, self.output)
       def __get_output(self, ctx, ParamsName=None):
           net = nn.HybridSequential("output")
           with net.name_scope():
               # more layers such as batchnorm can be added here
               net.add(nn.Dense(256, activation='relu'))
               net.add(nn.Dropout(.5))
               # classification layer
               net.add(nn.Dense(2))
           if ParamsName is not None:
               net.collect_params().load(ParamsName, ctx)
           else:
               net.initialize(init=init.Xavier(), ctx=ctx)
           return net
   
   class Pre():
       def __init__(self, nameparams, idx, ctx=0):
           self.idx = idx
           if ctx == 0:
               self.ctx = mx.cpu()
           if ctx == 1:
               self.ctx = mx.gpu()
           self.net = Net(self.ctx, nameparams=nameparams).net
           self.Timg = transform_test
       def PreImg(self, img):
           imgs = self.Timg(img, None)
           out = nd.softmax(self.net(nd.reshape(imgs[0], (1, 3, 224, 224)).as_in_context(self.ctx),
                                     nd.reshape(imgs[1], (1, 3, 299, 299)).as_in_context(self.ctx))).asnumpy()
           return self.idx[np.where(out == out.max())[1][0]]
       def PreName(self, Name):
           img = image.imread(Name)
           return self.PreImg(img)
   
   def transform_train(data, label):
       im1 = image.imresize(data, 224, 224).astype('float32')
       im2 = image.imresize(data, 299, 299).astype('float32')
       # im1 = image.imresize(data.astype('float32') / 255, 224, 224)
       # im2 = image.imresize(data.astype('float32') / 255, 299, 299)
       auglist1 = image.CreateAugmenter(data_shape=(3, 224, 224), resize=0,
                                        rand_crop=True, rand_resize=True, rand_mirror=True,
                                        mean=np.array([0.485, 0.456, 0.406]),
                                        std=np.array([0.229, 0.224, 0.225]),
                                        brightness=0.125, contrast=0.5,
                                        saturation=0, hue=0,
                                        pca_noise=0, rand_gray=1, inter_method=2)
       auglist2 = image.CreateAugmenter(data_shape=(3, 299, 299), resize=0,
                                        rand_crop=True, rand_resize=True, rand_mirror=True,
                                        mean=np.array([0.485, 0.456, 0.406]),
                                        std=np.array([0.229, 0.224, 0.225]),
                                        brightness=0.125, contrast=0.5,
                                        saturation=0, hue=0,
                                        pca_noise=0, rand_gray=1, inter_method=2)
       for aug in auglist1:
           im1 = aug(im1)
       for aug in auglist2:
           im2 = aug(im2)
       # (0,1,2)-(2,0,1)
       im1 = nd.transpose(im1, (2, 0, 1))
       im2 = nd.transpose(im2, (2, 0, 1))
       return (im1, im2, nd.array([label]).asscalar().astype('float32'))
   
   def transform_test(data, label):
       im1 = image.imresize(data, 224, 224).astype('float32')
       im2 = image.imresize(data, 299, 299).astype('float32')
       # im1 = image.imresize(data.astype('float32') / 255, 224, 224)
       # im2 = image.imresize(data.astype('float32') / 255, 299, 299)
       auglist1 = image.CreateAugmenter(data_shape=(3, 224, 224),
                                        mean=np.array([0.485, 0.456, 0.406]),
                                        std=np.array([0.229, 0.224, 0.225]))
       auglist2 = image.CreateAugmenter(data_shape=(3, 299, 299),
                                        mean=np.array([0.485, 0.456, 0.406]),
                                        std=np.array([0.229, 0.224, 0.225]))
       for aug in auglist1:
           im1 = aug(im1)
       for aug in auglist2:
           im2 = aug(im2)
       im1 = nd.transpose(im1, (2, 0, 1))
       im2 = nd.transpose(im2, (2, 0, 1))
       return

[GitHub] indhub commented on a change in pull request #11002: [MXNET-433] Tutorial on saving and loading gluon models

2018-06-06 Thread GitBox
indhub commented on a change in pull request #11002: [MXNET-433] Tutorial on 
saving and loading gluon models
URL: https://github.com/apache/incubator-mxnet/pull/11002#discussion_r193589063
 
 

 ##
 File path: docs/tutorials/gluon/save_load_params.md
 ##
 @@ -0,0 +1,269 @@
+# Saving and Loading Gluon Models
+
+Training large models takes a lot of time, and it is a good idea to save the 
trained models to files to avoid training them again and again. There are a 
number of reasons to do this. For example, you might want to do inference on a 
machine that is different from the one where the model was trained. Sometimes a 
model's performance on the validation set decreases towards the end of training 
because of overfitting. If you save your model parameters after every epoch, at 
the end you can decide to use the model that performs best on the validation 
set.
+
+In this tutorial we will learn ways to save and load Gluon models. There are 
two ways to save/load Gluon models:
+
+**1. Save/load model parameters only**
+
+Parameters of any Gluon model can be saved using the `save_params` and 
`load_params` methods. This does not save the model architecture. These methods 
are used to save parameters of dynamic (non-Hybrid) models, whose architecture 
cannot be saved because it can change during execution.
+
+**2. Save/load model parameters AND architecture**
+
+The architecture of `Hybrid` models stays static and doesn't change during 
execution. Therefore both model parameters AND architecture can be saved and 
loaded using the `export`, `load_checkpoint` and `load` methods.
+
+Let's look at the above methods in more detail. Let's start by importing the 
modules we'll need.
+
+```python
+from __future__ import print_function
+
+import mxnet as mx
+import mxnet.ndarray as nd
+from mxnet import nd, autograd, gluon
+from mxnet.gluon.data.vision import transforms
+
+import numpy as np
+```
+
+## Setup: build and train a simple model
+
+We need a trained model before we can save it to a file. So let's go ahead and 
build a very simple convolutional network and train it on MNIST data.
+
+Let's define a helper function to build a LeNet model and another helper to 
train LeNet with MNIST.
+
+```python
+# Use GPU if one exists, else use CPU
+ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+
+# MNIST images are 28x28. Total pixels in input layer is 28x28 = 784
+num_inputs = 784
+# Classify the images into one of the 10 digits
+num_outputs = 10
+# 64 images in a batch
+batch_size = 64
+
+# Load the training data
+train_data = gluon.data.DataLoader(gluon.data.vision.MNIST(train=True).transform_first(transforms.ToTensor()),
+                                   batch_size, shuffle=True)
+
+# Build a simple convolutional network
+def build_lenet(net):
+    with net.name_scope():
+        # First convolution
+        net.add(gluon.nn.Conv2D(channels=20, kernel_size=5, activation='relu'))
+        net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
+        # Second convolution
+        net.add(gluon.nn.Conv2D(channels=50, kernel_size=5, activation='relu'))
+        net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
+        # Flatten the output before the fully connected layers
+        net.add(gluon.nn.Flatten())
+        # First fully connected layer with 512 neurons
+        net.add(gluon.nn.Dense(512, activation="relu"))
+        # Second fully connected layer with as many neurons as the number of classes
+        net.add(gluon.nn.Dense(num_outputs))
+
+    return net
+
+# Train a given model using MNIST data
+def train_model(model):
+    # Initialize the parameters with Xavier initializer
+    net.collect_params().initialize(mx.init.Xavier(), ctx=ctx)
 
 Review comment:
   Good catch, thanks!
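
   For context, a minimal sketch of the parameter-only save/load flow this tutorial 
describes, reusing `build_lenet`, `ctx` and the imports defined above (the file 
name is illustrative):

   ```python
   net = build_lenet(gluon.nn.Sequential())
   net.initialize(mx.init.Xavier(), ctx=ctx)
   net.save_params('lenet.params')        # parameters only, no architecture

   new_net = build_lenet(gluon.nn.Sequential())
   new_net.load_params('lenet.params', ctx=ctx)
   ```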


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #11128: [MXNET-504] Add version select + queryString capabilities + C++ instructions to install page

2018-06-06 Thread GitBox
aaronmarkham commented on issue #11128: [MXNET-504] Add version select + 
queryString capabilities + C++ instructions to install page
URL: https://github.com/apache/incubator-mxnet/pull/11128#issuecomment-395245148
 
 
   Fixes #10212 
   @szha Please merge.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham opened a new pull request #11180: [MXNET-503] Website landing page for MMS, PR II

2018-06-06 Thread GitBox
aaronmarkham opened a new pull request #11180: [MXNET-503] Website landing page 
for MMS, PR II
URL: https://github.com/apache/incubator-mxnet/pull/11180
 
 
   ## Description ##
   This is a redo of:
   https://github.com/apache/incubator-mxnet/pull/11037
   
   As it was rolled back by:
   https://github.com/apache/incubator-mxnet/pull/11154
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #11169: Fix ci/build.py using remote cache locally

2018-06-06 Thread GitBox
marcoabreu commented on a change in pull request #11169: Fix ci/build.py using 
remote cache locally
URL: https://github.com/apache/incubator-mxnet/pull/11169#discussion_r193584590
 
 

 ##
 File path: ci/build.py
 ##
 @@ -221,56 +221,62 @@ def script_name() -> str:
 help="go in a shell inside the container",
 action='store_true')
 
-parser.add_argument("--docker-registry",
-help="Dockerhub registry name to retrieve cache from",
+parser.add_argument("-d", "--docker-registry",
+help="Dockerhub registry name to retrieve cache from. 
Default is 'mxnetci'",
 default='mxnetci',
 type=str)
 
+parser.add_argument("-c", "--cache", action="store_true",
 
 Review comment:
   No, it is intended to be used locally as well.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
marcoabreu commented on a change in pull request #11055: Added support 
android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193584969
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -163,6 +163,30 @@ build_android_arm64() {
 cp dist/*.whl /work/build
 }
 
+build_android_arm64() {
+set -ex
+cd /work/build
+#-DCMAKE_ANDROID_NDK=${CROSS_ROOT} \
+#-DCMAKE_SYSTEM_VERSION=${ANDROID_NDK_REVISION} \
+#-DCMAKE_SYSTEM_NAME=Android \
+cmake\
+-DANDROID=ON \
+-DUSE_CUDA=OFF\
+-DUSE_SSE=OFF\
+-DUSE_LAPACK=OFF\
+-DUSE_OPENCV=OFF\
+-DUSE_OPENMP=OFF\
+-DUSE_SIGNAL_HANDLER=ON\
+-DCMAKE_BUILD_TYPE=RelWithDebInfo\
+-DUSE_MKL_IF_AVAILABLE=OFF\
+-G Ninja /work/mxnet
+ninja -v
+export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
+#cd /work/mxnet/python
+#python setup.py bdist_wheel --universal
+#cp dist/*.whl /work/build
 
 Review comment:
   Remove commented code


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
marcoabreu commented on a change in pull request #11055: Added support 
android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193584834
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -163,6 +163,30 @@ build_android_arm64() {
 cp dist/*.whl /work/build
 }
 
+build_android_arm64() {
+set -ex
+cd /work/build
+#-DCMAKE_ANDROID_NDK=${CROSS_ROOT} \
+#-DCMAKE_SYSTEM_VERSION=${ANDROID_NDK_REVISION} \
+#-DCMAKE_SYSTEM_NAME=Android \
+cmake\
+-DANDROID=ON \
+-DUSE_CUDA=OFF\
+-DUSE_SSE=OFF\
+-DUSE_LAPACK=OFF\
+-DUSE_OPENCV=OFF\
+-DUSE_OPENMP=OFF\
+-DUSE_SIGNAL_HANDLER=ON\
+-DCMAKE_BUILD_TYPE=RelWithDebInfo\
 
 Review comment:
   To reduce the size, I'd propose we build without debug info for now


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #11055: Added support android64

2018-06-06 Thread GitBox
marcoabreu commented on a change in pull request #11055: Added support 
android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#discussion_r193584891
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -163,6 +163,30 @@ build_android_arm64() {
 cp dist/*.whl /work/build
 }
 
+build_android_arm64() {
+set -ex
+cd /work/build
+#-DCMAKE_ANDROID_NDK=${CROSS_ROOT} \
+#-DCMAKE_SYSTEM_VERSION=${ANDROID_NDK_REVISION} \
+#-DCMAKE_SYSTEM_NAME=Android \
+cmake\
+-DANDROID=ON \
+-DUSE_CUDA=OFF\
+-DUSE_SSE=OFF\
+-DUSE_LAPACK=OFF\
+-DUSE_OPENCV=OFF\
+-DUSE_OPENMP=OFF\
 
 Review comment:
   No OpenMP? Android phones usually have around 4 to 8 cores, so we should 
have it enabled.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #11169: Fix ci/build.py using remote cache locally

2018-06-06 Thread GitBox
larroy commented on a change in pull request #11169: Fix ci/build.py using 
remote cache locally
URL: https://github.com/apache/incubator-mxnet/pull/11169#discussion_r193583993
 
 

 ##
 File path: ci/build.py
 ##
 @@ -221,56 +221,62 @@ def script_name() -> str:
 help="go in a shell inside the container",
 action='store_true')
 
-parser.add_argument("--docker-registry",
-help="Dockerhub registry name to retrieve cache from",
+parser.add_argument("-d", "--docker-registry",
+help="Dockerhub registry name to retrieve cache from. 
Default is 'mxnetci'",
 default='mxnetci',
 type=str)
 
+parser.add_argument("-c", "--cache", action="store_true",
 
 Review comment:
   Disagree. Locally we don't want to use the cache; this is just an artifact for 
CI. Please merge this PR. It is working and should work as intended in CI.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #11169: Fix ci/build.py using remote cache locally

2018-06-06 Thread GitBox
larroy commented on a change in pull request #11169: Fix ci/build.py using 
remote cache locally
URL: https://github.com/apache/incubator-mxnet/pull/11169#discussion_r193583993
 
 

 ##
 File path: ci/build.py
 ##
 @@ -221,56 +221,62 @@ def script_name() -> str:
 help="go in a shell inside the container",
 action='store_true')
 
-parser.add_argument("--docker-registry",
-help="Dockerhub registry name to retrieve cache from",
+parser.add_argument("-d", "--docker-registry",
+help="Dockerhub registry name to retrieve cache from. 
Default is 'mxnetci'",
 default='mxnetci',
 type=str)
 
+parser.add_argument("-c", "--cache", action="store_true",
 
 Review comment:
   Disagree. Locally we don't want to use the cache; this is just an artifact for 
CI. Please merge this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on issue #11055: Added support android64

2018-06-06 Thread GitBox
larroy commented on issue #11055: Added support android64 
URL: https://github.com/apache/incubator-mxnet/pull/11055#issuecomment-395239502
 
 
   @marcoabreu @szha @piiswrong  please merge


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #11155: [MXNET-521] Add Facebook open-graph tag integration

2018-06-06 Thread GitBox
aaronmarkham commented on issue #11155: [MXNET-521] Add Facebook open-graph tag 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11155#issuecomment-395236336
 
 
   @szha Can you please merge this? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yzhliu commented on issue #10462: [MXNET-62] add test against spark integration

2018-06-06 Thread GitBox
yzhliu commented on issue #10462: [MXNET-62] add test against spark integration
URL: https://github.com/apache/incubator-mxnet/pull/10462#issuecomment-395234629
 
 
   @CodingCat Can you check whether the failed test is expected? Looks like it 
is related to what you modified.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on a change in pull request #11179: [MXNET-404] elemwise_add/sub between rsp and rsp on GPU

2018-06-06 Thread GitBox
haojin2 commented on a change in pull request #11179: [MXNET-404] 
elemwise_add/sub between rsp and rsp on GPU
URL: https://github.com/apache/incubator-mxnet/pull/11179#discussion_r193574058
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -156,7 +156,7 @@ class NDArray {
   }
 
   /* \brief Check whether the two arrays are the same array */
-  inline bool IsSame(const NDArray& other) {
+  inline bool IsSame(const NDArray& other) const {
 
 Review comment:
   @piiswrong I made the change here so that I can also call this function when 
I have a `const NDArray` object.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #11179: [MXNET-404] elemwise_add/sub between rsp and rsp on GPU

2018-06-06 Thread GitBox
haojin2 commented on issue #11179: [MXNET-404] elemwise_add/sub between rsp and 
rsp on GPU
URL: https://github.com/apache/incubator-mxnet/pull/11179#issuecomment-395226493
 
 
   @eric-haibin-lin Please give a review when you have time, thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #11179: [MXNET-404] elemwise_add/sub between rsp and rsp on GPU

2018-06-06 Thread GitBox
haojin2 commented on issue #11179: [MXNET-404] elemwise_add/sub between rsp and 
rsp on GPU
URL: https://github.com/apache/incubator-mxnet/pull/11179#issuecomment-395226376
 
 
   Benchmark results:
   kWriteTo: (lhs_density rhs_density speedup)
   1.00 % 1.00 % 25.124997331340733
   1.00 % 0.50 % 31.238362675588835
   1.00 % 0.10 % 39.14725913534424
   1.00 % 0.05 % 40.186331497357656
   1.00 % 0.01 % 41.54522092845207
   1.00 % 0.00 % 115.1436676461348
   0.50 % 1.00 % 30.684299577090243
   0.50 % 0.50 % 41.066164904788266
   0.50 % 0.10 % 55.053740609087725
   0.50 % 0.05 % 57.572661839483324
   0.50 % 0.01 % 59.64072908956329
   0.50 % 0.00 % 173.54969421069572
   0.10 % 1.00 % 38.829209311971795
   0.10 % 0.50 % 55.40661678968209
   0.10 % 0.10 % 82.4112095641801
   0.10 % 0.05 % 87.61740457731939
   0.10 % 0.01 % 93.37161339877105
   0.10 % 0.00 % 306.6551990025265
   0.05 % 1.00 % 39.83933545753898
   0.05 % 0.50 % 57.84548585858899
   0.05 % 0.10 % 87.84864769131102
   0.05 % 0.05 % 95.30841559924941
   0.05 % 0.01 % 101.20527355911096
   0.05 % 0.00 % 334.61053120754985
   0.01 % 1.00 % 41.291302756016655
   0.01 % 0.50 % 59.7229061419413
   0.01 % 0.10 % 95.01911403455347
   0.01 % 0.05 % 101.25847977888479
   0.01 % 0.01 % 109.54648495558651
   0.01 % 0.00 % 365.9305234720645
   0.00 % 1.00 % 119.17642326889458
   0.00 % 0.50 % 181.91244221692375
   0.00 % 0.10 % 302.7410802494129
   0.00 % 0.05 % 318.57223936052355
   0.00 % 0.01 % 360.25671221787485
   0.00 % 0.00 % 556.9824540639702
   kWriteInplace on lhs: (lhs_density rhs_density speedup)
   100.00 % 100.00 % 0.9877658633734423
   100.00 % 1.00 % 70.37739238060738
   100.00 % 0.50 % 119.9069169140413
   100.00 % 0.10 % 291.33264582259096
   100.00 % 0.05 % 351.9332843469742
   100.00 % 0.01 % 428.11531043350476
   100.00 % 0.00 % 568.4419591440868
   kWriteInplace on rhs: (lhs_density rhs_density speedup)
   100.00 % 100.00 % 0.9823963498050752
   1.00 % 100.00 % 69.63479862099362
   0.50 % 100.00 % 118.04205950892886
   0.10 % 100.00 % 294.0972686031126
   0.05 % 100.00 % 358.42532087562114
   0.01 % 100.00 % 429.2050067814533
   0.00 % 100.00 % 592.4116369131955


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #11179: [MXNET-404] elemwise_add/sub between rsp and rsp on GPU

2018-06-06 Thread GitBox
haojin2 commented on issue #11179: [MXNET-404] elemwise_add/sub between rsp and 
rsp on GPU
URL: https://github.com/apache/incubator-mxnet/pull/11179#issuecomment-395225648
 
 
   Benchmark script:
   ```Python
   import mxnet as mx
   import sys
   import os
   import scipy
   import numpy as np
   from mxnet.test_utils import rand_ndarray, assert_almost_equal
   import time
   
   def measure_cost(repeat, a, b, out=None):
       # start bench
       start = time.time()
       results = []
       for i in range(repeat):
           results.append(mx.nd.elemwise_add(a, b, out=out))
       for result in results:
           result.wait_to_read()
       end = time.time()
       diff = end - start
       return diff / repeat
   
   def measure_fallback(repeat, a):
       # start bench
       start = time.time()
       results = []
       for i in range(repeat):
           results.append(a.tostype('default'))
       for result in results:
           result.wait_to_read()
       end = time.time()
       diff = end - start
       return diff / repeat
   
   def main():
       shape = (100, 512)
       context = mx.gpu(0)
       # context = mx.cpu()
       for lhs_density in [0.01, 0.005, 0.001, 0.0005, 0.0001, 0.000]:
           mx_lhs = rand_ndarray(shape, stype='row_sparse', density=lhs_density).as_in_context(context)
           mx_lhs_dns = mx_lhs.tostype('default')
           for rhs_density in [0.01, 0.005, 0.001, 0.0005, 0.0001, 0.000]:
               mx_rhs = rand_ndarray(shape=shape, stype='row_sparse', density=rhs_density).as_in_context(context)
               mx_rhs_dns = mx_rhs.tostype('default')
               # warmup
               sparse_cost = 0.0
               dns_cost = 0.0
               np_lhs = mx_lhs_dns.asnumpy()
               check = mx.nd.elemwise_add(mx_lhs, mx_rhs)
               np_lhs = np_lhs + mx_rhs.asnumpy()
               assert_almost_equal(check.asnumpy(), np_lhs, atol=1e-5, rtol=1e-4)
               mx.nd.waitall()
               for i in range(100):
                   sparse_cost += measure_cost(1, mx_lhs, mx_rhs)
                   dns_cost += measure_cost(1, mx_lhs_dns, mx_rhs_dns)
               print("%.2f %% %.2f %%" % (lhs_density*100, rhs_density*100), dns_cost / sparse_cost)
   
       for rhs_density in [1.000, 0.01, 0.005, 0.001, 0.0005, 0.0001, 0.000]:
           mx_lhs_dns = mx.nd.ones(shape, ctx=context)
           mx_lhs = mx_lhs_dns.tostype('row_sparse')
           mx_rhs = rand_ndarray(shape=shape, stype='row_sparse', density=rhs_density).as_in_context(context)
           mx_rhs_dns = mx_rhs.tostype('default')
           # warmup
           sparse_cost = 0.0
           dns_cost = 0.0
           np_lhs = mx_lhs_dns.asnumpy()
           mx.nd.elemwise_add(mx_lhs, mx_rhs, out=mx_lhs)
           np_lhs = np_lhs + mx_rhs.asnumpy()
           assert_almost_equal(mx_lhs.asnumpy(), np_lhs, atol=1e-5, rtol=1e-4)
           mx.nd.waitall()
           for i in range(100):
               sparse_cost += measure_cost(1, mx_lhs, mx_rhs, out=mx_lhs)
               dns_cost += measure_cost(1, mx_lhs_dns, mx_rhs_dns, out=mx_lhs_dns)
           print("%.2f %% %.2f %%" % (1.0*100, rhs_density*100), dns_cost / sparse_cost)
   
       for lhs_density in [1.000, 0.01, 0.005, 0.001, 0.0005, 0.0001, 0.000]:
           mx_rhs_dns = mx.nd.ones(shape, ctx=context)
           mx_rhs = mx_rhs_dns.tostype('row_sparse')
           mx_lhs = rand_ndarray(shape=shape, stype='row_sparse', density=lhs_density).as_in_context(context)
           mx_lhs_dns = mx_lhs.tostype('default')
           # warmup
           sparse_cost = 0.0
           dns_cost = 0.0
           np_rhs = mx_rhs_dns.asnumpy()
           mx.nd.elemwise_add(mx_lhs, mx_rhs, out=mx_rhs)
           np_rhs = np_rhs + mx_lhs.asnumpy()
           assert_almost_equal(mx_rhs.asnumpy(), np_rhs, atol=1e-5, rtol=1e-4)
           mx.nd.waitall()
           for i in range(100):
               sparse_cost += measure_cost(1, mx_lhs, mx_rhs, out=mx_rhs)
               dns_cost += measure_cost(1, mx_lhs_dns, mx_rhs_dns, out=mx_rhs_dns)
           print("%.2f %% %.2f %%" % (1.0*100, lhs_density*100), dns_cost / sparse_cost)
   
   
   if __name__ == "__main__":
       main()
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 opened a new pull request #11179: [MXNET-404] elemwise_add/sub between rsp and rsp on GPU

2018-06-06 Thread GitBox
haojin2 opened a new pull request #11179: [MXNET-404] elemwise_add/sub between 
rsp and rsp on GPU
URL: https://github.com/apache/incubator-mxnet/pull/11179
 
 
   ## Description ##
   As title
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Add support for elemwise_add/sub between rsp and rsp on GPU
   - [x] Optimization for in-place case
   
   ## Comments ##
   For performance benchmark results please see comments.
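
   A minimal usage sketch of what this PR enables (the shape and densities follow 
the benchmark script posted in the comments; it assumes a CUDA build with a GPU 
available):

   ```python
   import mxnet as mx
   from mxnet.test_utils import rand_ndarray

   ctx = mx.gpu(0)
   lhs = rand_ndarray((100, 512), stype='row_sparse', density=0.01).as_in_context(ctx)
   rhs = rand_ndarray((100, 512), stype='row_sparse', density=0.01).as_in_context(ctx)
   out = mx.nd.elemwise_add(lhs, rhs)   # rsp + rsp now uses a sparse kernel on GPU
   print(out.stype)                     # row_sparse
   ```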


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold commented on issue #10824: Segmentation Fault when using as_in_context

2018-06-06 Thread GitBox
zhreshold commented on issue #10824: Segmentation Fault when using as_in_context
URL: 
https://github.com/apache/incubator-mxnet/issues/10824#issuecomment-395223041
 
 
   The script ran on the latest master with no problem. 
   I don't think it's related to the Gluon dataloader, though; more likely MKL in 1.1.0? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11177: mxnet.base.MXNetError

2018-06-06 Thread GitBox
marcoabreu commented on issue #11177: mxnet.base.MXNetError
URL: 
https://github.com/apache/incubator-mxnet/issues/11177#issuecomment-395221756
 
 
   Hello @RoacherM, could you please paste your code? It seems there is a 
problem with your network design.
   
   Additionally, I'd like to refer you to our user discussion forum at 
https://discuss.mxnet.io/ where you'll find additional help from other users.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu closed pull request #11175: [1.2.0] Fix test_sparse_mathematical_core sensitivity to scipy v1.1 (#10961)

2018-06-06 Thread GitBox
marcoabreu closed pull request #11175: [1.2.0] Fix 
test_sparse_mathematical_core sensitivity to scipy v1.1 (#10961)
URL: https://github.com/apache/incubator-mxnet/pull/11175
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/python/unittest/test_sparse_operator.py 
b/tests/python/unittest/test_sparse_operator.py
index 2c9bedde5ea..e2a040a0681 100644
--- a/tests/python/unittest/test_sparse_operator.py
+++ b/tests/python/unittest/test_sparse_operator.py
@@ -1035,12 +1035,16 @@ def check_mathematical_core(stype, 
output_grad_stype=None,
 
 try:
 from scipy import special as scipy_special
-import_succeeded = True
+# On scipy v1.0, psi([0, -1, -2, -3, ...]) = [ inf, inf, inf, 
inf, ...]
+# On scipy v1.1, psi([0, -1, -2, -3, ...]) = [-inf, nan, nan, 
nan, ...]
+# Map the behavior of v1.1 psi() to that of v1.0 for ints <= 0 
for consistency
+scipy_psi = np.vectorize(lambda x: np.inf if 
float(x).is_integer() and x <= 0 else
+ scipy_special.psi(x))
 # gamma
 check_sparse_mathematical_core("gamma", stype,
lambda x: 
mx.sym.sparse.gamma(x),
lambda x: 
scipy_special.gamma(x),
-   lambda x: 
scipy_special.gamma(x) * scipy_special.psi(x),
+   lambda x: 
scipy_special.gamma(x) * scipy_psi(x),

output_grad_stype=output_grad_stype,

input_grad_stype=input_grad_stype,
force_overlap=force_overlap,
@@ -1049,17 +1053,14 @@ def check_mathematical_core(stype, 
output_grad_stype=None,
 check_sparse_mathematical_core("gammaln", stype,
lambda x: 
mx.sym.sparse.gammaln(x),
lambda x: 
scipy_special.gammaln(x),
-   lambda x: scipy_special.psi(x),
+   lambda x: scipy_psi(x),

output_grad_stype=output_grad_stype,

input_grad_stype=input_grad_stype,
force_overlap=force_overlap,
density=density, 
ograd_density=ograd_density)
 
-except:
-if import_succeeded == False:
-print("Could not import scipy. Skipping unit tests for 
special functions")
-else:
-raise
+except ImportError:
+print("Could not import scipy. Skipping unit tests for special 
functions")
 
 for i in range(1):
 print("pass", i)


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch v1.2.0 updated: Fix test_sparse_mathematical_core sensitivity to scipy v1.1 (#10961) (#11175)

2018-06-06 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch v1.2.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.2.0 by this push:
 new c2bfcf0  Fix test_sparse_mathematical_core sensitivity to scipy v1.1 
(#10961) (#11175)
c2bfcf0 is described below

commit c2bfcf093100081816a34fe25aa794a4567d5e27
Author: Marco de Abreu 
AuthorDate: Wed Jun 6 23:34:32 2018 +0200

Fix test_sparse_mathematical_core sensitivity to scipy v1.1 (#10961) 
(#11175)
---
 tests/python/unittest/test_sparse_operator.py | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/tests/python/unittest/test_sparse_operator.py 
b/tests/python/unittest/test_sparse_operator.py
index 2c9bedd..e2a040a 100644
--- a/tests/python/unittest/test_sparse_operator.py
+++ b/tests/python/unittest/test_sparse_operator.py
@@ -1035,12 +1035,16 @@ def test_sparse_mathematical_core():
 
 try:
 from scipy import special as scipy_special
-import_succeeded = True
+# On scipy v1.0, psi([0, -1, -2, -3, ...]) = [ inf, inf, inf, 
inf, ...]
+# On scipy v1.1, psi([0, -1, -2, -3, ...]) = [-inf, nan, nan, 
nan, ...]
+# Map the behavior of v1.1 psi() to that of v1.0 for ints <= 0 
for consistency
+scipy_psi = np.vectorize(lambda x: np.inf if 
float(x).is_integer() and x <= 0 else
+ scipy_special.psi(x))
 # gamma
 check_sparse_mathematical_core("gamma", stype,
lambda x: 
mx.sym.sparse.gamma(x),
lambda x: 
scipy_special.gamma(x),
-   lambda x: 
scipy_special.gamma(x) * scipy_special.psi(x),
+   lambda x: 
scipy_special.gamma(x) * scipy_psi(x),

output_grad_stype=output_grad_stype,

input_grad_stype=input_grad_stype,
force_overlap=force_overlap,
@@ -1049,17 +1053,14 @@ def test_sparse_mathematical_core():
 check_sparse_mathematical_core("gammaln", stype,
lambda x: 
mx.sym.sparse.gammaln(x),
lambda x: 
scipy_special.gammaln(x),
-   lambda x: scipy_special.psi(x),
+   lambda x: scipy_psi(x),

output_grad_stype=output_grad_stype,

input_grad_stype=input_grad_stype,
force_overlap=force_overlap,
density=density, 
ograd_density=ograd_density)
 
-except:
-if import_succeeded == False:
-print("Could not import scipy. Skipping unit tests for 
special functions")
-else:
-raise
+except ImportError:
+print("Could not import scipy. Skipping unit tests for special 
functions")
 
 for i in range(1):
 print("pass", i)

-- 
To stop receiving notification emails like this one, please contact
marcoab...@apache.org.


[GitHub] marcoabreu commented on issue #11175: [1.2.0] Fix test_sparse_mathematical_core sensitivity to scipy v1.1 (#10961)

2018-06-06 Thread GitBox
marcoabreu commented on issue #11175: [1.2.0] Fix test_sparse_mathematical_core 
sensitivity to scipy v1.1 (#10961)
URL: https://github.com/apache/incubator-mxnet/pull/11175#issuecomment-395221316
 
 
   Thank you, @anirudh2290 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold commented on issue #11138: minor fixes to example/ssd

2018-06-06 Thread GitBox
zhreshold commented on issue #11138: minor fixes to example/ssd
URL: https://github.com/apache/incubator-mxnet/pull/11138#issuecomment-395220791
 
 
   @larroy care to chime in?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11175: [1.2.0] Fix test_sparse_mathematical_core sensitivity to scipy v1.1 (#10961)

2018-06-06 Thread GitBox
marcoabreu commented on issue #11175: [1.2.0] Fix test_sparse_mathematical_core 
sensitivity to scipy v1.1 (#10961)
URL: https://github.com/apache/incubator-mxnet/pull/11175#issuecomment-395220751
 
 
   CI passed for everything besides Scala. It's good to merge IMO.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold closed pull request #11138: minor fixes to example/ssd

2018-06-06 Thread GitBox
zhreshold closed pull request #11138: minor fixes to example/ssd
URL: https://github.com/apache/incubator-mxnet/pull/11138
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/example/ssd/tools/prepare_dataset.py 
b/example/ssd/tools/prepare_dataset.py
old mode 100644
new mode 100755
index 55c95bea95c..c031f04d4fe
--- a/example/ssd/tools/prepare_dataset.py
+++ b/example/ssd/tools/prepare_dataset.py
@@ -1,3 +1,6 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
diff --git a/example/ssd/tools/prepare_pascal.sh 
b/example/ssd/tools/prepare_pascal.sh
old mode 100644
new mode 100755
index b55a2a546e7..97eea262ac4
--- a/example/ssd/tools/prepare_pascal.sh
+++ b/example/ssd/tools/prepare_pascal.sh
@@ -18,5 +18,5 @@
 # under the License.
 
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-python $DIR/prepare_dataset.py --dataset pascal --year 2007,2012 --set 
trainval --target $DIR/../data/train.lst
-python $DIR/prepare_dataset.py --dataset pascal --year 2007 --set test 
--target $DIR/../data/val.lst --no-shuffle
+$DIR/prepare_dataset.py --dataset pascal --year 2007,2012 --set trainval 
--target $DIR/../data/train.lst
+$DIR/prepare_dataset.py --dataset pascal --year 2007 --set test --target 
$DIR/../data/val.lst --no-shuffle
diff --git a/example/ssd/train.py b/example/ssd/train.py
old mode 100644
new mode 100755
index 1ad70bd4ea1..09c618a9642
--- a/example/ssd/train.py
+++ b/example/ssd/train.py
@@ -1,3 +1,6 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -16,12 +19,11 @@
 # under the License.
 
 import argparse
-import tools.find_mxnet
 import mxnet as mx
 import os
-import sys
 from train.train_net import train_net
 
+
 def parse_args():
 parser = argparse.ArgumentParser(description='Train a Single-shot 
detection network')
 parser.add_argument('--train-path', dest='train_path', help='train record 
to use',


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: minor fixes to example/ssd (#11138)

2018-06-06 Thread zhreshold
This is an automated email from the ASF dual-hosted git repository.

zhreshold pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 3f30b44  minor fixes to example/ssd (#11138)
3f30b44 is described below

commit 3f30b4423646f2fe16bd5fb1234b6f9d0d4e52fc
Author: Pedro Larroy <928489+lar...@users.noreply.github.com>
AuthorDate: Wed Jun 6 23:31:40 2018 +0200

minor fixes to example/ssd (#11138)
---
 example/ssd/tools/prepare_dataset.py | 3 +++
 example/ssd/tools/prepare_pascal.sh  | 4 ++--
 example/ssd/train.py | 6 --
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/example/ssd/tools/prepare_dataset.py 
b/example/ssd/tools/prepare_dataset.py
old mode 100644
new mode 100755
index 55c95be..c031f04
--- a/example/ssd/tools/prepare_dataset.py
+++ b/example/ssd/tools/prepare_dataset.py
@@ -1,3 +1,6 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
diff --git a/example/ssd/tools/prepare_pascal.sh 
b/example/ssd/tools/prepare_pascal.sh
old mode 100644
new mode 100755
index b55a2a5..97eea26
--- a/example/ssd/tools/prepare_pascal.sh
+++ b/example/ssd/tools/prepare_pascal.sh
@@ -18,5 +18,5 @@
 # under the License.
 
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-python $DIR/prepare_dataset.py --dataset pascal --year 2007,2012 --set 
trainval --target $DIR/../data/train.lst
-python $DIR/prepare_dataset.py --dataset pascal --year 2007 --set test 
--target $DIR/../data/val.lst --no-shuffle
+$DIR/prepare_dataset.py --dataset pascal --year 2007,2012 --set trainval 
--target $DIR/../data/train.lst
+$DIR/prepare_dataset.py --dataset pascal --year 2007 --set test --target 
$DIR/../data/val.lst --no-shuffle
diff --git a/example/ssd/train.py b/example/ssd/train.py
old mode 100644
new mode 100755
index 1ad70bd..09c618a
--- a/example/ssd/train.py
+++ b/example/ssd/train.py
@@ -1,3 +1,6 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -16,12 +19,11 @@
 # under the License.
 
 import argparse
-import tools.find_mxnet
 import mxnet as mx
 import os
-import sys
 from train.train_net import train_net
 
+
 def parse_args():
 parser = argparse.ArgumentParser(description='Train a Single-shot 
detection network')
 parser.add_argument('--train-path', dest='train_path', help='train record 
to use',

-- 
To stop receiving notification emails like this one, please contact
zhresh...@apache.org.


[GitHub] marcoabreu commented on issue #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-06 Thread GitBox
marcoabreu commented on issue #10827: [MXNET-405][WIP] Add 2 new pipelines to 
the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#issuecomment-395220488
 
 
   I'm currently working on a system that is going to track all jobs that are 
run on CI and automatically generate reports - including statistics about flaky 
tests. This will be applied to all branches as well as nightly.
   
   I think we'll go an even more hardcore way once my system is in place: 
we block all releases until the tests are in a good state. At the moment, we 
don't have hard data (impact of each test, failure rate, environments, etc.). 
Once that data is available, we can take hard actions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 closed pull request #10815: [MXNET-402] add integer type for pad

2018-06-06 Thread GitBox
zhanghang1989 closed pull request #10815: [MXNET-402] add integer type for pad
URL: https://github.com/apache/incubator-mxnet/pull/10815
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/pad.cc b/src/operator/pad.cc
index 2332c93b8d5..6d28b47e3b6 100644
--- a/src/operator/pad.cc
+++ b/src/operator/pad.cc
@@ -50,7 +50,7 @@ void single_image_edge(const Tensor dst,
   int oStartX = std::max(0, pad_l);
   int oStartY = std::max(0, pad_t);
 
-  int k, ip_x, ip_y;
+  size_t k, ip_x, ip_y;
 #pragma omp parallel for private(k, ip_x, ip_y)
   for (k = 0; k < nslices; k++) {
 int i, j;
@@ -99,7 +99,7 @@ void single_image_edge_grad(const Tensor 
_in,
   int oStartX = std::max(0, pad_l);
   int oStartY = std::max(0, pad_t);
 
-  int k, ip_x, ip_y;
+  size_t k, ip_x, ip_y;
 #pragma omp parallel for private(k, ip_x, ip_y)
   for (k = 0; k < nslices; k++) {
 int i, j;
@@ -200,7 +200,7 @@ void single_image_reflect(const Tensor ,
   int oStartX = std::max(0, pad_l);
   int oStartY = std::max(0, pad_t);
 
-  int k, ip_x, ip_y;
+  size_t k, ip_x, ip_y;
 #pragma omp parallel for private(k, ip_x, ip_y)
 
   for (k = 0; k < nslices; k++) {
@@ -251,7 +251,7 @@ void single_image_reflect_grad(const Tensor 
_in,
   int oStartX = std::max(0, pad_l);
   int oStartY = std::max(0, pad_t);
 
-  int k, ip_x, ip_y;
+  size_t k, ip_x, ip_y;
 #pragma omp parallel for private(k, ip_x, ip_y)
 
   for (k = 0; k < nslices; k++) {
@@ -312,7 +312,7 @@ void single_image_edge(const Tensor dst,
   int oStartY = std::max(0, pad_t);
   int oStartZ = std::max(0, pad_f);
 
-  int k, ip_x, ip_y, ip_z;
+  size_t k, ip_x, ip_y, ip_z;
 #pragma omp parallel for private(k, ip_x, ip_y, ip_z)
   for (k = 0; k < nslices; k++) {
 int i, j, z;
@@ -380,7 +380,7 @@ void single_image_edge_grad(const Tensor 
_in,
   int oStartY = std::max(0, pad_t);
   int oStartZ = std::max(0, pad_f);
 
-  int k, ip_x, ip_y, ip_z;
+  size_t k, ip_x, ip_y, ip_z;
 #pragma omp parallel for private(k, ip_x, ip_y, ip_z)
   for (k = 0; k < nslices; k++) {
 int i, j, z;
@@ -508,10 +508,10 @@ void single_image_reflect(const Tensor 
,
   int oStartY = std::max(0, pad_t);
   int oStartZ = std::max(0, pad_f);
 
-  int l, ip_x, ip_y, ip_z;
+  size_t l, ip_x, ip_y, ip_z;
 #pragma omp parallel for private(l, ip_x, ip_y, ip_z)
   for (l = 0; l < nslices; l++) {
-int i, j, k;
+size_t i, j, k;
 for (k = 0; k < odepth; k++) {
   for (i = 0; i < oheight; i++) {
 for (j = 0; j < owidth; j++) {
@@ -576,10 +576,10 @@ void single_image_reflect_grad(const Tensor _in,
   int oStartY = std::max(0, pad_t);
   int oStartZ = std::max(0, pad_f);
 
-  int l, ip_x, ip_y, ip_z;
+  size_t l, ip_x, ip_y, ip_z;
 /*#pragma omp parallel for private(l, ip_x, ip_y, ip_z)*/
   for (l = 0; l < nslices; l++) {
-int i, j, k;
+size_t i, j, k;
 for (k = 0; k < odepth; k++) {
   for (i = 0; i < oheight; i++) {
 for (j = 0; j < owidth; j++) {
@@ -669,7 +669,7 @@ namespace op {
 template <>
 Operator *CreateOp(PadParam param, int dtype) {
   Operator *op = NULL;
-  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { op = new PadOp(param); 
})
+  MSHADOW_TYPE_SWITCH(dtype, DType, { op = new PadOp(param); })
   return op;
 }
 
diff --git a/src/operator/pad.cu b/src/operator/pad.cu
index 372683a2be8..ff612b0d746 100644
--- a/src/operator/pad.cu
+++ b/src/operator/pad.cu
@@ -729,7 +729,7 @@ namespace op {
 template <>
 Operator *CreateOp(PadParam param, int dtype) {
   Operator *op = NULL;
-  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { op = new PadOp(param); 
})
+  MSHADOW_TYPE_SWITCH(dtype, DType, { op = new PadOp(param); })
   return op;
 }
 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong closed pull request #11170: Removing tutorial tests

2018-06-06 Thread GitBox
piiswrong closed pull request #11170: Removing tutorial tests
URL: https://github.com/apache/incubator-mxnet/pull/11170
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/Jenkinsfile b/Jenkinsfile
index 288f9a4a317..28edda00959 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -824,28 +824,6 @@ try {
   }
 }
   }
-},
-'tutorial tests Python 2 GPU': {
-  node('mxnetlinux-gpu') {
-ws('workspace/it-tutorials-py2') {
-  timeout(time: max_time, unit: 'MINUTES') {
-init_git()
-unpack_lib('gpu')
-docker_run('ubuntu_gpu', 'tutorialtest_ubuntu_python2_gpu', true, 
'3g')
-  }
-}
-  }
-},
-'tutorial tests Python 3 GPU': {
-  node('mxnetlinux-gpu') {
-ws('workspace/it-tutorials-py3') {
-  timeout(time: max_time, unit: 'MINUTES') {
-init_git()
-unpack_lib('gpu')
-docker_run('ubuntu_gpu', 'tutorialtest_ubuntu_python3_gpu', true, 
'3g')
-  }
-}
-  }
 }
   }
 
diff --git a/tests/tutorials/test_tutorials.py 
b/tests/tutorials/test_tutorials.py
index 4c19a8ef6ed..507036409f6 100644
--- a/tests/tutorials/test_tutorials.py
+++ b/tests/tutorials/test_tutorials.py
@@ -79,7 +79,10 @@ def _test_tutorial_nb(tutorial):
 os.makedirs(working_dir)
 try:
 notebook = nbformat.read(tutorial_path + '.ipynb', 
as_version=IPYTHON_VERSION)
-time.sleep(0.5) # Adding a small delay to allow time for sockets to be 
freed
+# Adding a small delay to allow time for sockets to be freed
+# stop-gap measure to battle the 1000ms linger of socket hard coded
+# in the kernel API code
+time.sleep(1.1) 
 if kernel is not None:
 eprocessor = ExecutePreprocessor(timeout=TIME_OUT, 
kernel_name=kernel)
 else:


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Removing tutorial tests (#11170)

2018-06-06 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 7511045  Removing tutorial tests (#11170)
7511045 is described below

commit 751104554046dc30f02889fe72fbab05bde0cb5c
Author: Thomas Delteil 
AuthorDate: Wed Jun 6 13:55:22 2018 -0700

Removing tutorial tests (#11170)

* removing tutorial tests

Removing tutorial tests for now until we figure out why they started 
failing so much

* extend sleep time to > 1s
---
 Jenkinsfile   | 22 --
 tests/tutorials/test_tutorials.py |  5 -
 2 files changed, 4 insertions(+), 23 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 288f9a4..28edda0 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -824,28 +824,6 @@ try {
   }
 }
   }
-},
-'tutorial tests Python 2 GPU': {
-  node('mxnetlinux-gpu') {
-ws('workspace/it-tutorials-py2') {
-  timeout(time: max_time, unit: 'MINUTES') {
-init_git()
-unpack_lib('gpu')
-docker_run('ubuntu_gpu', 'tutorialtest_ubuntu_python2_gpu', true, 
'3g')
-  }
-}
-  }
-},
-'tutorial tests Python 3 GPU': {
-  node('mxnetlinux-gpu') {
-ws('workspace/it-tutorials-py3') {
-  timeout(time: max_time, unit: 'MINUTES') {
-init_git()
-unpack_lib('gpu')
-docker_run('ubuntu_gpu', 'tutorialtest_ubuntu_python3_gpu', true, 
'3g')
-  }
-}
-  }
 }
   }
 
diff --git a/tests/tutorials/test_tutorials.py 
b/tests/tutorials/test_tutorials.py
index 4c19a8e..5070364 100644
--- a/tests/tutorials/test_tutorials.py
+++ b/tests/tutorials/test_tutorials.py
@@ -79,7 +79,10 @@ def _test_tutorial_nb(tutorial):
 os.makedirs(working_dir)
 try:
 notebook = nbformat.read(tutorial_path + '.ipynb', 
as_version=IPYTHON_VERSION)
-time.sleep(0.5) # Adding a small delay to allow time for sockets to be 
freed
+# Adding a small delay to allow time for sockets to be freed
+# stop-gap measure to battle the 1000ms linger of socket hard coded
+# in the kernel API code
+time.sleep(1.1) 
 if kernel is not None:
 eprocessor = ExecutePreprocessor(timeout=TIME_OUT, 
kernel_name=kernel)
 else:

-- 
To stop receiving notification emails like this one, please contact
j...@apache.org.


[GitHub] anirudh2290 commented on issue #11142: [MXNET-408] inplace ReLU activation (#10847)

2018-06-06 Thread GitBox
anirudh2290 commented on issue #11142: [MXNET-408] inplace ReLU activation 
(#10847)
URL: https://github.com/apache/incubator-mxnet/pull/11142#issuecomment-395210095
 
 
   @nswamy can you please make the other changes for the Scala package related to 
the patch version bump?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #11155: [MXNET-521] Add Facebook open-graph tag integration

2018-06-06 Thread GitBox
aaronmarkham commented on issue #11155: [MXNET-521] Add Facebook open-graph tag 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11155#issuecomment-394894911
 
 
   @thomelane @ThomasDelteil - any thoughts on this? Sharing should be 
better... I made the image according to the minimum requirements FB suggests.
   
https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/og-logo.png
   
   Here's the debug console... it shows the new image:
   
https://developers.facebook.com/tools/debug/sharing/?q=http%3A%2F%2F54.210.6.225%2Fapi%2Fpython%2Findex.html
   
   Much better than what is there now!
   
https://developers.facebook.com/tools/debug/sharing/?q=https%3A%2F%2Fmxnet.incubator.apache.org%2F
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold commented on a change in pull request #11162: Add valid_thresh to contrib.box_nms

2018-06-06 Thread GitBox
zhreshold commented on a change in pull request #11162: Add valid_thresh to 
contrib.box_nms
URL: https://github.com/apache/incubator-mxnet/pull/11162#discussion_r193547733
 
 

 ##
 File path: src/operator/contrib/bounding_box-inl.h
 ##
@@ -145,6 +152,60 @@ inline uint32_t BoxNMSNumVisibleOutputs(const NodeAttrs& attrs) {
   return static_cast<uint32_t>(1);
 }
 
+template<typename DType>
+int FilterScores(mshadow::Tensor<cpu, 1, DType> out_scores,
+                 mshadow::Tensor<cpu, 1, DType> out_sorted_index,
+                 mshadow::Tensor<cpu, 1, DType> scores,
+                 mshadow::Tensor<cpu, 1, DType> sorted_index,
+                 float valid_thresh) {
+  index_t j = 0;
+  for (index_t i = 0; i < scores.size(0); i++) {
+    if (scores[i] > valid_thresh) {
+      out_scores[j] = scores[i];
+      out_sorted_index[j] = sorted_index[i];
+      j++;
+    }
+  }
+  return j;
+}
+
+#ifdef __CUDACC__
 
 Review comment:
   and the thrust headers as well


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold commented on issue #11162: Add valid_thresh to contrib.box_nms

2018-06-06 Thread GitBox
zhreshold commented on issue #11162: Add valid_thresh to contrib.box_nms
URL: https://github.com/apache/incubator-mxnet/pull/11162#issuecomment-395204631
 
 
   Essentially looks good, please see comments
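
   A minimal usage sketch, assuming the `valid_thresh` keyword lands on 
   `contrib.box_nms` as proposed in this PR and that the operator's default 
   per-box layout `[id, score, xmin, ymin, xmax, ymax]` is used:
   
   ```python
   import mxnet as mx
   
   # Two boxes for one batch element; the second has a near-zero score and
   # should be filtered out by valid_thresh before overlap suppression runs.
   boxes = mx.nd.array([[[0, 0.90, 0.1, 0.1, 0.6, 0.6],
                         [0, 0.01, 0.5, 0.5, 0.9, 0.9]]])
   
   out = mx.nd.contrib.box_nms(boxes, overlap_thresh=0.5, valid_thresh=0.05,
                               coord_start=2, score_index=1)
   print(out)  # suppressed or filtered entries come back as -1
   ```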


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold commented on a change in pull request #11162: Add valid_thresh to contrib.box_nms

2018-06-06 Thread GitBox
zhreshold commented on a change in pull request #11162: Add valid_thresh to 
contrib.box_nms
URL: https://github.com/apache/incubator-mxnet/pull/11162#discussion_r193547326
 
 

 ##
 File path: src/operator/contrib/bounding_box-inl.h
 ##
@@ -145,6 +152,60 @@ inline uint32_t BoxNMSNumVisibleOutputs(const NodeAttrs& attrs) {
   return static_cast<uint32_t>(1);
 }
 
+template<typename DType>
+int FilterScores(mshadow::Tensor<cpu, 1, DType> out_scores,
+                 mshadow::Tensor<cpu, 1, DType> out_sorted_index,
+                 mshadow::Tensor<cpu, 1, DType> scores,
+                 mshadow::Tensor<cpu, 1, DType> sorted_index,
+                 float valid_thresh) {
+  index_t j = 0;
+  for (index_t i = 0; i < scores.size(0); i++) {
+    if (scores[i] > valid_thresh) {
+      out_scores[j] = scores[i];
+      out_sorted_index[j] = sorted_index[i];
+      j++;
+    }
+  }
+  return j;
+}
+
+#ifdef __CUDACC__
 
 Review comment:
   move to bounding_box-inl.cuh and include


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rahul003 commented on issue #10435: [MXNET-289] Fix bugs in image classification example

2018-06-06 Thread GitBox
rahul003 commented on issue #10435: [MXNET-289] Fix bugs in image 
classification example 
URL: https://github.com/apache/incubator-mxnet/pull/10435#issuecomment-395203501
 
 
   @eric-haibin-lin The CI passed now, please take a look. Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #11041: gpu mem pool strategy

2018-06-06 Thread GitBox
szha commented on issue #11041: gpu mem pool strategy
URL: https://github.com/apache/incubator-mxnet/pull/11041#issuecomment-395202867
 
 
   I've simplified the implementation to exclude optimization using intrinsics 
and bit scans. They are backed up in 
https://github.com/szha/mxnet/tree/mem_strategy_backup


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hetong007 commented on issue #11027: Add standard ResNet data augmentation for ImageRecordIter

2018-06-06 Thread GitBox
hetong007 commented on issue #11027: Add standard ResNet data augmentation for 
ImageRecordIter
URL: https://github.com/apache/incubator-mxnet/pull/11027#issuecomment-395201777
 
 
   @piiswrong can you please review again? The CI is fragile and a successful 
build may take one or two restarts and cost over 5 hours. I'd like to include as 
many modifications as possible in one push, if needed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 commented on issue #11093: Some wrong with mxnet on spark: params.jars = jars.split(", |:")

2018-06-06 Thread GitBox
lanking520 commented on issue #11093: Some wrong with mxnet on spark: 
params.jars = jars.split(",|:")
URL: 
https://github.com/apache/incubator-mxnet/issues/11093#issuecomment-395199597
 
 
   Hi @liuzx32 , the jars will be used in this way:
   ```Scala
   jars.map(jar => SparkFiles.get(new File(jar).getName)).mkString(":")
   ```
   It seems we cannot directly place S3 or HDFS paths here, so it's not 
necessary to make this change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 commented on issue #10995: Some mxnet ctc_loss bug~

2018-06-06 Thread GitBox
lanking520 commented on issue #10995: Some mxnet ctc_loss bug~
URL: 
https://github.com/apache/incubator-mxnet/issues/10995#issuecomment-395188659
 
 
   @chinakook do you think it is a good idea for us to deprecate 
contrib.ctc_loss and use WarpCTC since these two are similar?
   @eric-haibin-lin , can you check this 
[document](https://github.com/apache/incubator-mxnet/blob/5106985ca9c885724cacedbb2d670c9f0feba45f/example/ctc/README.md#ctc-loss-in-mxnet)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on a change in pull request #11166: Update rnn_cell.py

2018-06-06 Thread GitBox
szha commented on a change in pull request #11166: Update rnn_cell.py
URL: https://github.com/apache/incubator-mxnet/pull/11166#discussion_r193536164
 
 

 ##
 File path: python/mxnet/rnn/rnn_cell.py
 ##
 @@ -39,11 +39,13 @@ def _cells_begin_state(cells, **kwargs):
 return sum([c.begin_state(**kwargs) for c in cells], [])
 
 def _cells_unpack_weights(cells, args):
+cells = [cells]
 
 Review comment:
   why? is this a bug? is there a test case that triggers this bug?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 commented on issue #10995: Some mxnet ctc_loss bug~

2018-06-06 Thread GitBox
lanking520 commented on issue #10995: Some mxnet ctc_loss bug~
URL: 
https://github.com/apache/incubator-mxnet/issues/10995#issuecomment-395188659
 
 
   @chinakook do you think it is a good idea for us to deprecate 
contrib.ctc_loss and use WarpCTC since these two are similar?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #11144: Save/Load Model?

2018-06-06 Thread GitBox
ThomasDelteil commented on issue #11144: Save/Load Model?
URL: 
https://github.com/apache/incubator-mxnet/issues/11144#issuecomment-395182660
 
 
   If your model is hybridizable and you want to shelve it for a while, the 
recommended way would be to use `.export('prefix', epoch=0)`.
   That way you can reload it later in a symbol block (right now it takes 3 
lines, but soon a new API will let you do that directly 
https://github.com/apache/incubator-mxnet/pull/11127), or in a different 
language binding altogether.
   
   If you are iterating on your model, the recommended way is 
`save_params/load_params` for the parameters, and to save the model definition 
in a `.py` file or simply pickle it.
   
   You can check this upcoming tutorial. 
https://github.com/indhub/mxnet/blob/ef1a5fb8ecfaf9f85c26a468cdbcfbfbab58c423/docs/tutorials/gluon/save_load_params.md
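
   For concreteness, a minimal sketch of both paths, assuming a small 
   hypothetical `Dense` network and the file prefix `'prefix'`:
   
   ```python
   import mxnet as mx
   from mxnet import gluon
   
   # A small stand-in network (any hybridizable Block works)
   net = gluon.nn.HybridSequential()
   with net.name_scope():
       net.add(gluon.nn.Dense(10))
   net.initialize()
   net.hybridize()
   net(mx.nd.ones((1, 20)))          # one forward pass so the graph is cached
   
   # Shelve it: writes prefix-symbol.json and prefix-0000.params
   net.export('prefix', epoch=0)
   
   # ...and the current three-line reload into a SymbolBlock
   sym = mx.sym.load('prefix-symbol.json')
   loaded = gluon.nn.SymbolBlock(sym, mx.sym.var('data'))
   loaded.collect_params().load('prefix-0000.params', ctx=mx.cpu())
   
   # While iterating, just save/load the parameters instead
   net.save_params('net.params')
   net.load_params('net.params', ctx=mx.cpu())
   ```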


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #11144: Save/Load Model?

2018-06-06 Thread GitBox
ThomasDelteil commented on issue #11144: Save/Load Model?
URL: 
https://github.com/apache/incubator-mxnet/issues/11144#issuecomment-395182660
 
 
   If your model is hybridizable and you want to shelve it for a while, the 
recommended way would be to use `.export('prefix', epoch=0)`.
   That way you can reload it later in a symbol block (right now it takes 3 
lines, but soon a new API will let you do that directly 
https://github.com/apache/incubator-mxnet/pull/11127#discussion_r192873630), or 
in a different language binding altogether.
   
   If you are iterating on your model, the recommended way is 
`save_params/load_params` for the parameters, and to save the model definition 
in a `.py` file or simply pickle it.
   
   You can check this upcoming tutorial. 
https://github.com/indhub/mxnet/blob/ef1a5fb8ecfaf9f85c26a468cdbcfbfbab58c423/docs/tutorials/gluon/save_load_params.md


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #11144: Save/Load Model?

2018-06-06 Thread GitBox
ThomasDelteil commented on issue #11144: Save/Load Model?
URL: 
https://github.com/apache/incubator-mxnet/issues/11144#issuecomment-395182660
 
 
   If your model is hybridizable and you want to shelve it for a while, the 
recommended way would be to use `.export('prefix', epoch=0)`.
   That way you can reload it later in a symbol block (right now it takes 3 
lines, but soon a new API will let you do that directly 
https://github.com/apache/incubator-mxnet/pull/11127#discussion_r192873630), or 
in a different language binding altogether.
   
   If you are iterating on your model, the recommended way is 
`save_params/load_params` for the parameters, and to save the model definition 
in a `.py` file or simply pickle it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #11144: Save/Load Model?

2018-06-06 Thread GitBox
ThomasDelteil commented on issue #11144: Save/Load Model?
URL: 
https://github.com/apache/incubator-mxnet/issues/11144#issuecomment-395182660
 
 
   If your model is hybridizable and you want to shelve it for a while, the 
recommended way would be to use `.export('prefix', epoch=0)`.
   That way you can reload it later in a symbol block, or in a different 
language binding altogether.
   
   If you are iterating on your model, the recommended way is 
`save_params/load_params` for the parameters, and to save the model definition 
in a `.py` file or simply pickle it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya opened a new pull request #11178: {WIP} L1 Norm operator

2018-06-06 Thread GitBox
anirudhacharya opened a new pull request #11178: {WIP} L1 Norm operator
URL: https://github.com/apache/incubator-mxnet/pull/11178
 
 
   ## Description ##
   Adds L1 Norm operator.
   - [ ] The backward function is currently wrong. Will fix it.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - L1Norm
   
   ## Comments ##
   - Should this have a sparse implementation? L1 norms are usually sparse 
compared to L2 (a dense reference sketch using existing ops follows below).
   - @haojin2 @eric-haibin-lin 
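
   A dense reference sketch for the L1 norm built only from existing ops (the 
   dedicated operator is what this PR adds):
   
   ```python
   import mxnet as mx
   
   x = mx.nd.array([[1.0, -2.0], [0.0, 3.0]])
   
   # Forward: the L1 norm is the sum of absolute values -> 6.0
   l1 = mx.nd.sum(mx.nd.abs(x))
   
   # Backward w.r.t. x: the (sub)gradient of |x| is sign(x), taking 0 at x == 0
   grad = mx.nd.sign(x)
   
   print(l1.asscalar())
   print(grad.asnumpy())
   ```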


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 commented on issue #11023: No Windows Support for Scala

2018-06-06 Thread GitBox
lanking520 commented on issue #11023: No Windows Support for Scala
URL: 
https://github.com/apache/incubator-mxnet/issues/11023#issuecomment-395174277
 
 
   Any updates?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: [MXNET-107]Fused GRU implementation for CPU (#10311)

2018-06-06 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 069026a  [MXNET-107]Fused GRU implementation for CPU (#10311)
069026a is described below

commit 069026ab1a9924fd870a625558e000b19b9b9507
Author: Hao Li 
AuthorDate: Thu Jun 7 02:38:03 2018 +0800

[MXNET-107]Fused GRU implementation for CPU (#10311)

* Add GRU Support and Test Case

* skip the gpu test case that has nothing to do with RNN GRU

* fix robust bug for gru backward

* fix bug for unifying weight parameter

* add GRU multiple layer and bidirection support with test case

* fix test case bug

* fix test case bug

* fix bug for memory issue

* fix bug for bidirection

* rebase code and fix bug for memory corruption issue

* fix gpu compile issue

* fix bug and enable some test cases

* fix robust bug

* trigger the build to check if quantize-gpu case is covered

* trigger the build to check if MKLDNN+GPU case is covered

* disable failed gpu test case of MKLDNN_UTIL_FUNC-MemFormat because it has 
nothing to do with this PR and will recover it once the issue is passed

* skip failed test_reduce test case temporarily as it has nothing to do 
with RNN

* enable several test cases

* retrigger the build

* rebase code from lstm

* rebase code for resolve conflict

* add gru code after resolve conflict

* fix bug for resolve conflict

* add Fused GRU code with test case

* retrigger the build

* add GetRecommendedOMPThreadCount for omp

* fix conflict issue

* add gru relate code

* fix bug for code

* update code for gru

* retrigger the build

* fix code about gru condition

* enhance test case to test gradient weights and bias

* fix bug for test case

* fix bug for test case

* fix bug about dropout condition and test case

* fix bug for test case

* fix bug for test case

* retrigger the build

* rebase code

* add gru code

* fix issues about namespace, removing define and memcpy

* retrigger the build

* fix issues and add cudnn_gru_bucketing.py test case

* retrigger the build

* update cudnn_rnn_bucketing.py test case

* update cudnn_rnn_bucketing.py test case

* update cudnn_rnn_bucketing.py test case

* add check for req[kParams] and kAddTo from cudnn_rnn-inl.h

* retrigger the build

* retrigger the build

* retrigger the build

* add kNullOp check

* retrigger the build

* update kNullOp support and test case for both GRU and LSTM

* update kAddToOp support for both GRU and LSTM
---
 ...nn_lstm_bucketing.py => cudnn_rnn_bucketing.py} |  33 +-
 python/mxnet/gluon/rnn/rnn_layer.py|   2 +-
 src/operator/rnn-inl.h |  57 +-
 src/operator/rnn_impl.h| 955 -
 tests/python/unittest/test_operator.py |  63 +-
 5 files changed, 1060 insertions(+), 50 deletions(-)

diff --git a/example/rnn/bucketing/cudnn_lstm_bucketing.py 
b/example/rnn/bucketing/cudnn_rnn_bucketing.py
similarity index 87%
rename from example/rnn/bucketing/cudnn_lstm_bucketing.py
rename to example/rnn/bucketing/cudnn_rnn_bucketing.py
index 84cfc9d..29a66a8 100644
--- a/example/rnn/bucketing/cudnn_lstm_bucketing.py
+++ b/example/rnn/bucketing/cudnn_rnn_bucketing.py
@@ -65,6 +65,8 @@ parser.add_argument('--stack-rnn', default=False,
 help='stack fused RNN cells to reduce communication 
overhead')
 parser.add_argument('--dropout', type=float, default='0.0',
 help='dropout probability (1.0 - keep probability)')
+parser.add_argument('--rnntype', type=str, default='lstm',
+help='rnn type: gru and lstm are supported')
 
 #buckets = [32]
 buckets = [10, 20, 30, 40, 50, 60]
@@ -97,13 +99,13 @@ def train(args):
 cell = mx.rnn.SequentialRNNCell()
 for i in range(args.num_layers):
 cell.add(mx.rnn.FusedRNNCell(args.num_hidden, num_layers=1,
- mode='lstm', prefix='lstm_l%d'%i,
+ mode=args.rnntype, 
prefix='%s_l%d'%(args.rnntype,i),
  bidirectional=args.bidirectional))
-if args.dropout > 0 and i < args.num_layers - 1:
-cell.add(mx.rnn.DropoutCell(args.dropout, prefix='lstm_d%d'%i))
+if args.dropout > 0 and i < args.num_layers - 1 and args.rnntype 
== 'lstm':
+
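
A minimal sketch of driving the fused path from the Python API with a toy
configuration; `mx.rnn.FusedRNNCell` with `mode='gru'` is what the updated
cudnn_rnn_bucketing.py above selects via the new --rnntype flag:

```python
import mxnet as mx

# Hypothetical sizes: 2 stacked GRU layers, 64 hidden units, 10 time steps.
cell = mx.rnn.FusedRNNCell(num_hidden=64, num_layers=2, mode='gru',
                           prefix='gru_', bidirectional=False)

data = mx.sym.Variable('data')            # layout 'NTC': (batch, time, feat)
outputs, states = cell.unroll(length=10, inputs=data, layout='NTC',
                              merge_outputs=True)

print(outputs.list_arguments())           # e.g. ['data', 'gru_parameters']
print(outputs.list_outputs())
```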

[GitHub] lanking520 commented on issue #11056: [Feature Request] broadcast_like operator

2018-06-06 Thread GitBox
lanking520 commented on issue #11056: [Feature Request] broadcast_like operator
URL: 
https://github.com/apache/incubator-mxnet/issues/11056#issuecomment-395171372
 
 
   I have created a JIRA ticket for this: 
https://issues.apache.org/jira/browse/MXNET-524.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-06 Thread GitBox
larroy commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 new 
pipelines to the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#discussion_r193510935
 
 

 ##
 File path: ci/docker/Dockerfile.build.ubuntu_emscripten
 ##
 @@ -0,0 +1,58 @@
+# -*- mode: dockerfile -*-
 
 Review comment:
   Why do we even need emscripten? what's the use case?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-06 Thread GitBox
larroy commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 new 
pipelines to the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#discussion_r193510715
 
 

 ##
 File path: ci/docker/Dockerfile.build.ubuntu_base
 ##
 @@ -0,0 +1,39 @@
+# -*- mode: dockerfile -*-
 
 Review comment:
   Can we call this ubuntu base gpu?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-06 Thread GitBox
ThomasDelteil commented on issue #10827: [MXNET-405][WIP] Add 2 new pipelines 
to the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#issuecomment-395162833
 
 
   I'm sure the tests in the main CI were all passing the first time they were 
introduced too , but now we are in a situation were the master CI run is 
failing more often than not. I guess what I am asking is, what plan do we have 
to make sure this doesn't happen again with the nightly tests, especially since 
they will get so much less visibility?
   
   ideas:
   - Send an email to the CI email alias with the failed build
   - Automatically create an issue in GitHub and assign it to the people who 
committed code since the last nightly build
   - Hardcore: Block all new commits until the nightly build is fixed?
   
   bonus question: how do we fix the current CI flaky tests? Just had a quick 
look through the last failed builds and found 3 new flaky tests.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11176: Failed to find any forward convolution algorithm.

2018-06-06 Thread GitBox
marcoabreu commented on issue #11176: Failed to find any forward convolution 
algorithm.
URL: 
https://github.com/apache/incubator-mxnet/issues/11176#issuecomment-395162094
 
 
   @DickJC123 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11164: Python test failure on newly-introduced python test cases introduced in test_gluon.py

2018-06-06 Thread GitBox
marcoabreu commented on issue #11164: Python test failure on newly-introduced 
python test cases introduced in test_gluon.py
URL: 
https://github.com/apache/incubator-mxnet/issues/11164#issuecomment-395161547
 
 
   Please add a link to the PR we were talking about to give some context


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-06 Thread GitBox
marcoabreu commented on issue #10827: [MXNET-405][WIP] Add 2 new pipelines to 
the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#issuecomment-395159945
 
 
   The nightly builds are all passing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11053: [MXNET-244] Fixed armv7 wheel

2018-06-06 Thread GitBox
marcoabreu commented on issue #11053: [MXNET-244] Fixed armv7 wheel
URL: https://github.com/apache/incubator-mxnet/pull/11053#issuecomment-395159579
 
 
   Huh, how come we're adding Makefile configuration as well now? I think one 
buildsystem is enough, right? Also, it's not tested so I'd prefer if you remove 
it again


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11053: [MXNET-244] Fixed armv7 wheel

2018-06-06 Thread GitBox
marcoabreu commented on issue #11053: [MXNET-244] Fixed armv7 wheel
URL: https://github.com/apache/incubator-mxnet/pull/11053#issuecomment-395159579
 
 
   Huh, how come we're adding Makefile configuration as well now? I think one 
buildsystem is enough, right?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #11166: Update rnn_cell.py

2018-06-06 Thread GitBox
piiswrong commented on issue #11166: Update rnn_cell.py
URL: https://github.com/apache/incubator-mxnet/pull/11166#issuecomment-395159251
 
 
   @szha 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong closed pull request #11167: add input argument in warpctc layer

2018-06-06 Thread GitBox
piiswrong closed pull request #11167: add input argument in warpctc layer
URL: https://github.com/apache/incubator-mxnet/pull/11167
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/plugin/warpctc/warpctc.cc b/plugin/warpctc/warpctc.cc
index 055a6d645d1..aac36a375a9 100644
--- a/plugin/warpctc/warpctc.cc
+++ b/plugin/warpctc/warpctc.cc
@@ -41,6 +41,8 @@ Operator *WarpCTCProp::CreateOperator(Context ctx) const {
 DMLC_REGISTER_PARAMETER(WarpCTCParam);
 
 MXNET_REGISTER_OP_PROPERTY(WarpCTC, WarpCTCProp)
+.add_argument("data", "NDArray-or-Symbol", "Input data.")
+.add_argument("label", "NDArray-or-Symbol", "Input label.")
 .describe("warp ctc.")
 .add_arguments(WarpCTCParam::__FIELDS__());
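
A minimal sketch of wiring up the operator with the named inputs registered
above, assuming MXNet was built with the WarpCTC plugin enabled and using
illustrative sizes:

```python
import mxnet as mx

# Hypothetical CTC setup: 80 time steps, labels padded to length 10.
data = mx.sym.Variable('data')     # input activations fed to CTC
label = mx.sym.Variable('label')   # padded integer labels
ctc = mx.sym.WarpCTC(data=data, label=label,
                     label_length=10, input_length=80)
```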
 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


  1   2   >