[GitHub] MonsterPeng opened a new issue #8772: Build source code problem cannot convert from 'mshadow::Stream *' to 'mshadow::Stream *'

2017-11-22 Thread GitBox
MonsterPeng opened a new issue #8772: Build source code problem  cannot convert 
from 'mshadow::Stream *' to 'mshadow::Stream *'   
URL: https://github.com/apache/incubator-mxnet/issues/8772
 
 
   I am trying to build the source code on Windows Server 2012 R2 and got this error:
   error C2440: 'default argument' : cannot convert from 
'mshadow::Stream *' to 'mshadow::Stream *'   
   
   What does it mean?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] YujiOshima commented on a change in pull request #8722: Profiler: set cpu/gpu num during execution

2017-11-22 Thread GitBox
YujiOshima commented on a change in pull request #8722: Profiler: set cpu/gpu 
num during execution
URL: https://github.com/apache/incubator-mxnet/pull/8722#discussion_r152521064
 
 

 ##
 File path: src/engine/profiler.cc
 ##
 @@ -45,12 +50,11 @@ Profiler::Profiler()
   : state_(kNotRunning), enable_output_(false), filename_("profile.json") {
   this->init_time_ = NowInUsec();
 
-  // TODO(ziheng) get device number during execution
-  int kMaxNumCpus = 64;
-  this->cpu_num_ = kMaxNumCpus;
+  this->cpu_num_ = std::thread::hardware_concurrency();
 #if MXNET_USE_CUDA
-  int kMaxNumGpus = 32;
-  this->gpu_num_ = kMaxNumGpus;
+  int gpu_num = 0;
+  CUDA_CALL(cudaGetDeviceCount(&gpu_num));
 
 Review comment:
   @piiswrong Thank you for your comment.
   I understood the purpose of using the constant number.
   How about dynamically setting only the number of CPUs?
   More than 64 CPUs is unusual, but not impossible.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] MonsterPeng opened a new issue #8771: Build source

2017-11-22 Thread GitBox
MonsterPeng opened a new issue #8771: Build source
URL: https://github.com/apache/incubator-mxnet/issues/8771
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues and bug reports. For non-technical 
issues and feature requests, feel free to present the information in what you 
believe is the best form.
   
   For Q & A and discussion, please start a discussion thread at 
https://discuss.mxnet.io 
   
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   
   ## Environment info (Required)
   
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   
   Package used (Python/R/Scala/Julia):
   (I'm using ...)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1.
   2.
   
   ## What have you tried to solve it?
   
   1.
   2.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch piiswrong-patch-1-1 updated (4903f42 -> e6d4d47)

2017-11-22 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a change to branch piiswrong-patch-1-1
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 4903f42  Update index.md
 add e6d4d47  Update index.md

No new revisions were added by this update.

Summary of changes:
 docs/tutorials/index.md | 1 +
 1 file changed, 1 insertion(+)

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[GitHub] cjolivier01 commented on a change in pull request #8737: Use RAII and fix Coverity resource leaks #10371 and others

2017-11-22 Thread GitBox
cjolivier01 commented on a change in pull request #8737: Use RAII and fix 
Coverity resource leaks #10371 and others
URL: https://github.com/apache/incubator-mxnet/pull/8737#discussion_r152698364
 
 

 ##
 File path: cpp-package/example/alexnet.cpp
 ##
 @@ -215,7 +215,7 @@ int main(int argc, char const *argv[]) {
   args_map["label"] = NDArray(Shape(batch_size), ctx);
 
   /*with data and label, executor can be generated automatically*/
-  auto *exec = Net.SimpleBind(ctx, args_map);
+  auto exec = Net.SimpleBind(ctx, args_map);
 
 Review comment:
   You may want to point out on the thread that this is an API change for call 
X (might as well make it for both Bind calls), etc.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx opened a new issue #8780: Failure in unittest test_operator.py:test_reduce for some seeds

2017-11-22 Thread GitBox
ptrendx opened a new issue #8780: Failure in unittest 
test_operator.py:test_reduce for some seeds
URL: https://github.com/apache/incubator-mxnet/issues/8780
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues and bug reports. For non-technical 
issues and feature requests, feel free to present the information in what you 
believe is the best form.
   
   For Q & A and discussion, please start a discussion thread at 
https://discuss.mxnet.io 
   
   ## Description
   Setting both MXNet and Numpy seeds to 2098177962 at the beginning of 
`test_reduce` in `tests/python/unittest/test_operator.py` results in a failure: 
the difference between tensors that are expected to match is over 0.5, so this 
is not an issue with test tolerance.
   
   ## Environment info (Required)
   
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   
   Package used (Python/R/Scala/Julia):
   (I'm using ...)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash:
   89d2a71
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ## Error Message:
   ```
   test_repro.test_reduce ... FAIL
   
   ======================================================================
   FAIL: test_repro.test_reduce
   ----------------------------------------------------------------------
   Traceback (most recent call last):
 File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 197, in 
runTest
   self.test(*self.arg)
 File "/opt/mxnet/tests/python/unittest/test_operator.py", line 1578, in 
test_reduce
   mx.symbol.max)
 File "/opt/mxnet/tests/python/unittest/test_operator.py", line 1553, in 
test_reduce_inner
   assert equal_backward
   AssertionError: 
   ```
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   ## Steps to reproduce
   
   1. put 
   ```
   mx.random.seed(2098177962)
   np.random.seed(2098177962)
   ```
   at the beginning of `test_reduce` function in 
`tests/python/unittest/test_operator.py`
   2. `nosetests --verbose test_operator.py:test_reduce`
   
   ## What have you tried to solve it?
   
   1. Tried different seeds, could not get it to fail for ~100 tries.
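   The reproduction steps above boil down to pinning both RNGs before the test body. A numpy-only sketch (the MXNet call is shown commented out and not assumed here):

```python
import numpy as np

SEED = 2098177962
# mx.random.seed(SEED)   # pins MXNet's internal RNG as well
np.random.seed(SEED)     # pins numpy, which generates the test inputs

a = np.random.uniform(size=(3, 4))
np.random.seed(SEED)     # re-seeding replays the exact same stream
b = np.random.uniform(size=(3, 4))
# inputs are bit-identical, so the failure reproduces deterministically
assert (a == b).all()
```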
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #8779: [Image] add random lighting

2017-11-22 Thread GitBox
piiswrong commented on a change in pull request #8779: [Image] add random 
lighting
URL: https://github.com/apache/incubator-mxnet/pull/8779#discussion_r152695074
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -290,11 +291,107 @@ static void RandomColorJitter(const nnvm::NodeAttrs &attrs,
   const std::vector<TBlob> &outputs) {
 }
 
+struct AdjustLightingParam : public dmlc::Parameter<AdjustLightingParam> {
+  nnvm::Tuple<float> alpha_rgb;
+  nnvm::Tuple<float> eigval;
+  nnvm::Tuple<float> eigvec;
+  DMLC_DECLARE_PARAMETER(AdjustLightingParam) {
+DMLC_DECLARE_FIELD(alpha_rgb)
+.set_default({0, 0, 0})
+.describe("The lighting alphas for the R, G, B channels.");
+DMLC_DECLARE_FIELD(eigval)
+.describe("Eigen value.")
+.set_default({ 55.46, 4.794, 1.148 });
+DMLC_DECLARE_FIELD(eigvec)
+.describe("Eigen vector.")
+.set_default({ -0.5675,  0.7192,  0.4009,
+   -0.5808, -0.0045, -0.8140,
+   -0.5808, -0.0045, -0.8140 });
+  }
+};
+
+struct RandomLightingParam : public dmlc::Parameter<RandomLightingParam> {
+  float alpha_std;
+  nnvm::Tuple<float> eigval;
+  nnvm::Tuple<float> eigvec;
+  DMLC_DECLARE_PARAMETER(RandomLightingParam) {
+DMLC_DECLARE_FIELD(alpha_std)
+.set_default(0.05)
+.describe("Level of the lighting noise.");
+DMLC_DECLARE_FIELD(eigval)
+.describe("Eigen value.")
+.set_default({ 55.46, 4.794, 1.148 });
+DMLC_DECLARE_FIELD(eigvec)
+.describe("Eigen vector.")
+.set_default({ -0.5675,  0.7192,  0.4009,
+   -0.5808, -0.0045, -0.8140,
+   -0.5808, -0.0045, -0.8140 });
+  }
+};
+
+void AdjustLightingImpl(uint8_t* dst, const uint8_t* src,
+                        float alpha_r, float alpha_g, float alpha_b,
+                        const nnvm::Tuple<float> eigval, const nnvm::Tuple<float> eigvec,
+                        int H, int W) {
+    alpha_r *= eigval[0];
+    alpha_g *= eigval[1];
+    alpha_b *= eigval[2];
+    float pca_r = alpha_r * eigvec[0] + alpha_g * eigvec[1] + alpha_b * eigvec[2];
+    float pca_g = alpha_r * eigvec[3] + alpha_g * eigvec[4] + alpha_b * eigvec[5];
+    float pca_b = alpha_r * eigvec[6] + alpha_g * eigvec[7] + alpha_b * eigvec[8];
+    for (int i = 0; i < H; i++) {
+        for (int j = 0; j < W; j++) {
 
 Review comment:
   merge to one loop


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #8779: [Image] add random lighting

2017-11-22 Thread GitBox
piiswrong commented on a change in pull request #8779: [Image] add random 
lighting
URL: https://github.com/apache/incubator-mxnet/pull/8779#discussion_r152695030
 
 

 ##
 File path: python/mxnet/gluon/data/vision/transforms.py
 ##
 @@ -151,3 +151,23 @@ def __init__(self, max_brightness=0, max_contrast=0, 
max_saturation=0, max_hue=0
 
 def hybrid_forward(self, F, x):
 return F.image.random_color_jitter(x, *self._args)
+
+
+class AdjustLighting(HybridBlock):
+def __init__(self, alpha_rgb=(0.0, 0.0, 0.0), eigval=(55.46, 4.794, 1.148),
 
 Review comment:
   what's alpha_rgb?
   use base._Null for default arguments to avoid parsing
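   A sketch of the sentinel-default pattern the reviewer suggests: a unique `_Null` object lets the wrapper forward only the arguments the caller actually set, so the backend's own defaults apply and the frontend never re-parses them. `_Null` and `adjust_lighting` below are stand-ins, not the real `mxnet.base` objects.

```python
_Null = object()  # stand-in for the library's sentinel default

def adjust_lighting(x, alpha_rgb=_Null, eigval=_Null, eigvec=_Null):
    # Collect only the arguments the caller set explicitly.
    explicit = {name: value
                for name, value in [("alpha_rgb", alpha_rgb),
                                    ("eigval", eigval),
                                    ("eigvec", eigvec)]
                if value is not _Null}
    # The real wrapper would invoke the backend op with **explicit,
    # letting the backend fill in everything else.
    return explicit

assert adjust_lighting("img") == {}   # backend defaults apply untouched
assert adjust_lighting("img", eigval=(55.46, 4.794, 1.148)) == \
    {"eigval": (55.46, 4.794, 1.148)}
```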


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mbaijal commented on issue #8766: NDArray Indexing tutorial and Gradient Compression FAQ

2017-11-22 Thread GitBox
mbaijal commented on issue #8766: NDArray Indexing tutorial and Gradient 
Compression FAQ
URL: https://github.com/apache/incubator-mxnet/pull/8766#issuecomment-346511411
 
 
   @eric-haibin-lin 
   Aaron says that they were. It should be ok if there are minor doc errors. Do 
you think there is any chance this could cause code issues?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] indhub opened a new pull request #8782: Caffe to MXNet code translator

2017-11-22 Thread GitBox
indhub opened a new pull request #8782: Caffe to MXNet code translator
URL: https://github.com/apache/incubator-mxnet/pull/8782
 
 
   ## Description ##
   CaffeTranslator is a tool to translate Caffe code (training prototxt) to 
MXNet python code. 
   
   More info here: 
https://github.com/indhub/mxnet/blob/caffe_translator/tools/caffe_translator/README.md
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [x] For user-facing API changes, API doc string has been updated. For new 
C++ functions in header files, their functionalities and arguments are 
well-documented. 
   - [x] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch v1.0.0 updated: [v1.0.0branch only] Final Changes for 1.0- NEWS.d and README.md (#8781)

2017-11-22 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new 4cdc85d  [v1.0.0branch only] Final Changes for 1.0- NEWS.d and 
README.md (#8781)
4cdc85d is described below

commit 4cdc85dfbc59fb6f96e81e7b0e1b527ebe1574f5
Author: mbaijal <30911248+mbai...@users.noreply.github.com>
AuthorDate: Wed Nov 22 17:31:59 2017 -0800

[v1.0.0branch only] Final Changes for 1.0- NEWS.d and README.md (#8781)

* Final Changes for 1.0- NEWS.d and README.md

* minor edits
---
 NEWS.md   | 52 
 README.md |  1 +
 2 files changed, 53 insertions(+)

diff --git a/NEWS.md b/NEWS.md
index 7406210..fc6b101 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,5 +1,57 @@
 MXNet Change Log
 
+## 1.0.0
+### Performance
+  - Enhanced the performance of `sparse.dot` operator.
+  - MXNet now automatically sets OpenMP to use all available CPU cores to 
maximize CPU utilization when `NUM_OMP_THREADS` is not set.
+  - Unary and binary operators now avoid using OpenMP on small arrays if using 
OpenMP actually hurts performance due to multithreading overhead.
+  - Significantly improved performance of `broadcast_add`, `broadcast_mul`, 
etc on CPU.
+  - Added bulk execution to imperative mode. You can control segment size with 
`mxnet.engine.bulk`. As a result, the speed of Gluon in hybrid mode is 
improved, especially on small networks and multiple GPUs.
+  - Improved speed for `ctypes` invocation from Python frontend.
+### New Features - Gradient Compression [Experimental]
+  - Speed up multi-GPU and distributed training by compressing communication 
of gradients. This is especially effective when training networks with large 
fully-connected layers. In Gluon this can be activated with 
`compression_params` in Trainer.
+### New Features - Support of NVIDIA Collective Communication Library (NCCL) 
[Experimental]
+  - Use `kvstore='nccl'` for (in some cases) faster training on multiple GPUs.
+  - Significantly faster than `kvstore='device'` when batch size is small.
+  - It is recommended to set environment variable `NCCL_LAUNCH_MODE` to 
`PARALLEL` when using NCCL version 2.1 or newer.
+### New Features - Advanced Indexing [General Availability]
+  - NDArray now supports advanced indexing (both slice and assign) as 
specified by the numpy standard: 
https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#combining-advanced-and-basic-indexing
 with the following restrictions:
+- if key is a list type, only a list of integers is supported, e.g. 
`key=[1, 2]` is supported, while `key=[[1, 2]]` is not.
+- Ellipsis (...) and np.newaxis are not supported.
+- `Boolean` array indexing is not supported.
+### New Features - Gluon [General Availability]
+  - Performance optimizations discussed above.
+  - Added support for loading data in parallel with multiple processes to 
`gluon.data.DataLoader`. The number of workers can be set with `num_worker`. 
Does not support windows yet.
+  - Added Block.cast to support networks with different data types, e.g. 
`float16`.
+  - Added Lambda block for wrapping a user defined function as a block.
+  - Generalized `gluon.data.ArrayDataset` to support arbitrary number of 
arrays.
+### New Features - ARM / Raspberry Pi support [Experimental]
+  - MXNet now compiles and runs on ARMv6, ARMv7, ARMv64 including Raspberry Pi 
devices. See 
https://github.com/apache/incubator-mxnet/tree/master/docker_multiarch for more 
information.
+### New Features - NVIDIA Jetson support [Experimental]
+  - MXNet now compiles and runs on NVIDIA Jetson TX2 boards with GPU 
acceleration.
+  - You can install the python MXNet package on a Jetson board by running - `$ 
pip install mxnet-jetson-tx2`.
+### New Features - Sparse Tensor Support [General Availability]
+  - Added more sparse operators: `contrib.SparseEmbedding`, `sparse.sum` and 
`sparse.mean`. 
+  - Added `asscipy()` for easier conversion to scipy.
+  - Added `check_format()` for sparse ndarrays to check if the array format is 
valid.
+### Bug-fixes  
+  - Fixed `a[-1]` indexing not working on `NDArray`.
+  - Fixed `expand_dims` if axis < 0.
+  - Fixed a bug that causes topk to produce incorrect result on large arrays.
+  - Improved numerical precision of unary and binary operators for `float64` 
data.
+  - Fixed derivatives of log2 and log10. They used to be the same with log.
+  - Fixed a bug that causes MXNet to hang after fork. Note that you still 
cannot use GPU in child processes after fork due to limitations of CUDA.
+  - Fixed a bug that causes `CustomOp` to fail when using auxiliary states.
+  - Fixed a security bug that is causing MXNet to listen on all available 
interfaces when running training in distributed mode.
+### Doc Updates
+  - Added a security best practices 

[GitHub] szha closed pull request #8781: [v1.0.0branch only] Final Changes for 1.0- NEWS.d and README.md

2017-11-22 Thread GitBox
szha closed pull request #8781: [v1.0.0branch only] Final Changes for 1.0- 
NEWS.d and README.md
URL: https://github.com/apache/incubator-mxnet/pull/8781
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/NEWS.md b/NEWS.md
index 740621038d..fc6b10188f 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,5 +1,57 @@
 MXNet Change Log
 
+## 1.0.0
+### Performance
+  - Enhanced the performance of `sparse.dot` operator.
+  - MXNet now automatically sets OpenMP to use all available CPU cores to 
maximize CPU utilization when `NUM_OMP_THREADS` is not set.
+  - Unary and binary operators now avoid using OpenMP on small arrays if using 
OpenMP actually hurts performance due to multithreading overhead.
+  - Significantly improved performance of `broadcast_add`, `broadcast_mul`, 
etc on CPU.
+  - Added bulk execution to imperative mode. You can control segment size with 
`mxnet.engine.bulk`. As a result, the speed of Gluon in hybrid mode is 
improved, especially on small networks and multiple GPUs.
+  - Improved speed for `ctypes` invocation from Python frontend.
+### New Features - Gradient Compression [Experimental]
+  - Speed up multi-GPU and distributed training by compressing communication 
of gradients. This is especially effective when training networks with large 
fully-connected layers. In Gluon this can be activated with 
`compression_params` in Trainer.
+### New Features - Support of NVIDIA Collective Communication Library (NCCL) 
[Experimental]
+  - Use `kvstore='nccl'` for (in some cases) faster training on multiple GPUs.
+  - Significantly faster than `kvstore='device'` when batch size is small.
+  - It is recommended to set environment variable `NCCL_LAUNCH_MODE` to 
`PARALLEL` when using NCCL version 2.1 or newer.
+### New Features - Advanced Indexing [General Availability]
+  - NDArray now supports advanced indexing (both slice and assign) as 
specified by the numpy standard: 
https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#combining-advanced-and-basic-indexing
 with the following restrictions:
+- if key is a list type, only a list of integers is supported, e.g. 
`key=[1, 2]` is supported, while `key=[[1, 2]]` is not.
+- Ellipsis (...) and np.newaxis are not supported.
+- `Boolean` array indexing is not supported.
+### New Features - Gluon [General Availability]
+  - Performance optimizations discussed above.
+  - Added support for loading data in parallel with multiple processes to 
`gluon.data.DataLoader`. The number of workers can be set with `num_worker`. 
Does not support windows yet.
+  - Added Block.cast to support networks with different data types, e.g. 
`float16`.
+  - Added Lambda block for wrapping a user defined function as a block.
+  - Generalized `gluon.data.ArrayDataset` to support arbitrary number of 
arrays.
+### New Features - ARM / Raspberry Pi support [Experimental]
+  - MXNet now compiles and runs on ARMv6, ARMv7, ARMv64 including Raspberry Pi 
devices. See 
https://github.com/apache/incubator-mxnet/tree/master/docker_multiarch for more 
information.
+### New Features - NVIDIA Jetson support [Experimental]
+  - MXNet now compiles and runs on NVIDIA Jetson TX2 boards with GPU 
acceleration.
+  - You can install the python MXNet package on a Jetson board by running - `$ 
pip install mxnet-jetson-tx2`.
+### New Features - Sparse Tensor Support [General Availability]
+  - Added more sparse operators: `contrib.SparseEmbedding`, `sparse.sum` and 
`sparse.mean`. 
+  - Added `asscipy()` for easier conversion to scipy.
+  - Added `check_format()` for sparse ndarrays to check if the array format is 
valid.
+### Bug-fixes  
+  - Fixed `a[-1]` indexing not working on `NDArray`.
+  - Fixed `expand_dims` if axis < 0.
+  - Fixed a bug that causes topk to produce incorrect result on large arrays.
+  - Improved numerical precision of unary and binary operators for `float64` 
data.
+  - Fixed derivatives of log2 and log10. They used to be the same with log.
+  - Fixed a bug that causes MXNet to hang after fork. Note that you still 
cannot use GPU in child processes after fork due to limitations of CUDA.
+  - Fixed a bug that causes `CustomOp` to fail when using auxiliary states.
+  - Fixed a security bug that is causing MXNet to listen on all available 
interfaces when running training in distributed mode.
+### Doc Updates
+  - Added a security best practices document under FAQ section.
+  - Fixed License Headers including restoring copyright attributions.
+  - Documentation updates. 
+  - Links for viewing source.
+ 
+ For more information and examples, see [full release 
notes](https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.0+Release+Notes)
+
+
 ## 0.12.1
 ### Bug-fixes
   - Added GPU 

[GitHub] szha commented on issue #8766: NDArray Indexing tutorial and Gradient Compression FAQ

2017-11-22 Thread GitBox
szha commented on issue #8766: NDArray Indexing tutorial and Gradient 
Compression FAQ
URL: https://github.com/apache/incubator-mxnet/pull/8766#issuecomment-346514635
 
 
   @reminisce 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tanhm07 commented on issue #8777: Error: package or namespace load failed for 'mxnet':

2017-11-22 Thread GitBox
tanhm07 commented on issue #8777: Error: package or namespace load failed for 
'mxnet':
URL: 
https://github.com/apache/incubator-mxnet/issues/8777#issuecomment-346524703
 
 
   Yes all of them... 
   
   I did some googling and it's some error from depends.exe. I think we can 
ignore most except CUFFT64_80.DLL
   
   I suspect MXNet doesn't support CUDA 9?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ZiyueHuang commented on issue #8784: Fix warning on meaningless return type qualifier

2017-11-22 Thread GitBox
ZiyueHuang commented on issue #8784: Fix warning on meaningless return type 
qualifier
URL: https://github.com/apache/incubator-mxnet/pull/8784#issuecomment-346526696
 
 
   hmm.. seems same as https://github.com/apache/incubator-mxnet/pull/8774 :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy opened a new issue #8786: Link failure on Deep learning AMI

2017-11-22 Thread GitBox
larroy opened a new issue #8786: Link failure on Deep learning AMI
URL: https://github.com/apache/incubator-mxnet/issues/8786
 
 
   ## Description
   Link failure
   
   ## Environment info (Required)
   Deep learning AMI:
   
   
https://aws.amazon.com/marketplace/pp/B077GCH38C?qid=1511406484267=0-2_=srh_res_product_title
   
   ami-1812bb61
   
   ## Build info (Required if built from source)
   
   time make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 
USE_CUDA_PATH=/usr/local/cuda USE_OPENMP=0 DEBUG=1
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.)
   1264313183d35c86ed49e7ec708a076fe858325f
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   time make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 
USE_CUDA_PATH=/usr/local/cuda USE_OPENMP=0 DEBUG=1
   
   
   ## Error Message:
   ```
   opencv_video -lopencv_photo -lopencv_ml -lopencv_imgproc -lopencv_flann 
-lopencv_core -llapack  -lcuda -lcufft -lnvrtc
   build/src/common/rtc.o: In function 
`mxnet::rtc::CudaModule::Chunk::GetFunction(std::__cxx11::basic_string const&, mxnet::Context const&)':
   /home/ubuntu/incubator-mxnet/src/common/rtc.cc:77: undefined reference to 
`mshadow::gpu::kDevMask'
   collect2: error: ld returned 1 exit status
   Makefile:421: recipe for target 'bin/im2rec' failed
   make: *** [bin/im2rec] Error 1
   make: *** Waiting for unfinished jobs
   ```
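   A generic first step for an undefined-reference failure like the one above is to check whether the symbol is actually present in the built objects (a sketch, not specific to this AMI; paths come from the log above and may need adjusting):

```shell
# 'U' entries are unresolved references; 'T'/'D'/'B'/'R' entries are definitions.
nm -C build/src/common/rtc.o 2>/dev/null | grep kDevMask || true

# One plausible cause (a hypothesis, not confirmed here): if kDevMask is an
# in-class `static const int` with no out-of-line definition, a DEBUG=1 (-O0)
# build may emit a real reference instead of folding the constant, producing
# exactly this kind of link error.
echo "diagnosis sketch done"
```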
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on a change in pull request #8779: [Image] add random lighting

2017-11-22 Thread GitBox
sxjscience commented on a change in pull request #8779: [Image] add random 
lighting
URL: https://github.com/apache/incubator-mxnet/pull/8779#discussion_r152695298
 
 

 ##
 File path: python/mxnet/gluon/data/vision/transforms.py
 ##
 @@ -151,3 +151,23 @@ def __init__(self, max_brightness=0, max_contrast=0, 
max_saturation=0, max_hue=0
 
 def hybrid_forward(self, F, x):
 return F.image.random_color_jitter(x, *self._args)
+
+
+class AdjustLighting(HybridBlock):
 
 Review comment:
   OK. Then I will directly call the operator in the testing code


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong closed pull request #8779: [Image] add random lighting

2017-11-22 Thread GitBox
piiswrong closed pull request #8779: [Image] add random lighting
URL: https://github.com/apache/incubator-mxnet/pull/8779
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/gluon/data/vision/transforms.py 
b/python/mxnet/gluon/data/vision/transforms.py
index e1deef631d..931d644b17 100644
--- a/python/mxnet/gluon/data/vision/transforms.py
+++ b/python/mxnet/gluon/data/vision/transforms.py
@@ -21,6 +21,7 @@
 from ...block import Block, HybridBlock
 from ...nn import Sequential, HybridSequential
 from .... import ndarray, initializer
+from ....base import _Null
 
 
 class Compose(Sequential):
@@ -151,3 +152,21 @@ def __init__(self, max_brightness=0, max_contrast=0, 
max_saturation=0, max_hue=0
 
 def hybrid_forward(self, F, x):
 return F.image.random_color_jitter(x, *self._args)
+
+
+class AdjustLighting(HybridBlock):
+def __init__(self, alpha_rgb=_Null, eigval=_Null, eigvec=_Null):
+super(AdjustLighting, self).__init__()
+self._args = (alpha_rgb, eigval, eigvec)
+
+def hybrid_forward(self, F, x):
+return F.image.adjust_lighting(x, *self._args)
+
+
+class RandomLighting(HybridBlock):
+def __init__(self, alpha_std=_Null, eigval=_Null, eigvec=_Null):
+super(RandomLighting, self).__init__()
+self._args = (alpha_std, eigval, eigvec)
+
+def hybrid_forward(self, F, x):
+return F.image.random_lighting(x, *self._args)
\ No newline at end of file
diff --git a/src/operator/image/image_random-inl.h 
b/src/operator/image/image_random-inl.h
index f823c8ce06..ebbf60a0fe 100644
--- a/src/operator/image/image_random-inl.h
+++ b/src/operator/image/image_random-inl.h
@@ -26,6 +26,7 @@
 #define MXNET_OPERATOR_IMAGE_IMAGE_RANDOM_INL_H_
 
 #include 
+#include <algorithm>
 #include 
 #include 
 #include 
 @@ -290,11 +291,105 @@ static void RandomColorJitter(const nnvm::NodeAttrs &attrs,
   const std::vector<TBlob> &outputs) {
 }
 
+struct AdjustLightingParam : public dmlc::Parameter<AdjustLightingParam> {
+  nnvm::Tuple<float> alpha_rgb;
+  nnvm::Tuple<float> eigval;
+  nnvm::Tuple<float> eigvec;
+  DMLC_DECLARE_PARAMETER(AdjustLightingParam) {
+DMLC_DECLARE_FIELD(alpha_rgb)
+.set_default({0, 0, 0})
+.describe("The lighting alphas for the R, G, B channels.");
+DMLC_DECLARE_FIELD(eigval)
+.describe("Eigen value.")
+.set_default({ 55.46, 4.794, 1.148 });
+DMLC_DECLARE_FIELD(eigvec)
+.describe("Eigen vector.")
+.set_default({ -0.5675,  0.7192,  0.4009,
+   -0.5808, -0.0045, -0.8140,
+   -0.5808, -0.0045, -0.8140 });
+  }
+};
+
+struct RandomLightingParam : public dmlc::Parameter<RandomLightingParam> {
+  float alpha_std;
+  nnvm::Tuple<float> eigval;
+  nnvm::Tuple<float> eigvec;
+  DMLC_DECLARE_PARAMETER(RandomLightingParam) {
+DMLC_DECLARE_FIELD(alpha_std)
+.set_default(0.05)
+.describe("Level of the lighting noise.");
+DMLC_DECLARE_FIELD(eigval)
+.describe("Eigen value.")
+.set_default({ 55.46, 4.794, 1.148 });
+DMLC_DECLARE_FIELD(eigvec)
+.describe("Eigen vector.")
+.set_default({ -0.5675,  0.7192,  0.4009,
+   -0.5808, -0.0045, -0.8140,
+   -0.5808, -0.0045, -0.8140 });
+  }
+};
+
+void AdjustLightingImpl(uint8_t* dst, const uint8_t* src,
+float alpha_r, float alpha_g, float alpha_b,
+const nnvm::Tuple<float> eigval, const nnvm::Tuple<float> eigvec,
+int H, int W) {
+alpha_r *= eigval[0];
+alpha_g *= eigval[1];
+alpha_b *= eigval[2];
+float pca_r = alpha_r * eigvec[0] + alpha_g * eigvec[1] + alpha_b * 
eigvec[2];
+float pca_g = alpha_r * eigvec[3] + alpha_g * eigvec[4] + alpha_b * 
eigvec[5];
+float pca_b = alpha_r * eigvec[6] + alpha_g * eigvec[7] + alpha_b * 
eigvec[8];
+for (int i = 0; i < H * W; i++) {
+int base_ind = 3 * i;
+float in_r = static_cast<float>(src[base_ind]);
+float in_g = static_cast<float>(src[base_ind + 1]);
+float in_b = static_cast<float>(src[base_ind + 2]);
+dst[base_ind] = std::min(255, std::max(0, static_cast<int>(in_r + pca_r)));
+dst[base_ind + 1] = std::min(255, std::max(0, static_cast<int>(in_g + pca_g)));
+dst[base_ind + 2] = std::min(255, std::max(0, static_cast<int>(in_b + pca_b)));
+}
+}
+
+static void AdjustLighting(const nnvm::NodeAttrs &attrs,
+   const OpContext &ctx,
+   const std::vector<TBlob> &inputs,
+   const std::vector<OpReqType> &req,
+   const std::vector<TBlob> &outputs) {
+using namespace mshadow;
+const AdjustLightingParam &param = 
nnvm::get<AdjustLightingParam>(attrs.parsed);
+CHECK_EQ(param.eigval.ndim(), 3) << "There should be 3 numbers in the 
eigval.";
+CHECK_EQ(param.eigvec.ndim(), 9) << "There should be 9 numbers in the 
eigvec.";
+

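For reference, the per-pixel arithmetic in `AdjustLightingImpl` above — scale the per-channel alphas by the eigenvalues, project through the eigenvectors, add the resulting RGB offset to every pixel, and clamp to [0, 255] — can be sketched in NumPy as follows. This is an illustrative, vectorized sketch, not the operator itself; the name `adjust_lighting` is mine, and the defaults mirror the ones registered in `AdjustLightingParam`:

```python
import numpy as np

# Defaults mirror the eigval/eigvec registered in AdjustLightingParam above
# (AlexNet-style PCA lighting over the RGB channels).
EIGVAL = np.array([55.46, 4.794, 1.148])
EIGVEC = np.array([[-0.5675,  0.7192,  0.4009],
                   [-0.5808, -0.0045, -0.8140],
                   [-0.5808, -0.0045, -0.8140]])

def adjust_lighting(img, alpha_rgb, eigval=EIGVAL, eigvec=EIGVEC):
    """img: HxWx3 uint8 image. Adds the per-channel PCA offset
    eigvec @ (alpha_rgb * eigval) to every pixel, then clamps to [0, 255],
    matching the per-pixel arithmetic of AdjustLightingImpl."""
    pca = eigvec.dot(np.asarray(alpha_rgb, dtype=np.float64) * eigval)
    out = img.astype(np.float64) + pca   # broadcast one offset per channel
    return np.clip(out, 0, 255).astype(np.uint8)
```

With `alpha_rgb = (0, 0, 0)` the transform is the identity, which is why the operator's `alpha_rgb` default is all zeros.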
[incubator-mxnet] branch vision updated: [Image] add random lighting (#8779)

2017-11-22 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch vision
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/vision by this push:
 new f33654b  [Image] add random lighting (#8779)
f33654b is described below

commit f33654b13b9cf16da42fe6fe6fe3d1d3e3cfe779
Author: Xingjian Shi 
AuthorDate: Wed Nov 22 15:53:37 2017 -0800

[Image] add random lighting (#8779)

* add random lighting

* fix
---
 python/mxnet/gluon/data/vision/transforms.py| 19 +
 src/operator/image/image_random-inl.h   | 95 +
 src/operator/image/image_random.cc  | 43 +--
 tests/python/unittest/test_gluon_data_vision.py | 40 +++
 4 files changed, 191 insertions(+), 6 deletions(-)

diff --git a/python/mxnet/gluon/data/vision/transforms.py 
b/python/mxnet/gluon/data/vision/transforms.py
index e1deef6..931d644 100644
--- a/python/mxnet/gluon/data/vision/transforms.py
+++ b/python/mxnet/gluon/data/vision/transforms.py
@@ -21,6 +21,7 @@ from .. import dataset
 from ...block import Block, HybridBlock
 from ...nn import Sequential, HybridSequential
 from  import ndarray, initializer
+from base import _Null
 
 
 class Compose(Sequential):
@@ -151,3 +152,21 @@ class RandomColorJitter(HybridBlock):
 
 def hybrid_forward(self, F, x):
 return F.image.random_color_jitter(x, *self._args)
+
+
+class AdjustLighting(HybridBlock):
+def __init__(self, alpha_rgb=_Null, eigval=_Null, eigvec=_Null):
+super(AdjustLighting, self).__init__()
+self._args = (alpha_rgb, eigval, eigvec)
+
+def hybrid_forward(self, F, x):
+return F.image.adjust_lighting(x, *self._args)
+
+
+class RandomLighting(HybridBlock):
+def __init__(self, alpha_std=_Null, eigval=_Null, eigvec=_Null):
+super(RandomLighting, self).__init__()
+self._args = (alpha_std, eigval, eigvec)
+
+def hybrid_forward(self, F, x):
+return F.image.random_lighting(x, *self._args)
\ No newline at end of file
diff --git a/src/operator/image/image_random-inl.h 
b/src/operator/image/image_random-inl.h
index f823c8c..ebbf60a 100644
--- a/src/operator/image/image_random-inl.h
+++ b/src/operator/image/image_random-inl.h
@@ -26,6 +26,7 @@
 #define MXNET_OPERATOR_IMAGE_IMAGE_RANDOM_INL_H_
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -290,11 +291,105 @@ static void RandomColorJitter(const nnvm::NodeAttrs 
&attrs,
   const std::vector<TBlob> &outputs) {
 }
 
+struct AdjustLightingParam : public dmlc::Parameter<AdjustLightingParam> {
+  nnvm::Tuple<float> alpha_rgb;
+  nnvm::Tuple<float> eigval;
+  nnvm::Tuple<float> eigvec;
+  DMLC_DECLARE_PARAMETER(AdjustLightingParam) {
+DMLC_DECLARE_FIELD(alpha_rgb)
+.set_default({0, 0, 0})
+.describe("The lighting alphas for the R, G, B channels.");
+DMLC_DECLARE_FIELD(eigval)
+.describe("Eigen value.")
+.set_default({ 55.46, 4.794, 1.148 });
+DMLC_DECLARE_FIELD(eigvec)
+.describe("Eigen vector.")
+.set_default({ -0.5675,  0.7192,  0.4009,
+   -0.5808, -0.0045, -0.8140,
+   -0.5808, -0.0045, -0.8140 });
+  }
+};
+
+struct RandomLightingParam : public dmlc::Parameter<RandomLightingParam> {
+  float alpha_std;
+  nnvm::Tuple<float> eigval;
+  nnvm::Tuple<float> eigvec;
+  DMLC_DECLARE_PARAMETER(RandomLightingParam) {
+DMLC_DECLARE_FIELD(alpha_std)
+.set_default(0.05)
+.describe("Level of the lighting noise.");
+DMLC_DECLARE_FIELD(eigval)
+.describe("Eigen value.")
+.set_default({ 55.46, 4.794, 1.148 });
+DMLC_DECLARE_FIELD(eigvec)
+.describe("Eigen vector.")
+.set_default({ -0.5675,  0.7192,  0.4009,
+   -0.5808, -0.0045, -0.8140,
+   -0.5808, -0.0045, -0.8140 });
+  }
+};
+
+void AdjustLightingImpl(uint8_t* dst, const uint8_t* src,
+float alpha_r, float alpha_g, float alpha_b,
+const nnvm::Tuple<float> eigval, const nnvm::Tuple<float> eigvec,
+int H, int W) {
+alpha_r *= eigval[0];
+alpha_g *= eigval[1];
+alpha_b *= eigval[2];
+float pca_r = alpha_r * eigvec[0] + alpha_g * eigvec[1] + alpha_b * 
eigvec[2];
+float pca_g = alpha_r * eigvec[3] + alpha_g * eigvec[4] + alpha_b * 
eigvec[5];
+float pca_b = alpha_r * eigvec[6] + alpha_g * eigvec[7] + alpha_b * 
eigvec[8];
+for (int i = 0; i < H * W; i++) {
+int base_ind = 3 * i;
+float in_r = static_cast<float>(src[base_ind]);
+float in_g = static_cast<float>(src[base_ind + 1]);
+float in_b = static_cast<float>(src[base_ind + 2]);
+dst[base_ind] = std::min(255, std::max(0, static_cast<int>(in_r + pca_r)));
+dst[base_ind + 1] = std::min(255, std::max(0, static_cast<int>(in_g + pca_g)));
+dst[base_ind + 2] = std::min(255, std::max(0, static_cast<int>(in_b + pca_b)));
+}
+}
+
+static void AdjustLighting(const nnvm::NodeAttrs &attrs,
+   

[GitHub] yian2271368 opened a new issue #8783: how to print each loss for every classes

2017-11-22 Thread GitBox
yian2271368 opened a new issue #8783: how to print each loss for every 
classes
URL: https://github.com/apache/incubator-mxnet/issues/8783
 
 
   I am doing an object detection project with 9 classes (including background), 
but at the end it only prints the loss and classification score over all 
classes, so I am wondering how I could print out the loss and accuracy (i.e., 
RPN loss, RCNN loss) for each class? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tanhm07 commented on issue #8777: Error: package or namespace load failed for 'mxnet':

2017-11-22 Thread GitBox
tanhm07 commented on issue #8777: Error: package or namespace load failed for 
'mxnet':
URL: 
https://github.com/apache/incubator-mxnet/issues/8777#issuecomment-346521226
 
 
   I ran depends.exe on libmxnet.dll and these modules were found to have 
errors.
   
   The error is: Error opening file. The system cannot find the file specified 
(2).
   
   API-MS-WIN-CORE-APIQUERY-L1-1-0.DLL
   API-MS-WIN-CORE-APPCOMPAT-L1-1-1.DLL
   API-MS-WIN-CORE-APPINIT-L1-1-0.DLL
   API-MS-WIN-CORE-ATOMS-L1-1-0.DLL
   API-MS-WIN-CORE-COMM-L1-1-0.DLL
   API-MS-WIN-CORE-CONSOLE-L2-1-0.DLL
   API-MS-WIN-CORE-CONSOLE-L3-1-0.DLL
   API-MS-WIN-CORE-CRT-L1-1-0.DLL
   API-MS-WIN-CORE-CRT-L2-1-0.DLL
   API-MS-WIN-CORE-DATETIME-L1-1-1.DLL
   API-MS-WIN-CORE-DATETIME-L1-1-2.DLL
   API-MS-WIN-CORE-DEBUG-L1-1-1.DLL
   API-MS-WIN-CORE-DELAYLOAD-L1-1-0.DLL
   API-MS-WIN-CORE-DELAYLOAD-L1-1-1.DLL
   API-MS-WIN-CORE-ENCLAVE-L1-1-0.DLL
   API-MS-WIN-CORE-ERRORHANDLING-L1-1-1.DLL
   API-MS-WIN-CORE-ERRORHANDLING-L1-1-3.DLL
   API-MS-WIN-CORE-FIBERS-L1-1-1.DLL
   API-MS-WIN-CORE-FIBERS-L2-1-1.DLL
   API-MS-WIN-CORE-FILE-L1-2-1.DLL
   API-MS-WIN-CORE-FILE-L1-2-2.DLL
   API-MS-WIN-CORE-FILE-L2-1-1.DLL
   API-MS-WIN-CORE-FILE-L2-1-2.DLL
   API-MS-WIN-CORE-HEAP-L1-2-0.DLL
   API-MS-WIN-CORE-HEAP-L2-1-0.DLL
   API-MS-WIN-CORE-HEAP-OBSOLETE-L1-1-0.DLL
   API-MS-WIN-CORE-INTERLOCKED-L1-2-0.DLL
   API-MS-WIN-CORE-IO-L1-1-0.DLL
   API-MS-WIN-CORE-IO-L1-1-1.DLL
   API-MS-WIN-CORE-JOB-L1-1-0.DLL
   API-MS-WIN-CORE-JOB-L2-1-0.DLL
   API-MS-WIN-CORE-KERNEL32-LEGACY-L1-1-1.DLL
   API-MS-WIN-CORE-KERNEL32-LEGACY-L1-1-5.DLL
   API-MS-WIN-CORE-KERNEL32-PRIVATE-L1-1-1.DLL
   API-MS-WIN-CORE-KERNEL32-PRIVATE-L1-1-2.DLL
   API-MS-WIN-CORE-LARGEINTEGER-L1-1-0.DLL
   API-MS-WIN-CORE-LIBRARYLOADER-L1-2-0.DLL
   API-MS-WIN-CORE-LIBRARYLOADER-L1-2-2.DLL
   API-MS-WIN-CORE-LIBRARYLOADER-L2-1-0.DLL
   API-MS-WIN-CORE-LOCALIZATION-L1-1-0.DLL
   API-MS-WIN-CORE-LOCALIZATION-L1-2-1.DLL
   API-MS-WIN-CORE-LOCALIZATION-L1-2-2.DLL
   API-MS-WIN-CORE-LOCALIZATION-L2-1-0.DLL
   API-MS-WIN-CORE-LOCALIZATION-OBSOLETE-L1-3-0.DLL
   API-MS-WIN-CORE-LOCALIZATION-PRIVATE-L1-1-0.DLL
   API-MS-WIN-CORE-LOCALREGISTRY-L1-1-0.DLL
   API-MS-WIN-CORE-MEMORY-L1-1-2.DLL
   API-MS-WIN-CORE-MEMORY-L1-1-5.DLL
   API-MS-WIN-CORE-MISC-L1-1-0.DLL
   API-MS-WIN-CORE-NAMEDPIPE-L1-2-0.DLL
   API-MS-WIN-CORE-NAMEDPIPE-L1-2-2.DLL
   API-MS-WIN-CORE-NAMESPACE-L1-1-0.DLL
   API-MS-WIN-CORE-NORMALIZATION-L1-1-0.DLL
   API-MS-WIN-CORE-PATH-L1-1-0.DLL
   API-MS-WIN-CORE-PERFCOUNTERS-L1-1-0.DLL
   API-MS-WIN-CORE-PRIVATEPROFILE-L1-1-1.DLL
   API-MS-WIN-CORE-PROCESSENVIRONMENT-L1-2-0.DLL
   API-MS-WIN-CORE-PROCESSSNAPSHOT-L1-1-0.DLL
   API-MS-WIN-CORE-PROCESSTHREADS-L1-1-2.DLL
   API-MS-WIN-CORE-PROCESSTHREADS-L1-1-3.DLL
   API-MS-WIN-CORE-PROCESSTOPOLOGY-L1-2-0.DLL
   API-MS-WIN-CORE-PSAPI-ANSI-L1-1-0.DLL
   API-MS-WIN-CORE-PSAPI-L1-1-0.DLL
   API-MS-WIN-CORE-QUIRKS-L1-1-0.DLL
   API-MS-WIN-CORE-REALTIME-L1-1-0.DLL
   API-MS-WIN-CORE-REGISTRY-L1-1-0.DLL
   API-MS-WIN-CORE-REGISTRY-L1-1-1.DLL
   API-MS-WIN-CORE-REGISTRYUSERSPECIFIC-L1-1-0.DLL
   API-MS-WIN-CORE-RTLSUPPORT-L1-2-0.DLL
   API-MS-WIN-CORE-SHLWAPI-LEGACY-L1-1-0.DLL
   API-MS-WIN-CORE-SHLWAPI-OBSOLETE-L1-2-0.DLL
   API-MS-WIN-CORE-SIDEBYSIDE-L1-1-0.DLL
   API-MS-WIN-CORE-STRING-L2-1-0.DLL
   API-MS-WIN-CORE-STRING-L2-1-1.DLL
   API-MS-WIN-CORE-STRING-OBSOLETE-L1-1-0.DLL
   API-MS-WIN-CORE-STRINGANSI-L1-1-0.DLL
   API-MS-WIN-CORE-SYNCH-L1-2-1.DLL
   API-MS-WIN-CORE-SYSINFO-L1-2-1.DLL
   API-MS-WIN-CORE-SYSINFO-L1-2-3.DLL
   API-MS-WIN-CORE-SYSTEMTOPOLOGY-L1-1-0.DLL
   API-MS-WIN-CORE-SYSTEMTOPOLOGY-L1-1-1.DLL
   API-MS-WIN-CORE-THREADPOOL-L1-2-0.DLL
   API-MS-WIN-CORE-THREADPOOL-LEGACY-L1-1-0.DLL
   API-MS-WIN-CORE-THREADPOOL-PRIVATE-L1-1-0.DLL
   API-MS-WIN-CORE-URL-L1-1-0.DLL
   API-MS-WIN-CORE-VERSION-L1-1-0.DLL
   API-MS-WIN-CORE-VERSION-L1-1-1.DLL
   API-MS-WIN-CORE-VERSION-PRIVATE-L1-1-0.DLL
   API-MS-WIN-CORE-VERSIONANSI-L1-1-0.DLL
   API-MS-WIN-CORE-VERSIONANSI-L1-1-1.DLL
   API-MS-WIN-CORE-WINDOWSERRORREPORTING-L1-1-0.DLL
   API-MS-WIN-CORE-WINDOWSERRORREPORTING-L1-1-1.DLL
   API-MS-WIN-CORE-WINRT-ERROR-L1-1-1.DLL
   API-MS-WIN-CORE-WOW64-L1-1-0.DLL
   API-MS-WIN-CORE-WOW64-L1-1-1.DLL
   API-MS-WIN-CORE-XSTATE-L2-1-0.DLL
   API-MS-WIN-DEVICES-CONFIG-L1-1-1.DLL
   API-MS-WIN-EVENTING-CLASSICPROVIDER-L1-1-0.DLL
   API-MS-WIN-EVENTING-CONSUMER-L1-1-0.DLL
   API-MS-WIN-EVENTING-CONTROLLER-L1-1-0.DLL
   API-MS-WIN-EVENTING-OBSOLETE-L1-1-0.DLL
   API-MS-WIN-EVENTING-PROVIDER-L1-1-0.DLL
   API-MS-WIN-GDI-INTERNAL-UAP-L1-1-0.DLL
   API-MS-WIN-SECURITY-APPCONTAINER-L1-1-0.DLL
   API-MS-WIN-SECURITY-AUDIT-L1-1-1.DLL
   API-MS-WIN-SECURITY-BASE-L1-2-0.DLL
   API-MS-WIN-SECURITY-BASE-PRIVATE-L1-1-1.DLL
   API-MS-WIN-SECURITY-CAPABILITY-L1-1-0.DLL
   API-MS-WIN-SERVICE-CORE-L1-1-1.DLL
   API-MS-WIN-SERVICE-CORE-L1-1-2.DLL
   API-MS-WIN-SERVICE-MANAGEMENT-L1-1-0.DLL
   API-MS-WIN-SERVICE-MANAGEMENT-L2-1-0.DLL
   API-MS-WIN-SERVICE-PRIVATE-L1-1-0
   

[GitHub] ZiyueHuang commented on issue #8784: Fix warning on meaningless return type qualifier

2017-11-22 Thread GitBox
ZiyueHuang commented on issue #8784: Fix warning on meaningless return type 
qualifier
URL: https://github.com/apache/incubator-mxnet/pull/8784#issuecomment-346526696
 
 
   hmm.. seems same as https://github.com/apache/incubator-mxnet/pull/8774 :)




[GitHub] cjolivier01 commented on issue #8777: Error: package or namespace load failed for 'mxnet':

2017-11-22 Thread GitBox
cjolivier01 commented on issue #8777: Error: package or namespace load failed 
for 'mxnet':
URL: 
https://github.com/apache/incubator-mxnet/issues/8777#issuecomment-346528268
 
 
   So, that particular one is built against CUDA 8, so you'd need to install
   CUDA 8. You can still have CUDA 9 installed. Mxnet supports CUDA 9 if you
   build it yourself from master or 1.0 branch.
   
   Hope this helps,
   
   -Chris
   
   On Wed, Nov 22, 2017 at 7:29 PM Tan Hong Ming 
   wrote:
   
   > I did not build from source. Instead I downloaded the R package as I'm
   > using R.
   >
   > location of the .dll: D:\R\R-3.4.2\library\mxnet\libs\x64
   >
   > Perhaps i will try building from source later
   >
   > ?
   > You are receiving this because you commented.
   > Reply to this email directly, view it on GitHub
   > 
,
   > or mute the thread
   > 

   > .
   >
   




[GitHub] piiswrong commented on a change in pull request #8759: image flip op

2017-11-22 Thread GitBox
piiswrong commented on a change in pull request #8759: image flip op
URL: https://github.com/apache/incubator-mxnet/pull/8759#discussion_r152694849
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -144,6 +151,45 @@ static void Normalize(const nnvm::NodeAttrs ,
   });
 }
 
+inline static int FlipIndex(int idx, const int stride, const int trailing) {
+  const int low = idx % trailing;
+  int high = idx / trailing;
+  const int x = high % stride;
+  high /= stride;
+
+  return (high * stride + stride - 1 - x) * trailing + low;
+}
+
+template<typename DType>
+static void FlipImpl(const int size, DType *src, DType *dst,
+ const int stride, const int trailing) {
+  for (int idx = 0; idx < size; ++idx) {
+int new_idx = FlipIndex(idx, stride, trailing);
 
 Review comment:
   Doing this on every index is slow. Either use two loops explicitly or use 
`inc`, see broadcasting for example

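To illustrate the reviewer's point: the per-index remap performs a division and two modulos for every element, whereas iterating the outer blocks and the mirrored position directly avoids that work entirely. A small Python sketch of both forms for a row-major layout with `stride` positions per block and `trailing` values per position (illustrative code, not the MXNet kernel):

```python
def flip_index(idx, stride, trailing):
    # Per-element remap from FlipIndex: decompose the flat index into
    # (high, x, low) and mirror x within its stride -- one div and two
    # mods for every element.
    low = idx % trailing
    high, x = divmod(idx // trailing, stride)
    return (high * stride + (stride - 1 - x)) * trailing + low

def flip_two_loops(src, stride, trailing):
    # Reviewer-suggested form: loop over blocks and positions directly,
    # computing the mirrored destination once per position, so no
    # per-element division happens and each trailing-sized run is
    # copied contiguously.
    dst = [0] * len(src)
    blocks = len(src) // (stride * trailing)
    for b in range(blocks):
        for x in range(stride):
            s = (b * stride + x) * trailing
            d = (b * stride + (stride - 1 - x)) * trailing
            dst[d:d + trailing] = src[s:s + trailing]
    return dst
```

Both forms produce identical output; the two-loop version additionally touches memory in contiguous runs, which matters for the C++ kernel.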



[GitHub] piiswrong commented on a change in pull request #8759: image flip op

2017-11-22 Thread GitBox
piiswrong commented on a change in pull request #8759: image flip op
URL: https://github.com/apache/incubator-mxnet/pull/8759#discussion_r152694925
 
 

 ##
 File path: src/operator/image/image_random.cc
 ##
 @@ -66,6 +60,32 @@ NNVM_REGISTER_OP(_image_normalize)
 .add_arguments(NormalizeParam::__FIELDS__());
 
 
+NNVM_REGISTER_OP(_image_flip_left_right)
 
 Review comment:
   use one op, flip(axis) instead.




[GitHub] piiswrong commented on a change in pull request #8779: [Image] add random lighting

2017-11-22 Thread GitBox
piiswrong commented on a change in pull request #8779: [Image] add random 
lighting
URL: https://github.com/apache/incubator-mxnet/pull/8779#discussion_r152694962
 
 

 ##
 File path: python/mxnet/gluon/data/vision/transforms.py
 ##
 @@ -151,3 +151,23 @@ def __init__(self, max_brightness=0, max_contrast=0, 
max_saturation=0, max_hue=0
 
 def hybrid_forward(self, F, x):
 return F.image.random_color_jitter(x, *self._args)
+
+
+class AdjustLighting(HybridBlock):
 
 Review comment:
   don't need this.




[GitHub] larroy commented on a change in pull request #8737: Use RAII and fix Coverity resource leaks #10371 and others

2017-11-22 Thread GitBox
larroy commented on a change in pull request #8737: Use RAII and fix Coverity 
resource leaks #10371 and others
URL: https://github.com/apache/incubator-mxnet/pull/8737#discussion_r152696303
 
 

 ##
 File path: cpp-package/example/alexnet.cpp
 ##
 @@ -215,7 +215,7 @@ int main(int argc, char const *argv[]) {
   args_map["label"] = NDArray(Shape(batch_size), ctx);
 
   /*with data and label, executor can be generated automatically*/
-  auto *exec = Net.SimpleBind(ctx, args_map);
+  auto exec = Net.SimpleBind(ctx, args_map);
 
 Review comment:
   Thanks. I will bring it up on the list. About shared_ptr: since you can 
create one yourself by moving from the unique_ptr if you need a shared_ptr, it 
shouldn't be a big issue. It is actually faster not to create a shared_ptr when 
one isn't needed, as that avoids the atomic refcount machinery. 




[GitHub] szha closed issue #7552: Networks for CIFAR-10.

2017-11-22 Thread GitBox
szha closed issue #7552: Networks for CIFAR-10.
URL: https://github.com/apache/incubator-mxnet/issues/7552
 
 
   




[GitHub] szha closed issue #7575: How to measure the time consumed by every batch in training?

2017-11-22 Thread GitBox
szha closed issue #7575: How to measure the time consumed by every batch in 
training?
URL: https://github.com/apache/incubator-mxnet/issues/7575
 
 
   




[incubator-mxnet] branch v1.0.0 updated: cast scalar value in invoke to float (#8778)

2017-11-22 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new 413b196  cast scalar value in invoke to float (#8778)
413b196 is described below

commit 413b196a3ee508d106005ddea8de70513b90ff36
Author: Chris Olivier 
AuthorDate: Wed Nov 22 14:35:54 2017 -0800

cast scalar value in invoke to float (#8778)
---
 python/mxnet/optimizer.py | 4 ++--
 python/mxnet/symbol/symbol.py | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
index 5eb4f05..0134556 100644
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -793,9 +793,9 @@ class AdaGrad(Optimizer):
 srt = op.sqrt(adjusted_add)
 div = _internal._scatter_elemwise_div(grad, srt)
 retained_weight = sparse.retain(weight, grad.indices)
-to_add = sparse.elemwise_add(div, 
_internal._mul_scalar(retained_weight, wd))
+to_add = sparse.elemwise_add(div, 
_internal._mul_scalar(retained_weight, float(wd)))
 assert len(to_add.indices) == grad_indices_count
-weight[:] = sparse.elemwise_add(weight, 
_internal._mul_scalar(to_add, -lr))
+weight[:] = sparse.elemwise_add(weight, 
_internal._mul_scalar(to_add, float(-lr)))
 state[:] = history
 assert state.stype == save_history_stype
 assert len(history_indices) == grad_indices_count
diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py
index e2cf0ec..ce7776d 100644
--- a/python/mxnet/symbol/symbol.py
+++ b/python/mxnet/symbol/symbol.py
@@ -2759,7 +2759,7 @@ def full(shape, val, dtype=None, **kwargs):
 """
 if dtype is None:
 dtype = _numpy.float32
-return _internal._full(shape=shape, dtype=dtype, value=val, **kwargs)
+return _internal._full(shape=shape, dtype=dtype, value=float(val), 
**kwargs)
 
 # pylint: disable=redefined-outer-name
 def arange(start, stop=None, step=1.0, repeat=1, name=None, dtype=None):

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].

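The patch above only wraps `wd` and `-lr` in `float(...)` so that `_mul_scalar` receives a plain Python float; the update rule itself is unchanged. For reference, the row-sparse AdaGrad step this code path computes can be sketched in NumPy as follows (an illustrative sketch, not the MXNet implementation; the function name and `eps` default are mine, standing in for the optimizer's epsilon term):

```python
import numpy as np

def adagrad_rowsparse_step(weight, history, rows, grad, lr, wd, eps=1e-7):
    """One AdaGrad step touching only the rows present in a row-sparse
    gradient, mirroring the sparse branch patched above."""
    history[rows] += grad * grad            # accumulate squared gradients
    srt = np.sqrt(history[rows] + eps)      # stabilized sqrt of history
    div = grad / srt                        # element-wise scaled gradient
    to_add = div + wd * weight[rows]        # weight decay on retained rows only
    weight[rows] += -lr * to_add            # apply the scaled update in place
```

Rows absent from the gradient keep their weights and history untouched, which is the point of the sparse branch.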

[incubator-mxnet] branch master updated: Add Intel openmp as a submodule and build for x86 architectures (#8730)

2017-11-22 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1264313  Add Intel openmp as a submodule and build for x86 
architectures (#8730)
1264313 is described below

commit 1264313183d35c86ed49e7ec708a076fe858325f
Author: Chris Olivier 
AuthorDate: Wed Nov 22 14:38:55 2017 -0800

Add Intel openmp as a submodule and build for x86 architectures (#8730)

* Refreshed branch intel_openmp

* Disable Intel OpenMP local build for Windows until a Windows user can fix

* Ignore 3rdparty license headers
---
 .gitmodules |  3 +++
 3rdparty/openmp |  1 +
 CMakeLists.txt  | 15 ++-
 LICENSE |  1 +
 tools/license_header.py |  1 +
 5 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/.gitmodules b/.gitmodules
index 7a76cba..f9b2ab6 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -16,3 +16,6 @@
 [submodule "cub"]
path = cub
url = https://github.com/dmlc/cub
+[submodule "3rdparty/openmp"]
+   path = 3rdparty/openmp
+   url = https://github.com/llvm-mirror/openmp
diff --git a/3rdparty/openmp b/3rdparty/openmp
new file mode 160000
index 000..37c7212
--- /dev/null
+++ b/3rdparty/openmp
@@ -0,0 +1 @@
+Subproject commit 37c72127e90360a020f351f18d9cccfc30e5145a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index b6bb814..dd17917 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -54,10 +54,13 @@ if(EXISTS 
${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
   include(${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
 endif()
 
-set(CMAKE_MODULE_PATH 
"${PROJECT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
-
-
+if(MSVC)
+  set(SYSTEM_ARCHITECTURE x86_64)
+else()
+  EXECUTE_PROCESS( COMMAND uname -m COMMAND tr -d '\n' OUTPUT_VARIABLE 
SYSTEM_ARCHITECTURE)
+endif()
 
+set(CMAKE_MODULE_PATH 
"${PROJECT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
 
 SET(EXTRA_OPERATORS "" CACHE PATH "EXTRA OPERATORS PATH")
 
@@ -263,11 +266,13 @@ endif()
 # ---[ OpenMP
 if(USE_OPENMP)
   find_package(OpenMP REQUIRED)
-  if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/openmp/CMakeLists.txt)
+  # This should build on Windows, but there's some problem and I don;t have a 
Windows box, so
+  # could a Windows user please fix?
+  if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/openmp/CMakeLists.txt AND 
SYSTEM_ARCHITECTURE STREQUAL "x86_64" AND NOT MSVC)
 # Intel/llvm OpenMP: https://github.com/llvm-mirror/openmp
 set(OPENMP_STANDALONE_BUILD TRUE)
 set(LIBOMP_ENABLE_SHARED FALSE)
-add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/openmp)
+add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/openmp)
 list(REMOVE_ITEM mxnet_LINKER_LIBS iomp5)
 list(APPEND mxnet_LINKER_LIBS omp)
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
diff --git a/LICENSE b/LICENSE
index 01dfcf4..4173ec8 100644
--- a/LICENSE
+++ b/LICENSE
@@ -234,6 +234,7 @@
 1. Fast R-CNN  - For details, see example/rcnn/LICENSE
 2. Faster R-CNN - For details, see example/rcnn/LICENSE
 3. tree_lstm - For details, see example/gluon/tree_lstm/LICENSE
+4. OpenMP - For details, see 3rdparty/openmp/LICENSE.txt
 
 
 
diff --git a/tools/license_header.py b/tools/license_header.py
index e26fd2b..29538d1 100644
--- a/tools/license_header.py
+++ b/tools/license_header.py
@@ -61,6 +61,7 @@ _WHITE_LIST = ['R-package/',
'dmlc-core/',
'mshadow/',
'nnvm',
+   '3rdparty',   
'ps-lite',
'src/operator/mkl/',
'src/operator/contrib/ctc_include/']



[GitHub] piiswrong closed pull request #8730: Add Intel openmp as a submodule and build for x86 architectures

2017-11-22 Thread GitBox
piiswrong closed pull request #8730: Add Intel openmp as a submodule and build 
for x86 architectures
URL: https://github.com/apache/incubator-mxnet/pull/8730
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/.gitmodules b/.gitmodules
index 7a76cbaf78..f9b2ab68f4 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -16,3 +16,6 @@
 [submodule "cub"]
path = cub
url = https://github.com/dmlc/cub
+[submodule "3rdparty/openmp"]
+   path = 3rdparty/openmp
+   url = https://github.com/llvm-mirror/openmp
diff --git a/3rdparty/openmp b/3rdparty/openmp
new file mode 160000
index 00..37c72127e9
--- /dev/null
+++ b/3rdparty/openmp
@@ -0,0 +1 @@
+Subproject commit 37c72127e90360a020f351f18d9cccfc30e5145a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index b6bb814182..dd17917154 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -54,10 +54,13 @@ if(EXISTS 
${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
   include(${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
 endif()
 
-set(CMAKE_MODULE_PATH 
"${PROJECT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
-
-
+if(MSVC)
+  set(SYSTEM_ARCHITECTURE x86_64)
+else()
+  EXECUTE_PROCESS( COMMAND uname -m COMMAND tr -d '\n' OUTPUT_VARIABLE 
SYSTEM_ARCHITECTURE)
+endif()
 
+set(CMAKE_MODULE_PATH 
"${PROJECT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
 
 SET(EXTRA_OPERATORS "" CACHE PATH "EXTRA OPERATORS PATH")
 
@@ -263,11 +266,13 @@ endif()
 # ---[ OpenMP
 if(USE_OPENMP)
   find_package(OpenMP REQUIRED)
-  if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/openmp/CMakeLists.txt)
+  # This should build on Windows, but there's some problem and I don;t have a 
Windows box, so
+  # could a Windows user please fix?
+  if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/openmp/CMakeLists.txt AND 
SYSTEM_ARCHITECTURE STREQUAL "x86_64" AND NOT MSVC)
 # Intel/llvm OpenMP: https://github.com/llvm-mirror/openmp
 set(OPENMP_STANDALONE_BUILD TRUE)
 set(LIBOMP_ENABLE_SHARED FALSE)
-add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/openmp)
+add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/openmp)
 list(REMOVE_ITEM mxnet_LINKER_LIBS iomp5)
 list(APPEND mxnet_LINKER_LIBS omp)
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
diff --git a/LICENSE b/LICENSE
index 01dfcf4679..4173ec8045 100644
--- a/LICENSE
+++ b/LICENSE
@@ -234,6 +234,7 @@
 1. Fast R-CNN  - For details, see example/rcnn/LICENSE
 2. Faster R-CNN - For details, see example/rcnn/LICENSE
 3. tree_lstm - For details, see example/gluon/tree_lstm/LICENSE
+4. OpenMP - For details, see 3rdparty/openmp/LICENSE.txt
 
 
 
diff --git a/tools/license_header.py b/tools/license_header.py
index e26fd2beca..29538d13da 100644
--- a/tools/license_header.py
+++ b/tools/license_header.py
@@ -61,6 +61,7 @@
'dmlc-core/',
'mshadow/',
'nnvm',
+   '3rdparty',   
'ps-lite',
'src/operator/mkl/',
'src/operator/contrib/ctc_include/']


 




[GitHub] cjolivier01 commented on issue #8777: Error: package or namespace load failed for 'mxnet':

2017-11-22 Thread GitBox
cjolivier01 commented on issue #8777: Error: package or namespace load failed 
for 'mxnet':
URL: 
https://github.com/apache/incubator-mxnet/issues/8777#issuecomment-346524559
 
 
   Sorry, which one wasn't found? All of them?




[GitHub] tanhm07 commented on issue #8777: Error: package or namespace load failed for 'mxnet':

2017-11-22 Thread GitBox
tanhm07 commented on issue #8777: Error: package or namespace load failed for 
'mxnet':
URL: 
https://github.com/apache/incubator-mxnet/issues/8777#issuecomment-346528781
 
 
   Ah ok. Will CUDA 8 support a gtx 1080ti? Thanks so much for your help!
   
   Regards
   Hong Ming
   
   On 23 Nov 2017, at 12:07 PM, Chris Olivier 
> wrote:
   
   So, that particular one is built against CUDA 8, so you'd need to install
   CUDA 8. You can still have CUDA 9 installed. Mxnet supports CUDA 9 if you
   build it yourself from master or 1.0 branch.
   
   Hope this helps,
   
   -Chris
   
   On Wed, Nov 22, 2017 at 7:29 PM Tan Hong Ming 
>
   wrote:
   
   > I did not build from source. Instead I downloaded the R package as I'm
   > using R.
   >
   > location of the .dll: D:\R\R-3.4.2\library\mxnet\libs\x64
   >
   > Perhaps i will try building from source later
   >
   > ?
   > You are receiving this because you commented.
   > Reply to this email directly, view it on GitHub
   > 
,
   > or mute the thread
   > 

   > .
   >
   
   ?
   You are receiving this because you authored the thread.
   Reply to this email directly, view it on 
GitHub,
 or mute the 
thread.
   




[GitHub] reminisce commented on issue #8766: NDArray Indexing tutorial and Gradient Compression FAQ

2017-11-22 Thread GitBox
reminisce commented on issue #8766: NDArray Indexing tutorial and Gradient 
Compression FAQ
URL: https://github.com/apache/incubator-mxnet/pull/8766#issuecomment-346528748
 
 
   @mbaijal I just realized that it involves the change in `kvstore.py`. I 
think it is better to wait till the tests are completed.
   
   The previous failing test `test_operator_gpu.test_svmoutput_with_type` should not be affected by this PR. Were you able to see the error message? If it's indeed a failure of code functionality, the owner of the unit test needs to fix it.




[GitHub] eric-haibin-lin closed pull request #8767: Factorization machine example & sparse example folder re-org

2017-11-22 Thread GitBox
eric-haibin-lin closed pull request #8767: Factorization machine example & 
sparse example folder re-org
URL: https://github.com/apache/incubator-mxnet/pull/8767
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/example/sparse/factorization_machine/README.md 
b/example/sparse/factorization_machine/README.md
new file mode 100644
index 00..ff10a35d85
--- /dev/null
+++ b/example/sparse/factorization_machine/README.md
@@ -0,0 +1,17 @@
+Factorization Machine
+===
+This example trains a factorization machine model using the criteo dataset.
+
+## Download the Dataset
+
+The provided dataset is a pre-processed [criteo dataset from the kaggle 
challenge](https://www.kaggle.com/c/criteo-display-ad-challenge/data)
+in the [LibSVM 
format](https://mxnet.incubator.apache.org/versions/master/api/python/io.html#mxnet.io.LibSVMIter)
+in MXNet, whose features are re-hashed to 2 million. The total size of the 
dataset is around 13 GB.
+
+- python data.py --dir /path/to/criteo/folder/
+
+## Train the Model
+
+- python train.py --data /path/to/criteo/folder/
+
+[Rendle, Steffen. "Factorization machines." In Data Mining (ICDM), 2010 IEEE 
10th International Conference on, pp. 995-1000. IEEE, 2010. 
](https://www.csie.ntu.edu.tw/~b97053/paper/Rendle2010FM.pdf)
diff --git a/example/sparse/factorization_machine/data.py 
b/example/sparse/factorization_machine/data.py
new file mode 100644
index 00..57e7afb410
--- /dev/null
+++ b/example/sparse/factorization_machine/data.py
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import os, gzip, argparse, sys
+import mxnet as mx
+import logging
+head = '%(asctime)-15s %(message)s'
+logging.basicConfig(level=logging.INFO, format=head)
+
+class DummyIter(mx.io.DataIter):
+    "A dummy iterator that always return the same batch, used for speed testing"
+    def __init__(self, real_iter):
+        super(DummyIter, self).__init__()
+        self.real_iter = real_iter
+        self.provide_data = real_iter.provide_data
+        self.provide_label = real_iter.provide_label
+        self.batch_size = real_iter.batch_size
+
+        for batch in real_iter:
+            self.the_batch = batch
+            break
+
+    def __iter__(self):
+        return self
+
+    def next(self):
+        return self.the_batch
+
+
+def get_criteo_data(data_dir):
+    if not os.path.isdir(data_dir):
+        os.mkdir(data_dir)
+    try:
+        logging.info("Downloading dataset criteo to " + data_dir + " now ...")
+        os.system("aws s3 cp --recursive --no-sign-request s3://sparse-dataset/criteo " + data_dir)
+    except Exception as e:
+        logging.error(e)
+
+parser = argparse.ArgumentParser(description="Download criteo dataset",
+                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+parser.add_argument('--dir', type=str, default='./data/',
+                    help='destination directory to store criteo LibSVM dataset.')
+
+if __name__ == '__main__':
+    # arg parser
+    args = parser.parse_args()
+    logging.info(args)
+    get_criteo_data(args.dir)
diff --git 
a/example/sparse/factorization_machine/factorization_machine_model.py 
b/example/sparse/factorization_machine/factorization_machine_model.py
new file mode 100644
index 00..d2896a7a91
--- /dev/null
+++ b/example/sparse/factorization_machine/factorization_machine_model.py
@@ -0,0 +1,52 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# 
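The `DummyIter` in the data.py diff above exists purely for speed testing: it caches one real batch and replays it forever, so data-loading cost drops out of throughput measurements. The same pattern can be reduced to plain Python (an illustrative sketch, not the MXNet class itself):

```python
class ReplayFirstBatch:
    """Wrap any iterable and yield its first item forever.

    Mirrors the DummyIter idea: remove I/O cost from timing runs."""
    def __init__(self, real_iter):
        self.the_batch = next(iter(real_iter))  # cache the first batch once

    def __iter__(self):
        return self

    def __next__(self):
        return self.the_batch  # always the same cached batch

it = ReplayFirstBatch([["batch0"], ["batch1"]])
first = next(it)
again = next(it)
```

Timing a training loop over `ReplayFirstBatch` versus the real iterator isolates compute throughput from data-pipeline throughput.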

[GitHub] eric-haibin-lin commented on issue #8766: NDArray Indexing tutorial and Gradient Compression FAQ

2017-11-22 Thread GitBox
eric-haibin-lin commented on issue #8766: NDArray Indexing tutorial and 
Gradient Compression FAQ
URL: https://github.com/apache/incubator-mxnet/pull/8766#issuecomment-346509477
 
 
   were all the CR comments addressed? 




[incubator-mxnet] branch v1.0.0 updated: Indexing (#187)

2017-11-22 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new 2cdb2da  Indexing (#187)
2cdb2da is described below

commit 2cdb2dad1fdc719f3f11dc4b92844a8f8f38857b
Author: Aaron Markham 
AuthorDate: Wed Nov 22 17:20:47 2017 -0800

Indexing (#187)

* changed url references from dmlc to apache/incubator-mxnet

* gradient compression faq

* added examples, edited content and order

* indexing features tutorial

* further technical notes, plus example invocation

* updates needed for gradient compression example

* minor patch from rahul

* minor edits after reviews

* added reference and minor grammar fixes

* removed one word

* minor updates to text
---
 docs/faq/gradient_compression.md   | 107 
 docs/faq/index.md  |   7 +-
 docs/faq/multi_devices.md  |  13 +
 docs/tutorials/basic/ndarray_indexing.md   | 377 +
 docs/tutorials/index.md|   1 +
 example/gluon/word_language_model/train.py |  10 +-
 python/mxnet/kvstore.py|  11 +-
 7 files changed, 519 insertions(+), 7 deletions(-)

diff --git a/docs/faq/gradient_compression.md b/docs/faq/gradient_compression.md
new file mode 100644
index 000..4cd58f0
--- /dev/null
+++ b/docs/faq/gradient_compression.md
@@ -0,0 +1,107 @@
+# Gradient Compression
+
+Gradient Compression reduces communication bandwidth, and in some scenarios, 
it can make training more scalable and efficient without significant loss in 
convergence rate or accuracy. Example implementations with GPUs, CPUs, and 
distributed training are provided in this document. 
+
+
+## Benefits
+
+**Increased Speed**
+
+For architectures with fully connected layers, the gradient compression capability is observed to speed up training by about 2x, depending on the size of the model and the network bandwidth of the instance. Bigger models see larger speedups with gradient compression.
+
+**Minimal Accuracy Loss**
+
+Gradient compression uses the approach of delaying the synchronization of 
weight updates which are small. Although small weight updates might not be sent 
for that batch, this information is not discarded. Once the weight updates for 
this location accumulate to become a larger value, they will be propagated. 
Since there is no information loss, but only delayed updates, it does not lead 
to a significant loss in accuracy or convergence rate. In distributed training 
experiments[1], the accur [...]
+
+
+## When to Use Gradient Compression
+
+When training models whose architectures include large fully connected 
components, it can be helpful to use gradient compression. For larger models, 
as well as recurrent neural networks, the communication cost becomes a major 
factor. Such models stand to benefit greatly with gradient compression.
+
+
+### GPU versus CPU
+
+The greatest benefits from gradient compression are realized when using 
multi-node (single or multi-GPU) distributed training. Training on CPU would 
provide a lower compute density per compute node as compared to the massive 
compute density per compute node on a GPU. Due to this, the required 
communication bandwidth for CPU-based nodes during training is not as high as 
for GPU-based nodes. Hence, the benefits of gradient compression are lower for 
CPU-based nodes as compared to GPU-based nodes.
+
+
+### Network Latency
+
+Benefits of gradient compression can be found when using distributed training 
with network connected nodes. Depending on the network latency between nodes 
and the model's size, these can contribute to slow performance such that 
gradient compression may provide speed improvements.
+
+You may not want to use gradient compression if you have low latency network 
communication.
+
+
+### Model Size
+
+Distributed training involves synchronization of weights after each batch. 
Larger models have much higher communication costs during training, hence such 
models stand to benefit much more from gradient compression.
+When running distributed training with gradient compression, the quantize and 
dequantize operations happen on CPU parallelized with OpenMP. For smaller 
models, when training on GPUs, it helps to set `OMP_NUM_THREADS=1` on each 
node, so that the overhead of launching OMP threads doesn't cause the 
compression and decompression to be slow.
+
+### Model Architecture
+
+The communication bandwidth requirements during training vary across various 
neural network architectures and hence the benefits of gradient compression 
vary accordingly.
+
+In networks which have significant fully connected components, since such 
layers have low compute cost on GPUs, communication becomes a 
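The delayed-synchronization idea this FAQ describes, i.e. withholding small gradient entries and accumulating them locally until they grow large enough to send, can be sketched numerically. Everything below is an illustrative toy: the function name and threshold are hypothetical, not MXNet's actual 2-bit kvstore implementation.

```python
import numpy as np

def compress_with_residual(grad, residual, threshold=0.5):
    """Send only +/-threshold for entries whose accumulated value is large
    enough; keep the remainder locally, so updates are delayed, not lost."""
    acc = grad + residual                     # fold in leftovers from earlier batches
    quantized = np.where(acc >= threshold, threshold,
                         np.where(acc <= -threshold, -threshold, 0.0))
    new_residual = acc - quantized            # the part not sent this round
    return quantized, new_residual

grad = np.array([0.9, 0.3, -0.7, 0.2])
q, r = compress_with_residual(grad, np.zeros_like(grad))
```

The small entries (0.3, 0.2) are withheld this round but remain in `r`; once they accumulate past the threshold in later batches they are propagated, which is why convergence is largely unaffected.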

[GitHub] dwSun commented on issue #8724: im2rec.py output labels.txt file for use in inference.

2017-11-22 Thread GitBox
dwSun commented on issue #8724: im2rec.py output labels.txt file for use in 
inference.
URL: https://github.com/apache/incubator-mxnet/pull/8724#issuecomment-346517725
 
 
   File name changed.




[GitHub] Anida-qin commented on issue #3664: ubuntu 16.04 compile error

2017-11-22 Thread GitBox
Anida-qin commented on issue #3664: ubuntu 16.04 compile error
URL: 
https://github.com/apache/incubator-mxnet/issues/3664#issuecomment-346521649
 
 
   @chencjiajy 
   hi! I got the same problem as you. Did you solve it yet?




[GitHub] piiswrong commented on issue #8778: cast scalar value in invoke to float

2017-11-22 Thread GitBox
piiswrong commented on issue #8778: cast scalar value in invoke to float
URL: https://github.com/apache/incubator-mxnet/pull/8778#issuecomment-346491444
 
 
   I think numpy has an option to let you set the number of digits to print out
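For reference, the numpy option referred to here is `numpy.set_printoptions`; a minimal sketch:

```python
import numpy as np

# Cap printed precision at 4 digits; suppress=True avoids scientific notation
np.set_printoptions(precision=4, suppress=True)
s = np.array2string(np.array([0.123456789, 2.0]))
```

The same `precision` keyword can also be passed per call to `np.array2string` instead of being set globally.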




[GitHub] piiswrong closed pull request #8778: cast scalar value in invoke to float

2017-11-22 Thread GitBox
piiswrong closed pull request #8778: cast scalar value in invoke to float
URL: https://github.com/apache/incubator-mxnet/pull/8778
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
index 5eb4f05d6d..013455614f 100644
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -793,9 +793,9 @@ def update(self, index, weight, grad, state):
         srt = op.sqrt(adjusted_add)
         div = _internal._scatter_elemwise_div(grad, srt)
         retained_weight = sparse.retain(weight, grad.indices)
-        to_add = sparse.elemwise_add(div, _internal._mul_scalar(retained_weight, wd))
+        to_add = sparse.elemwise_add(div, _internal._mul_scalar(retained_weight, float(wd)))
         assert len(to_add.indices) == grad_indices_count
-        weight[:] = sparse.elemwise_add(weight, _internal._mul_scalar(to_add, -lr))
+        weight[:] = sparse.elemwise_add(weight, _internal._mul_scalar(to_add, float(-lr)))
         state[:] = history
         assert state.stype == save_history_stype
         assert len(history_indices) == grad_indices_count
diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py
index e2cf0ecb68..ce7776d948 100644
--- a/python/mxnet/symbol/symbol.py
+++ b/python/mxnet/symbol/symbol.py
@@ -2759,7 +2759,7 @@ def full(shape, val, dtype=None, **kwargs):
     """
     if dtype is None:
         dtype = _numpy.float32
-    return _internal._full(shape=shape, dtype=dtype, value=val, **kwargs)
+    return _internal._full(shape=shape, dtype=dtype, value=float(val), **kwargs)
 
 # pylint: disable=redefined-outer-name
 def arange(start, stop=None, step=1.0, repeat=1, name=None, dtype=None):
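Context for the `float(...)` casts in the diff above: scalar hyperparameters can arrive as numpy scalars, which are not Python `float` instances (with the exception of `np.float64`), and a binding that expects a plain float may reject or mis-handle them. A minimal illustration with plain numpy (not MXNet code):

```python
import numpy as np

wd = np.float32(0.0005)   # a numpy scalar, e.g. read from a config
plain = float(wd)         # explicit cast back to a Python float

is_numpy_only = not isinstance(wd, float)   # np.float32 is NOT a Python float
is_plain_float = isinstance(plain, float)   # after the cast it is
```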


 




[incubator-mxnet] branch master updated: cast scalar value in invoke to float (#8778)

2017-11-22 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new a0031ac  cast scalar value in invoke to float (#8778)
a0031ac is described below

commit a0031ace5a348091354e61bc17cd6c0d1d8ad610
Author: Chris Olivier 
AuthorDate: Wed Nov 22 14:35:54 2017 -0800

cast scalar value in invoke to float (#8778)
---
 python/mxnet/optimizer.py | 4 ++--
 python/mxnet/symbol/symbol.py | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
index 5eb4f05..0134556 100644
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -793,9 +793,9 @@ class AdaGrad(Optimizer):
         srt = op.sqrt(adjusted_add)
         div = _internal._scatter_elemwise_div(grad, srt)
         retained_weight = sparse.retain(weight, grad.indices)
-        to_add = sparse.elemwise_add(div, _internal._mul_scalar(retained_weight, wd))
+        to_add = sparse.elemwise_add(div, _internal._mul_scalar(retained_weight, float(wd)))
         assert len(to_add.indices) == grad_indices_count
-        weight[:] = sparse.elemwise_add(weight, _internal._mul_scalar(to_add, -lr))
+        weight[:] = sparse.elemwise_add(weight, _internal._mul_scalar(to_add, float(-lr)))
         state[:] = history
         assert state.stype == save_history_stype
         assert len(history_indices) == grad_indices_count
diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py
index e2cf0ec..ce7776d 100644
--- a/python/mxnet/symbol/symbol.py
+++ b/python/mxnet/symbol/symbol.py
@@ -2759,7 +2759,7 @@ def full(shape, val, dtype=None, **kwargs):
     """
     if dtype is None:
         dtype = _numpy.float32
-    return _internal._full(shape=shape, dtype=dtype, value=val, **kwargs)
+    return _internal._full(shape=shape, dtype=dtype, value=float(val), **kwargs)
 
 # pylint: disable=redefined-outer-name
 def arange(start, stop=None, step=1.0, repeat=1, name=None, dtype=None):

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[GitHub] mbaijal opened a new pull request #8781: [v1.0.0branch only] Final Changes for 1.0- NEWS.d and README.md

2017-11-22 Thread GitBox
mbaijal opened a new pull request #8781: [v1.0.0branch only] Final Changes for 
1.0- NEWS.d and README.md
URL: https://github.com/apache/incubator-mxnet/pull/8781
 
 
   Can you please review the NEWS.md quickly a final time. Thanks!!
   @eric-haibin-lin @reminisce @szha @piiswrong @rahul003 @cjolivier01 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated. For new 
C++ functions in header files, their functionalities and arguments are 
well-documented. 
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] NEWS.md changes for 1.0
   - [ ] README.md changes for v1.0
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] larroy opened a new pull request #8784: Fix warning on meaningless return type qualifier

2017-11-22 Thread GitBox
larroy opened a new pull request #8784: Fix warning on meaningless return type 
qualifier
URL: https://github.com/apache/incubator-mxnet/pull/8784
 
 
   




[GitHub] cjolivier01 commented on issue #8777: Error: package or namespace load failed for 'mxnet':

2017-11-22 Thread GitBox
cjolivier01 commented on issue #8777: Error: package or namespace load failed for 'mxnet':
URL: 
https://github.com/apache/incubator-mxnet/issues/8777#issuecomment-346525052
 
 
   I believe the newest branch will build with CUDA 9. Sorry, I have forgotten: are you building locally? Where is this DLL on your machine?
   
   On Wed, Nov 22, 2017 at 7:23 PM Tan Hong Ming 
   wrote:
   
   > Yes all of them...
   >
   > I did some googling and it's some error from depends.exe. I think we can
   > ignore most except CUFFT64_80.DLL
   >
   > I suspect mxnet doesn't support cuda 9?
   >
   > ?
   > You are receiving this because you commented.
   > Reply to this email directly, view it on GitHub
   > 
,
   > or mute the thread
   > 

   > .
   >
   




[GitHub] Soonhwan-Kwon opened a new pull request #8787: add CapsNet example

2017-11-22 Thread GitBox
Soonhwan-Kwon opened a new pull request #8787: add CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8787
 
 
   ## Description ##
   This example is MXNet implementation of 
[CapsNet](https://arxiv.org/abs/1710.09829):  
   Sara Sabour, Nicholas Frosst, Geoffrey E Hinton. Dynamic Routing Between 
Capsules. NIPS 2017
   
   We achieved `the best test error rate=0.29%` and `average test error=0.303%`. This is the best accuracy and fastest training-time result among other implementations (Keras, Tensorflow, as of 2017-11-23).
   The result on paper is `0.25% (average test error rate)`.
   
   | Implementation | test err(%) | *train time/epoch | GPU Used |
   | :---: | :---: | :---: | :---: |
   | MXNet | 0.29 | 36 sec | 2 GTX 1080 |
   | tensorflow | 0.49 | ~10 min | Unknown (4GB memory) |
   | Keras | 0.30 | 55 sec | 2 GTX 1080 Ti |
   
   *The tensorflow implementation's batch size is 128, while the MXNet and Keras implementations use a batch size of 100.
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated. For new 
C++ functions in header files, their functionalities and arguments are 
well-documented. 
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ## Comments ##




[GitHub] edmBernard closed issue #8747: Memory GPU leak with Gluon

2017-11-22 Thread GitBox
edmBernard closed issue #8747: Memory GPU leak with Gluon
URL: https://github.com/apache/incubator-mxnet/issues/8747
 
 
   




[GitHub] huangyingsong commented on issue #8772: Build source code problem cannot convert from 'mshadow::Stream *' to 'mshadow::Stream *'

2017-11-22 Thread GitBox
huangyingsong commented on issue #8772: Build source code problem  cannot 
convert from 'mshadow::Stream *' to 'mshadow::Stream *' 
URL: 
https://github.com/apache/incubator-mxnet/issues/8772#issuecomment-346344395
 
 
   Did you install the Microsoft Visual C++ Compiler Nov 2013 CTP? 
https://mxnet.incubator.apache.org/get_started/windows_setup.html




[GitHub] DevinCheung commented on issue #8773: undefined reference to `gotoblas' Makefile:406: recipe for target 'bin/im2rec' failed

2017-11-22 Thread GitBox
DevinCheung commented on issue #8773: undefined reference to `gotoblas' 
Makefile:406: recipe for target 'bin/im2rec' failed
URL: 
https://github.com/apache/incubator-mxnet/issues/8773#issuecomment-346328081
 
 
   And I find the following about libblas in /usr/lib/:
   libblas
   libblas.a
   libblas.so
   libblas.so.3
   libblas.so.3gf




[GitHub] aijanai commented on issue #8750: terminate called after throwing an instance of 'std::bad_alloc'

2017-11-22 Thread GitBox
aijanai commented on issue #8750: terminate called after throwing an instance 
of 'std::bad_alloc'
URL: 
https://github.com/apache/incubator-mxnet/issues/8750#issuecomment-346330802
 
 
   Thanks for the keen eye.
   It was actually a bug in the code that generated the vocabulary, resulting in abnormally sized vocabularies and, thus, overly wide network topologies.




[GitHub] aijanai closed issue #8750: terminate called after throwing an instance of 'std::bad_alloc'

2017-11-22 Thread GitBox
aijanai closed issue #8750: terminate called after throwing an instance of 
'std::bad_alloc'
URL: https://github.com/apache/incubator-mxnet/issues/8750
 
 
   




[GitHub] aijanai commented on issue #8750: terminate called after throwing an instance of 'std::bad_alloc'

2017-11-22 Thread GitBox
aijanai commented on issue #8750: terminate called after throwing an instance 
of 'std::bad_alloc'
URL: 
https://github.com/apache/incubator-mxnet/issues/8750#issuecomment-346330895
 
 
   Anyway, some more useful debugging messages from the framework would help




[GitHub] szha commented on issue #7573: how to use autograd in c++

2017-11-22 Thread GitBox
szha commented on issue #7573: how to use autograd in c++
URL: 
https://github.com/apache/incubator-mxnet/issues/7573#issuecomment-346335902
 
 
   This issue is closed due to lack of activity in the last 90 days. Feel free 
to ping me to reopen if this is still an active issue. Thanks!
   Also, do please check out our [forum](https://discuss.mxnet.io/) (and 
[Chinese version](https://discuss.gluon.ai/)) for general "how-to" questions.




[GitHub] DevinCheung opened a new issue #8773: undefined reference to `gotoblas' Makefile:406: recipe for target 'bin/im2rec' failed

2017-11-22 Thread GitBox
DevinCheung opened a new issue #8773: undefined reference to `gotoblas' 
Makefile:406: recipe for target 'bin/im2rec' failed
URL: https://github.com/apache/incubator-mxnet/issues/8773
 
 
   cd /dev/shm/multitask/mx-maskrcnn/incubator-mxnet/dmlc-core; make libdmlc.a 
USE_SSE=1 config=/dev/shm/multitask/mx-maskrcnn/incubator-mxnet/make/config.mk; 
cd /dev/shm/multitask/mx-maskrcnn/incubator-mxnet
   make[1]: Entering directory 
'/dev/shm/multitask/mx-maskrcnn/incubator-mxnet/dmlc-core'
   make[1]: 'libdmlc.a' is up to date.
   make[1]: Leaving directory 
'/dev/shm/multitask/mx-maskrcnn/incubator-mxnet/dmlc-core'
   g++ -DMSHADOW_FORCE_STREAM -Wall -Wsign-compare -O3 -DNDEBUG=1 
-I/dev/shm/multitask/mx-maskrcnn/incubator-mxnet/mshadow/ 
-I/dev/shm/multitask/mx-maskrcnn/incubator-mxnet/dmlc-core/include -fPIC 
-I/dev/shm/multitask/mx-maskrcnn/incubator-mxnet/nnvm/include 
-I/dev/shm/multitask/mx-maskrcnn/incubator-mxnet/dlpack/include -Iinclude 
-funroll-loops -Wno-unused-variable -Wno-unused-parameter -Wno-unknown-pragmas 
-Wno-unused-local-typedefs -msse3 -I/usr/local/cuda/include 
-DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 
-DMSHADOW_DIST_PS=0 -DMSHADOW_USE_PASCAL=0 -DMXNET_USE_OPENCV=1 
-I/usr/include/opencv -fopenmp -DMXNET_USE_LAPACK -DMSHADOW_USE_CUDNN=1 
-fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free  
-I/dev/shm/multitask/mx-maskrcnn/incubator-mxnet/cub 
-DMXNET_USE_LIBJPEG_TURBO=0 -std=c++11  -o bin/im2rec tools/im2rec.cc 
build/src/operator/mkl/mkl_cppwrapper.o build/src/operator/mkl/mkl_memory.o 
build/src/operator/random/sample_multinomial_op.o 
build/src/operator/random/multisample_op.o 
build/src/operator/random/sample_op.o 
build/src/operator/tensor/elemwise_binary_broadcast_op_extended.o 
build/src/operator/tensor/elemwise_binary_op_logic.o 
build/src/operator/tensor/elemwise_binary_op_extended.o 
build/src/operator/tensor/square_sum.o build/src/operator/tensor/dot.o 
build/src/operator/tensor/elemwise_sum.o build/src/operator/tensor/init_op.o 
build/src/operator/tensor/elemwise_binary_broadcast_op_basic.o 
build/src/operator/tensor/cast_storage.o 
build/src/operator/tensor/elemwise_binary_op.o 
build/src/operator/tensor/elemwise_binary_scalar_op_logic.o 
build/src/operator/tensor/elemwise_scatter_op.o 
build/src/operator/tensor/elemwise_unary_op_basic.o 
build/src/operator/tensor/broadcast_reduce_op_value.o 
build/src/operator/tensor/ordering_op.o 
build/src/operator/tensor/elemwise_binary_op_basic.o 
build/src/operator/tensor/elemwise_binary_scalar_op_basic.o 
build/src/operator/tensor/indexing_op.o 
build/src/operator/tensor/elemwise_binary_broadcast_op_logic.o 
build/src/operator/tensor/la_op.o 
build/src/operator/tensor/broadcast_reduce_op_index.o 
build/src/operator/tensor/sparse_retain.o 
build/src/operator/tensor/control_flow_op.o 
build/src/operator/tensor/elemwise_binary_scalar_op_extended.o 
build/src/operator/tensor/matrix_op.o 
build/src/operator/tensor/elemwise_unary_op_trig.o 
build/src/operator/nnpack/nnpack_util.o 
build/src/operator/contrib/multibox_target.o 
build/src/operator/contrib/proposal.o build/src/operator/contrib/count_sketch.o 
build/src/operator/contrib/dequantize.o 
build/src/operator/contrib/deformable_psroi_pooling.o 
build/src/operator/contrib/fft.o build/src/operator/contrib/multibox_prior.o 
build/src/operator/contrib/ctc_loss.o 
build/src/operator/contrib/multi_proposal.o 
build/src/operator/contrib/psroi_pooling.o 
build/src/operator/contrib/quantize.o 
build/src/operator/contrib/deformable_convolution.o 
build/src/operator/contrib/ifft.o 
build/src/operator/contrib/multibox_detection.o 
build/src/operator/custom/native_op.o build/src/operator/custom/ndarray_op.o 
build/src/operator/custom/custom.o build/src/operator/nn/softmax.o 
build/src/io/image_aug_default.o build/src/io/io.o build/src/io/iter_csv.o 
build/src/io/iter_image_det_recordio.o build/src/io/image_io.o 
build/src/io/image_det_aug_default.o build/src/io/iter_image_recordio.o 
build/src/io/iter_mnist.o build/src/io/iter_image_recordio_2.o 
build/src/io/iter_libsvm.o build/src/common/rtc.o build/src/common/utils.o 
build/src/nnvm/legacy_op_util.o build/src/nnvm/legacy_json_util.o 
build/src/imperative/cached_op.o build/src/imperative/imperative.o 
build/src/ndarray/ndarray_function.o build/src/ndarray/ndarray.o 
build/src/operator/instance_norm.o build/src/operator/pooling.o 
build/src/operator/crop.o build/src/operator/spatial_transformer.o 
build/src/operator/swapaxis.o build/src/operator/convolution_v1.o 
build/src/operator/pad.o build/src/operator/batch_norm.o 
build/src/operator/softmax_output.o build/src/operator/cudnn_algoreg.o 
build/src/operator/correlation.o build/src/operator/operator_util.o 
build/src/operator/sequence_reverse.o build/src/operator/bilinear_sampler.o 
build/src/operator/sequence_last.o build/src/operator/svm_output.o 
build/src/operator/operator.o build/src/operator/optimizer_op.o 
build/src/operator/lrn.o 

[GitHub] szha closed issue #7573: how to use autograd in c++

2017-11-22 Thread GitBox
szha closed issue #7573: how to use autograd in c++
URL: https://github.com/apache/incubator-mxnet/issues/7573
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on a change in pull request #8779: [Image] add random lighting

2017-11-22 Thread GitBox
sxjscience commented on a change in pull request #8779: [Image] add random 
lighting
URL: https://github.com/apache/incubator-mxnet/pull/8779#discussion_r152695670
 
 

 ##
 File path: python/mxnet/gluon/data/vision/transforms.py
 ##
 @@ -151,3 +151,23 @@ def __init__(self, max_brightness=0, max_contrast=0, 
max_saturation=0, max_hue=0
 
 def hybrid_forward(self, F, x):
 return F.image.random_color_jitter(x, *self._args)
+
+
+class AdjustLighting(HybridBlock):
+def __init__(self, alpha_rgb=(0.0, 0.0, 0.0), eigval=(55.46, 4.794, 1.148),
 
 Review comment:
   It's (alpha_r, alpha_g, alpha_b)
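
   For context, this transform applies AlexNet-style PCA lighting noise. A
   hedged numpy sketch of the idea (only the eigenvalues come from the
   signature above; the eigenvector matrix here is a placeholder, not the
   values MXNet uses):

```python
import numpy as np

# Eigenvalues from the default `eigval` argument above; the eigenvector
# matrix is hypothetical -- real values come from ImageNet RGB statistics.
eigval = np.array([55.46, 4.794, 1.148])
eigvec = np.eye(3)

alpha = np.array([0.1, 0.0, -0.1])  # per-component strengths, cf. (alpha_r, alpha_g, alpha_b)

img = np.zeros((2, 2, 3), dtype=np.float32)
shift = eigvec @ (alpha * eigval)   # one RGB offset added to every pixel
out = img + shift
```

   With an identity eigenvector matrix the offset reduces to `alpha * eigval`
   per channel; the real transform mixes channels through the eigenvectors.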





[GitHub] szha commented on issue #7552: Networks for CIFAR-10.

2017-11-22 Thread GitBox
szha commented on issue #7552: Networks for CIFAR-10.
URL: 
https://github.com/apache/incubator-mxnet/issues/7552#issuecomment-346507954
 
 
   This issue is closed due to lack of activity in the last 90 days. Feel free 
to ping me to reopen if this is still an active issue. Thanks!
   Also, do please check out our [forum](https://discuss.mxnet.io/) (and 
[Chinese version](https://discuss.gluon.ai/)) for general "how-to" questions.




[GitHub] szha commented on issue #7575: How to measure the time consumed by every batch in training?

2017-11-22 Thread GitBox
szha commented on issue #7575: How to measure the time consumed by every batch 
in training?
URL: 
https://github.com/apache/incubator-mxnet/issues/7575#issuecomment-346507956
 
 
   This issue is closed due to lack of activity in the last 90 days. Feel free 
to ping me to reopen if this is still an active issue. Thanks!
   Also, do please check out our [forum](https://discuss.mxnet.io/) (and 
[Chinese version](https://discuss.gluon.ai/)) for general "how-to" questions.




[GitHub] liuyiqun1 commented on issue #8655: How do I download the data file for the examples in incubator-mxnet/cpp-package/example/feature_extract/

2017-11-22 Thread GitBox
liuyiqun1 commented on issue #8655: How do I download the data file for the 
examples in incubator-mxnet/cpp-package/example/feature_extract/
URL: 
https://github.com/apache/incubator-mxnet/issues/8655#issuecomment-346516077
 
 
   I have a question about the mean_img used in this feature-extract program. I 
got the mean image from http://data.dmlc.ml/mxnet/data/Inception.zip and use 
the model provided in the README, but I think the mean_img should be in RGB 
order, while the feature-extract program does not convert the pictures to RGB 
and just uses BGR. Also, 
https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/predict-cpp/image-classification-predict.cc
 does perform that conversion.
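
   A minimal sketch of the channel-order issue being described (assuming the
   image was decoded in BGR order, as OpenCV does, while the mean image is
   RGB; the arrays here are synthetic):

```python
import numpy as np

# Synthetic 2x2 image in BGR layout: channel 0 is blue.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255

# Reversing the channel axis converts BGR -> RGB before subtracting
# an RGB mean image.
rgb = bgr[..., ::-1]
```

   Subtracting an RGB mean from a BGR image silently swaps the red and blue
   statistics, which is why the conversion matters.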




[GitHub] everwind opened a new issue #8785: kvstore can not support int64 keys

2017-11-22 Thread GitBox
everwind opened a new issue #8785: kvstore can not support int64 keys
URL: https://github.com/apache/incubator-mxnet/issues/8785
 
 
   ## Description
   
   >>> import mxnet
   >>> import mxnet as mx
   >>> kv = mx.kv.create('local')
   >>> kv.init(2**32 + 2, mx.nd.ones(100))
   >>> a = mx.nd.zeros(100)  # output buffer
   >>> kv.pull(2, out=a)
   >>> a
   
   [ 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.]
   
   >>> 
   the key 2 collides with the key 2**32 + 2.
   I am building a model in which the feature keys exceed 2**32.
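   
   A minimal sketch of the suspected cause, assuming keys end up stored in an
   unsigned 32-bit field somewhere in the kvstore (an assumption, not verified
   against the source):

```python
def truncate_to_uint32(key):
    # Hypothetical helper: simulate storing a Python int in a 32-bit key field.
    return key & 0xFFFFFFFF

# 2**32 + 2 and 2 become indistinguishable after truncation, matching
# the collision observed in the transcript above.
collides = truncate_to_uint32(2**32 + 2) == truncate_to_uint32(2)
```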
   
   




[GitHub] tanhm07 commented on issue #8777: Error: package or namespace load failed for 'mxnet':

2017-11-22 Thread GitBox
tanhm07 commented on issue #8777: Error: package or namespace load failed for 
'mxnet':
URL: 
https://github.com/apache/incubator-mxnet/issues/8777#issuecomment-346525238
 
 
   I did not build from source. Instead, I downloaded the R package, as I'm 
using R.
   
   Location of the .dll: D:\R\R-3.4.2\library\mxnet\libs\x64
   
   Perhaps I will try building from source later.




[GitHub] chowkamlee81 opened a new issue #8789: I had two pairs of set of images and one ground truth image. How to train in mxnet?

2017-11-22 Thread GitBox
chowkamlee81 opened a new issue #8789: I had two pairs of set of images and one 
ground truth image. How to train in mxnet?
URL: https://github.com/apache/incubator-mxnet/issues/8789
 
 
   Hi,
   
   I want to develop optical flow using a CNN.
   I have two paired sets of images along with a ground-truth image.
   So far I have trained with a single input image and a single output image, 
but now I need to train on pairs of images with ground truth. Any 
implementation ideas or example code? Any help would be appreciated.




[GitHub] szha commented on issue #8788: [WIP] fix build

2017-11-22 Thread GitBox
szha commented on issue #8788: [WIP] fix build
URL: https://github.com/apache/incubator-mxnet/pull/8788#issuecomment-346535483
 
 
   It currently consistently fails on
   ```
   test_operator_gpu.test_bilinear_sampler_with_type ... [05:37:47] 
/home/ubuntu/mxnet/dmlc-core/include/dmlc/./logging.h:308: [05:37:47] 
src/operator/bilinear_sampler.cu:172: Check failed: err == cudaSuccess (7 vs. 
0) too many resources requested for launch
   ```
   It's a 10-month old test, so I imagine something else is going on.




[GitHub] javelinjs commented on a change in pull request #8759: image flip op

2017-11-22 Thread GitBox
javelinjs commented on a change in pull request #8759: image flip op
URL: https://github.com/apache/incubator-mxnet/pull/8759#discussion_r152729081
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -144,6 +151,45 @@ static void Normalize(const nnvm::NodeAttrs ,
   });
 }
 
+inline static int FlipIndex(int idx, const int stride, const int trailing) {
+  const int low = idx % trailing;
+  int high = idx / trailing;
+  const int x = high % stride;
+  high /= stride;
+
+  return (high * stride + stride - 1 - x) * trailing + low;
+}
+
+template
+static void FlipImpl(const int size, DType *src, DType *dst,
+ const int stride, const int trailing) {
+  for (int idx = 0; idx < size; ++idx) {
+int new_idx = FlipIndex(idx, stride, trailing);
 
 Review comment:
   how's it now?
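
   A sketch of what the index math above computes, ported to Python and
   checked against numpy (assuming the elided loop body assigns
   `dst[new_idx] = src[idx]`; `stride` is the size of the flipped axis and
   `trailing` the product of the later axes):

```python
import numpy as np

def flip_index(idx, stride, trailing):
    # Python port of the C++ FlipIndex above.
    low = idx % trailing
    high = idx // trailing
    x = high % stride
    high //= stride
    return (high * stride + stride - 1 - x) * trailing + low

a = np.arange(24).reshape(2, 3, 4)
stride, trailing = 3, 4              # flip axis 1 of a (2, 3, 4) array
dst = np.empty(a.size, dtype=a.dtype)
for idx in range(a.size):
    dst[flip_index(idx, stride, trailing)] = a.flat[idx]

same = (dst.reshape(a.shape) == np.flip(a, axis=1)).all()
```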




[GitHub] eric-haibin-lin closed pull request #8791: Fix check of kvstore type

2017-11-22 Thread GitBox
eric-haibin-lin closed pull request #8791: Fix check of kvstore type
URL: https://github.com/apache/incubator-mxnet/pull/8791
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/kvstore.py b/python/mxnet/kvstore.py
index a6d3aa519f..23eb454b5b 100644
--- a/python/mxnet/kvstore.py
+++ b/python/mxnet/kvstore.py
@@ -408,7 +408,7 @@ def set_gradient_compression(self, compression_params):
 Other keys in this dictionary are optional and specific to the type
 of gradient compression.
 """
-if (self.type == 'device') or ('dist' in self.type):
+if ('device' in self.type) or ('dist' in self.type):
 ckeys, cvals = _ctype_dict(compression_params)
 check_call(_LIB.MXKVStoreSetGradientCompression(self.handle,
 
mx_uint(len(compression_params)),


 




[incubator-mxnet] branch v1.0.0 updated: fix check (#8791)

2017-11-22 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new 92d848f  fix check (#8791)
92d848f is described below

commit 92d848ff293b34e470b5f70f501c26b03ced07be
Author: Rahul Huilgol 
AuthorDate: Wed Nov 22 22:58:55 2017 -0800

fix check (#8791)

Signed-off-by: Rahul 
---
 python/mxnet/kvstore.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/python/mxnet/kvstore.py b/python/mxnet/kvstore.py
index a6d3aa5..23eb454 100644
--- a/python/mxnet/kvstore.py
+++ b/python/mxnet/kvstore.py
@@ -408,7 +408,7 @@ class KVStore(object):
 Other keys in this dictionary are optional and specific to the type
 of gradient compression.
 """
-if (self.type == 'device') or ('dist' in self.type):
+if ('device' in self.type) or ('dist' in self.type):
 ckeys, cvals = _ctype_dict(compression_params)
 check_call(_LIB.MXKVStoreSetGradientCompression(self.handle,
 
mx_uint(len(compression_params)),

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].






[GitHub] eric-haibin-lin commented on issue #8791: Fix check of kvstore type

2017-11-22 Thread GitBox
eric-haibin-lin commented on issue #8791: Fix check of kvstore type
URL: https://github.com/apache/incubator-mxnet/pull/8791#issuecomment-346544211
 
 
   LGTM




[GitHub] szha commented on issue #8788: [WIP] fix build

2017-11-22 Thread GitBox
szha commented on issue #8788: [WIP] fix build
URL: https://github.com/apache/incubator-mxnet/pull/8788#issuecomment-346537490
 
 
   Checked offline with Haibin and team. They found the possible cause to be 
85d4bd2c. Closing for now and will let author address this.




[GitHub] szha closed pull request #8788: [WIP] fix build

2017-11-22 Thread GitBox
szha closed pull request #8788: [WIP] fix build
URL: https://github.com/apache/incubator-mxnet/pull/8788
 
 
   


diff --git a/src/common/rtc.cc b/src/common/rtc.cc
index cc51aaa108..c48afc6895 100644
--- a/src/common/rtc.cc
+++ b/src/common/rtc.cc
@@ -74,7 +74,7 @@ CudaModule::Chunk::~Chunk() {
 CUfunction CudaModule::Chunk::GetFunction(
 const std::string& mangled_name,
 const Context& ctx) {
-  CHECK_EQ(ctx.dev_mask(), gpu::kDevMask)
+  CHECK_EQ(ctx.dev_mask(), Context::kGPU)
   << "CUDA Runtime compilation only supports Nvidia GPU.";
   auto iter = mod_.find(ctx.dev_id);
   CUmodule module;


 




[GitHub] rahul003 opened a new pull request #8791: Fix check of kvstore type

2017-11-22 Thread GitBox
rahul003 opened a new pull request #8791: Fix check of kvstore type
URL: https://github.com/apache/incubator-mxnet/pull/8791
 
 
   ## Description ##
A kvstore type can be something like 'local' and still contain 'device' in its 
name. Gradient compression is supported for such kvstores as well.
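   
   A sketch of the behavior difference (the compound type strings below follow 
   MXNet's naming, e.g. `local_allreduce_device`, but are listed here as 
   plausible examples):

```python
types = ['device', 'local_allreduce_device', 'dist_sync', 'local']

# Old check: exact equality misses compound type names.
old = [t for t in types if t == 'device' or 'dist' in t]
# New check: a substring match also catches 'local_allreduce_device'.
new = [t for t in types if 'device' in t or 'dist' in t]
```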
   




[GitHub] eric-haibin-lin commented on issue #8790: Fix weird hang bug due to cuInit sometimes calls fork

2017-11-22 Thread GitBox
eric-haibin-lin commented on issue #8790: Fix weird hang bug due to cuInit 
sometimes calls fork
URL: https://github.com/apache/incubator-mxnet/pull/8790#issuecomment-346542198
 
 
   @ptrendx any idea why fork was called by `cuinit`? 




[GitHub] szha commented on issue #8788: [WIP] fix build

2017-11-22 Thread GitBox
szha commented on issue #8788: [WIP] fix build
URL: https://github.com/apache/incubator-mxnet/pull/8788#issuecomment-346533512
 
 
   @eric-haibin-lin @larroy this fixes the build for me on the set-up described 
in #8786. Still verifying tests.




[GitHub] szha opened a new pull request #8788: [WIP] fix build

2017-11-22 Thread GitBox
szha opened a new pull request #8788: [WIP] fix build
URL: https://github.com/apache/incubator-mxnet/pull/8788
 
 
   ## Description ##
   Addresses #8786 with the same fix as #8692




[GitHub] szha commented on issue #8784: Fix warning on meaningless return type qualifier

2017-11-22 Thread GitBox
szha commented on issue #8784: Fix warning on meaningless return type qualifier
URL: https://github.com/apache/incubator-mxnet/pull/8784#issuecomment-346536943
 
 
   master needs to be patched too when build is fixed.




[GitHub] szha closed pull request #8784: Fix warning on meaningless return type qualifier

2017-11-22 Thread GitBox
szha closed pull request #8784: Fix warning on meaningless return type qualifier
URL: https://github.com/apache/incubator-mxnet/pull/8784
 
 
   


diff --git a/src/operator/operator_tune.h b/src/operator/operator_tune.h
index 2088d4603d..b343e83a02 100644
--- a/src/operator/operator_tune.h
+++ b/src/operator/operator_tune.h
@@ -154,7 +154,7 @@ class OperatorTuneByType : public OperatorTuneBase {
* \brief Get the current tuning mode
* \return tune::TuningMode value for the current tuning mode
*/
-  static MSHADOW_CINLINE volatile tune::TuningMode tuning_mode() {
+  static MSHADOW_CINLINE tune::TuningMode tuning_mode() {
 return tuning_mode_;
   }
 


 




[incubator-mxnet] branch v1.0.0 updated: Fix warning on meaningless return type qualifier (#8784)

2017-11-22 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new fef1841  Fix warning on meaningless return type qualifier (#8784)
fef1841 is described below

commit fef1841e1ae352e338b5d4cd52fcae17de21
Author: Pedro Larroy <928489+lar...@users.noreply.github.com>
AuthorDate: Wed Nov 22 21:58:12 2017 -0800

Fix warning on meaningless return type qualifier (#8784)
---
 src/operator/operator_tune.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/operator/operator_tune.h b/src/operator/operator_tune.h
index 2088d46..b343e83 100644
--- a/src/operator/operator_tune.h
+++ b/src/operator/operator_tune.h
@@ -154,7 +154,7 @@ class OperatorTuneByType : public OperatorTuneBase {
* \brief Get the current tuning mode
* \return tune::TuningMode value for the current tuning mode
*/
-  static MSHADOW_CINLINE volatile tune::TuningMode tuning_mode() {
+  static MSHADOW_CINLINE tune::TuningMode tuning_mode() {
 return tuning_mode_;
   }
 



[GitHub] piiswrong opened a new pull request #8790: Fix weird hang bug due to cuInit sometimes calls fork

2017-11-22 Thread GitBox
piiswrong opened a new pull request #8790: Fix weird hang bug due to cuInit 
sometimes calls fork
URL: https://github.com/apache/incubator-mxnet/pull/8790
 
 




[GitHub] cjolivier01 commented on issue #8668: Precision error setting NDArray from np.float32 scalar

2017-11-22 Thread GitBox
cjolivier01 commented on issue #8668: Precision error setting NDArray from 
np.float32 scalar
URL: 
https://github.com/apache/incubator-mxnet/issues/8668#issuecomment-346370555
 
 
   Yes
   
   On Wed, Nov 22, 2017 at 6:14 AM Christopher Barber wrote:
   
   > Obviously passing floating point data as (decimal?) strings is horribly
   > inefficient and prone to loss of precision. Is this something that is
   > expected to change at some point?
   




[GitHub] tdomhan commented on issue #8334: Bugfix: Python 3 compatiblity during optimizer serialization.

2017-11-22 Thread GitBox
tdomhan commented on issue #8334: Bugfix: Python 3 compatiblity during 
optimizer serialization.
URL: https://github.com/apache/incubator-mxnet/pull/8334#issuecomment-346376475
 
 
   alright, finally all checks passed. Can we merge this?




[GitHub] ZiyueHuang opened a new pull request #8774: remove meaningless type qualifier

2017-11-22 Thread GitBox
ZiyueHuang opened a new pull request #8774: remove meaningless type qualifier
URL: https://github.com/apache/incubator-mxnet/pull/8774
 
 
   ## Description ##
   Otherwise there are so many warnings `warning: type qualifier on return type 
is meaningless`.
   
   ```
   g++ --version
   g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11)
   Copyright (C) 2015 Free Software Foundation, Inc.
   This is free software; see the source for copying conditions.  There is NO
   warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
   ```
   cc @eric-haibin-lin @cjolivier01 
   




[GitHub] Soonhwan-Kwon closed pull request #8674: ADD CapsNet example

2017-11-22 Thread GitBox
Soonhwan-Kwon closed pull request #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674
 
 
   


diff --git a/example/capsnet/README.md b/example/capsnet/README.md
new file mode 100644
index 00..9e52da0583
--- /dev/null
+++ b/example/capsnet/README.md
@@ -0,0 +1,45 @@
+**CapsNet-MXNet**
+=
+
+This example is MXNet implementation of 
[CapsNet](https://arxiv.org/abs/1710.09829):  
+Sara Sabour, Nicholas Frosst, Geoffrey E Hinton. Dynamic Routing Between 
Capsules. NIPS 2017
+- The current best test error is 0.41%  
+- The average test error on paper is 0.25%  
+
+Due to the permission issue, this example is maintained in this 
[repository](https://github.com/samsungsds-rnd/capsnet.mxnet) separately.
+* * *
+## **Usage**
+Install scipy with pip  
+```
+pip install scipy
+```
+
+On Single gpu
+```
+python capsulenet.py --devices gpu0
+```
+On Multi gpus
+```
+python capsulenet.py --devices gpu0,gpu1
+```
+
+* * *
+## **Prerequisities**
+
+MXNet version above (0.11.0)
+scipy version above (0.19.0)
+
+***
+## **Results**  
+Train time takes about 36 seconds for each epoch (batch_size=100, lr=0.001, 2 
gtx 1080 gpus)  
+and we limited number of epoch to 100 as default to limit our training time(1 
hour).
+
+CapsNet classification test error on MNIST  
+
+```
+python capsulenet.py --devices gpu0,gpu1 --lr 0.0005 --batch_size 100 --num_routing 3 --decay 0.9
+```
+
+| Epoch | train err | test err | train loss | test loss |
+| :---: | :---: | :---: | :---: | :---: |
+| 62 | 0.25 | 0.41 | 0.000247 | 0.000267 |
diff --git a/example/capsnet/capsulelayers.py b/example/capsnet/capsulelayers.py
new file mode 100644
index 00..5ac4fad491
--- /dev/null
+++ b/example/capsnet/capsulelayers.py
@@ -0,0 +1,106 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import mxnet as mx
+
+
+def squash(data, squash_axis, name=''):
+    epsilon = 1e-08
+    s_squared_norm = mx.sym.sum(data=mx.sym.square(data, name='square_'+name),
+                                axis=squash_axis, keepdims=True, name='s_squared_norm_'+name)
+    scale = s_squared_norm / (1 + s_squared_norm) / mx.sym.sqrt(data=(s_squared_norm+epsilon),
+                                                                name='s_squared_norm_sqrt_'+name)
+    squashed_net = mx.sym.broadcast_mul(scale, data, name='squashed_net_'+name)
+    return squashed_net
+
+
+def primary_caps(data, dim_vector, n_channels, kernel, strides, name=''):
+    out = mx.sym.Convolution(data=data,
+                             num_filter=dim_vector * n_channels,
+                             kernel=kernel,
+                             stride=strides,
+                             name=name
+                             )
+    out = mx.sym.Reshape(data=out, shape=(0, -1, dim_vector))
+    out = squash(out, squash_axis=2)
+    return out
+
+
+class CapsuleLayer:
+    """
+    The capsule layer with dynamic routing.
+    [batch_size, input_num_capsule, input_dim_vector] => [batch_size, num_capsule, dim_vector]
+    """
+
+    def __init__(self, num_capsule, dim_vector, batch_size, kernel_initializer, bias_initializer, num_routing=3):
+        self.num_capsule = num_capsule
+        self.dim_vector = dim_vector
+        self.batch_size = batch_size
+        self.num_routing = num_routing
+        self.kernel_initializer = kernel_initializer
+        self.bias_initializer = bias_initializer
+
+    def __call__(self, data):
+        _, out_shapes, __ = data.infer_shape(data=(self.batch_size, 1, 28, 28))
+        _, input_num_capsule, input_dim_vector = out_shapes[0]
+
+        # build w and bias
+        # W : (input_num_capsule, num_capsule, input_dim_vector, dim_vector)
+        # bias : (batch_size, input_num_capsule, num_capsule, 1, 1)
+        w = mx.sym.Variable('Weight',
+                            shape=(1, input_num_capsule, self.num_capsule, input_dim_vector, self.dim_vector),
+ 
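For reference, the `squash` nonlinearity used in this example can be sketched in plain NumPy (a hypothetical standalone helper, not part of the PR) to see its effect: short vectors are shrunk toward zero, while long vectors are scaled to just under unit length.

```python
import numpy as np

def squash_np(data, axis=-1, eps=1e-8):
    # same formula as the mx.sym version above:
    # scale = |s|^2 / (1 + |s|^2) / sqrt(|s|^2 + eps)
    s2 = np.sum(np.square(data), axis=axis, keepdims=True)
    scale = s2 / (1.0 + s2) / np.sqrt(s2 + eps)
    return scale * data

vecs = np.array([[0.01, 0.0, 0.0],    # short vector -> squashed near zero
                 [100.0, 0.0, 0.0]])  # long vector  -> norm just below 1
out = squash_np(vecs, axis=1)
norms = np.linalg.norm(out, axis=1)
assert norms[0] < 0.01
assert 0.99 < norms[1] < 1.0
```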

[GitHub] analog-cbarber commented on issue #8756: As there are many JAVA programer, and JDK9 has HAD JAVA REPL JSHELL like PYTHON OR SCALA shell, it will be easy to use JAVA training model. , When wi

2017-11-22 Thread GitBox
analog-cbarber commented on issue #8756: As there are many JAVA programer, and 
JDK9 has HAD JAVA REPL JSHELL like  PYTHON OR SCALA shell, it will be easy to 
use JAVA training model. , When will MXNET support JAVA programing language? 
URL: 
https://github.com/apache/incubator-mxnet/issues/8756#issuecomment-346360904
 
 
   While I agree that a Java front-end makes a lot of sense, I really would not 
expect to see one until someone contributes one and commits to maintaining it.
   




[GitHub] analog-cbarber commented on issue #8668: Precision error setting NDArray from np.float32 scalar

2017-11-22 Thread GitBox
analog-cbarber commented on issue #8668: Precision error setting NDArray from 
np.float32 scalar
URL: 
https://github.com/apache/incubator-mxnet/issues/8668#issuecomment-346361655
 
 
   Obviously passing floating point data as (decimal?) strings is horribly 
inefficient and prone to loss of precision. Is this something that is expected 
to change at some point?




[GitHub] madjam opened a new pull request #8776: Fix security doc link

2017-11-22 Thread GitBox
madjam opened a new pull request #8776: Fix security doc link
URL: https://github.com/apache/incubator-mxnet/pull/8776
 
 
   




[GitHub] starimpact opened a new issue #8775: Is the newest cpp package compatible with v0.8.0?

2017-11-22 Thread GitBox
starimpact opened a new issue #8775: Is the newest cpp package compatible with 
v0.8.0?
URL: https://github.com/apache/incubator-mxnet/issues/8775
 
 
   




[GitHub] cjolivier01 commented on a change in pull request #8737: Use RAII and fix Coverity resource leaks #10371 and others

2017-11-22 Thread GitBox
cjolivier01 commented on a change in pull request #8737: Use RAII and fix 
Coverity resource leaks #10371 and others
URL: https://github.com/apache/incubator-mxnet/pull/8737#discussion_r152614153
 
 

 ##
 File path: cpp-package/example/alexnet.cpp
 ##
 @@ -215,7 +215,7 @@ int main(int argc, char const *argv[]) {
   args_map["label"] = NDArray(Shape(batch_size), ctx);
 
   /*with data and label, executor can be generated automatically*/
-  auto *exec = Net.SimpleBind(ctx, args_map);
+  auto exec = Net.SimpleBind(ctx, args_map);
 
 Review comment:
   Why not use unique_ptr rather than a stack variable?
   




[incubator-mxnet] branch master updated: Explicitly convert float value (#8758)

2017-11-22 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 5518986  Explicitly convert float value (#8758)
5518986 is described below

commit 55189860dbeb9486f745403760e10619f40de488
Author: reminisce 
AuthorDate: Wed Nov 22 08:25:55 2017 -0800

Explicitly convert float value (#8758)

* Explicitly convert float value

* Add unit test function
---
 python/mxnet/ndarray/ndarray.py   |  2 +-
 tests/python/unittest/test_ndarray.py | 10 ++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index 91d0e03..a45a6a8 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -691,7 +691,7 @@ fixed-size items.
             value.copyto(self)
         elif isinstance(value, numeric_types):
             _internal._full(shape=shape, ctx=self.context,
-                            dtype=self.dtype, value=value, out=self)
+                            dtype=self.dtype, value=float(value), out=self)
         elif isinstance(value, (np.ndarray, np.generic)):
             if isinstance(value, np.generic) or value.shape != shape:
                 value = np.broadcast_to(value, shape)
diff --git a/tests/python/unittest/test_ndarray.py 
b/tests/python/unittest/test_ndarray.py
index 8e1f68f..5512b07 100644
--- a/tests/python/unittest/test_ndarray.py
+++ b/tests/python/unittest/test_ndarray.py
@@ -926,6 +926,16 @@ def test_ndarray_indexing():
     test_getitem_autograd(np_array, index[0])
 
 
+def test_assign_float_value_to_ndarray():
+    """Test case from https://github.com/apache/incubator-mxnet/issues/8668"""
+    a = np.array([47.844944], dtype=np.float32)
+    b = mx.nd.zeros(1, dtype=np.float32)
+    b[0] = a
+    assert same(a, b.asnumpy())
+    b[0] = a[0]
+    assert same(a, b.asnumpy())
+
+
 if __name__ == '__main__':
     import nose
     nose.runmodule()
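The one-line fix above works because widening a `np.float32` to a Python `float` (an IEEE-754 double) is lossless, so the backend receives the exact value rather than a possibly-rounded decimal rendering. A minimal sketch of that property:

```python
import numpy as np

# float32 cannot represent 0.1 exactly; x32 holds the nearest float32 value
x32 = np.float32(0.1)

# widening float32 -> Python float (a C double) never loses precision
x64 = float(x32)

# round-tripping through the Python float recovers the identical float32
assert np.float32(x64) == x32

# ...even though the widened value differs from the decimal literal 0.1
assert x64 != 0.1
```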

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[GitHub] cjolivier01 closed pull request #8758: Explicitly convert float value

2017-11-22 Thread GitBox
cjolivier01 closed pull request #8758: Explicitly convert float value
URL: https://github.com/apache/incubator-mxnet/pull/8758
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index 91d0e03e3d..a45a6a8247 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -691,7 +691,7 @@ def _set_nd_basic_indexing(self, key, value):
             value.copyto(self)
         elif isinstance(value, numeric_types):
             _internal._full(shape=shape, ctx=self.context,
-                            dtype=self.dtype, value=value, out=self)
+                            dtype=self.dtype, value=float(value), out=self)
         elif isinstance(value, (np.ndarray, np.generic)):
             if isinstance(value, np.generic) or value.shape != shape:
                 value = np.broadcast_to(value, shape)
diff --git a/tests/python/unittest/test_ndarray.py 
b/tests/python/unittest/test_ndarray.py
index 8e1f68fd62..5512b07c77 100644
--- a/tests/python/unittest/test_ndarray.py
+++ b/tests/python/unittest/test_ndarray.py
@@ -926,6 +926,16 @@ def test_getitem_autograd(np_array, index):
     test_getitem_autograd(np_array, index[0])
 
 
+def test_assign_float_value_to_ndarray():
+    """Test case from https://github.com/apache/incubator-mxnet/issues/8668"""
+    a = np.array([47.844944], dtype=np.float32)
+    b = mx.nd.zeros(1, dtype=np.float32)
+    b[0] = a
+    assert same(a, b.asnumpy())
+    b[0] = a[0]
+    assert same(a, b.asnumpy())
+
+
 if __name__ == '__main__':
     import nose
     nose.runmodule()


 




[GitHub] cjolivier01 commented on a change in pull request #8680: Fix cmake library path when installing python package

2017-11-22 Thread GitBox
cjolivier01 commented on a change in pull request #8680: Fix cmake library path 
when installing python package
URL: https://github.com/apache/incubator-mxnet/pull/8680#discussion_r152616626
 
 

 ##
 File path: python/mxnet/libinfo.py
 ##
 @@ -31,7 +31,7 @@ def find_lib_path():
 """
 curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
 api_path = os.path.join(curr_path, '../../lib/')
-cmake_build_path = os.path.join(curr_path, '../../build/Release/')
+cmake_build_path = os.path.join(curr_path, '../../build/')
 
 Review comment:
   CMake often tends to make a separate output directory for each build 
configuration "Debug", "Release", "RelWithDebInfo", etc. (when told to do so, 
for example by CLion, except it calls it cmake-build-debug, 
cmake-build-release, etc.). While I am not crazy about trying to guess the 
cmake output directory, I think explicitly pointing to "Release" when what you 
think you're running is "Debug" is dangerous.
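A hedged sketch of that idea in Python (the directory names below are illustrative assumptions, not MXNet's actual search list): probe several candidate cmake output directories instead of hard-coding a single configuration such as "Release".

```python
import os

def candidate_lib_dirs(curr_path):
    # Probe the plain cmake build dir plus the per-configuration
    # subdirectories that multi-config cmake generators commonly create.
    dirs = [os.path.join(curr_path, '../../lib/'),
            os.path.join(curr_path, '../../build/')]
    for config in ('Release', 'Debug', 'RelWithDebInfo', 'MinSizeRel'):
        dirs.append(os.path.join(curr_path, '../../build/', config))
    return [os.path.normpath(d) for d in dirs]

# a caller would then pick the first directory actually containing libmxnet
dirs = candidate_lib_dirs('/opt/mxnet/python/mxnet')
```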




[GitHub] alexmosc commented on issue #7524: Is there a tutorial of using mxnet R LSTM for time series forecasting?

2017-11-22 Thread GitBox
alexmosc commented on issue #7524: Is there a tutorial of using mxnet R LSTM 
for time series forecasting? 
URL: 
https://github.com/apache/incubator-mxnet/issues/7524#issuecomment-346409398
 
 
   It would still be **very** helpful to have a tutorial on a simple 
time-series modelling with MXNET in R.




[incubator-mxnet] branch v1.0.0 updated: Explicitly convert float value (#8758)

2017-11-22 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new b63bec6  Explicitly convert float value (#8758)
b63bec6 is described below

commit b63bec687099cdb09dde8b521b9d0eb930cbc8db
Author: reminisce 
AuthorDate: Wed Nov 22 08:25:55 2017 -0800

Explicitly convert float value (#8758)

* Explicitly convert float value

* Add unit test function
---
 python/mxnet/ndarray/ndarray.py   |  2 +-
 tests/python/unittest/test_ndarray.py | 10 ++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index 91d0e03..a45a6a8 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -691,7 +691,7 @@ fixed-size items.
             value.copyto(self)
         elif isinstance(value, numeric_types):
             _internal._full(shape=shape, ctx=self.context,
-                            dtype=self.dtype, value=value, out=self)
+                            dtype=self.dtype, value=float(value), out=self)
         elif isinstance(value, (np.ndarray, np.generic)):
             if isinstance(value, np.generic) or value.shape != shape:
                 value = np.broadcast_to(value, shape)
diff --git a/tests/python/unittest/test_ndarray.py 
b/tests/python/unittest/test_ndarray.py
index 8e1f68f..5512b07 100644
--- a/tests/python/unittest/test_ndarray.py
+++ b/tests/python/unittest/test_ndarray.py
@@ -926,6 +926,16 @@ def test_ndarray_indexing():
     test_getitem_autograd(np_array, index[0])
 
 
+def test_assign_float_value_to_ndarray():
+    """Test case from https://github.com/apache/incubator-mxnet/issues/8668"""
+    a = np.array([47.844944], dtype=np.float32)
+    b = mx.nd.zeros(1, dtype=np.float32)
+    b[0] = a
+    assert same(a, b.asnumpy())
+    b[0] = a[0]
+    assert same(a, b.asnumpy())
+
+
 if __name__ == '__main__':
     import nose
     nose.runmodule()



[incubator-mxnet] branch v1.0.0 updated: Updating ps-lite submodule (#8769)

2017-11-22 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new c4a5aad  Updating ps-lite submodule (#8769)
c4a5aad is described below

commit c4a5aad0480a63f7b74b80ca766fe444ed5b5bf9
Author: Madan Jampani 
AuthorDate: Wed Nov 22 09:32:39 2017 -0800

Updating ps-lite submodule (#8769)
---
 ps-lite | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ps-lite b/ps-lite
index bdd4c67..2ce8b9a 160000
--- a/ps-lite
+++ b/ps-lite
@@ -1 +1 @@
-Subproject commit bdd4c67e9e34dc0b8350ce306b0caa737eb31c83
+Subproject commit 2ce8b9a256207947acfa2cb9b09ab74b8de74547



[GitHub] ZiyueHuang commented on issue #8774: remove meaningless type qualifier

2017-11-22 Thread GitBox
ZiyueHuang commented on issue #8774: remove meaningless type qualifier
URL: https://github.com/apache/incubator-mxnet/pull/8774#issuecomment-346399433
 
 
   @cjolivier01 Thanks for your reference! Update :)




[GitHub] cjolivier01 commented on a change in pull request #8737: Use RAII and fix Coverity resource leaks #10371 and others

2017-11-22 Thread GitBox
cjolivier01 commented on a change in pull request #8737: Use RAII and fix 
Coverity resource leaks #10371 and others
URL: https://github.com/apache/incubator-mxnet/pull/8737#discussion_r152614153
 
 

 ##
 File path: cpp-package/example/alexnet.cpp
 ##
 @@ -215,7 +215,7 @@ int main(int argc, char const *argv[]) {
   args_map["label"] = NDArray(Shape(batch_size), ctx);
 
   /*with data and label, executor can be generated automatically*/
-  auto *exec = Net.SimpleBind(ctx, args_map);
+  auto exec = Net.SimpleBind(ctx, args_map);
 
 Review comment:
   Why not use unique_ptr rather than a stack variable?  That 
wouldn't change the interface.
   




[GitHub] cjolivier01 commented on a change in pull request #8737: Use RAII and fix Coverity resource leaks #10371 and others

2017-11-22 Thread GitBox
cjolivier01 commented on a change in pull request #8737: Use RAII and fix 
Coverity resource leaks #10371 and others
URL: https://github.com/apache/incubator-mxnet/pull/8737#discussion_r152614866
 
 

 ##
 File path: cpp-package/example/alexnet.cpp
 ##
 @@ -215,7 +215,7 @@ int main(int argc, char const *argv[]) {
   args_map["label"] = NDArray(Shape(batch_size), ctx);
 
   /*with data and label, executor can be generated automatically*/
-  auto *exec = Net.SimpleBind(ctx, args_map);
+  auto exec = Net.SimpleBind(ctx, args_map);
 
 Review comment:
   Ok, I see it is returning unique_ptr. Any chance you can avoid auto in this case, since it's not obvious what the type is from the code line?




[GitHub] cjolivier01 closed pull request #8769: Updating ps-lite submodule

2017-11-22 Thread GitBox
cjolivier01 closed pull request #8769: Updating ps-lite submodule
URL: https://github.com/apache/incubator-mxnet/pull/8769
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/ps-lite b/ps-lite
index bdd4c67e9e..2ce8b9a256 160000
--- a/ps-lite
+++ b/ps-lite
@@ -1 +1 @@
-Subproject commit bdd4c67e9e34dc0b8350ce306b0caa737eb31c83
+Subproject commit 2ce8b9a256207947acfa2cb9b09ab74b8de74547


 




[GitHub] szha closed pull request #8770: [Merge into v1.0.0 ONLY][Copy of PR #8704] Prep1.0: bump the version number and 0.12.1 updates

2017-11-22 Thread GitBox
szha closed pull request #8770: [Merge into v1.0.0 ONLY][Copy of PR #8704] 
Prep1.0: bump the version number and 0.12.1 updates
URL: https://github.com/apache/incubator-mxnet/pull/8770
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/NEWS.md b/NEWS.md
index 666b5d88e6..740621038d 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,5 +1,18 @@
 MXNet Change Log
 
+## 0.12.1
+### Bug-fixes
+  - Added GPU support for the `syevd` operator which ensures that there is GPU 
support for all linalg-operators.
+  - Bugfix for `syevd` on CPU such that it works for `float32`.
+  - Fixed API call when `OMP_NUM_THREADS` environment variable is set. 
+  - Fixed `MakeNonlossGradNode` bug.
+  - Fixed bug related to passing `dtype` to `array()`. 
+  - Fixed some minor bugs for sparse distributed training.
+  - Fixed a bug on `Slice` accessing uninitialized memory in `param.begin` in 
the file `matrix_op-inl.h`. 
+  - Fixed `gluon.data.RecordFileDataset`.
+  - Fixed a bug that caused `autograd` to crash on some networks.
+  
+  
 ## 0.12.0
 ### Performance
   - Added full support for NVIDIA Volta GPU Architecture and CUDA 9. Training 
CNNs is up to 3.5x faster than Pascal when using float16 precision.
diff --git a/R-package/DESCRIPTION b/R-package/DESCRIPTION
index 3d57ea876f..6e0f93294b 100644
--- a/R-package/DESCRIPTION
+++ b/R-package/DESCRIPTION
@@ -1,7 +1,7 @@
 Package: mxnet
 Type: Package
 Title: MXNet: A Flexible and Efficient Machine Learning Library for 
Heterogeneous Distributed Systems
-Version: 0.12.1
+Version: 1.0.0
 Date: 2017-06-27
 Author: Tianqi Chen, Qiang Kou, Tong He
 Maintainer: Qiang Kou 
diff --git a/README.md b/README.md
index fc252a7a72..0326412541 100644
--- a/README.md
+++ b/README.md
@@ -22,6 +22,7 @@ deep learning systems, and interesting insights of DL systems 
for hackers.
 
 What's New
 --
+* [Version 0.12.1 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.1) - MXNet 0.12.1 Patch Release.
 * [Version 0.12.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.0) - MXNet 0.12.0 Release.
 * [Version 0.11.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.11.0) - MXNet 0.11.0 Release.
 * [Apache Incubator](http://incubator.apache.org/projects/mxnet.html) - We are now an Apache Incubator project.
diff --git a/docs/build_version_doc/build_all_version.sh 
b/docs/build_version_doc/build_all_version.sh
index 2d33bd72c4..bf02a62a15 100755
--- a/docs/build_version_doc/build_all_version.sh
+++ b/docs/build_version_doc/build_all_version.sh
@@ -21,7 +21,7 @@
 # Built files are stored in $built
 # Version numbers are stored in $tag_list.
 # Version numbers are ordered from latest to old and final one is master.
-tag_list="0.12.0 0.11.0 master"
+tag_list="1.0.0 0.12.0 0.11.0 master"
 
mxnet_url="https://github.com/apache/incubator-mxnet.git"
 mxnet_folder="apache_mxnet"
diff --git a/include/mxnet/base.h b/include/mxnet/base.h
index 7c136a6470..84b2fea712 100644
--- a/include/mxnet/base.h
+++ b/include/mxnet/base.h
@@ -109,11 +109,11 @@
 #endif
 
 /*! \brief major version */
-#define MXNET_MAJOR 0
+#define MXNET_MAJOR 1
 /*! \brief minor version */
-#define MXNET_MINOR 12
+#define MXNET_MINOR 0
 /*! \brief patch version */
-#define MXNET_PATCH 1
+#define MXNET_PATCH 0
 /*! \brief mxnet version */
 #define MXNET_VERSION (MXNET_MAJOR*10000 + MXNET_MINOR*100 + MXNET_PATCH)
 /*! \brief helper for making version number */
diff --git a/python/mxnet/libinfo.py b/python/mxnet/libinfo.py
index d4d100e12d..ce60606236 100644
--- a/python/mxnet/libinfo.py
+++ b/python/mxnet/libinfo.py
@@ -61,4 +61,4 @@ def find_lib_path():
 
 
 # current version
-__version__ = "0.12.1"
+__version__ = "1.0.0"
diff --git a/scala-package/assembly/linux-x86_64-cpu/pom.xml 
b/scala-package/assembly/linux-x86_64-cpu/pom.xml
index f15a7e315d..10f5d39638 100644
--- a/scala-package/assembly/linux-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-full-parent_2.11
-0.12.1-SNAPSHOT
+1.0.0-SNAPSHOT
 ../pom.xml
   
 
@@ -18,12 +18,12 @@
 
   ml.dmlc.mxnet
   mxnet-core_${scala.binary.version}
-  0.12.1-SNAPSHOT
+  1.0.0-SNAPSHOT
 
 
   ml.dmlc.mxnet
   libmxnet-scala-linux-x86_64-cpu
-  0.12.1-SNAPSHOT
+  1.0.0-SNAPSHOT
   so
 
   
diff --git a/scala-package/assembly/linux-x86_64-gpu/pom.xml 
b/scala-package/assembly/linux-x86_64-gpu/pom.xml
index 81e4d1ec59..9c9af8422d 100644
--- a/scala-package/assembly/linux-x86_64-gpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-full-parent_2.11
-0.12.1-SNAPSHOT
+1.0.0-SNAPSHOT
   
