[GitHub] reminisce commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-03-03 Thread GitBox
reminisce commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model 
Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#issuecomment-370205130
 
 
   @wentingj Yes, I noticed that too. I think there is still value in doing 
this; we can always keep improving the performance in the long run. I am 
waiting for @marcoabreu to set up the P3 instance properly. The current OS 
image used on the P3 does not have the cuDNN library installed. It may take 
3-4 weeks to get this PR merged, so please plan accordingly. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thinksanky commented on issue #55: Fixing 404 issue caused due to missing how_to directory

2018-03-03 Thread GitBox
thinksanky commented on issue #55: Fixing 404 issue caused due to missing 
how_to directory
URL: 
https://github.com/apache/incubator-mxnet-site/pull/55#issuecomment-370204760
 
 
   @yzhliu 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thinksanky opened a new pull request #55: Fixing 404 issue caused due to missing how_to directory

2018-03-03 Thread GitBox
thinksanky opened a new pull request #55: Fixing 404 issue caused due to 
missing how_to directory
URL: https://github.com/apache/incubator-mxnet-site/pull/55
 
 
   ## Description ##
   * Added the how_to folder with all the missing files and redirected them to 
faq/.
   * Example redirection - 
https://github.com/apache/incubator-mxnet-site/compare/asf-site...thinksanky:fix_how_to_dir?expand=1#diff-b954bca37bdb86ae6e61ef923167980eR6
   * This will fix - https://issues.amazon.com/AWSDBUX-19332
   
   
   
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on a change in pull request #9887: Non-blocking row_sparse_pull

2018-03-03 Thread GitBox
reminisce commented on a change in pull request #9887: Non-blocking 
row_sparse_pull 
URL: https://github.com/apache/incubator-mxnet/pull/9887#discussion_r172038187
 
 

 ##
 File path: tests/python/unittest/test_kvstore.py
 ##
 @@ -76,7 +76,7 @@ def check_row_sparse_pull(kv, count):
     for i in range(count):
         vals.append(mx.nd.zeros(shape).tostype('row_sparse'))
         row_id = np.random.randint(num_rows, size=num_rows)
-        row_ids.append(mx.nd.array(row_id))
+        row_ids.append(mx.nd.array(row_id).reshape((2,2)))
 
 Review comment:
   Why reshape?
   One suggestion: (2, 2) is too hard-coded; if `shape` is changed at the 
beginning of the test, the test will fail. A shape-agnostic sketch follows 
below.
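   A minimal sketch of the suggestion (editor's illustration, assuming 
`num_rows` stays even): derive the reshape dimensions from `num_rows` instead 
of hard-coding `(2, 2)`.
   ```python
   # (-1, 2) adapts to whatever num_rows is set to at the top of the test.
   row_id = np.random.randint(num_rows, size=num_rows)
   row_ids.append(mx.nd.array(row_id).reshape((-1, 2)))
   ```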


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on a change in pull request #9981: [fix issue#9976] The assignment problem in NDArray

2018-03-03 Thread GitBox
reminisce commented on a change in pull request #9981: [fix issue#9976] The 
assignment problem in NDArray
URL: https://github.com/apache/incubator-mxnet/pull/9981#discussion_r172037661
 
 

 ##
 File path: tests/python/unittest/test_ndarray.py
 ##
 @@ -1099,6 +1099,39 @@ def test_assign_float_value_to_ndarray():
     b[0] = a[0]
     assert same(a, b.asnumpy())
 
+@with_seed()
+def test_ndarray_assignment():
+    H, W = 10, 10
+    a_np = np.random.random((H, W))
+    a_nd = mx.nd.array(a_np)
+    a_nd_id = id(a_nd)
+
+    # assign directly
+    a_np[0] = a_np[1]
+    a_nd[0] = a_nd[1]
+    assert np.allclose(a_np, a_nd.asnumpy())
 
 Review comment:
   Use `assert same(a_np, a_nd.asnumpy())`. Same for all others.
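   For reference, a minimal sketch of the suggested change (`same` comes from 
`mxnet.test_utils` and checks exact array equality, unlike `np.allclose`):
   ```python
   from mxnet.test_utils import same
   assert same(a_np, a_nd.asnumpy())
   ```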


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on a change in pull request #9981: [fix issue#9976] The assignment problem in NDArray

2018-03-03 Thread GitBox
reminisce commented on a change in pull request #9981: [fix issue#9976] The 
assignment problem in NDArray
URL: https://github.com/apache/incubator-mxnet/pull/9981#discussion_r172037688
 
 

 ##
 File path: tests/python/unittest/test_ndarray.py
 ##
 @@ -1099,6 +1099,39 @@ def test_assign_float_value_to_ndarray():
     b[0] = a[0]
     assert same(a, b.asnumpy())
 
+@with_seed()
+def test_ndarray_assignment():
+    H, W = 10, 10
+    a_np = np.random.random((H, W))
+    a_nd = mx.nd.array(a_np)
+    a_nd_id = id(a_nd)
+
+    # assign directly
+    a_np[0] = a_np[1]
+    a_nd[0] = a_nd[1]
+    assert np.allclose(a_np, a_nd.asnumpy())
+    assert id(a_nd) == a_nd_id
 
 Review comment:
   What's the purpose of this check? It seems to always be true.
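   A minimal sketch of why (editor's illustration, not from the PR): in-place 
`__setitem__` never rebinds the Python name, so the `id` comparison cannot 
fail.
   ```python
   import mxnet as mx

   a = mx.nd.zeros((2, 2))
   before = id(a)
   a[0] = 1.0              # calls NDArray.__setitem__; the name `a` is not rebound
   assert id(a) == before  # holds regardless of the fix under test
   ```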


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on a change in pull request #9981: [fix issue#9976] The assignment problem in NDArray

2018-03-03 Thread GitBox
reminisce commented on a change in pull request #9981: [fix issue#9976] The 
assignment problem in NDArray
URL: https://github.com/apache/incubator-mxnet/pull/9981#discussion_r172037698
 
 

 ##
 File path: tests/python/unittest/test_ndarray.py
 ##
 @@ -1099,6 +1099,39 @@ def test_assign_float_value_to_ndarray():
     b[0] = a[0]
     assert same(a, b.asnumpy())
 
+@with_seed()
+def test_ndarray_assignment():
 
 Review comment:
   1. Add the issue link as a code comment.
   2. Change the function name to a more specific one, since this only tests 
special cases; the general cases are already covered by 
`test_ndarray_indexing`. A sketch follows below.
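   A hypothetical sketch of the two suggestions combined (the name is 
illustrative only):
   ```python
   @with_seed()
   def test_ndarray_indexing_special_cases():
       # Regression test for https://github.com/apache/incubator-mxnet/issues/9976:
       # row-to-row assignment within the same NDArray used to be a silent no-op.
       ...
   ```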


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Feywell commented on issue #9572: Return a NaN when using operator ( ** ) on Windows version with GPU

2018-03-03 Thread GitBox
Feywell commented on issue #9572: Return a NaN when using operator ( ** ) on 
Windows version with GPU
URL: 
https://github.com/apache/incubator-mxnet/issues/9572#issuecomment-370198874
 
 
   @cgraywang I find the issue still exists.
   I upgraded the mxnet-cu80 Windows version to 
   
   > 1.1.0b20180212
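   A minimal repro sketch for the report (editor's illustration; assumes a 
CUDA build and an available GPU):
   ```python
   import mxnet as mx

   x = mx.nd.array([2.0, 3.0], ctx=mx.gpu())
   print((x ** 2).asnumpy())  # reported to print NaN on the Windows GPU build
   ```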
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on a change in pull request #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-03 Thread GitBox
wkcn commented on a change in pull request #9939: add multi proposal operator 
(cpu version) and fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#discussion_r172035107
 
 

 ##
 File path: src/operator/contrib/proposal.cu
 ##
 @@ -553,10 +553,10 @@ class ProposalGPUOp : public Operator{
 cudaMemcpyHostToDevice));
 
     // copy results after nms
-    dimGrid.x = (rpn_post_nms_top_n + kMaxThreadsPerBlock - 1) / kMaxThreadsPerBlock;
+    dimGrid.x = (param_.rpn_post_nms_top_n + kMaxThreadsPerBlock - 1) / kMaxThreadsPerBlock;
 
 Review comment:
   Thank you! 
   The invalid anchors would be ranked to the bottom for NMS, but the 
threshold applies to the overlap between two anchors rather than to the score.
   Should we add a condition that `score != -1` in NMS? A rough sketch follows 
below.
   
   
https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/proposal.cc#L259
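   A rough sketch of the suggested guard (editor's illustration in plain 
numpy, not the actual proposal.cc code): skip anchors whose score was masked 
to -1 before doing the overlap test.
   ```python
   import numpy as np

   def iou(a, b):
       # intersection-over-union of two [x1, y1, x2, y2] boxes
       x1, y1 = max(a[0], b[0]), max(a[1], b[1])
       x2, y2 = min(a[2], b[2]), min(a[3], b[3])
       inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
       area_a = (a[2] - a[0]) * (a[3] - a[1])
       area_b = (b[2] - b[0]) * (b[3] - b[1])
       return inter / (area_a + area_b - inter + 1e-12)

   def nms_keep(boxes, scores, threshold):
       keep = []
       for i in scores.argsort()[::-1]:
           if scores[i] == -1:  # invalid anchor, masked upstream
               continue
           if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
               keep.append(i)
       return keep
   ```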


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] precedenceguo commented on a change in pull request #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-03 Thread GitBox
precedenceguo commented on a change in pull request #9939: add multi proposal 
operator (cpu version) and fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#discussion_r172034936
 
 

 ##
 File path: src/operator/contrib/proposal.cu
 ##
 @@ -553,10 +553,10 @@ class ProposalGPUOp : public Operator{
 cudaMemcpyHostToDevice));
 
     // copy results after nms
-    dimGrid.x = (rpn_post_nms_top_n + kMaxThreadsPerBlock - 1) / kMaxThreadsPerBlock;
+    dimGrid.x = (param_.rpn_post_nms_top_n + kMaxThreadsPerBlock - 1) / kMaxThreadsPerBlock;
 
 Review comment:
   Yes, they should be thrown out, but that would require a new array. Instead 
I set the predicted confidence to -1, the same as for boxes whose centers fall 
outside the input image boundary. Those invalid anchors are ranked to the 
bottom for NMS, so a proper threshold does not cause significant problems. A 
sketch of the masking follows below.
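   A minimal sketch of the masking trick described above (editor's 
illustration; `scores` and `valid_mask` are assumed names):
   ```python
   import numpy as np

   # Instead of allocating a smaller array, invalid anchors keep their slot
   # but get score -1, so sorting ranks them last going into NMS.
   scores = np.where(valid_mask, scores, -1.0)
   order = scores.argsort()[::-1]   # invalid anchors sink to the bottom
   ```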


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wentingj commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-03-03 Thread GitBox
wentingj commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model 
Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#issuecomment-370193815
 
 
   @reminisce But the MKLDNN int8 kernel has some performance benefit. 
Besides, if we fuse conv and relu into one int8 op, we can reduce 
layout-transform time.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin opened a new pull request #9983: [WIP] language model with google's billion words dataset

2018-03-03 Thread GitBox
eric-haibin-lin opened a new pull request #9983: [WIP] language model with 
google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/9983
 
 
   ## Description ##
   Reproduced the LSTMP 2048-512 baseline, achieving test perplexity ~42 as in 
https://arxiv.org/pdf/1602.02410.pdf 
   The PR still contains many unnecessary changes; it will be cleaned up soon.
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #9975: revert DiagrammeR compatibility to 0.9.0

2018-03-03 Thread GitBox
marcoabreu commented on issue #9975: revert DiagrammeR compatibility to 0.9.0
URL: https://github.com/apache/incubator-mxnet/pull/9975#issuecomment-370187701
 
 
   Is this still compatible with 1.0.0?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ShownX commented on issue #9405: BucketingModule causes uncaught exception if symbol name not specified

2018-03-03 Thread GitBox
ShownX commented on issue #9405: BucketingModule causes uncaught exception if 
symbol name not specified
URL: 
https://github.com/apache/incubator-mxnet/issues/9405#issuecomment-370181587
 
 
   Facing the same problem. However, in Gluon there is no way to define a name 
(see the sketch below)
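   For context, a minimal sketch of the difference (editor's illustration): a 
plain symbol can be named explicitly, while a Gluon block only takes a 
`prefix` for its parameters.
   ```python
   import mxnet as mx

   data = mx.sym.Variable('data')
   net = mx.sym.FullyConnected(data, num_hidden=10, name='fc1')  # explicit name

   dense = mx.gluon.nn.Dense(10, prefix='fc1_')  # names parameters, not the output symbol
   ```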


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha closed pull request #9633: Gluon image-classification example improvement

2018-03-03 Thread GitBox
szha closed pull request #9633: Gluon image-classification example improvement
URL: https://github.com/apache/incubator-mxnet/pull/9633
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/example/gluon/data.py b/example/gluon/data.py
index dc8f12e81f6..c996c9af9ed 100644
--- a/example/gluon/data.py
+++ b/example/gluon/data.py
@@ -19,8 +19,14 @@
 """ data iterator for mnist """
 import os
 import random
+import logging
+logging.basicConfig(level=logging.INFO)
+
 import mxnet as mx
 from mxnet.test_utils import get_cifar10
+from mxnet.gluon.data.vision import ImageFolderDataset
+from mxnet.gluon.data import DataLoader
+from mxnet.contrib.io import DataLoaderIter
 
 def get_cifar10_iterator(batch_size, data_shape, resize=-1, num_parts=1, 
part_index=0):
 get_cifar10()
@@ -49,50 +55,38 @@ def get_cifar10_iterator(batch_size, data_shape, resize=-1, 
num_parts=1, part_in
 
 return train, val
 
-
-def get_imagenet_iterator(train_data, val_data, batch_size, data_shape, 
resize=-1, num_parts=1, part_index=0):
-train = mx.io.ImageRecordIter(
-path_imgrec = train_data,
-data_shape  = data_shape,
-mean_r  = 123.68,
-mean_g  = 116.779,
-mean_b  = 103.939,
-std_r   = 58.395,
-std_g   = 57.12,
-std_b   = 57.375,
-preprocess_threads  = 32,
-shuffle = True,
-batch_size  = batch_size,
-rand_crop   = True,
-resize  = resize,
-random_mirror   = True,
-max_random_h= 36,
-max_random_s= 50,
-max_random_l= 50,
-max_random_rotate_angle = 10,
-max_random_shear_ratio  = 0.1,
-max_random_aspect_ratio = 0.25,
-fill_value  = 127,
-min_random_scale= 0.533,
-num_parts   = num_parts,
-part_index  = part_index)
-
-val = mx.io.ImageRecordIter(
-path_imgrec= val_data,
-data_shape = data_shape,
-mean_r = 123.68,
-mean_g = 116.779,
-mean_b = 103.939,
-std_r  = 58.395,
-std_g  = 57.12,
-std_b  = 57.375,
-preprocess_threads = 32,
-batch_size = batch_size,
-resize = resize,
-num_parts  = num_parts,
-part_index = part_index)
-
-return train, val
+def get_imagenet_transforms(data_shape=224, dtype='float32'):
+def train_transform(image, label):
+image, _ = mx.image.random_size_crop(image, (data_shape, data_shape), 
0.08, (3/4., 4/3.))
+image = mx.nd.image.random_flip_left_right(image)
+image = mx.nd.image.to_tensor(image)
+image = mx.nd.image.normalize(image, mean=(0.485, 0.456, 0.406), 
std=(0.229, 0.224, 0.225))
+return mx.nd.cast(image, dtype), label
+
+def val_transform(image, label):
+image = mx.image.resize_short(image, data_shape + 32)
+image, _ = mx.image.center_crop(image, (data_shape, data_shape))
+image = mx.nd.image.to_tensor(image)
+image = mx.nd.image.normalize(image, mean=(0.485, 0.456, 0.406), 
std=(0.229, 0.224, 0.225))
+return mx.nd.cast(image, dtype), label
+return train_transform, val_transform
+
+def get_imagenet_iterator(root, batch_size, num_workers, data_shape=224, 
dtype='float32'):
+"""Dataset loader with preprocessing."""
+train_dir = os.path.join(root, 'train')
+train_transform, val_transform = get_imagenet_transforms(data_shape, dtype)
+logging.info("Loading image folder %s, this may take a bit long...", 
train_dir)
+train_dataset = ImageFolderDataset(train_dir, transform=train_transform)
+train_data = DataLoader(train_dataset, batch_size, shuffle=True,
+last_batch='discard', num_workers=num_workers)
+val_dir = os.path.join(root, 'val')
+if not os.path.isdir(os.path.join(os.path.expanduser(root), 'val', 'n01440764')):
+user_warning = 'Make sure validation images are stored in one subdir 
per category, a helper script is available at https://git.io/vNQv1'
+raise ValueError(user_warning)
+logging.info("Loading image folder %s, this may take a bit long...", 
val_dir)
+val_dataset = ImageFolderDataset(val_dir, transform=val_transform)
+val_data = DataLoader(val_dataset, batch_size, last_batch='keep', 
num_workers=num_workers)
+return DataLoaderIter(train_data, dtype), DataLoaderIter(val_data, dtype)
 
 
 class 

[incubator-mxnet] branch master updated: Gluon image-classification example improvement (#9633)

2018-03-03 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8780096  Gluon image-classification example improvement (#9633)
8780096 is described below

commit 87800967037f711644216076dd8404f6402ff69c
Author: Joshua Z. Zhang 
AuthorDate: Sat Mar 3 14:22:41 2018 -0600

Gluon image-classification example improvement (#9633)

* backup

* backup

* finish

* fix multiple

* fix

* fix

* fix padding

* add more tests

* fix expanduser
---
 example/gluon/data.py|  82 
 example/gluon/image_classification.py| 162 ---
 python/mxnet/contrib/__init__.py |   2 +
 python/mxnet/contrib/io.py   |  95 ++
 tests/python/unittest/test_contrib_io.py |  46 +
 5 files changed, 289 insertions(+), 98 deletions(-)

diff --git a/example/gluon/data.py b/example/gluon/data.py
index dc8f12e..c996c9a 100644
--- a/example/gluon/data.py
+++ b/example/gluon/data.py
@@ -19,8 +19,14 @@
 """ data iterator for mnist """
 import os
 import random
+import logging
+logging.basicConfig(level=logging.INFO)
+
 import mxnet as mx
 from mxnet.test_utils import get_cifar10
+from mxnet.gluon.data.vision import ImageFolderDataset
+from mxnet.gluon.data import DataLoader
+from mxnet.contrib.io import DataLoaderIter
 
 def get_cifar10_iterator(batch_size, data_shape, resize=-1, num_parts=1, 
part_index=0):
 get_cifar10()
@@ -49,50 +55,38 @@ def get_cifar10_iterator(batch_size, data_shape, resize=-1, 
num_parts=1, part_in
 
 return train, val
 
-
-def get_imagenet_iterator(train_data, val_data, batch_size, data_shape, 
resize=-1, num_parts=1, part_index=0):
-train = mx.io.ImageRecordIter(
-path_imgrec = train_data,
-data_shape  = data_shape,
-mean_r  = 123.68,
-mean_g  = 116.779,
-mean_b  = 103.939,
-std_r   = 58.395,
-std_g   = 57.12,
-std_b   = 57.375,
-preprocess_threads  = 32,
-shuffle = True,
-batch_size  = batch_size,
-rand_crop   = True,
-resize  = resize,
-random_mirror   = True,
-max_random_h= 36,
-max_random_s= 50,
-max_random_l= 50,
-max_random_rotate_angle = 10,
-max_random_shear_ratio  = 0.1,
-max_random_aspect_ratio = 0.25,
-fill_value  = 127,
-min_random_scale= 0.533,
-num_parts   = num_parts,
-part_index  = part_index)
-
-val = mx.io.ImageRecordIter(
-path_imgrec= val_data,
-data_shape = data_shape,
-mean_r = 123.68,
-mean_g = 116.779,
-mean_b = 103.939,
-std_r  = 58.395,
-std_g  = 57.12,
-std_b  = 57.375,
-preprocess_threads = 32,
-batch_size = batch_size,
-resize = resize,
-num_parts  = num_parts,
-part_index = part_index)
-
-return train, val
+def get_imagenet_transforms(data_shape=224, dtype='float32'):
+def train_transform(image, label):
+image, _ = mx.image.random_size_crop(image, (data_shape, data_shape), 
0.08, (3/4., 4/3.))
+image = mx.nd.image.random_flip_left_right(image)
+image = mx.nd.image.to_tensor(image)
+image = mx.nd.image.normalize(image, mean=(0.485, 0.456, 0.406), 
std=(0.229, 0.224, 0.225))
+return mx.nd.cast(image, dtype), label
+
+def val_transform(image, label):
+image = mx.image.resize_short(image, data_shape + 32)
+image, _ = mx.image.center_crop(image, (data_shape, data_shape))
+image = mx.nd.image.to_tensor(image)
+image = mx.nd.image.normalize(image, mean=(0.485, 0.456, 0.406), 
std=(0.229, 0.224, 0.225))
+return mx.nd.cast(image, dtype), label
+return train_transform, val_transform
+
+def get_imagenet_iterator(root, batch_size, num_workers, data_shape=224, 
dtype='float32'):
+"""Dataset loader with preprocessing."""
+train_dir = os.path.join(root, 'train')
+train_transform, val_transform = get_imagenet_transforms(data_shape, dtype)
+logging.info("Loading image folder %s, this may take a bit long...", 
train_dir)
+train_dataset = ImageFolderDataset(train_dir, transform=train_transform)
+train_data = DataLoader(train_dataset, batch_size, shuffle=True,
+

[GitHub] CoinCheung commented on issue #9978: Error with random generator

2018-03-03 Thread GitBox
CoinCheung commented on issue #9978: Error with random generator
URL: 
https://github.com/apache/incubator-mxnet/issues/9978#issuecomment-370145440
 
 
   @asitstands 
   Ah, I see. Thanks a lot. I think this issue can be closed now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] CoinCheung closed issue #9978: Error with random generator

2018-03-03 Thread GitBox
CoinCheung closed issue #9978: Error with random generator
URL: https://github.com/apache/incubator-mxnet/issues/9978
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: sparse regression operators (#9625)

2018-03-03 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new dedfd2d  sparse regression operators (#9625)
dedfd2d is described below

commit dedfd2d60713319855c0b9df0aac57eee2d68f2d
Author: Ziyue Huang 
AuthorDate: Sat Mar 3 20:13:48 2018 +0800

sparse regression operators (#9625)

* sparse regression ops

* add elemadd(dns, csr)

* address comments and fix

* replace copy with mshadow_op::identity

* add kWriteInplace check

* elemwise broadcast add

* less template instantiation

* not instantiate broadcast_add

* remove DnsCsrOP instantiation in elemwise_binary

* lint

* remove two regression ops

* enable binary op

* disable binary broadcast

* fix

* duplicate some codes in binary_broadcst

* try to make names short

* try to make names short for infer stype

* disbale sparse broadcst_add

* revert binary broadcast

* update

* disable MAE

* disable DnsCsrOp

* remove IType

* remove binary

* update

* address comments

* update

* try to fix R-test MF

* Revert "try to fix R-test MF"

This reverts commit f6d3e17ea7f5a71d23d81375bf345147a4373a93.

* remove grad_req check for label

* address comments

* trigger CI
---
 docs/api/python/ndarray/sparse.md  |   2 +
 docs/api/python/symbol/sparse.md   |   2 +
 src/operator/regression_output-inl.h   | 179 +
 src/operator/regression_output.cc  |  86 ++--
 src/operator/regression_output.cu  |  12 ++-
 tests/python/unittest/test_operator.py |  70 -
 6 files changed, 272 insertions(+), 79 deletions(-)

diff --git a/docs/api/python/ndarray/sparse.md 
b/docs/api/python/ndarray/sparse.md
index df33570..b0cdd88 100644
--- a/docs/api/python/ndarray/sparse.md
+++ b/docs/api/python/ndarray/sparse.md
@@ -496,6 +496,8 @@ We summarize the interface for each class in the following 
sections.
 make_loss
 stop_gradient
 mxnet.ndarray.contrib.SparseEmbedding
+LinearRegressionOutput
+LogisticRegressionOutput
 ```
 
 ## API Reference
diff --git a/docs/api/python/symbol/sparse.md b/docs/api/python/symbol/sparse.md
index b40276b..a44ff15 100644
--- a/docs/api/python/symbol/sparse.md
+++ b/docs/api/python/symbol/sparse.md
@@ -194,6 +194,8 @@ In the rest of this document, we list sparse related 
routines provided by the
 make_loss
 stop_gradient
 mxnet.symbol.contrib.SparseEmbedding
+LinearRegressionOutput
+LogisticRegressionOutput
 ```
 
 ## API Reference
diff --git a/src/operator/regression_output-inl.h 
b/src/operator/regression_output-inl.h
index 4642f8d..59cbde3 100644
--- a/src/operator/regression_output-inl.h
+++ b/src/operator/regression_output-inl.h
@@ -31,6 +31,7 @@
 #include "./mxnet_op.h"
 #include "./operator_common.h"
 
+
 namespace mxnet {
 namespace op {
 
@@ -77,22 +78,103 @@ inline bool RegressionOpShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
+template<bool is_forward>
+inline bool RegressionInferStorageType(const nnvm::NodeAttrs& attrs,
+                                       const int dev_mask,
+                                       DispatchMode* dispatch_mode,
+                                       std::vector<int>* in_attrs,
+                                       std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), is_forward ? 1U : 2U);
+  const size_t label_pos = is_forward ? 1U : 0U;
+  const auto label_stype = in_attrs->at(label_pos);
+  const auto data_stype = in_attrs->at(1 - label_pos);
+  auto& out_stype = out_attrs->at(0);
+  bool dispatched = false;
+  if (!dispatched && data_stype == kDefaultStorage && label_stype == kDefaultStorage) {
+    dispatched = storage_type_assign(&out_stype, kDefaultStorage,
+                                     dispatch_mode, DispatchMode::kFCompute);
+  }
+
+  if (!dispatched && data_stype == kDefaultStorage && label_stype == kCSRStorage) {
+    dispatched = storage_type_assign(&out_stype, kDefaultStorage,
+                                     dispatch_mode, DispatchMode::kFComputeEx);
+  }
+
+  if (!dispatched) {
+    dispatched = dispatch_fallback(out_attrs, dispatch_mode);
+  }
+  // In backward pass, although we don't care about gradients of label,
+  // a storage type should be assigned to it.
+  if (!is_forward) type_assign(&in_attrs->at(1), kDefaultStorage);
+
+  return dispatched;
+}
+
+/*!
+ * \brief Kernel for binary operator of dense -OP- csr ndarray.
+ * Right hand side of OP has no effect.
+ * Parallelize by each row.
+ */
+template<int req, typename OP>
+struct DnsCsrSparseKernel {
+  template<typename DType, typename IType, typename RType>
+  MSHADOW_XINLINE 

[GitHub] eric-haibin-lin closed pull request #9625: sparse regression operators

2018-03-03 Thread GitBox
eric-haibin-lin closed pull request #9625: sparse regression operators
URL: https://github.com/apache/incubator-mxnet/pull/9625
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/api/python/ndarray/sparse.md 
b/docs/api/python/ndarray/sparse.md
index a7aaa1fd41d..dc44111cdad 100644
--- a/docs/api/python/ndarray/sparse.md
+++ b/docs/api/python/ndarray/sparse.md
@@ -495,6 +495,8 @@ We summarize the interface for each class in the following 
sections.
 make_loss
 stop_gradient
 mxnet.ndarray.contrib.SparseEmbedding
+LinearRegressionOutput
+LogisticRegressionOutput
 ```
 
 ## API Reference
diff --git a/docs/api/python/symbol/sparse.md b/docs/api/python/symbol/sparse.md
index b40276b9f1a..a44ff150356 100644
--- a/docs/api/python/symbol/sparse.md
+++ b/docs/api/python/symbol/sparse.md
@@ -194,6 +194,8 @@ In the rest of this document, we list sparse related 
routines provided by the
 make_loss
 stop_gradient
 mxnet.symbol.contrib.SparseEmbedding
+LinearRegressionOutput
+LogisticRegressionOutput
 ```
 
 ## API Reference
diff --git a/src/operator/regression_output-inl.h 
b/src/operator/regression_output-inl.h
index 4642f8dc467..59cbde3de20 100644
--- a/src/operator/regression_output-inl.h
+++ b/src/operator/regression_output-inl.h
@@ -31,6 +31,7 @@
 #include "./mxnet_op.h"
 #include "./operator_common.h"
 
+
 namespace mxnet {
 namespace op {
 
@@ -77,22 +78,103 @@ inline bool RegressionOpShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
+template<bool is_forward>
+inline bool RegressionInferStorageType(const nnvm::NodeAttrs& attrs,
+                                       const int dev_mask,
+                                       DispatchMode* dispatch_mode,
+                                       std::vector<int>* in_attrs,
+                                       std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), is_forward ? 1U : 2U);
+  const size_t label_pos = is_forward ? 1U : 0U;
+  const auto label_stype = in_attrs->at(label_pos);
+  const auto data_stype = in_attrs->at(1 - label_pos);
+  auto& out_stype = out_attrs->at(0);
+  bool dispatched = false;
+  if (!dispatched && data_stype == kDefaultStorage && label_stype == kDefaultStorage) {
+    dispatched = storage_type_assign(&out_stype, kDefaultStorage,
+                                     dispatch_mode, DispatchMode::kFCompute);
+  }
+
+  if (!dispatched && data_stype == kDefaultStorage && label_stype == kCSRStorage) {
+    dispatched = storage_type_assign(&out_stype, kDefaultStorage,
+                                     dispatch_mode, DispatchMode::kFComputeEx);
+  }
+
+  if (!dispatched) {
+    dispatched = dispatch_fallback(out_attrs, dispatch_mode);
+  }
+  // In backward pass, although we don't care about gradients of label,
+  // a storage type should be assigned to it.
+  if (!is_forward) type_assign(&in_attrs->at(1), kDefaultStorage);
+
+  return dispatched;
+}
+
+/*!
+ * \brief Kernel for binary operator of dense -OP- csr ndarray.
+ * Right hand side of OP has no effect.
+ * Parallelize by each row.
+ */
+template<int req, typename OP>
+struct DnsCsrSparseKernel {
+  template<typename DType, typename IType, typename RType>
+  MSHADOW_XINLINE static void Map(int i, DType* out_data,
+                                  const DType* dns_data,
+                                  const DType* csr_data,
+                                  const IType* csr_idx,
+                                  const RType* csr_indptr,
+                                  const nnvm::dim_t row_length) {
+    nnvm::dim_t row_i = i * row_length;
+    for (nnvm::dim_t j = csr_indptr[i]; j < csr_indptr[i+1]; j++) {
+      KERNEL_ASSIGN(out_data[row_i + csr_idx[j]], req,
+                    OP::Map(dns_data[row_i + csr_idx[j]], csr_data[j]));
+    }
+  }
+};
+
+
+template<typename xpu, typename OP>
+inline void RegressionForwardImpl(mshadow::Stream<xpu> *s, const OpReqType req,
+                                  const TBlob &data, const TBlob &out) {
+  if (req == kNullOp) return;
+  MSHADOW_REAL_TYPE_SWITCH(data.type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req, Req, {
+      const DType* in_data = data.dptr<DType>();
+      DType* out_data = out.dptr<DType>();
+      using namespace mxnet_op;
+      Kernel<op_with_req<OP, Req>, xpu>::Launch(
+        s, out.Size(), out_data, in_data);
+    });
+  });
+}
+
 template<typename xpu, typename OP>
 void RegressionForward(const nnvm::NodeAttrs& attrs,
                        const OpContext& ctx,
                        const std::vector<TBlob>& inputs,
                        const std::vector<OpReqType>& req,
                        const std::vector<TBlob>& outputs) {
+  CHECK_EQ(inputs.size(), 2U);
+  CHECK_EQ(outputs.size(), 1U);
   mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_REAL_TYPE_SWITCH(inputs[reg_enum::kData].type_flag_, DType, {
-    MXNET_ASSIGN_REQ_SWITCH(req[reg_enum::kOut], Req, {
-  

[GitHub] marcoabreu commented on issue #9981: [fix issue#9976] The assignment problem in NDArray

2018-03-03 Thread GitBox
marcoabreu commented on issue #9981: [fix issue#9976] The assignment problem in 
NDArray
URL: https://github.com/apache/incubator-mxnet/pull/9981#issuecomment-370141962
 
 
   @piiswrong 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] asitstands commented on issue #9978: Error with random generator

2018-03-03 Thread GitBox
asitstands commented on issue #9978: Error with random generator
URL: 
https://github.com/apache/incubator-mxnet/issues/9978#issuecomment-370140101
 
 
   @CoinCheung That is the usual way that pseudo-random number generators 
work. They generate a sequence of pseudo-random numbers, so each call of 
`normal` within a run of a script or a REPL session generates different 
numbers. Seeding sets the internal state of the generator; thus, if you seed 
with the same number, you get the same sequence of numbers. 
   
   Mxnet seeds its generators in `mx.nd.random` with a fixed number every time 
it is initialized, so `normal` generates the same sequence at every run of 
your script. However, the `DataLoader` uses a generator different from that of 
`mx.nd.random`: it uses the global generator of Python, which is seeded with a 
different number at each initialization. Thus the two show different 
behaviors: one generates the same sequence at each run of the script, while 
the other generates a different sequence at each run. I think this is 
confusing for newcomers, but anyway it is the way mxnet works.
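   A minimal sketch of the behavior described above (editor's illustration):
   ```python
   import mxnet as mx
   import random

   mx.random.seed(42)
   print(mx.nd.random.normal(0, 1, shape=(3,)).asnumpy())  # reproducible
   mx.random.seed(42)
   print(mx.nd.random.normal(0, 1, shape=(3,)).asnumpy())  # identical to the line above

   print(random.random())  # python's global generator: differs across runs unless seeded
   ```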
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dma100180 commented on issue #4527: MXNet for R on Windows installation fails due to latest update in DiagremmeR

2018-03-03 Thread GitBox
dma100180 commented on issue #4527: MXNet for R on Windows installation fails 
due to latest update in DiagremmeR
URL: 
https://github.com/apache/incubator-mxnet/issues/4527#issuecomment-370139374
 
 
   Hello, everything was fine with MXNET until an update forced me to change 
DiagrammeR.
   I fixed it with:
   
   require(devtools)
   install_version("DiagrammeR", version = "0.9.1", repos = "http://cran.us.r-project.org")
   
   Regards


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on issue #9981: [fix issue#9976] The assignment problem in NDArray

2018-03-03 Thread GitBox
wkcn commented on issue #9981: [fix issue#9976] The assignment problem in 
NDArray
URL: https://github.com/apache/incubator-mxnet/pull/9981#issuecomment-370139225
 
 
   @marcoabreu Yes, I will add it soon :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #9981: [fix issue#9976] The assignment problem in NDArray

2018-03-03 Thread GitBox
marcoabreu commented on issue #9981: [fix issue#9976] The assignment problem in 
NDArray
URL: https://github.com/apache/incubator-mxnet/pull/9981#issuecomment-370138765
 
 
   Hello, could you please add a test for this case?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] CoinCheung commented on issue #9978: Error with random generator

2018-03-03 Thread GitBox
CoinCheung commented on issue #9978: Error with random generator
URL: 
https://github.com/apache/incubator-mxnet/issues/9978#issuecomment-370138600
 
 
   @asitstands 
   Then why it changes when I run it in my command line?
   ```
   $ python
   >>> import mxnet as mx
   >>> import numpy as np
   >>> print(mx.nd.random.normal(0,1,shape=(2,2),dtype=np.float32,ctx=mx.gpu()))
   [[-1.32045507  0.68232244]
    [-0.98583829  0.01992839]]
   <NDArray 2x2 @gpu(0)>
   >>> print(mx.nd.random.normal(0,1,shape=(2,2),dtype=np.float32,ctx=mx.gpu()))
   [[-0.17739409  0.8909654 ]
    [ 0.72020799 -0.04110664]]
   <NDArray 2x2 @gpu(0)>
   
   
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mseeger commented on issue #9982: Unary ops logcdf_normal, derivlogcdf_normal for log CDF of standard normal

2018-03-03 Thread GitBox
mseeger commented on issue #9982: Unary ops logcdf_normal, derivlogcdf_normal 
for log CDF of standard normal
URL: https://github.com/apache/incubator-mxnet/pull/9982#issuecomment-370138558
 
 
   Sorry, this is just #9961 reopened; I deleted the remote branch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mseeger opened a new pull request #9982: Unary ops logcdf_normal, derivlogcdf_normal for log CDF of standard normal

2018-03-03 Thread GitBox
mseeger opened a new pull request #9982: Unary ops logcdf_normal, 
derivlogcdf_normal for log CDF of standard normal
URL: https://github.com/apache/incubator-mxnet/pull/9982
 
 
   
   * New unary ops
   
   * Improved unit tests for basic unary, binary ops
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mseeger closed pull request #9961: Unary ops norm_logcdf, norm_derivlogcdf for log CDF of standard normal

2018-03-03 Thread GitBox
mseeger closed pull request #9961: Unary ops norm_logcdf, norm_derivlogcdf for 
log CDF of standard normal
URL: https://github.com/apache/incubator-mxnet/pull/9961
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/api/python/ndarray/ndarray.md 
b/docs/api/python/ndarray/ndarray.md
index 59ca4a612e6..7d639a5e127 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -606,6 +606,8 @@ The `ndarray` package provides several classes:
 sign
 gamma
 gammaln
+   norm_logcdf
+   norm_derivlogcdf
 ```
 
 ## Neural network
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index e383597236d..92d8e4e7845 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -607,6 +607,8 @@ Composite multiple symbols into a new one by an operator.
 sign
 gamma
 gammaln
+   norm_logcdf
+   norm_derivlogcdf
 ```
 
 ## Neural network
diff --git a/src/operator/mshadow_op.h b/src/operator/mshadow_op.h
index 1d4284e1ac2..f9776748095 100644
--- a/src/operator/mshadow_op.h
+++ b/src/operator/mshadow_op.h
@@ -621,6 +621,72 @@ MSHADOW_XINLINE double gammaln_grad::Map<double>(double a) {
   return special_functions::cephes::psi(a);
 }
 
+/* norm_logcdf **/
+
+struct norm_logcdf : public mxnet_op::tunable {
+  template<typename DType>
+  MSHADOW_XINLINE static DType Map(DType a) {
+    // Default implementation using floating precision
+    float af(static_cast<float>(a));
+    return DType(special_functions::apbsint::logCdfNormal(af));
+  }
+};
+
+template<>
+MSHADOW_XINLINE double norm_logcdf::Map<double>(double a) {
+  return special_functions::apbsint::logCdfNormal(a);
+}
+
+struct norm_logcdf_grad : public mxnet_op::tunable {
+  template<typename DType>
+  MSHADOW_XINLINE static DType Map(DType a) {
+    // Default implementation using floating precision
+    float af(static_cast<float>(a));
+    return DType(special_functions::apbsint::derivLogCdfNormal(af));
+  }
+};
+
+template<>
+MSHADOW_XINLINE double norm_logcdf_grad::Map<double>(double a) {
+  return special_functions::apbsint::derivLogCdfNormal(a);
+}
+
+/* norm_derivlogcdf **/
+
+struct norm_derivlogcdf : public mxnet_op::tunable {
+  template<typename DType>
+  MSHADOW_XINLINE static DType Map(DType a) {
+    // Default implementation using floating precision
+    float af(static_cast<float>(a));
+    return DType(special_functions::apbsint::derivLogCdfNormal(af));
+  }
+};
+
+template<>
+MSHADOW_XINLINE double norm_derivlogcdf::Map<double>(double a) {
+  return special_functions::apbsint::derivLogCdfNormal(a);
+}
+
+// NOTE: This grad would best be computed as ElemwiseGradUseInOut, with a and da as
+// input. Here, we recompute da, because ElemwiseGradUseInOut is not properly supported
+// for basic unary functions.
+struct norm_derivlogcdf_grad : public mxnet_op::tunable {
+  template<typename DType>
+  MSHADOW_XINLINE static DType Map(DType a) {
+    // Default implementation using floating precision
+    float af(static_cast<float>(a));
+    float daf(special_functions::apbsint::derivLogCdfNormal(af));
+    return DType(-daf * (af + daf));
+  }
+};
+
+template<>
+MSHADOW_XINLINE double norm_derivlogcdf_grad::Map<double>(double a) {
+  double da(special_functions::apbsint::derivLogCdfNormal(a));
+  return -da * (a + da);
+}
+
+
 /* Smooth L1 Loss is a loss specific for R-CNN franchise training
  * Smooth L1 Loss function:
  * f(x) = 0.5 * (sigma * x) ^ 2, |x| < 1 / sigma^2
diff --git a/src/operator/operator_tune.cc b/src/operator/operator_tune.cc
index c13f1ac2fae..fe67793a9bf 100644
--- a/src/operator/operator_tune.cc
+++ b/src/operator/operator_tune.cc
@@ -277,6 +277,10 @@ 
IMPLEMENT_UNARY_WORKLOAD_FWD(mxnet::op::mshadow_op::gamma);  // NOLINT()
 IMPLEMENT_UNARY_WORKLOAD_BWD(mxnet::op::mshadow_op::gamma_grad);  // NOLINT()
 IMPLEMENT_UNARY_WORKLOAD_FWD(mxnet::op::mshadow_op::gammaln);  // NOLINT()
 IMPLEMENT_UNARY_WORKLOAD_BWD(mxnet::op::mshadow_op::gammaln_grad);  // NOLINT()
+IMPLEMENT_UNARY_WORKLOAD_FWD(mxnet::op::mshadow_op::norm_logcdf);  // NOLINT()
+IMPLEMENT_UNARY_WORKLOAD_BWD(mxnet::op::mshadow_op::norm_logcdf_grad);  // 
NOLINT()
+IMPLEMENT_UNARY_WORKLOAD_FWD(mxnet::op::mshadow_op::norm_derivlogcdf);  // 
NOLINT()
+IMPLEMENT_UNARY_WORKLOAD_BWD(mxnet::op::mshadow_op::norm_derivlogcdf_grad);  
// NOLINT()
 IMPLEMENT_UNARY_WORKLOAD_FWD(mxnet::op::mshadow_op::ceil);  // NOLINT()
 IMPLEMENT_UNARY_WORKLOAD_FWD(mxnet::op::mshadow_op::degrees);  // NOLINT()
 IMPLEMENT_UNARY_WORKLOAD_BWD(mxnet::op::mshadow_op::degrees_grad);  // NOLINT()
diff --git a/src/operator/special_functions-inl.h 
b/src/operator/special_functions-inl.h
index 743391e0fce..aab234c7bcc 100644
--- a/src/operator/special_functions-inl.h
+++ b/src/operator/special_functions-inl.h
@@ -9,6 +9,8 @@
 #ifndef 

[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-03 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-369813766
 
 
   @pengzhao-intel Thank you! I will have a try.
   
   ~~I used `#pragma omp parallel for` for each for-loop in Multi Proposal 
(CPU implementation), but the performance doesn't improve. Maybe there is too 
little computation to benefit.~~


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-03 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-370133350
 
 
   @pengzhao-intel Here is the testing code.
   https://gist.github.com/wkcn/4a09c142bc9886b45b5a23461bbe4733
   
   I found that I had made a mistake: I didn't use `nd.waitall()` when 
measuring the performance. Without `nd.waitall()`, the computation does not 
actually execute because of lazy evaluation.
   
   performance|CPU(no omp)|CPU(omp)|GPU
   -|---|---|-
   Time(s)|33.899|12.432|4.435
   
   However, when I set the environment variables `MXNET_OMP_MAX_THREADS` or 
`OMP_NUM_THREADS`, performance may get worse.
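   A minimal sketch of the timing pitfall (editor's illustration with an 
arbitrary workload):
   ```python
   import time
   import mxnet as mx

   x = mx.nd.random.uniform(shape=(1000, 1000))
   start = time.time()
   y = mx.nd.dot(x, x)   # enqueued asynchronously; returns immediately
   mx.nd.waitall()       # block until all pending computation actually finishes
   print('elapsed: %.3fs' % (time.time() - start))
   ```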
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn opened a new pull request #9981: [fix issue#9976] The assignment problem in NDArray

2018-03-03 Thread GitBox
wkcn opened a new pull request #9981: [fix issue#9976] The assignment problem 
in NDArray
URL: https://github.com/apache/incubator-mxnet/pull/9981
 
 
   ## Description ##
   Fix issue[#9976](https://github.com/apache/incubator-mxnet/issues/9976)
   
   ```python
   >>> a = mx.nd.array(np.arange(12).reshape((3,4)))
   >>> a
   
   [[ 0.  1.  2.  3.]
    [ 4.  5.  6.  7.]
    [ 8.  9. 10. 11.]]
   <NDArray 3x4 @cpu(0)>
   >>> a[0] = a[1]  # HERE: this assignment silently has no effect
   >>> a
   
   [[ 0.  1.  2.  3.]
    [ 4.  5.  6.  7.]
    [ 8.  9. 10. 11.]]
   <NDArray 3x4 @cpu(0)>
   ```
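   With the fix, the assignment takes effect; a minimal sketch of the expected 
behavior:
   ```python
   a = mx.nd.array(np.arange(12).reshape((3, 4)))
   a[0] = a[1]            # row 0 becomes a copy of row 1
   print(a.asnumpy()[0])  # expected: [4. 5. 6. 7.]
   ```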
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Add `byte_offset()` function for NDArray
   - [x] Fix the assignment problem in NDArray. 
https://github.com/apache/incubator-mxnet/blob/master/src/ndarray/ndarray.cc#L1131
   
   ## Comments ##
   - Is it necessary to add `byte_offset()` function for NDArray?
   - Is it necessary to add `is_itself()` function for NDArray?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services