[GitHub] zheng-da commented on issue #10256: Fix a compile error in BN

2018-03-26 Thread GitBox
zheng-da commented on issue #10256: Fix a compile error in BN
URL: https://github.com/apache/incubator-mxnet/pull/10256#issuecomment-376406156
 
 
   CI doesn't have a test for plain CUDA (without CuDNN); it only tests CUDA with CuDNN.




[GitHub] asitstands closed pull request #10258: [MXNET-145] Remove the dependences of mx.io and mx.initializer on the numpy's global random number generator

2018-03-26 Thread GitBox
asitstands closed pull request #10258: [MXNET-145]  Remove the dependences of 
mx.io and mx.initializer on the numpy's global random number generator
URL: https://github.com/apache/incubator-mxnet/pull/10258
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/initializer.py b/python/mxnet/initializer.py
index 78afa2dbd29..1297c3da9a7 100755
--- a/python/mxnet/initializer.py
+++ b/python/mxnet/initializer.py
@@ -530,9 +530,9 @@ def _init_weight(self, _, arr):
         nout = arr.shape[0]
         nin = np.prod(arr.shape[1:])
         if self.rand_type == "uniform":
-            tmp = np.random.uniform(-1.0, 1.0, (nout, nin))
+            tmp = random.uniform(-1.0, 1.0, shape=(nout, nin)).asnumpy()
         elif self.rand_type == "normal":
-            tmp = np.random.normal(0.0, 1.0, (nout, nin))
+            tmp = random.normal(0.0, 1.0, shape=(nout, nin)).asnumpy()
         u, _, v = np.linalg.svd(tmp, full_matrices=False) # pylint: disable=invalid-name
         if u.shape == tmp.shape:
             res = u
diff --git a/python/mxnet/io.py b/python/mxnet/io.py
index 201414e8f6e..14500182792 100644
--- a/python/mxnet/io.py
+++ b/python/mxnet/io.py
@@ -39,6 +39,8 @@
 from .ndarray import _ndarray_cls
 from .ndarray import array
 from .ndarray import concatenate
+from .ndarray import arange
+from .ndarray.random import shuffle as random_shuffle
 
 class DataDesc(namedtuple('DataDesc', ['name', 'shape'])):
 """DataDesc is used to store name, shape, type and layout
@@ -535,9 +537,9 @@ def _shuffle(data, idx):
         if (isinstance(v, h5py.Dataset) if h5py else False):
             shuffle_data.append((k, v))
         elif isinstance(v, CSRNDArray):
-            shuffle_data.append((k, sparse_array(v.asscipy()[idx], v.context)))
+            shuffle_data.append((k, sparse_array(v.asscipy()[idx.asnumpy()], v.context)))
         else:
-            shuffle_data.append((k, array(v.asnumpy()[idx], v.context)))
+            shuffle_data.append((k, v[idx.as_in_context(v.context)]))
 
     return shuffle_data
 
@@ -651,10 +653,10 @@ def __init__(self, data, label=None, batch_size=1, shuffle=False,
             raise NotImplementedError("`NDArrayIter` only supports ``CSRNDArray``" \
                                       " with `last_batch_handle` set to `discard`.")
 
-        self.idx = np.arange(self.data[0][1].shape[0])
+        self.idx = arange(self.data[0][1].shape[0])
        # shuffle data
        if shuffle:
-            np.random.shuffle(self.idx)
+            random_shuffle(self.idx, out=self.idx)
            self.data = _shuffle(self.data, self.idx)
            self.label = _shuffle(self.label, self.idx)
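
For context, a minimal sketch of what the change above buys (this snippet is illustrative only and is not part of the PR): after the patch, `NDArrayIter`'s shuffle order is driven by MXNet's own RNG, so seeding `mx.random` makes it reproducible, and seeding numpy's global generator no longer affects it. The array shape and seed value below are arbitrary.

```python
import numpy as np
import mxnet as mx

data = np.arange(20).reshape(10, 2).astype('float32')

# Same MXNet seed => same shuffle order (numpy's global seed is irrelevant here).
mx.random.seed(42)
batch_a = next(iter(mx.io.NDArrayIter(data, batch_size=5, shuffle=True))).data[0].asnumpy()

mx.random.seed(42)
batch_b = next(iter(mx.io.NDArrayIter(data, batch_size=5, shuffle=True))).data[0].asnumpy()

assert (batch_a == batch_b).all()
```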
 


 




[GitHub] spidyDev commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
spidyDev commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] 
Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177311527
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/translation_utils.py
 ##
 @@ -90,10 +90,46 @@ def _fix_pooling(pool_type, inputs, new_attr):
     stride = new_attr.get('stride')
     kernel = new_attr.get('kernel')
     padding = new_attr.get('pad')
-    pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, len(kernel))
-    new_pad_op = symbol.pad(inputs[0], mode='constant', pad_width=pad_width)
-    new_pooling_op = symbol.Pooling(new_pad_op, pool_type=pool_type,
-                                    stride=stride, kernel=kernel)
+
+    # Adding default stride.
+    if stride is None:
+        stride = (1,) * len(kernel)
+
+    # Add padding attr if not provided.
+    if padding is None:
+        padding = (0,) * len(kernel) * 2
+
+    # Mxnet Pad operator supports only 4D/5D tensors.
+    # For 1D case, these are the steps:
+    #    Step 1. Add extra dummy dimension to make it 4D. Adding to axis = 2
+    #    Step 2. Apply padding to this changed tensor
+    #    Step 3. Remove the extra dimension added in step 1.
+    if len(kernel) == 1:
+        dummy_axis = 2
+        # setting 0 padding to the new dim to be added.
+        padding = (0, padding[0], 0, padding[1])
+        pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, kernel_dim=2)
+
+        # Step 1.
+        curr_sym = symbol.expand_dims(inputs[0], axis=dummy_axis)
+
+        # Step 2. Common for all tensor sizes
+        new_pad_op = symbol.pad(curr_sym, mode='edge', pad_width=pad_width)
+
+        # Step 3: Removing extra dim added.
+        new_pad_op = symbol.split(new_pad_op, axis=dummy_axis, num_outputs=1, squeeze_axis=1)
+    else:
+        # For 2D/3D cases:
+        # Apply padding
+        pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, kernel_dim=len(kernel))
+        curr_sym = inputs[0]
+        if pool_type == 'max':
+            new_pad_op = symbol.pad(curr_sym, mode='edge', pad_width=pad_width)
+        else:
+            new_pad_op = symbol.pad(curr_sym, mode='constant', pad_width=pad_width)
+
+    # Apply pooling without pads.
+    new_pooling_op = symbol.Pooling(new_pad_op, pool_type=pool_type, stride=stride, kernel=kernel)
 
 Review comment:
   So, maybe we should add more comments.
   Basically, the ONNX pooling operator can have uneven padding, as it specifies padding values for the start and end of each dimension:
   (axis_0_start, axis_1_start, axis_0_end, axis_1_end) --> e.g. (0, 1, 0, 2)
   
   But MXNet's Pooling only supports even (symmetric) padding, specified per dimension as (axis_0_start/end, axis_1_start/end) --> e.g. (0, 1)
   
   To solve this, we follow these steps:
   1. Apply padding on the tensor first, as MXNet's pad operator supports uneven padding.
   2. Once padding is done, apply pooling without any padding.
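
To make the uneven-vs-even distinction concrete, here is a minimal, hedged sketch of the pad-then-pool idea with the symbol API (illustrative only, not the PR code; the kernel, stride and pad values are made up):

```python
import mxnet as mx

data = mx.sym.Variable('data')  # assumed NCHW layout, shape (N, C, H, W)

# ONNX-style uneven pads (axis_0_start, axis_1_start, axis_0_end, axis_1_end) = (0, 1, 0, 2)
# expressed as MXNet pad_width pairs (before, after) per axis: N, C, H, W.
pad_width = (0, 0,   # N: no padding
             0, 0,   # C: no padding
             0, 0,   # H: start=0, end=0
             1, 2)   # W: start=1, end=2
padded = mx.sym.pad(data, mode='edge', pad_width=pad_width)

# Pooling is then applied with no padding of its own.
pooled = mx.sym.Pooling(padded, pool_type='max', kernel=(2, 2), stride=(1, 1))
```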




[GitHub] chinakook commented on issue #9823: RCNN example fails for using latest mxnet

2018-03-26 Thread GitBox
chinakook commented on issue #9823: RCNN example fails for using latest mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/9823#issuecomment-376401653
 
 
   It's solved when I roll back to mxnet v1.1.0.




[GitHub] spidyDev commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
spidyDev commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] 
Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177310727
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/translation_utils.py
 ##
 @@ -90,10 +90,46 @@ def _fix_pooling(pool_type, inputs, new_attr):
     stride = new_attr.get('stride')
     kernel = new_attr.get('kernel')
     padding = new_attr.get('pad')
-    pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, len(kernel))
-    new_pad_op = symbol.pad(inputs[0], mode='constant', pad_width=pad_width)
-    new_pooling_op = symbol.Pooling(new_pad_op, pool_type=pool_type,
-                                    stride=stride, kernel=kernel)
+
+    # Adding default stride.
 
 Review comment:
   MXNet's Pooling doesn't have a "dilation" attribute.
   
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.Pooling




[GitHub] cjolivier01 commented on issue #10234: [MXNET-135] mx.image.imread support s3

2018-03-26 Thread GitBox
cjolivier01 commented on issue #10234: [MXNET-135] mx.image.imread support s3
URL: https://github.com/apache/incubator-mxnet/pull/10234#issuecomment-376399877
 
 
   There should be a way to get the file size. It may differ by stream type, but I expect they all have some way.




[GitHub] asitstands commented on issue #10258: [MXNET-145] Remove the dependences of mx.io and mx.initializer on the numpy's global random number generator

2018-03-26 Thread GitBox
asitstands commented on issue #10258: [MXNET-145]  Remove the dependences of 
mx.io and mx.initializer on the numpy's global random number generator
URL: https://github.com/apache/incubator-mxnet/pull/10258#issuecomment-376398312
 
 
   It looks like this PR does not handle h5py datasets correctly. I'll fix it.




[GitHub] asitstands commented on issue #10258: [MXNET-145] Remove the dependences of mx.io and mx.initializer on the numpy's global random number generator

2018-03-26 Thread GitBox
asitstands commented on issue #10258: [MXNET-145]  Remove the dependences of 
mx.io and mx.initializer on the numpy's global random number generator
URL: https://github.com/apache/incubator-mxnet/pull/10258#issuecomment-376398312
 
 
   It looks like the current implementation does not handle h5py datasets 
correctly. I'll fix it.




[GitHub] rahul003 commented on issue #10232: [MXNET-136] Enabling USE_DIST_KVSTORE flag for CI

2018-03-26 Thread GitBox
rahul003 commented on issue #10232: [MXNET-136] Enabling USE_DIST_KVSTORE flag 
for CI
URL: https://github.com/apache/incubator-mxnet/pull/10232#issuecomment-376394320
 
 
   Updating ps-lite is necessary, as the latest commit there has the fix for the build issue with cmake versions before 3.6.
   
   Ref: https://github.com/dmlc/ps-lite/pull/130




[GitHub] rahul003 commented on issue #10232: [MXNET-136] Enabling USE_DIST_KVSTORE flag for CI

2018-03-26 Thread GitBox
rahul003 commented on issue #10232: [MXNET-136] Enabling USE_DIST_KVSTORE flag 
for CI
URL: https://github.com/apache/incubator-mxnet/pull/10232#issuecomment-376394178
 
 
   @marcoabreu This is ready to be merged IMO. Please take a look




[GitHub] chinakook commented on issue #9823: RCNN example fails for using latest mxnet

2018-03-26 Thread GitBox
chinakook commented on issue #9823: RCNN example fails for using latest mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/9823#issuecomment-376394058
 
 
   @marcoabreu It's not only a bug in RCNN, but also in mx.sym.SoftmaxOutput or mx.sym.SoftmaxActivation when their results are used in a metric, e.g. via ```pred.asnumpy()```.
   It may occur in the multi-GPU case.
   So I suggest reopening this issue until it's solved.




[GitHub] chinakook commented on issue #9823: RCNN example fails for using latest mxnet

2018-03-26 Thread GitBox
chinakook commented on issue #9823: RCNN example fails for using latest mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/9823#issuecomment-376394058
 
 
   @marcoabreu It's not only a bug in RCNN, but also in mx.sym.SoftmaxOutput or mx.sym.SoftmaxActivation when their results are used in a metric, e.g. via ```pred.asnumpy()```.
   So I suggest reopening this issue until it's solved.




[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177305755
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_backend_test.py
 ##
 @@ -57,37 +57,40 @@
 'test_floor',
 
 ## Joining and spliting
-#'test_concat.*',  #---Failing test
+'test_concat',
 
 #Basic neural network functions
 'test_sigmoid',
 'test_relu',
-#'test_constant_pad',
-#'test_edge_pad',
-#'test_reflect_pad',
+'test_constant_pad',
+'test_edge_pad',
+'test_reflect_pad',
 'test_matmul',
 'test_leakyrelu',
 'test_elu',
-#'test_softmax*',
+'test_softmax_example',
+'test_softmax_large_number',
+'test_softmax_axis_2',
 'test_conv',
 'test_basic_conv',
-#'test_globalmaxpool',
-#'test_globalaveragepool',
-#'test_batch_norm',
+'test_transpose',
+#'test_globalmaxpool', - tests to be added
+#'test_globalaveragepool', - tests to be added
+#'test_batch_norm', - tests to be added
+#'test_gather',
 
 Review comment:
   Does this mean that, for an operator we are adding, we do not cover it with test cases? That may be prone to issues.




[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177305637
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -402,9 +438,9 @@ def max_pooling(attrs, inputs, cls):
                                                        'strides': 'stride',
                                                        'pads': 'pad',
                                                       })
+
     new_attrs = translation_utils._add_extra_attributes(new_attrs,
-                                                        {'pool_type': 'avg',
-                                                         'pooling_convention': 'valid'
+                                                        {'pooling_convention': 'valid'
                                                         })
     new_op = translation_utils._fix_pooling('max', inputs, new_attrs)
 
 Review comment:
   Pooling is the same category for max, avg and so on. Would it help to unify them? Your translator is getting too much detailed information about each individual operator, which may make it very hard to maintain and change in the future.




[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177305494
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -328,6 +354,17 @@ def squeeze(attrs, inputs, cls):
         mxnet_op = symbol.split(mxnet_op, axis=i-1, num_outputs=1, squeeze_axis=1)
     return mxnet_op, new_attrs, inputs
 
+def take(attrs, inputs, cls):
+    """ Takes elements from an input array along the given axis.
+    Currently only slicing along axis 0 is supported for now."""
+    return 'take', attrs, inputs
 
 Review comment:
   same as above.




[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177305478
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -328,6 +354,17 @@ def squeeze(attrs, inputs, cls):
         mxnet_op = symbol.split(mxnet_op, axis=i-1, num_outputs=1, squeeze_axis=1)
     return mxnet_op, new_attrs, inputs
 
+def take(attrs, inputs, cls):
+    """ Takes elements from an input array along the given axis.
+    Currently only slicing along axis 0 is supported for now."""
+    return 'take', attrs, inputs
+
+def flatten(attrs, inputs, cls):
+    """Flattens the input array into a 2-D array by collapsing the higher dimensions."""
+    #Mxnet does not have axis support.
+    new_attrs = translation_utils._remove_attributes(attrs, ['axis'])
 
 Review comment:
   Should we show a warning or an error from here for such a condition? It will be hard for users to have to go and refer to the documentation.
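
A possible shape for that suggestion, as a hedged sketch (hypothetical helper, not the PR code): emit a warning whenever an ONNX attribute has to be dropped because the MXNet operator has no equivalent.

```python
import warnings

def _remove_attributes_with_warning(attrs, remove_list):
    """Drop the listed attributes, warning the user about each one removed."""
    new_attrs = {}
    for attr, value in attrs.items():
        if attr in remove_list:
            warnings.warn("ONNX attribute '%s' is not supported by the MXNet "
                          "operator and will be ignored" % attr)
        else:
            new_attrs[attr] = value
    return new_attrs
```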




[GitHub] chinakook commented on issue #9823: RCNN example fails for using latest mxnet

2018-03-26 Thread GitBox
chinakook commented on issue #9823: RCNN example fails for using latest mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/9823#issuecomment-376393268
 
 
   I encountered this problem in RCNN too. I've tested with cuda 8.0 and cudnn 6.0.2/cudnn 7.1.2; both of them failed today. However, it ran successfully on the mxnet version from two months ago.
   I think there may be a bug in the mxnet backend.




[GitHub] asitstands opened a new pull request #10258: [MXNET-145] Remove the dependences of mx.io and mx.initializer on the numpy's global random number generator

2018-03-26 Thread GitBox
asitstands opened a new pull request #10258: [MXNET-145]  Remove the 
dependences of mx.io and mx.initializer on the numpy's global random number 
generator
URL: https://github.com/apache/incubator-mxnet/pull/10258
 
 
   ## Description ##
   
   This PR removes the dependencies of `mx.io` and `mx.initializer` on numpy's global random number generator. The dependencies are not sound, as they introduce unnecessary coupling with the user's environment and confuse newcomers about seeding.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, the expected performance on the test set and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ## Comments ##
   - If this PR is accepted, I'll make similar PRs removing the dependencies on python's and numpy's global random number generators from `rnn/io`, `gluon`'s samplers and so on.
   
   




[GitHub] eric-haibin-lin commented on issue #8480: Cannot recognize Intel MPI

2018-03-26 Thread GitBox
eric-haibin-lin commented on issue #8480: Cannot recognize Intel MPI
URL: 
https://github.com/apache/incubator-mxnet/issues/8480#issuecomment-376389602
 
 
   Related:
   
https://cwiki.apache.org/confluence/display/MXNET/Extend+MXNet+Distributed+Training+by+MPI+AllReduce
 




[GitHub] rahul003 commented on issue #10255: [MXNET-142] Enhance test for LeakyReLU operator

2018-03-26 Thread GitBox
rahul003 commented on issue #10255: [MXNET-142] Enhance test for LeakyReLU 
operator
URL: https://github.com/apache/incubator-mxnet/pull/10255#issuecomment-376389085
 
 
   The seed is unnecessary for float32 and float64, right? Can we use the seed 
only for float16?
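
One possible shape for that suggestion (purely illustrative; the helper name and seed value are made up): keep float32/float64 fully randomized and fix the seed only for the float16 case.

```python
import numpy as np
import mxnet as mx

def check_leaky_relu(dtype, seed=None):
    # Fix the RNGs only when a seed is given, then run the gradient checks for this dtype.
    if seed is not None:
        mx.random.seed(seed)
        np.random.seed(seed)
    # ... run check_numeric_gradient / check_symbolic_backward for `dtype` here ...

for dtype in ['float32', 'float64']:
    check_leaky_relu(dtype)              # no fixed seed needed
check_leaky_relu('float16', seed=1234)   # only float16 gets a fixed seed
```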




[GitHub] solin319 commented on issue #10234: [MXNET-135] mx.image.imread support s3

2018-03-26 Thread GitBox
solin319 commented on issue #10234: [MXNET-135] mx.image.imread support s3
URL: https://github.com/apache/incubator-mxnet/pull/10234#issuecomment-376387934
 
 
   @cjolivier01 
   We have changed `single_buff` to a `unique_ptr`.




[GitHub] rahul003 commented on issue #10232: [MXNET-136] [WIP] Enabling USE_DIST_KVSTORE flag for CI

2018-03-26 Thread GitBox
rahul003 commented on issue #10232: [MXNET-136] [WIP] Enabling USE_DIST_KVSTORE 
flag for CI
URL: https://github.com/apache/incubator-mxnet/pull/10232#issuecomment-375847064
 
 
   The build with CMake and USE_DIST_KVSTORE fails with an error because 
FindProtobuf CMake module behavior changed with version 3.6. 
   I've made the below PR to fix the build. I've verified that it works for 
cmake 3.5, 3.6, 3.10. 
   
   https://github.com/dmlc/ps-lite/pull/130
   




[GitHub] rahul003 commented on issue #10232: [MXNET-136] [WIP] Enabling USE_DIST_KVSTORE flag for CI

2018-03-26 Thread GitBox
rahul003 commented on issue #10232: [MXNET-136] [WIP] Enabling USE_DIST_KVSTORE 
flag for CI
URL: https://github.com/apache/incubator-mxnet/pull/10232#issuecomment-375847064
 
 
   @cjolivier01 The build with CMake and USE_DIST_KVSTORE fails with an error I 
had shown you earlier. This is apparently because FindProtobuf CMake module 
behavior changed with version 3.6. 
   I've made the below PR to fix the build. I've verified that it works for 
cmake 3.5, 3.6, 3.10. 
   
   https://github.com/dmlc/ps-lite/pull/130
   




[GitHub] ashokei commented on issue #10021: [MXNET-33] SSD example not working with mkl-dnn

2018-03-26 Thread GitBox
ashokei commented on issue #10021: [MXNET-33] SSD example not working with 
mkl-dnn
URL: https://github.com/apache/incubator-mxnet/pull/10021#issuecomment-376384105
 
 
   @zheng-da sorry for the confusion. mkl-dnn has more general pooling support; it allows us to implement any arbitrary convention we like. So it is not a bug in mkl-dnn, it is just that we haven't integrated this specific use case into our mxnet mkldnn pooling implementation yet.
   
   




[GitHub] sxjscience closed issue #10257: class mxnet.gluon.rnn.RNN has a mistake default activation function in official document.

2018-03-26 Thread GitBox
sxjscience closed issue #10257: class mxnet.gluon.rnn.RNN has a mistake  
default activation function in official document.
URL: https://github.com/apache/incubator-mxnet/issues/10257
 
 
   




[GitHub] sxjscience commented on issue #10257: class mxnet.gluon.rnn.RNN has a mistake default activation function in official document.

2018-03-26 Thread GitBox
sxjscience commented on issue #10257: class mxnet.gluon.rnn.RNN has a mistake  
default activation function in official document.
URL: 
https://github.com/apache/incubator-mxnet/issues/10257#issuecomment-376381147
 
 
   Duplicate of https://github.com/apache/incubator-mxnet/issues/10152 




[GitHub] henripal commented on issue #9087: MXNET 1.0.0 - marginal performance improvement Titan V (Volta) with half precision cuda 9.0 + cudnn 7.0.5

2018-03-26 Thread GitBox
henripal commented on issue #9087: MXNET 1.0.0 - marginal performance 
improvement Titan V (Volta) with half precision cuda 9.0 + cudnn 7.0.5
URL: 
https://github.com/apache/incubator-mxnet/issues/9087#issuecomment-376380911
 
 
   @yangjunpro ran this w/ imagenet on a TitanV; am getting 500 samples/sec in 
fp16 VS 290 samples/sec in fp32.




[GitHub] xiaotao321 opened a new issue #10257: class mxnet.gluon.rnn.RNN has a mistake default activation function in official document.

2018-03-26 Thread GitBox
xiaotao321 opened a new issue #10257: class mxnet.gluon.rnn.RNN has a mistake  
default activation function in official document.
URL: https://github.com/apache/incubator-mxnet/issues/10257
 
 
   ## Description
   class mxnet.gluon.rnn.RNN has a mistaken default activation function in the official document.
   
   ## Error Message:
   [here](url) has a conflict about the default activation function of RNN: the constructor signature shows **activation='relu'**, while the parameter description says the **default is 'tanh'**.
   
   class mxnet.gluon.rnn.RNN(hidden_size, num_layers=1, # **activation='relu'**, layout='TNC', dropout=0, bidirectional=False, i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', input_size=0, **kwargs)
   
   Parameters:  activation ({'relu' or 'tanh'}, # **default 'tanh'**) – The activation function to use.
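
For anyone hitting this, a quick way to check which default the installed version actually uses (illustrative check, not taken from the issue):

```python
import inspect
import mxnet as mx

# Prints the real constructor signature, including the default for `activation`.
print(inspect.signature(mx.gluon.rnn.RNN.__init__))
```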




[GitHub] sxjscience commented on a change in pull request #9740: add axis support and gradient for L2norm

2018-03-26 Thread GitBox
sxjscience commented on a change in pull request #9740: add axis support and 
gradient for L2norm
URL: https://github.com/apache/incubator-mxnet/pull/9740#discussion_r177294267
 
 

 ##
 File path: src/operator/tensor/broadcast_reduce_op.h
 ##
 @@ -862,15 +905,47 @@ void L2NormComputeImpl(mshadow::Stream *s,
   });
 }
 
+template
+void SqRootForL2(const OpContext& ctx, OpReqType req, const TBlob ) {
+  mshadow::Stream *s = ctx.get_stream();
+  MSHADOW_REAL_TYPE_SWITCH(output.type_flag_, DType, {
+MXNET_ASSIGN_REQ_SWITCH(req, Req, {
+  DType* out_data = output.dptr();
+  using namespace mxnet_op;
+  Kernel, xpu>::Launch(
+s, output.Size(), out_data, out_data);
+});
+  });
+}
+
+struct square {
+  /*! \brief map a to result using defined operation */
+  template
+  MSHADOW_XINLINE static DType Map(DType a) {
+return a * a;
+  }
+};
 
 Review comment:
   I think square is supported in `mshadow_op::square`.




[GitHub] zheng-da commented on issue #10021: [MXNET-33] SSD example not working with mkl-dnn

2018-03-26 Thread GitBox
zheng-da commented on issue #10021: [MXNET-33] SSD example not working with 
mkl-dnn
URL: https://github.com/apache/incubator-mxnet/pull/10021#issuecomment-376378129
 
 
   What I mean is that mkldnn doesn't support the "full" pooling convention. Will mkldnn eventually support it? If it does, when will it support it?




[GitHub] zheng-da commented on issue #10021: [MXNET-33] SSD example not working with mkl-dnn

2018-03-26 Thread GitBox
zheng-da commented on issue #10021: [MXNET-33] SSD example not working with 
mkl-dnn
URL: https://github.com/apache/incubator-mxnet/pull/10021#issuecomment-376378129
 
 
   What I mean is that mkldnn doesn't support the "full" pooling convention. Will mkldnn eventually support it?




[GitHub] zheng-da opened a new pull request #10256: Fix a compile error in BN

2018-03-26 Thread GitBox
zheng-da opened a new pull request #10256: Fix a compile error in BN
URL: https://github.com/apache/incubator-mxnet/pull/10256
 
 
   ## Description ##
   This is to fix the compile error reported in 
https://github.com/apache/incubator-mxnet/issues/10235
   




[GitHub] zheng-da commented on a change in pull request #10116: [MXNET-105] Fix CuDNN performance after code refactor

2018-03-26 Thread GitBox
zheng-da commented on a change in pull request #10116: [MXNET-105] Fix CuDNN 
performance after code refactor
URL: https://github.com/apache/incubator-mxnet/pull/10116#discussion_r177289356
 
 

 ##
 File path: src/operator/nn/batch_norm.cu
 ##
 @@ -705,19 +700,18 @@ void BatchNormGradCompute(const nnvm::NodeAttrs& attrs,
   if (!param.use_global_stats && !param.cudnn_off && shape.ndim() <= 4
   && param.axis == mxnet::op::batchnorm::DEFAULT_AXIS) {
 MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
-  GetCuDNNOp(param).Backward(ctx, out_grad, in_data, out_data,
-req, in_grad, aux_states);
+  GetCuDNNOp(param).Backward(ctx, inputs, req, outputs);
 })
   } else {
 MSHADOW_REAL_TYPE_SWITCH_EX(dtype, DType, AccReal, {
-  BatchNormBackward(ctx, param, out_grad,
-  in_data, out_data, req, in_grad, aux_states);
+  BatchNormBackward(ctx, param, inputs, req, outputs);
 })
   }
 #else
+  aux_states[batchnorm::kMovingMean] = inputs[6];
+  aux_states[batchnorm::kMovingVar] = inputs[7];
 
 Review comment:
   Agreed. @marcoabreu, could you add a CI job that builds with CUDA only (without CuDNN)?




[GitHub] zheng-da commented on issue #10235: Build fails with USE_CUDNN = 0

2018-03-26 Thread GitBox
zheng-da commented on issue #10235: Build fails with USE_CUDNN = 0
URL: 
https://github.com/apache/incubator-mxnet/issues/10235#issuecomment-376372492
 
 
   I'll fix it.




[GitHub] zheng-da commented on a change in pull request #10116: [MXNET-105] Fix CuDNN performance after code refactor

2018-03-26 Thread GitBox
zheng-da commented on a change in pull request #10116: [MXNET-105] Fix CuDNN 
performance after code refactor
URL: https://github.com/apache/incubator-mxnet/pull/10116#discussion_r177287985
 
 

 ##
 File path: src/operator/nn/batch_norm.cu
 ##
 @@ -705,19 +700,18 @@ void BatchNormGradCompute(const nnvm::NodeAttrs& attrs,
   if (!param.use_global_stats && !param.cudnn_off && shape.ndim() <= 4
   && param.axis == mxnet::op::batchnorm::DEFAULT_AXIS) {
 MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
-  GetCuDNNOp(param).Backward(ctx, out_grad, in_data, out_data,
-req, in_grad, aux_states);
+  GetCuDNNOp(param).Backward(ctx, inputs, req, outputs);
 })
   } else {
 MSHADOW_REAL_TYPE_SWITCH_EX(dtype, DType, AccReal, {
-  BatchNormBackward(ctx, param, out_grad,
-  in_data, out_data, req, in_grad, aux_states);
+  BatchNormBackward(ctx, param, inputs, req, outputs);
 })
   }
 #else
+  aux_states[batchnorm::kMovingMean] = inputs[6];
+  aux_states[batchnorm::kMovingVar] = inputs[7];
 
 Review comment:
   I see. I'll update it.




[GitHub] rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-03-26 Thread GitBox
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 
support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r177287609
 
 

 ##
 File path: tests/nightly/dist_sync_kvstore.py
 ##
 @@ -278,39 +316,49 @@ def check_compr_random(kv, threshold, nworker):
 decompr *= nworker * rate
 assert_almost_equal(diff.asnumpy(), decompr)
 
-print ('worker '+str(my_rank)+' started with non compression tests')
-check_default_keys(kv, my_rank, nworker)
-check_row_sparse_keys(kv, my_rank, nworker)
-check_row_sparse_keys_with_zeros(kv, my_rank, nworker)
-check_big_row_sparse_keys(kv, my_rank, nworker)
-print('worker ' + str(my_rank) + ' is done with non compression tests')
-
-# don't run non compressed keys after this as kvstore now is set to 
compressed
-print ('worker '+str(my_rank)+' started with compression tests')
-kv, threshold = init_kv_compressed(kv)
-check_compr_pull_before_push(kv)
-check_compr_zero(kv)
-check_compr_residual(kv, threshold, nworker)
-check_compr_ones(kv, threshold, nworker)
-check_compr_random(kv, threshold, nworker)
+print ('worker ' + str(my_rank) + ' started with compression tests')
+check_compr_pull_before_push()
+check_compr_zero()
+check_compr_residual(threshold)
+check_compr_ones(threshold)
+check_compr_random(threshold, nrepeat)
 print('worker ' + str(my_rank) + ' is done with compression tests')
 
-def test_sync_init():
+def test_sync_init(gpu_tests=False):
+def get_dtype(idx, cur_keys):
+if idx < len(cur_keys)/2:
+dtype = 'float32'
+else:
+dtype = 'float16'
+return dtype
+
 def check_init(kv, cur_keys, cur_shape, device=False):
 ctx = mx.gpu(0) if device else mx.cpu()
-val = [mx.nd.zeros(cur_shape, ctx) for i in cur_keys]
+val = [mx.nd.zeros(cur_shape, ctx=ctx, dtype=get_dtype(i, cur_keys)) 
for i in range(len(cur_keys))]
 for i in range(len(cur_keys)):
 expected = i
-kv.init(cur_keys[i], [mx.nd.ones(cur_shape, ctx) * i])
+kv.init(cur_keys[i], [mx.nd.ones(cur_shape, ctx=ctx, 
dtype=get_dtype(i, cur_keys)) * i])
 kv.pull(cur_keys[i], out=val[i])
 check_diff_to_scalar(val[i], expected)
 check_init(kv, init_test_keys, shape)
 check_init(kv, init_test_keys_big, big_shape)
-check_init(kv, init_test_keys_device, shape, device=True)
-check_init(kv, init_test_keys_device_big, big_shape, device=True)
-my_rank = kv.rank
-print('worker ' + str(my_rank) + ' is initialized')
+if gpu_tests:
+check_init(kv, init_test_keys_device, shape, device=True)
+check_init(kv, init_test_keys_device_big, big_shape, device=True)
+print('worker ' + str(kv.rank) + ' is initialized')
 
 if __name__ == "__main__":
-test_sync_init()
-test_sync_push_pull()
+parser = argparse.ArgumentParser(description='test distributed kvstore in 
dist_sync mode')
+parser.add_argument('--nrepeat', type=int, default=5)
 
 Review comment:
   The same command as before can be used, because the default values of the arguments run all the earlier tests.




[GitHub] sergeykolychev commented on issue #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-03-26 Thread GitBox
sergeykolychev commented on issue #10208: [MXNET-117] Sparse operator 
broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#issuecomment-376367027
 
 
   @haojin2 ok, I'll fix this problem for you by the end of Thursday this week.
   Don't worry and continue your development; I'll take care of the Perl side.




[GitHub] kpengboy commented on issue #9611: program can't finished normally in dist_sync mode

2018-03-26 Thread GitBox
kpengboy commented on issue #9611: program can't finished normally in dist_sync 
mode
URL: 
https://github.com/apache/incubator-mxnet/issues/9611#issuecomment-376365827
 
 
   @mli have there been any updates? This issue has also caused annoyance for me.




[GitHub] kpengboy commented on issue #9611: program can't finished normally in dist_sync mode

2018-03-26 Thread GitBox
kpengboy commented on issue #9611: program can't finished normally in dist_sync 
mode
URL: 
https://github.com/apache/incubator-mxnet/issues/9611#issuecomment-376365827
 
 
   @mli have there been any updates? This issue is also causing annoyance for me.




[GitHub] szha closed pull request #10165: [MXNET-114] Add the ability to exclude specific lines in tutorial notebooks generated from .md

2018-03-26 Thread GitBox
szha closed pull request #10165: [MXNET-114] Add the ability to exclude 
specific lines in tutorial notebooks generated from .md
URL: https://github.com/apache/incubator-mxnet/pull/10165
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/mxdoc.py b/docs/mxdoc.py
index 907ec7cc57f..7f567f0b8d0 100644
--- a/docs/mxdoc.py
+++ b/docs/mxdoc.py
@@ -265,9 +265,11 @@ def _get_python_block_output(src, global_dict, local_dict):
 ret_status = False
 return (ret_status, s.getvalue()+err)
 
-def _get_jupyter_notebook(lang, lines):
+def _get_jupyter_notebook(lang, all_lines):
     cells = []
-    for in_code, blk_lang, lines in _get_blocks(lines):
+    # Exclude lines containing 
+    filtered_lines = [line for line in all_lines if "" not in line]
+    for in_code, blk_lang, lines in _get_blocks(filtered_lines):
         if blk_lang != lang:
             in_code = False
         src = '\n'.join(lines)
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 3eff299d778..8a597e95bfb 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -119,6 +119,8 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 - [Simple autograd example](http://mxnet.incubator.apache.org/tutorials/gluon/autograd.html)
 
+- [Inference using an ONNX model](http://mxnet.incubator.apache.org/tutorials/onnx/inference_on_onnx_model.html)
+
  
 
  
diff --git a/docs/tutorials/onnx/inference_on_onnx_model.md 
b/docs/tutorials/onnx/inference_on_onnx_model.md
new file mode 100644
index 000..2eb90cd55ab
--- /dev/null
+++ b/docs/tutorials/onnx/inference_on_onnx_model.md
@@ -0,0 +1,269 @@
+
+# Running inference on MXNet/Gluon from an ONNX model
+
+[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides 
an open source format for AI models. It defines an extensible computation graph 
model, as well as definitions of built-in operators and standard data types.
+
+In this tutorial we will:
+
+- learn how to load a pre-trained .onnx model file into MXNet/Gluon
+- learn how to test this model using the sample input/output
+- learn how to test the model on custom images
+
+## Pre-requisite
+
+To run the tutorial you will need to have installed the following python 
modules:
+- [MXNet](http://mxnet.incubator.apache.org/install/index.html)
+- [onnx](https://github.com/onnx/onnx) (follow the install guide)
+- [onnx-mxnet](https://github.com/onnx/onnx-mxnet)
+- matplotlib
+- wget
+
+
+```python
+import numpy as np
+import onnx_mxnet
+import mxnet as mx
+from mxnet import gluon, nd
+%matplotlib inline
+import matplotlib.pyplot as plt
+import tarfile, os
+import wget
+import json
+```
+
+### Downloading supporting files
+These are images and a vizualisation script
+
+
+```python
+image_folder = "images"
+utils_file = "utils.py" # contain utils function to plot nice visualization
+image_net_labels_file = "image_net_labels.json"
+images = ['apron', 'hammerheadshark', 'dog', 'wrench', 'dolphin', 'lotus']
+base_url = "https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/onnx/{}?raw=true"
+
+if not os.path.isdir(image_folder):
+    os.makedirs(image_folder)
+    for image in images:
+        wget.download(base_url.format("{}/{}.jpg".format(image_folder, image)), image_folder)
+if not os.path.isfile(utils_file):
+    wget.download(base_url.format(utils_file))
+if not os.path.isfile(image_net_labels_file):
+    wget.download(base_url.format(image_net_labels_file))
+```
+
+
+```python
+from utils import *
+```
+
+## Downloading a model from the ONNX model zoo
+
+We download a pre-trained model, in our case the 
[vgg16](https://arxiv.org/abs/1409.1556) model, trained on 
[ImageNet](http://www.image-net.org/) from the [ONNX model 
zoo](https://github.com/onnx/models). The model comes packaged in an archive 
`tar.gz` file containing an `model.onnx` model file and some sample 
input/output data.
+
+
+```python
+base_url = "https://s3.amazonaws.com/download.onnx/models/"
+current_model = "vgg16"
+model_folder = "model"
+archive = "{}.tar.gz".format(current_model)
+archive_file = os.path.join(model_folder, archive)
+url = "{}{}".format(base_url, archive)
+```
+
+Create the model folder and download the zipped model
+
+
+```python
+os.makedirs(model_folder, exist_ok=True)
+if not os.path.isfile(archive_file):
+    wget.download(url, model_folder)
+```
+
+Extract the model
+
+
+```python
+if not os.path.isdir(os.path.join(model_folder, current_model)):
+    tar = tarfile.open(archive_file, "r:gz")
+    tar.extractall(model_folder)
+    tar.close()
+```
+
+The models have been pre-trained on ImageNet, let's load the label mapping of 
the 1000 classes.
+
+
+```python
+categories = 

[incubator-mxnet] branch master updated: [MXNET-114] Add the ability to exclude specific lines in tutorial notebooks generated from .md (#10165)

2018-03-26 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new bd9b9c8  [MXNET-114] Add the ability to exclude specific lines in 
tutorial notebooks generated from .md (#10165)
bd9b9c8 is described below

commit bd9b9c8b76d68b2b7cd957dc0bd07fb4fbc29c4c
Author: ThomasDelteil 
AuthorDate: Mon Mar 26 18:23:28 2018 -0700

[MXNET-114] Add the ability to exclude specific lines in tutorial notebooks 
generated from .md (#10165)

* ONNX tutorial + skip notebook line

* Update index.md

* Update inference_on_onnx_model.md

putting in back tick to highlight the fact that it is output from cell
---
 docs/mxdoc.py  |   6 +-
 docs/tutorials/index.md|   2 +
 docs/tutorials/onnx/inference_on_onnx_model.md | 269 +
 example/README.md  |   6 +
 4 files changed, 281 insertions(+), 2 deletions(-)


[GitHub] ThomasDelteil commented on issue #10165: [MXNET-114] Add the ability to exclude specific lines in tutorial notebooks generated from .md

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #10165: [MXNET-114] Add the ability to exclude 
specific lines in tutorial notebooks generated from .md
URL: https://github.com/apache/incubator-mxnet/pull/10165#issuecomment-376363624
 
 
   @szha re-rebased, is it ready to be merged? See approval from @thomelane 
@safrooze @Ishitori above.
   New link here: 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-10165/10/tutorials/onnx/inference_on_onnx_model.html
   I confirm that the `` filter is working as expected




[GitHub] ThomasDelteil commented on issue #10165: [MXNET-114] Add the ability to exclude specific lines in tutorial notebooks generated from .md

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #10165: [MXNET-114] Add the ability to exclude 
specific lines in tutorial notebooks generated from .md
URL: https://github.com/apache/incubator-mxnet/pull/10165#issuecomment-376363624
 
 
   @szha re-rebased, is it ready to be merged? See approval from @thomelane 
@safrooze @Ishitori above.
   New link here: 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-10165/10/tutorials/onnx/inference_on_onnx_model.html
   I confirmed that the `` filter is working as expected




[GitHub] haojin2 commented on issue #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-03-26 Thread GitBox
haojin2 commented on issue #10208: [MXNET-117] Sparse operator 
broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#issuecomment-376363351
 
 
   @sergeykolychev Probably the end of this month would be a hard deadline for me, since it would be good to check this in for the next release. I guess all we need are some minor changes to the interface on the Perl side, similar to my changes to sparse.py in this PR, so any help at your earliest convenience is appreciated. Thanks for your response, and enjoy your vacation!




[GitHub] haojin2 opened a new pull request #10255: [MXNET-142] Enhance test for LeakyReLU operator

2018-03-26 Thread GitBox
haojin2 opened a new pull request #10255: [MXNET-142] Enhance test for 
LeakyReLU operator
URL: https://github.com/apache/incubator-mxnet/pull/10255
 
 
   ## Description ##
   Enhancement of test_leaky_relu and test_prelu
   
   ## Checklist ##
   ### Essentials ###
   - [x] The PR title starts with [MXNET-142]
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] Code is well-documented
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Improve test_leaky_relu to provide coverage for all float types
   - [x] Improve test_prelu to provide coverage for all float types
   
   ## Comments ##
   - This PR aims to address the issue with the previous version of test_leaky_relu caused by precision problems when the finite difference method is used with the float16 data type
   - With some experiments I discovered that the finite difference method may not be suitable for checking numeric gradients with 16-bit floating point inputs. Here's an example (all numbers are represented as 16-bit floating point values):
   x: [[-0.9,-0.8,-0.7],
[-0.6,-0.5,-0.4],
[-0.3,-0.2,-0.1],
[0.1,0.2,0.3],
[0.4,0.5,0.6],
[0.7,0.8,0.9]]
   act_type: leaky_relu
   slope:0.25
   Analytical Derivative:
   [[0.25,0.25,0.25],
[0.25,0.25,0.25],
[0.25,0.25,0.25],
[ 1.0 ,  1.0 ,  1.0],
[ 1.0 ,  1.0 ,  1.0],
[ 1.0 ,  1.0 ,  1.0]]
   Numeric Derivative from finite difference method with epsilon=1e-4:
   [[ 0.61035156  0.61035156  0.61035156]
[ 0.61035156  0.30517578  0.30517578]
[ 0.30517578  0.15258789  0.22888184]
[ 0.91552734  0.61035156  1.22070312]
[ 1.22070312  1.22070312  2.44140625]
[ 2.44140625  2.44140625  2.44140625]]
   Now if we divide all values in x by 256, i.e. shrink their absolute values, and apply the same numeric method with epsilon=1e-4, we get a new set of derivatives:
   [[ 0.25268555  0.25268555  0.25268555]
[ 0.25268555  0.25024414  0.25024414]
[ 0.25024414  0.24914551  0.24975586]
[ 0.99902344  0.99658203  1.00097656]
[ 1.00097656  1.00097656  1.01074219]
[ 1.01074219  1.01074219  1.01074219]]
   We can see that the numeric and analytical derivatives diverge much more as the absolute value of the input x grows. As a result, we need to draw the random inputs from a smaller range if we want to verify gradients numerically on 16-bit floating point numbers (see the sketch after this list).
   - The seeds for both tests are fixed because check_numeric_gradient could still be a bit flaky: most randomized runs on my local machine passed, and the failures only exceeded the tolerance slightly. To reduce the occasional flakiness I chose to fix the seed; alternatively, we could drop check_numeric_gradient and only check the analytical gradients.
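   A minimal NumPy-only sketch of this effect; the epsilon and the input values below are illustrative, not the exact ones used in test_leaky_relu:
   ```
   import numpy as np

   SLOPE = np.float16(0.25)

   def leaky_relu(x):
       # keep everything in float16 on purpose
       return np.where(x > 0, x, SLOPE * x).astype(np.float16)

   def numeric_grad(x, eps=np.float16(1e-4)):
       # central difference evaluated entirely in float16
       return ((leaky_relu(x + eps) - leaky_relu(x - eps)) / (2 * eps)).astype(np.float16)

   def analytic_grad(x):
       return np.where(x > 0, np.float16(1.0), SLOPE)

   x = np.linspace(-0.9, 0.9, 18).astype(np.float16)
   for scale in (1.0, 1.0 / 256):
       xs = (x * scale).astype(np.float16)
       err = np.abs(numeric_grad(xs).astype(np.float64) - analytic_grad(xs).astype(np.float64)).max()
       print('scale = %g, max abs gradient error = %.4f' % (scale, err))
   ```
   The error shrinks dramatically at the smaller scale because float16 can no longer resolve x + eps once |x| is large relative to eps.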


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sergeykolychev commented on issue #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-03-26 Thread GitBox
sergeykolychev commented on issue #10208: [MXNET-117] Sparse operator 
broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#issuecomment-376360961
 
 
   @haojin2 Hi Hao, certainly. When do you need this by, at the latest? I'm on vacation right now, but if it's pressing I can look at it sooner.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on issue #10242: [MXNET-137]fix parameters name inconsistent for Proposal OP and Multi Proposal OP

2018-03-26 Thread GitBox
wkcn commented on issue #10242: [MXNET-137]fix parameters name inconsistent for 
Proposal OP and Multi Proposal OP
URL: https://github.com/apache/incubator-mxnet/pull/10242#issuecomment-376360282
 
 
   @piiswrong 
   Yes. I used this test case: [code](https://gist.github.com/wkcn/8015feae59a63956884d1ef8b9fbb743)
   If I replace `cls_score` with `cls_prob` in the code, it raises this error:
   ```
   Traceback (most recent call last):
 File "/home/wkcn/proj/testpy/testmxp.py", line 95, in 
   test(rpn_pre_nms_top_n, rpn_post_nms_top_n)
 File "/home/wkcn/proj/testpy/testmxp.py", line 58, in test
   rpn_min_size = rpn_min_size, output_score = True)
 File "", line 82, in Proposal
 File "/home/wkcn/proj/mxnet/python/mxnet/_ctypes/ndarray.py", line 92, in 
_imperative_invoke
   ctypes.byref(out_stypes)))
 File "/home/wkcn/proj/mxnet/python/mxnet/base.py", line 149, in check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: Cannot find argument 'cls_prob', Possible Arguments:
   ```
   
   However, it is correct to use `cls_prob` with mx.sym.contrib.Proposal, as in [the code](https://github.com/apache/incubator-mxnet/blob/master/example/rcnn/rcnn/symbol/symbol_resnet.py#L193).
   
   The reason is that `ListArguments()` in [proposal-inl.h](https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/proposal-inl.h#L160) returns `{"cls_prob", "bbox_pred", "im_info"}`, which is what the symbol API uses, while `.add_argument("cls_score", "NDArray-or-Symbol", "Score of how likely proposal is object.")` in [proposal.cc](https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/proposal.cc#L462) is what the ndarray API uses.
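   A minimal sketch of the two call styles (the surrounding network, shapes, and the remaining rpn_* parameters are omitted):
   ```
   import mxnet as mx

   # Symbol API: ListArguments() exposes the first input as `cls_prob`
   sym = mx.sym.contrib.Proposal(cls_prob=mx.sym.Variable('cls_prob'),
                                 bbox_pred=mx.sym.Variable('bbox_pred'),
                                 im_info=mx.sym.Variable('im_info'))
   print(sym.list_arguments())   # ['cls_prob', 'bbox_pred', 'im_info']

   # NDArray API (before this fix): the same input is registered as `cls_score`
   # in proposal.cc, so calling mx.nd.contrib.Proposal(cls_prob=..., ...) raises
   # "Cannot find argument 'cls_prob'".
   ```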
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #10250: Model too big?

2018-03-26 Thread GitBox
eric-haibin-lin commented on issue #10250: Model too big?
URL: 
https://github.com/apache/incubator-mxnet/issues/10250#issuecomment-376359209
 
 
   SparseEmbedding could help reduce the memory footprint for the gradient. See 
example usage in 
https://github.com/apache/incubator-mxnet/tree/master/example/sparse/matrix_factorization
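   A minimal sketch of the symbol-level usage, assuming the contrib.SparseEmbedding operator used in the linked matrix_factorization example (dimensions are illustrative):
   ```
   import mxnet as mx

   data = mx.sym.Variable('data')  # integer indices, e.g. user/item or word ids
   # A row_sparse weight with contrib.SparseEmbedding keeps the gradient
   # row_sparse, so only the rows that were actually looked up are stored.
   weight = mx.sym.Variable('embed_weight', stype='row_sparse')
   embed = mx.sym.contrib.SparseEmbedding(data=data, weight=weight,
                                          input_dim=1000000, output_dim=128)
   ```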
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
anirudhacharya commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177276981
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -402,9 +438,9 @@ def max_pooling(attrs, inputs, cls):
 'strides': 'stride',
 'pads': 'pad',
})
+
 new_attrs = translation_utils._add_extra_attributes(new_attrs,
-{'pool_type': 'avg',
- 'pooling_convention': 
'valid'
+{'pooling_convention': 
'valid'
 })
 new_op = translation_utils._fix_pooling('max', inputs, new_attrs)
 
 Review comment:
   If more than one operator needs the same fix during translation, we put that code in translation_utils so that there is no code repetition or redundancy. Here max_pooling and avg_pooling both call this method from translation_utils (a sketch of the pattern follows below).
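   A hypothetical sketch of that pattern; the helper names mirror the diff above, while the import path and the return convention are assumptions:
   ```
   from mxnet.contrib.onnx._import import translation_utils

   def avg_pooling(attrs, inputs, cls):
       """ONNX AveragePool -> MXNet Pooling, reusing the shared fix-up helpers."""
       new_attrs = translation_utils._fix_attribute_names(
           attrs, {'kernel_shape': 'kernel', 'strides': 'stride', 'pads': 'pad'})
       new_attrs = translation_utils._add_extra_attributes(
           new_attrs, {'pool_type': 'avg', 'pooling_convention': 'valid'})
       # the padding/stride handling shared with max_pooling lives here
       new_op = translation_utils._fix_pooling('avg', inputs, new_attrs)
       return new_op, new_attrs, inputs
   ```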


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
anirudhacharya commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177276477
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -328,6 +354,17 @@ def squeeze(attrs, inputs, cls):
 mxnet_op = symbol.split(mxnet_op, axis=i-1, num_outputs=1, 
squeeze_axis=1)
 return mxnet_op, new_attrs, inputs
 
+def take(attrs, inputs, cls):
+""" Takes elements from an input array along the given axis.
+Currently only slicing along axis 0 is supported for now."""
+return 'take', attrs, inputs
 
 Review comment:
   operator support details will be documented here -  
https://cwiki.apache.org/confluence/display/MXNET/ONNX


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
anirudhacharya commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177276276
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_test.py
 ##
 @@ -126,11 +144,111 @@ def test_super_resolution_example():
 
 output_img_dim = 672
 input_image, img_cb, img_cr = super_resolution.get_test_image()
-result_img = super_resolution.perform_inference(sym, params, input_image,
-img_cb, img_cr)
+result_img = super_resolution.perform_inference(sym, arg_params, 
aux_params,
+input_image, img_cb, 
img_cr)
 
 assert hashlib.md5(result_img.tobytes()).hexdigest() == 
'0d98393a49b1d9942106a2ed89d1e854'
 assert result_img.size == (output_img_dim, output_img_dim)
 
+def get_test_files(name):
+"""Extract tar file and returns model path and input, output data"""
+tar_name = download(URLS.get(name), dirname=CURR_PATH.__str__())
+# extract tar file
+tar_path = os.path.join(CURR_PATH, tar_name)
+tar = tarfile.open(tar_path.__str__(), "r:*")
+tar.extractall(path=CURR_PATH.__str__())
+tar.close()
+data_dir = os.path.join(CURR_PATH, name)
+model_path = os.path.join(data_dir, 'model.onnx')
+
+inputs = []
+outputs = []
+# get test files
+for test_file in os.listdir(data_dir):
+case_dir = os.path.join(data_dir, test_file)
+# skip the non-dir files
+if not os.path.isdir(case_dir):
+continue
+input_file = os.path.join(case_dir, 'input_0.pb')
+input_tensor = TensorProto()
+with open(input_file, 'rb') as proto_file:
+input_tensor.ParseFromString(proto_file.read())
+inputs.append(numpy_helper.to_array(input_tensor))
+
+output_tensor = TensorProto()
+output_file = os.path.join(case_dir, 'output_0.pb')
+with open(output_file, 'rb') as proto_file:
+output_tensor.ParseFromString(proto_file.read())
+outputs.append(numpy_helper.to_array(output_tensor))
+
+return model_path, inputs, outputs
+
+def test_bvlc_googlenet():
+""" Tests Googlenet model"""
+model_path, inputs, outputs = get_test_files('bvlc_googlenet')
+logging.info("Translating Googlenet model from ONNX to Mxnet")
+sym, arg_params, aux_params = onnx_mxnet.import_model(model_path)
+
+# run test for each test file
+for input_data, output_data in zip(inputs, outputs):
+# create module
+mod = mx.mod.Module(symbol=sym, data_names=['input_0'], 
context=mx.cpu(), label_names=None)
+mod.bind(for_training=False, data_shapes=[('input_0', 
input_data.shape)], label_shapes=None)
+mod.set_params(arg_params=arg_params, aux_params=aux_params,
+   allow_missing=True, allow_extra=True)
+# run inference
+batch = namedtuple('Batch', ['data'])
+mod.forward(batch([mx.nd.array(input_data)]), is_train=False)
+
+# verify the results
+npt.assert_equal(mod.get_outputs()[0].shape, output_data.shape)
+npt.assert_almost_equal(output_data, mod.get_outputs()[0].asnumpy(), 
decimal=3)
+logging.info("Googlenet model conversion Successful")
+
+def test_bvlc_reference_caffenet():
+"""Tests the bvlc cafenet model"""
+model_path, inputs, outputs = get_test_files('bvlc_reference_caffenet')
+logging.info("Translating Caffenet model from ONNX to Mxnet")
+sym, arg_params, aux_params = onnx_mxnet.import_model(model_path)
+
+# run test for each test file
+for input_data, output_data in zip(inputs, outputs):
+# create module
+mod = mx.mod.Module(symbol=sym, data_names=['input_0'], 
context=mx.cpu(), label_names=None)
 
 Review comment:
   We populate these data names while translating the ONNX model into an MXNet model here: 
https://github.com/spidyDev/incubator-mxnet/blob/AddOp-Maxpool-BatchNorm/python/mxnet/contrib/onnx/_import/import_onnx.py#L107-L111


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
anirudhacharya commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177275133
 
 

 ##
 File path: tests/python-pytest/onnx/backend.py
 ##
 @@ -94,12 +94,13 @@ def run_node(cls, node, inputs, device='CPU'):
 result obtained after running the operator
 """
 graph = GraphProto()
-sym, _ = graph.from_onnx(MXNetBackend.make_graph(node, inputs))
-data_names = [i for i in sym.get_internals().list_inputs()]
+sym, arg_params, aux_params = 
graph.from_onnx(MXNetBackend.make_graph(node, inputs))
+data_names = [i for i in sym.list_inputs() if i not in (arg_params, 
aux_params)]
 
 Review comment:
   will make the change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
anirudhacharya commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177274972
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -328,6 +354,17 @@ def squeeze(attrs, inputs, cls):
 mxnet_op = symbol.split(mxnet_op, axis=i-1, num_outputs=1, 
squeeze_axis=1)
 return mxnet_op, new_attrs, inputs
 
+def take(attrs, inputs, cls):
+""" Takes elements from an input array along the given axis.
+Currently only slicing along axis 0 is supported for now."""
+return 'take', attrs, inputs
+
+def flatten(attrs, inputs, cls):
+"""Flattens the input array into a 2-D array by collapsing the higher 
dimensions."""
+#Mxnet does not have axis support.
+new_attrs = translation_utils._remove_attributes(attrs, ['axis'])
 
 Review comment:
   We have an operator-support Confluence page, which is linked in the API documentation: 
https://cwiki.apache.org/confluence/display/MXNET/ONNX


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 opened a new pull request #10254: [MXNET-116] Solve the native package build failure

2018-03-26 Thread GitBox
lanking520 opened a new pull request #10254: [MXNET-116] Solve the native 
package build failure
URL: https://github.com/apache/incubator-mxnet/pull/10254
 
 
   ## Description ##
   #10253 [Solved]
   @nswamy @Roshrini @aaronmarkham 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177271211
 
 

 ##
 File path: tests/python-pytest/onnx/backend.py
 ##
 @@ -123,7 +124,10 @@ def run_node(cls, node, inputs, device='CPU'):
 mod.bind(for_training=False, data_shapes=data_shapes, 
label_shapes=None)
 
 # initializing parameters for calculating result of each individual 
node
-mod.init_params()
+if arg_params is None or aux_params is None:
 
 Review comment:
   What if arg_params is not None but aux_params is None? (One way to cover that case is sketched below.)
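   A sketch of one way to handle the mixed case, not the exact fix; the helper name is hypothetical, and `allow_missing`/`allow_extra` let Module initialize whatever was not supplied:
   ```
   def set_or_init_params(mod, arg_params, aux_params):
       """Set params on a bound Module, initializing anything not supplied."""
       if arg_params is None and aux_params is None:
           mod.init_params()
       else:
           mod.set_params(arg_params=arg_params or {}, aux_params=aux_params or {},
                          allow_missing=True, allow_extra=True)
   ```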


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177269859
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -402,9 +438,9 @@ def max_pooling(attrs, inputs, cls):
 'strides': 'stride',
 'pads': 'pad',
})
+
 new_attrs = translation_utils._add_extra_attributes(new_attrs,
-{'pool_type': 'avg',
- 'pooling_convention': 
'valid'
+{'pooling_convention': 
'valid'
 })
 new_op = translation_utils._fix_pooling('max', inputs, new_attrs)
 
 Review comment:
   Would it be a better approach to have a translator for the Pooling operator (also applicable to other categories of operators), so that all this magic of fix_* calls is handled within it?
   For example, how do I know that for pooling I need to call fix_pooling?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177271067
 
 

 ##
 File path: tests/python-pytest/onnx/backend.py
 ##
 @@ -94,12 +94,13 @@ def run_node(cls, node, inputs, device='CPU'):
 result obtained after running the operator
 """
 graph = GraphProto()
-sym, _ = graph.from_onnx(MXNetBackend.make_graph(node, inputs))
-data_names = [i for i in sym.get_internals().list_inputs()]
+sym, arg_params, aux_params = 
graph.from_onnx(MXNetBackend.make_graph(node, inputs))
+data_names = [i for i in sym.list_inputs() if i not in (arg_params, 
aux_params)]
 
 Review comment:
   nit: i -> something more meaningful


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177269413
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -328,6 +354,17 @@ def squeeze(attrs, inputs, cls):
 mxnet_op = symbol.split(mxnet_op, axis=i-1, num_outputs=1, 
squeeze_axis=1)
 return mxnet_op, new_attrs, inputs
 
+def take(attrs, inputs, cls):
+""" Takes elements from an input array along the given axis.
+Currently only slicing along axis 0 is supported for now."""
+return 'take', attrs, inputs
+
+def flatten(attrs, inputs, cls):
+"""Flattens the input array into a 2-D array by collapsing the higher 
dimensions."""
+#Mxnet does not have axis support.
+new_attrs = translation_utils._remove_attributes(attrs, ['axis'])
 
 Review comment:
   How does the user know about this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177271919
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_test.py
 ##
 @@ -126,11 +144,111 @@ def test_super_resolution_example():
 
 output_img_dim = 672
 input_image, img_cb, img_cr = super_resolution.get_test_image()
-result_img = super_resolution.perform_inference(sym, params, input_image,
-img_cb, img_cr)
+result_img = super_resolution.perform_inference(sym, arg_params, 
aux_params,
+input_image, img_cb, 
img_cr)
 
 assert hashlib.md5(result_img.tobytes()).hexdigest() == 
'0d98393a49b1d9942106a2ed89d1e854'
 assert result_img.size == (output_img_dim, output_img_dim)
 
+def get_test_files(name):
+"""Extract tar file and returns model path and input, output data"""
+tar_name = download(URLS.get(name), dirname=CURR_PATH.__str__())
+# extract tar file
+tar_path = os.path.join(CURR_PATH, tar_name)
+tar = tarfile.open(tar_path.__str__(), "r:*")
+tar.extractall(path=CURR_PATH.__str__())
+tar.close()
+data_dir = os.path.join(CURR_PATH, name)
+model_path = os.path.join(data_dir, 'model.onnx')
+
+inputs = []
+outputs = []
+# get test files
+for test_file in os.listdir(data_dir):
+case_dir = os.path.join(data_dir, test_file)
+# skip the non-dir files
+if not os.path.isdir(case_dir):
+continue
+input_file = os.path.join(case_dir, 'input_0.pb')
+input_tensor = TensorProto()
+with open(input_file, 'rb') as proto_file:
+input_tensor.ParseFromString(proto_file.read())
+inputs.append(numpy_helper.to_array(input_tensor))
+
+output_tensor = TensorProto()
+output_file = os.path.join(case_dir, 'output_0.pb')
+with open(output_file, 'rb') as proto_file:
+output_tensor.ParseFromString(proto_file.read())
+outputs.append(numpy_helper.to_array(output_tensor))
+
+return model_path, inputs, outputs
+
+def test_bvlc_googlenet():
+""" Tests Googlenet model"""
+model_path, inputs, outputs = get_test_files('bvlc_googlenet')
+logging.info("Translating Googlenet model from ONNX to Mxnet")
+sym, arg_params, aux_params = onnx_mxnet.import_model(model_path)
+
+# run test for each test file
+for input_data, output_data in zip(inputs, outputs):
+# create module
+mod = mx.mod.Module(symbol=sym, data_names=['input_0'], 
context=mx.cpu(), label_names=None)
+mod.bind(for_training=False, data_shapes=[('input_0', 
input_data.shape)], label_shapes=None)
+mod.set_params(arg_params=arg_params, aux_params=aux_params,
+   allow_missing=True, allow_extra=True)
+# run inference
+batch = namedtuple('Batch', ['data'])
+mod.forward(batch([mx.nd.array(input_data)]), is_train=False)
+
+# verify the results
+npt.assert_equal(mod.get_outputs()[0].shape, output_data.shape)
+npt.assert_almost_equal(output_data, mod.get_outputs()[0].asnumpy(), 
decimal=3)
+logging.info("Googlenet model conversion Successful")
+
+def test_bvlc_reference_caffenet():
+"""Tests the bvlc cafenet model"""
+model_path, inputs, outputs = get_test_files('bvlc_reference_caffenet')
+logging.info("Translating Caffenet model from ONNX to Mxnet")
+sym, arg_params, aux_params = onnx_mxnet.import_model(model_path)
+
+# run test for each test file
+for input_data, output_data in zip(inputs, outputs):
+# create module
+mod = mx.mod.Module(symbol=sym, data_names=['input_0'], 
context=mx.cpu(), label_names=None)
 
 Review comment:
   Is there a way to get data_names from the ONNX model?
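   A minimal sketch of one way to recover them from the ONNX graph itself (the file name is a placeholder): graph inputs that are not initializers are the data inputs.
   ```
   import onnx

   model = onnx.load('model.onnx')
   initializers = {init.name for init in model.graph.initializer}
   data_names = [inp.name for inp in model.graph.input if inp.name not in initializers]
   print(data_names)   # e.g. ['input_0']
   ```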


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177269447
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/op_translations.py
 ##
 @@ -328,6 +354,17 @@ def squeeze(attrs, inputs, cls):
 mxnet_op = symbol.split(mxnet_op, axis=i-1, num_outputs=1, 
squeeze_axis=1)
 return mxnet_op, new_attrs, inputs
 
+def take(attrs, inputs, cls):
+""" Takes elements from an input array along the given axis.
+Currently only slicing along axis 0 is supported for now."""
+return 'take', attrs, inputs
 
 Review comment:
   What if axis is not 0?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177271724
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_test.py
 ##
 @@ -21,19 +21,37 @@
 ONNX backend test framework. Once we have PRs on the ONNX repo and get
 those PRs merged, this file will get EOL'ed.
 """
+# pylint: disable=too-many-locals,wrong-import-position,import-error
 from __future__ import absolute_import
 import sys
 import os
 import unittest
 import logging
 import hashlib
+import tarfile
+from collections import namedtuple
 import numpy as np
 import numpy.testing as npt
 from onnx import helper
-import backend as mxnet_backend
+from onnx import numpy_helper
+from onnx import TensorProto
+from mxnet.test_utils import download
+from mxnet.contrib import onnx as onnx_mxnet
+import mxnet as mx
 CURR_PATH = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
 sys.path.insert(0, os.path.join(CURR_PATH, '../../python/unittest'))
 from common import with_seed
+import backend as mxnet_backend
+
+
+URLS = {
+'bvlc_googlenet' :
 
 Review comment:
   How big are these models? What is the impact on PR builds and nightly builds? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177271405
 
 

 ##
 File path: tests/python-pytest/onnx/onnx_backend_test.py
 ##
 @@ -57,37 +57,40 @@
 'test_floor',
 
 ## Joining and spliting
-#'test_concat.*',  #---Failing test
+'test_concat',
 
 #Basic neural network functions
 'test_sigmoid',
 'test_relu',
-#'test_constant_pad',
-#'test_edge_pad',
-#'test_reflect_pad',
+'test_constant_pad',
+'test_edge_pad',
+'test_reflect_pad',
 'test_matmul',
 'test_leakyrelu',
 'test_elu',
-#'test_softmax*',
+'test_softmax_example',
+'test_softmax_large_number',
+'test_softmax_axis_2',
 'test_conv',
 'test_basic_conv',
-#'test_globalmaxpool',
-#'test_globalaveragepool',
-#'test_batch_norm',
+'test_transpose',
+#'test_globalmaxpool', - tests to be added
+#'test_globalaveragepool', - tests to be added
+#'test_batch_norm', - tests to be added
+#'test_gather',
 
 Review comment:
   Can you please add these tests as part of this PR, since these operators are added in this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177270889
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/translation_utils.py
 ##
 @@ -90,10 +90,46 @@ def _fix_pooling(pool_type, inputs, new_attr):
 stride = new_attr.get('stride')
 kernel = new_attr.get('kernel')
 padding = new_attr.get('pad')
-pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, len(kernel))
-new_pad_op = symbol.pad(inputs[0], mode='constant', pad_width=pad_width)
-new_pooling_op = symbol.Pooling(new_pad_op, pool_type=pool_type,
-stride=stride, kernel=kernel)
+
+# Adding default stride.
+if stride is None:
+stride = (1,) * len(kernel)
+
+# Add padding attr if not provided.
+if padding is None:
+padding = (0,) * len(kernel) * 2
+
+# Mxnet Pad operator supports only 4D/5D tensors.
+# For 1D case, these are the steps:
+#Step 1. Add extra dummy dimension to make it 4D. Adding to  axis = 2
+#Step 2. Apply padding to this changed tensor
+#Step 3. Remove the extra dimension added in step 1.
+if len(kernel) == 1:
+dummy_axis = 2
+# setting 0 padding to the new dim to be added.
+padding = (0, padding[0], 0, padding[1])
+pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, kernel_dim=2)
+
+# Step 1.
+curr_sym = symbol.expand_dims(inputs[0], axis=dummy_axis)
+
+# Step 2. Common for all tensor sizes
+new_pad_op = symbol.pad(curr_sym, mode='edge', pad_width=pad_width)
+
+# Step 3: Removing extra dim added.
+new_pad_op = symbol.split(new_pad_op, axis=dummy_axis, num_outputs=1, 
squeeze_axis=1)
+else:
+# For 2D/3D cases:
+# Apply padding
+pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, 
kernel_dim=len(kernel))
+curr_sym = inputs[0]
+if pool_type == 'max':
+new_pad_op = symbol.pad(curr_sym, mode='edge', pad_width=pad_width)
+else:
+new_pad_op = symbol.pad(curr_sym, mode='constant', 
pad_width=pad_width)
+
+# Apply pooling without pads.
+new_pooling_op = symbol.Pooling(new_pad_op, pool_type=pool_type, 
stride=stride, kernel=kernel)
 
 Review comment:
   Why not pad with the Pooling operator itself?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177270800
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/translation_utils.py
 ##
 @@ -90,10 +90,46 @@ def _fix_pooling(pool_type, inputs, new_attr):
 stride = new_attr.get('stride')
 kernel = new_attr.get('kernel')
 padding = new_attr.get('pad')
-pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, len(kernel))
-new_pad_op = symbol.pad(inputs[0], mode='constant', pad_width=pad_width)
-new_pooling_op = symbol.Pooling(new_pad_op, pool_type=pool_type,
-stride=stride, kernel=kernel)
+
+# Adding default stride.
+if stride is None:
+stride = (1,) * len(kernel)
+
+# Add padding attr if not provided.
+if padding is None:
+padding = (0,) * len(kernel) * 2
+
+# Mxnet Pad operator supports only 4D/5D tensors.
+# For 1D case, these are the steps:
+#Step 1. Add extra dummy dimension to make it 4D. Adding to  axis = 2
+#Step 2. Apply padding to this changed tensor
+#Step 3. Remove the extra dimension added in step 1.
+if len(kernel) == 1:
+dummy_axis = 2
+# setting 0 padding to the new dim to be added.
+padding = (0, padding[0], 0, padding[1])
+pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, kernel_dim=2)
+
+# Step 1.
+curr_sym = symbol.expand_dims(inputs[0], axis=dummy_axis)
+
+# Step 2. Common for all tensor sizes
+new_pad_op = symbol.pad(curr_sym, mode='edge', pad_width=pad_width)
+
+# Step 3: Removing extra dim added.
+new_pad_op = symbol.split(new_pad_op, axis=dummy_axis, num_outputs=1, 
squeeze_axis=1)
+else:
+# For 2D/3D cases:
+# Apply padding
+pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, 
kernel_dim=len(kernel))
+curr_sym = inputs[0]
+if pool_type == 'max':
 
 Review comment:
   Please add a description of the mode choice.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #10118: 
[MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#discussion_r177270041
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/translation_utils.py
 ##
 @@ -90,10 +90,46 @@ def _fix_pooling(pool_type, inputs, new_attr):
 stride = new_attr.get('stride')
 kernel = new_attr.get('kernel')
 padding = new_attr.get('pad')
-pad_width = (0, 0, 0, 0) + _pad_sequence_fix(padding, len(kernel))
-new_pad_op = symbol.pad(inputs[0], mode='constant', pad_width=pad_width)
-new_pooling_op = symbol.Pooling(new_pad_op, pool_type=pool_type,
-stride=stride, kernel=kernel)
+
+# Adding default stride.
 
 Review comment:
   What about a default for dilation?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jwfromm commented on issue #9974: DataLoader with workers not compatible with ImageRecordDataset

2018-03-26 Thread GitBox
jwfromm commented on issue #9974: DataLoader with workers not compatible with 
ImageRecordDataset
URL: 
https://github.com/apache/incubator-mxnet/issues/9974#issuecomment-376351149
 
 
   If this is an acceptable fix, it'd be great to get it in the master branch 
since I'm sure other people will start hitting this error soon.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #9974: DataLoader with workers not compatible with ImageRecordDataset

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #9974: DataLoader with workers not compatible 
with ImageRecordDataset
URL: 
https://github.com/apache/incubator-mxnet/issues/9974#issuecomment-376347564
 
 
   hot-fix for this problem: use at your own risk:
   ```
   import os
   import mxnet as mx
   from mxnet import gluon
   from mxnet.gluon.data import RecordFileDataset
   from mxnet.gluon.data.dataloader import DataLoader
   from mxnet import recordio
   
   # We keep the filename as an attribute
   # So that we can open a new handle per process
   # in the dataloader
   
   def __init__new(self, filename):
   self._filename = filename
   self.reinitialize()
   
   def reinitialize(self):
   idx_file = os.path.splitext(self._filename)[0] + '.idx'
   self._record = recordio.MXIndexedRecordIO(idx_file, self._filename, 'r')
   
   RecordFileDataset.reinitialize = reinitialize
   RecordFileDataset.__init__ = __init__new
   
   # We modify the dataloader worker_loop to reinit the dataset if possible
   # And then call to the original worker_loop
   
   gluon.data.dataloader.worker_loop_old = gluon.data.dataloader.worker_loop
   
   def worker_loop_new(dataset, key_queue, data_queue, batchify_fn):
   if 'reinitialize' in dir(dataset):
   dataset.reinitialize()
   gluon.data.dataloader.worker_loop_old(dataset, key_queue, data_queue, 
batchify_fn)
   
   gluon.data.dataloader.worker_loop = worker_loop_new
   ```
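   Hypothetical usage once the patch above is applied (the .rec path is a placeholder; the matching .idx file must sit next to it):
   ```
   from mxnet import gluon

   dataset = gluon.data.vision.ImageRecordDataset('data/train.rec')
   loader = gluon.data.DataLoader(dataset, batch_size=32, num_workers=4)
   for data, label in loader:
       pass  # each worker process now opens its own record-file handle
   ```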


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #9974: DataLoader with workers not compatible with ImageRecordDataset

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #9974: DataLoader with workers not compatible 
with ImageRecordDataset
URL: 
https://github.com/apache/incubator-mxnet/issues/9974#issuecomment-376347564
 
 
   hot-fix for this problem: use at your own risk:
   ```
   import os
   import mxnet as mx
   from mxnet import gluon
   from mxnet.gluon.data import RecordFileDataset
   from mxnet.gluon.data.dataloader import DataLoader
   from mxnet import recordio
   
   # We keep the filename as an attribute
   # So that we can open a new handle per process
   # in the dataloader
   
   def __init__new(self, filename):
   self._filename = filename
   self.reinitialize()
   
   def reinitialize(self):
   idx_file = os.path.splitext(self._filename)[0] + '.idx'
   self._record = recordio.MXIndexedRecordIO(idx_file, self._filename, 'r')
   
   RecordFileDataset.reinitialize = reinitialize
   RecordFileDataset.__init__ = __init__new
   
   # We modify the dataloader worker_loop to reinit the dataset if possible
   # And then call to the original worker_loop
   
   gluon.data.dataloader.worker_loop_old = gluon.data.dataloader.worker_loop
   
   def worker_loop_new(dataset, key_queue, data_queue, batchify_fn):
   if 'reinitialize' in dir(dataset):
   dataset.reinitialize()
   gluon.data.dataloader.worker_loop_old(dataset, key_queue, data_queue, 
batchify_fn)
   
   gluon.data.dataloader.worker_loop = worker_loop_new
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #9974: DataLoader with workers not compatible with ImageRecordDataset

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #9974: DataLoader with workers not compatible 
with ImageRecordDataset
URL: 
https://github.com/apache/incubator-mxnet/issues/9974#issuecomment-376347564
 
 
   Use at your own risk. This worked for me:
   ```
   import os
   import mxnet as mx
   from mxnet import gluon
   from mxnet.gluon.data import RecordFileDataset
   from mxnet.gluon.data.dataloader import DataLoader
   from mxnet import recordio
   
   # We keep the filename as an attribute
   # So that we can open a new handle per process
   # in the dataloader
   
   def __init__new(self, filename):
   self._filename = filename
   self.reinit()
   
   def reinit(self):
   idx_file = os.path.splitext(self._filename)[0] + '.idx'
   self._record = recordio.MXIndexedRecordIO(idx_file, self._filename, 'r')
   
   RecordFileDataset.reinit = reinit
   RecordFileDataset.__init__ = __init__new
   
   # We modify the dataloader worker_loop to reinit the dataset if possible
   # And then call to the original worker_loop
   
   gluon.data.dataloader.worker_loop_old = gluon.data.dataloader.worker_loop
   
   def worker_loop_new(dataset, key_queue, data_queue, batchify_fn):
   if 'reinit' in dir(dataset):
   dataset.reinit()
   gluon.data.dataloader.worker_loop_old(dataset, key_queue, data_queue, 
batchify_fn)
   
   gluon.data.dataloader.worker_loop = worker_loop_new
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #9974: DataLoader with workers not compatible with ImageRecordDataset

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #9974: DataLoader with workers not compatible 
with ImageRecordDataset
URL: 
https://github.com/apache/incubator-mxnet/issues/9974#issuecomment-376347564
 
 
   hot-fix for this problem: use at your own risk:
   ```
   import os
   import mxnet as mx
   from mxnet import gluon
   from mxnet.gluon.data import RecordFileDataset
   from mxnet.gluon.data.dataloader import DataLoader
   from mxnet import recordio
   
   # We keep the filename as an attribute
   # So that we can open a new handle per process
   # in the dataloader
   
   def __init__new(self, filename):
   self._filename = filename
   self.reinit()
   
   def reinit(self):
   idx_file = os.path.splitext(self._filename)[0] + '.idx'
   self._record = recordio.MXIndexedRecordIO(idx_file, self._filename, 'r')
   
   RecordFileDataset.reinit = reinit
   RecordFileDataset.__init__ = __init__new
   
   # We modify the dataloader worker_loop to reinit the dataset if possible
   # And then call to the original worker_loop
   
   gluon.data.dataloader.worker_loop_old = gluon.data.dataloader.worker_loop
   
   def worker_loop_new(dataset, key_queue, data_queue, batchify_fn):
   if 'reinit' in dir(dataset):
   dataset.reinit()
   gluon.data.dataloader.worker_loop_old(dataset, key_queue, data_queue, 
batchify_fn)
   
   gluon.data.dataloader.worker_loop = worker_loop_new
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #9974: DataLoader with workers not compatible with ImageRecordDataset

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #9974: DataLoader with workers not compatible 
with ImageRecordDataset
URL: 
https://github.com/apache/incubator-mxnet/issues/9974#issuecomment-376347564
 
 
   Use at your own risk. This worked for me:
   ```
   import os
   import mxnet as mx
   from mxnet import gluon
   from mxnet.gluon.data import RecordFileDataset
   from mxnet.gluon.data.dataloader import DataLoader
   from mxnet import recordio
   
   # We keep the filename as an attribute
   # So that we can open a new handle per process
   # in the dataloader
   
   def __init__new(self, filename):
   self._filename = filename
   self.reinit()
   
   def reinit(self):
   idx_file = os.path.splitext(self._filename)[0] + '.idx'
   self._record = recordio.MXIndexedRecordIO(idx_file, self._filename, 'r')
   
   RecordFileDataset.reinit = reinit
   RecordFileDataset.__init__ = __init__new
   
   # We modify the dataloader worker_loop to reinit the dataset if possible
   # And then call to the original worker_loop
   
   gluon.data.dataloader.worker_loop_ = gluon.data.dataloader.worker_loop
   
   def worker_loop_new(dataset, key_queue, data_queue, batchify_fn):
   if 'reinit' in dir(dataset):
   dataset.reinit()
   gluon.data.dataloader.worker_loop_(dataset, key_queue, data_queue, 
batchify_fn)
   
   gluon.data.dataloader.worker_loop = worker_loop_new
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin closed pull request #10081: [MXNET-82] Sparse op tutorial for developers

2018-03-26 Thread GitBox
eric-haibin-lin closed pull request #10081: [MXNET-82] Sparse op tutorial for 
developers
URL: https://github.com/apache/incubator-mxnet/pull/10081
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/faq/index.md b/docs/faq/index.md
index 099cd509b14..098d37f5fc0 100644
--- a/docs/faq/index.md
+++ b/docs/faq/index.md
@@ -56,6 +56,8 @@ and full working examples, visit the [tutorials 
section](../tutorials/index.md).
 
 * [How do I create new operators in MXNet?](http://mxnet.io/faq/new_op.html)
 
+* [How do I implement sparse operators in MXNet 
backend?](https://cwiki.apache.org/confluence/display/MXNET/A+Guide+to+Implementing+Sparse+Operators+in+MXNet+Backend)
+
 * [How do I contribute an example or 
tutorial?](https://github.com/apache/incubator-mxnet/tree/master/example#contributing)
 
 * [How do I set MXNet's environmental 
variables?](http://mxnet.io/faq/env_var.html)
diff --git a/src/operator/contrib/quadratic_op-inl.h 
b/src/operator/contrib/quadratic_op-inl.h
index 8d73a4286f6..fe477811b06 100644
--- a/src/operator/contrib/quadratic_op-inl.h
+++ b/src/operator/contrib/quadratic_op-inl.h
@@ -32,6 +32,7 @@
 #include "../mxnet_op.h"
 #include "../operator_common.h"
 #include "../elemwise_op_common.h"
+#include "../tensor/init_op.h"
 
 namespace mxnet {
 namespace op {
@@ -73,6 +74,33 @@ inline bool QuadraticOpType(const nnvm::NodeAttrs& attrs,
   return out_attrs->at(0) != -1;
 }
 
+inline bool QuadraticOpStorageType(const nnvm::NodeAttrs& attrs,
+                                   const int dev_mask,
+                                   DispatchMode* dispatch_mode,
+                                   std::vector<int>* in_attrs,
+                                   std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  const QuadraticParam& param = nnvm::get<QuadraticParam>(attrs.parsed);
+  const int in_stype = in_attrs->at(0);
+  int& out_stype = out_attrs->at(0);
+  bool dispatched = false;
+  if (!dispatched && in_stype == kDefaultStorage) {
+    // dns -> dns
+    dispatched = storage_type_assign(&out_stype, kDefaultStorage,
+                                     dispatch_mode, DispatchMode::kFCompute);
+  }
+  if (!dispatched && in_stype == kCSRStorage && param.c == 0.0) {
+    // csr -> csr
+    dispatched = storage_type_assign(&out_stype, kCSRStorage,
+                                     dispatch_mode, DispatchMode::kFComputeEx);
+  }
+  if (!dispatched) {
+    dispatched = dispatch_fallback(out_attrs, dispatch_mode);
+  }
+  return dispatched;
+}
+
 template<int req>
 struct quadratic_forward {
   template<typename DType>
@@ -114,6 +142,61 @@ void QuadraticOpForward(const nnvm::NodeAttrs& attrs,
   });
 }
 
+template<typename xpu>
+void QuadraticOpForwardCsrImpl(const QuadraticParam& param,
+                               const OpContext& ctx,
+                               const NDArray& input,
+                               const OpReqType req,
+                               const NDArray& output) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  using namespace csr;
+  if (req == kNullOp) return;
+  CHECK_EQ(req, kWriteTo) << "QuadraticOp with CSR only supports kWriteTo";
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+  if (!input.storage_initialized()) {
+    FillZerosCsrImpl(s, output);
+    return;
+  }
+  const nnvm::dim_t nnz = input.storage_shape()[0];
+  const nnvm::dim_t num_rows = output.shape()[0];
+  output.CheckAndAlloc({Shape1(num_rows + 1), Shape1(nnz)});
+  CHECK_EQ(output.aux_type(kIdx), output.aux_type(kIndPtr))
+    << "The dtypes of indices and indptr don't match";
+  MSHADOW_TYPE_SWITCH(output.dtype(), DType, {
+    MSHADOW_IDX_TYPE_SWITCH(output.aux_type(kIdx), IType, {
+      MXNET_ASSIGN_REQ_SWITCH(req, req_type, {
+        Kernel<quadratic_forward<req_type>, xpu>::Launch(
+            s, nnz, output.data().dptr<DType>(), input.data().dptr<DType>(),
+            param.a, param.b, param.c);
+        Copy(output.aux_data(kIdx).FlatTo1D<xpu, IType>(),
+             input.aux_data(kIdx).FlatTo1D<xpu, IType>(), s);
+        Copy(output.aux_data(kIndPtr).FlatTo1D<xpu, IType>(),
+             input.aux_data(kIndPtr).FlatTo1D<xpu, IType>(), s);
+      });
+    });
+  });
+}
+
+template<typename xpu>
+void QuadraticOpForwardEx(const nnvm::NodeAttrs& attrs,
+                          const OpContext& ctx,
+                          const std::vector<NDArray>& inputs,
+                          const std::vector<OpReqType>& req,
+                          const std::vector<NDArray>& outputs) {
+  CHECK_EQ(inputs.size(), 1U);
+  CHECK_EQ(outputs.size(), 1U);
+  CHECK_EQ(req.size(), 1U);
+  const QuadraticParam& param = nnvm::get<QuadraticParam>(attrs.parsed);
+  const auto in_stype = inputs[0].storage_type();
+  const auto out_stype = outputs[0].storage_type();
+  if (in_stype == kCSRStorage && out_stype == kCSRStorage && param.c == 0.0) {
+   

[incubator-mxnet] branch master updated: add guide for implementing sparse ops (#10081)

2018-03-26 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new a2c4b0a  add guide for implementing sparse ops (#10081)
a2c4b0a is described below

commit a2c4b0a0ccaec40e22216aa83873a49d7f7506ef
Author: Haibin Lin 
AuthorDate: Mon Mar 26 16:32:11 2018 -0700

add guide for implementing sparse ops (#10081)
---
 docs/faq/index.md |  2 +
 src/operator/contrib/quadratic_op-inl.h   | 83 +++
 src/operator/contrib/quadratic_op.cc  |  9 ++-
 src/operator/contrib/quadratic_op.cu  |  1 +
 tests/python/unittest/test_sparse_operator.py | 20 +++
 5 files changed, 114 insertions(+), 1 deletion(-)

diff --git a/docs/faq/index.md b/docs/faq/index.md
index 099cd50..098d37f 100644
--- a/docs/faq/index.md
+++ b/docs/faq/index.md
@@ -56,6 +56,8 @@ and full working examples, visit the [tutorials 
section](../tutorials/index.md).
 
 * [How do I create new operators in MXNet?](http://mxnet.io/faq/new_op.html)
 
+* [How do I implement sparse operators in MXNet 
backend?](https://cwiki.apache.org/confluence/display/MXNET/A+Guide+to+Implementing+Sparse+Operators+in+MXNet+Backend)
+
 * [How do I contribute an example or 
tutorial?](https://github.com/apache/incubator-mxnet/tree/master/example#contributing)
 
 * [How do I set MXNet's environmental 
variables?](http://mxnet.io/faq/env_var.html)
diff --git a/src/operator/contrib/quadratic_op-inl.h 
b/src/operator/contrib/quadratic_op-inl.h
index 8d73a42..fe47781 100644
--- a/src/operator/contrib/quadratic_op-inl.h
+++ b/src/operator/contrib/quadratic_op-inl.h
@@ -32,6 +32,7 @@
 #include "../mxnet_op.h"
 #include "../operator_common.h"
 #include "../elemwise_op_common.h"
+#include "../tensor/init_op.h"
 
 namespace mxnet {
 namespace op {
@@ -73,6 +74,33 @@ inline bool QuadraticOpType(const nnvm::NodeAttrs& attrs,
   return out_attrs->at(0) != -1;
 }
 
+inline bool QuadraticOpStorageType(const nnvm::NodeAttrs& attrs,
+                                   const int dev_mask,
+                                   DispatchMode* dispatch_mode,
+                                   std::vector<int>* in_attrs,
+                                   std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  const QuadraticParam& param = nnvm::get<QuadraticParam>(attrs.parsed);
+  const int in_stype = in_attrs->at(0);
+  int& out_stype = out_attrs->at(0);
+  bool dispatched = false;
+  if (!dispatched && in_stype == kDefaultStorage) {
+    // dns -> dns
+    dispatched = storage_type_assign(&out_stype, kDefaultStorage,
+                                     dispatch_mode, DispatchMode::kFCompute);
+  }
+  if (!dispatched && in_stype == kCSRStorage && param.c == 0.0) {
+    // csr -> csr
+    dispatched = storage_type_assign(&out_stype, kCSRStorage,
+                                     dispatch_mode, DispatchMode::kFComputeEx);
+  }
+  if (!dispatched) {
+    dispatched = dispatch_fallback(out_attrs, dispatch_mode);
+  }
+  return dispatched;
+}
+
 template<int req>
 struct quadratic_forward {
   template<typename DType>
@@ -115,6 +143,61 @@ void QuadraticOpForward(const nnvm::NodeAttrs& attrs,
 }
 
 template<typename xpu>
+void QuadraticOpForwardCsrImpl(const QuadraticParam& param,
+                               const OpContext& ctx,
+                               const NDArray& input,
+                               const OpReqType req,
+                               const NDArray& output) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  using namespace csr;
+  if (req == kNullOp) return;
+  CHECK_EQ(req, kWriteTo) << "QuadraticOp with CSR only supports kWriteTo";
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+  if (!input.storage_initialized()) {
+    FillZerosCsrImpl(s, output);
+    return;
+  }
+  const nnvm::dim_t nnz = input.storage_shape()[0];
+  const nnvm::dim_t num_rows = output.shape()[0];
+  output.CheckAndAlloc({Shape1(num_rows + 1), Shape1(nnz)});
+  CHECK_EQ(output.aux_type(kIdx), output.aux_type(kIndPtr))
+    << "The dtypes of indices and indptr don't match";
+  MSHADOW_TYPE_SWITCH(output.dtype(), DType, {
+    MSHADOW_IDX_TYPE_SWITCH(output.aux_type(kIdx), IType, {
+      MXNET_ASSIGN_REQ_SWITCH(req, req_type, {
+        Kernel<quadratic_forward<req_type>, xpu>::Launch(
+            s, nnz, output.data().dptr<DType>(), input.data().dptr<DType>(),
+            param.a, param.b, param.c);
+        Copy(output.aux_data(kIdx).FlatTo1D<xpu, IType>(),
+             input.aux_data(kIdx).FlatTo1D<xpu, IType>(), s);
+        Copy(output.aux_data(kIndPtr).FlatTo1D<xpu, IType>(),
+             input.aux_data(kIndPtr).FlatTo1D<xpu, IType>(), s);
+      });
+    });
+  });
+}
+
+template<typename xpu>
+void QuadraticOpForwardEx(const nnvm::NodeAttrs& attrs,
+                          const OpContext& ctx,
+                          const std::vector<NDArray>& inputs,
+  

[GitHub] eric-haibin-lin commented on issue #10081: [MXNET-82] Sparse op tutorial for developers

2018-03-26 Thread GitBox
eric-haibin-lin commented on issue #10081: [MXNET-82] Sparse op tutorial for 
developers
URL: https://github.com/apache/incubator-mxnet/pull/10081#issuecomment-376345646
 
 
   Guide added to the confluence page: 
   
https://cwiki.apache.org/confluence/display/MXNET/A+Guide+to+Implementing+Sparse+Operators+in+MXNet+Backend
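   A small sketch of the user-visible behavior of the guide's example operator (assuming the contrib.quadratic operator from this PR): with c == 0.0 a CSR input stays CSR, otherwise dispatch falls back to dense.
   ```
   import mxnet as mx

   csr = mx.nd.array([[0, 1], [2, 0]]).tostype('csr')
   print(mx.nd.contrib.quadratic(csr, a=2.0, b=3.0, c=0.0).stype)  # 'csr'
   print(mx.nd.contrib.quadratic(csr, a=2.0, b=3.0, c=1.0).stype)  # 'default' (fallback)
   ```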
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: adding context parameter to infer api- imageclassifier and objectdetector (#10252)

2018-03-26 Thread nswamy
This is an automated email from the ASF dual-hosted git repository.

nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 777089d  adding context parameter to infer api- imageclassifier and 
objectdetector (#10252)
777089d is described below

commit 777089dd771735aa2c8efb4ae088a4a68ce896a4
Author: Roshani Nagmote 
AuthorDate: Mon Mar 26 16:27:17 2018 -0700

adding context parameter to infer api- imageclassifier and objectdetector 
(#10252)

* adding context parameter

* parameter description added
---
 .../ml/dmlc/mxnet/infer/ImageClassifier.scala  | 18 +++
 .../scala/ml/dmlc/mxnet/infer/ObjectDetector.scala | 37 ++
 .../ml/dmlc/mxnet/infer/ImageClassifierSuite.scala | 26 ---
 .../ml/dmlc/mxnet/infer/ObjectDetectorSuite.scala  | 11 ---
 4 files changed, 56 insertions(+), 36 deletions(-)

diff --git 
a/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala 
b/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala
index 45c4e76..070b0bf 100644
--- 
a/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala
+++ 
b/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala
@@ -17,7 +17,7 @@
 
 package ml.dmlc.mxnet.infer
 
-import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.{Context, DataDesc, NDArray, Shape}
 
 import scala.collection.mutable.ListBuffer
 
@@ -37,13 +37,15 @@ import javax.imageio.ImageIO
   * file://model-dir/synset.txt
   * @param inputDescriptors Descriptors defining the input node names, shape,
   * layout and Type parameters
+  * @param contexts Device Contexts on which you want to run Inference, 
defaults to CPU.
+  * @param epoch Model epoch to load, defaults to 0.
   */
 class ImageClassifier(modelPathPrefix: String,
-  inputDescriptors: IndexedSeq[DataDesc])
+  inputDescriptors: IndexedSeq[DataDesc],
+  contexts: Array[Context] = Context.cpu(),
+  epoch: Option[Int] = Some(0))
   extends Classifier(modelPathPrefix,
-  inputDescriptors) {
-
-  val classifier: Classifier = getClassifier(modelPathPrefix, inputDescriptors)
+  inputDescriptors, contexts, epoch) {
 
   protected[infer] val inputLayout = inputDescriptors.head.layout
 
@@ -108,8 +110,10 @@ class ImageClassifier(modelPathPrefix: String,
 result
   }
 
-  def getClassifier(modelPathPrefix: String, inputDescriptors: 
IndexedSeq[DataDesc]): Classifier = {
-new Classifier(modelPathPrefix, inputDescriptors)
+  def getClassifier(modelPathPrefix: String, inputDescriptors: 
IndexedSeq[DataDesc],
+contexts: Array[Context] = Context.cpu(),
+epoch: Option[Int] = Some(0)): Classifier = {
+new Classifier(modelPathPrefix, inputDescriptors, contexts, epoch)
   }
 }
 
diff --git 
a/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ObjectDetector.scala 
b/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ObjectDetector.scala
index 2d83caf..30e1432 100644
--- 
a/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ObjectDetector.scala
+++ 
b/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ObjectDetector.scala
@@ -16,12 +16,14 @@
  */
 
 package ml.dmlc.mxnet.infer
+
 // scalastyle:off
 import java.awt.image.BufferedImage
 // scalastyle:on
-import ml.dmlc.mxnet.NDArray
-import ml.dmlc.mxnet.DataDesc
+
+import ml.dmlc.mxnet.{Context, DataDesc, NDArray}
 import scala.collection.mutable.ListBuffer
+
 /**
   * A class for object detection tasks
   *
@@ -32,11 +34,16 @@ import scala.collection.mutable.ListBuffer
   * file://model-dir/synset.txt
   * @param inputDescriptors Descriptors defining the input node names, shape,
   * layout and Type parameters
+  * @param contexts Device Contexts on which you want to run Inference, 
defaults to CPU.
+  * @param epoch Model epoch to load, defaults to 0.
   */
 class ObjectDetector(modelPathPrefix: String,
- inputDescriptors: IndexedSeq[DataDesc]) {
+ inputDescriptors: IndexedSeq[DataDesc],
+ contexts: Array[Context] = Context.cpu(),
+ epoch: Option[Int] = Some(0)) {
 
-  val imgClassifier: ImageClassifier = getImageClassifier(modelPathPrefix, 
inputDescriptors)
+  val imgClassifier: ImageClassifier =
+getImageClassifier(modelPathPrefix, inputDescriptors, contexts, epoch)
 
   val inputShape = imgClassifier.inputShape
 
@@ -54,7 +61,7 @@ class ObjectDetector(modelPathPrefix: String,
 * To Detect bounding boxes and corresponding labels
 *
 * @param inputImage : PathPrefix of the input 

[GitHub] nswamy closed pull request #10252: adding context parameter to infer api- imageclassifier and objectdetector

2018-03-26 Thread GitBox
nswamy closed pull request #10252: adding context parameter to infer api- 
imageclassifier and objectdetector
URL: https://github.com/apache/incubator-mxnet/pull/10252
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala 
b/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala
index 45c4e767cb3..070b0bf2011 100644
--- 
a/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala
+++ 
b/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala
@@ -17,7 +17,7 @@
 
 package ml.dmlc.mxnet.infer
 
-import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.{Context, DataDesc, NDArray, Shape}
 
 import scala.collection.mutable.ListBuffer
 
@@ -37,13 +37,15 @@ import javax.imageio.ImageIO
   * file://model-dir/synset.txt
   * @param inputDescriptors Descriptors defining the input node names, shape,
   * layout and Type parameters
+  * @param contexts Device Contexts on which you want to run Inference, 
defaults to CPU.
+  * @param epoch Model epoch to load, defaults to 0.
   */
 class ImageClassifier(modelPathPrefix: String,
-  inputDescriptors: IndexedSeq[DataDesc])
+  inputDescriptors: IndexedSeq[DataDesc],
+  contexts: Array[Context] = Context.cpu(),
+  epoch: Option[Int] = Some(0))
   extends Classifier(modelPathPrefix,
-  inputDescriptors) {
-
-  val classifier: Classifier = getClassifier(modelPathPrefix, inputDescriptors)
+  inputDescriptors, contexts, epoch) {
 
   protected[infer] val inputLayout = inputDescriptors.head.layout
 
@@ -108,8 +110,10 @@ class ImageClassifier(modelPathPrefix: String,
 result
   }
 
-  def getClassifier(modelPathPrefix: String, inputDescriptors: 
IndexedSeq[DataDesc]): Classifier = {
-new Classifier(modelPathPrefix, inputDescriptors)
+  def getClassifier(modelPathPrefix: String, inputDescriptors: 
IndexedSeq[DataDesc],
+contexts: Array[Context] = Context.cpu(),
+epoch: Option[Int] = Some(0)): Classifier = {
+new Classifier(modelPathPrefix, inputDescriptors, contexts, epoch)
   }
 }
 
diff --git 
a/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ObjectDetector.scala 
b/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ObjectDetector.scala
index 2d83caf2386..30e1432d416 100644
--- 
a/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ObjectDetector.scala
+++ 
b/scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ObjectDetector.scala
@@ -16,12 +16,14 @@
  */
 
 package ml.dmlc.mxnet.infer
+
 // scalastyle:off
 import java.awt.image.BufferedImage
 // scalastyle:on
-import ml.dmlc.mxnet.NDArray
-import ml.dmlc.mxnet.DataDesc
+
+import ml.dmlc.mxnet.{Context, DataDesc, NDArray}
 import scala.collection.mutable.ListBuffer
+
 /**
   * A class for object detection tasks
   *
@@ -32,11 +34,16 @@ import scala.collection.mutable.ListBuffer
   * file://model-dir/synset.txt
   * @param inputDescriptors Descriptors defining the input node names, shape,
   * layout and Type parameters
+  * @param contexts Device Contexts on which you want to run Inference, 
defaults to CPU.
+  * @param epoch Model epoch to load, defaults to 0.
   */
 class ObjectDetector(modelPathPrefix: String,
- inputDescriptors: IndexedSeq[DataDesc]) {
+ inputDescriptors: IndexedSeq[DataDesc],
+ contexts: Array[Context] = Context.cpu(),
+ epoch: Option[Int] = Some(0)) {
 
-  val imgClassifier: ImageClassifier = getImageClassifier(modelPathPrefix, 
inputDescriptors)
+  val imgClassifier: ImageClassifier =
+getImageClassifier(modelPathPrefix, inputDescriptors, contexts, epoch)
 
   val inputShape = imgClassifier.inputShape
 
@@ -54,7 +61,7 @@ class ObjectDetector(modelPathPrefix: String,
 * To Detect bounding boxes and corresponding labels
 *
 * @param inputImage : PathPrefix of the input image
-* @param topK : Get top k elements with maximum probability
+* @param topK   : Get top k elements with maximum probability
 * @return List of List of tuples of (class, [probability, xmin, ymin, 
xmax, ymax])
 */
   def imageObjectDetect(inputImage: BufferedImage,
@@ -71,9 +78,10 @@ class ObjectDetector(modelPathPrefix: String,
   /**
 * Takes input images as NDArrays. Useful when you want to perform multiple 
operations on
 * the input Array, or when you want to pass a batch of input images.
+*
 * @param input : Indexed 

[GitHub] thomelane commented on issue #10251: [MXNET-141] Add tutorial Gluon Datasets and DataLoaders

2018-03-26 Thread GitBox
thomelane commented on issue #10251: [MXNET-141] Add tutorial Gluon Datasets 
and DataLoaders
URL: https://github.com/apache/incubator-mxnet/pull/10251#issuecomment-376341971
 
 
   @piiswrong agree with num_workers. Will add commentary on that, and use it 
in the examples.
   Question about RecordIO though... doesn't it give better performance? And if 
so, is there a replacement planned/implemented?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #10140: Docs page for ONNX module.

2018-03-26 Thread GitBox
anirudhacharya commented on issue #10140: Docs page for ONNX module.
URL: https://github.com/apache/incubator-mxnet/pull/10140#issuecomment-376341467
 
 
   @aaronmarkham 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #10165: [MXNET-114] Add the ability to exclude specific lines in tutorial notebooks generated from .md

2018-03-26 Thread GitBox
szha commented on issue #10165: [MXNET-114] Add the ability to exclude specific 
lines in tutorial notebooks generated from .md
URL: https://github.com/apache/incubator-mxnet/pull/10165#issuecomment-376340666
 
 
   @ThomasDelteil there are recent changes in the way we include submodules, 
which is why it breaks CI if you don't rebase. It shouldn't be needed after 
this rebase (at least for a while)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham opened a new issue #10253: make scalapkg error - ld: file not found: ../../../nnvm/lib/libnnvm.a

2018-03-26 Thread GitBox
aaronmarkham opened a new issue #10253: make scalapkg error - ld: file not 
found: ../../../nnvm/lib/libnnvm.a
URL: https://github.com/apache/incubator-mxnet/issues/10253
 
 
   
   ## Description
   Cannot `make scalapkg` on mac osx.
   
   ## Environment info (Required)
   
   ```
   python diagnose.py 
   --Python Info--
   Version  : 3.6.4
   Compiler : GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)
   Build: ('default', 'Mar  1 2018 18:36:42')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 9.0.2
   Directory: /usr/local/lib/python3.6/site-packages/pip
   --MXNet Info---
   Version  : 1.1.0
   Directory: /usr/local/lib/python3.6/site-packages/mxnet
   Commit Hash   : 07a83a0325a3d782513a04f47d711710972cb144
   --System Info--
   Platform : Darwin-16.7.0-x86_64-i386-64bit
   system   : Darwin
   node : 8c8590217d260a.ant.amazon.com
   release  : 16.7.0
   version  : Darwin Kernel Version 16.7.0: Thu Jan 11 22:59:40 PST 2018; 
root:xnu-3789.73.8~1/RELEASE_X86_64
   --Hardware Info--
   machine  : x86_64
   processor: i386
   b'machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW 
RDTSCP TSCI'
   b'machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 HLE 
AVX2 BMI2 INVPCID RTM SMAP RDSEED ADX IPT SGX FPU_CSDS MPX CLFSOPT'
   b'machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE 
MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ 
DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC 
MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C'
   b'machdep.cpu.brand_string: Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz'
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0179 
sec, LOAD: 0.6255 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0187 sec, LOAD: 
0.1204 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0254 sec, LOAD: 
0.3290 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0198 sec, LOAD: 0.1069 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0186 sec, LOAD: 
0.1314 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0149 sec, 
LOAD: 0.0721 sec.
   ```
   
   Package used (Python/R/Scala/Julia):
   (I'm using ...)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   clang
   
   MXNet commit hash:
   892494d405f2358405e6cdc656ae24fc5897fa83
   
   Build config:
   Doesn't matter if I use the provided one in make/osx.config or just go with 
a default build.
   
   ## Error Message:
   ```
   [INFO] --- native-maven-plugin:1.0-alpha-7:link (default-link) @ 
libmxnet-scala-osx-x86_64-cpu ---
   [INFO] /bin/sh -c cd 
/Users/markhama/Development/mxnet/scala-package/native/osx-x86_64-cpu && g++ 
-shared 
-o/Users/markhama/Development/mxnet/scala-package/native/osx-x86_64-cpu/target/libmxnet-scala-osx-x86_64-cpu.jnilib
 target/objs/ml_dmlc_mxnet_native_c_api.o -framework JavaVM 
'-Wl,-exported_symbol,_Java_*' -Wl,-x 
/Users/markhama/Development/mxnet/3rdparty/dmlc-core/libdmlc.a 
/Users/markhama/Development/mxnet/3rdparty/nnvm/lib/libnnvm.a 
/Users/markhama/Development/mxnet/lib/libmxnet.a -force_load 
../../../lib/libmxnet.a -force_load ../../../nnvm/lib/libnnvm.a -pthread -lm 
-lopenblas -L/usr/local/Cellar/opencv/3.3.0_3/lib -lopencv_stitching 
-lopencv_superres -lopencv_videostab -lopencv_photo -lopencv_aruco 
-lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dpm -lopencv_face 
-lopencv_fuzzy -lopencv_img_hash -lopencv_line_descriptor -lopencv_optflow 
-lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo 
-lopencv_structured_light -lopencv_phase_unwrapping -lopencv_surface_matching 
-lopencv_tracking -lopencv_datasets -lopencv_text -lopencv_dnn -lopencv_plot 
-lopencv_ml -lopencv_xfeatures2d -lopencv_shape -lopencv_video 
-lopencv_ximgproc -lopencv_calib3d -lopencv_features2d -lopencv_highgui 
-lopencv_videoio -lopencv_flann -lopencv_xobjdetect -lopencv_imgcodecs 
-lopencv_objdetect -lopencv_xphoto -lopencv_imgproc -lopencv_core 
-L/usr/local/opt/openblas/lib -L/usr/local/lib/graphviz/
   clang: warning: argument unused during compilation: '-pthread' 
[-Wunused-command-line-argument]
   ld: file not found: ../../../nnvm/lib/libnnvm.a
   clang: error: linker command failed with exit code 1 (use -v to see 
invocation)
   [INFO] 

   [INFO] 

[GitHub] conradwt commented on issue #10185: [Documentation][Installation][macOS] Building From Source Has Incomplete Instructions For macOS 10.13.3

2018-03-26 Thread GitBox
conradwt commented on issue #10185: [Documentation][Installation][macOS] 
Building From Source Has Incomplete Instructions For macOS 10.13.3 
URL: 
https://github.com/apache/incubator-mxnet/issues/10185#issuecomment-376336759
 
 
   @aaronmarkham If this is a build issue, should this be happening?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #10185: [Documentation][Installation][macOS] Building From Source Has Incomplete Instructions For macOS 10.13.3

2018-03-26 Thread GitBox
aaronmarkham commented on issue #10185: [Documentation][Installation][macOS] 
Building From Source Has Incomplete Instructions For macOS 10.13.3 
URL: 
https://github.com/apache/incubator-mxnet/issues/10185#issuecomment-376333603
 
 
   **This is a build issue.**
   No additional documentation is required if the build acted as expected. That 
being said, if there were workarounds or alternative instructions available, 
then we could throw those in the docs.
   
   My results are slightly different, but close enough:
   
   ```
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(cudnn_algoreg.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(cudnn_batch_norm.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_act.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_base.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_concat.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_convolution.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_copy.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_deconvolution.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_fully_connected.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_pooling.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_softmax.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(mkldnn_sum.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(nnpack_util.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(rtc.o) has no symbols
   
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
 file: lib/libmxnet.a(vtune.o) has no symbols
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #10251: [MXNET-141] Add tutorial Gluon Datasets and DataLoaders

2018-03-26 Thread GitBox
piiswrong commented on issue #10251: [MXNET-141] Add tutorial Gluon Datasets 
and DataLoaders
URL: https://github.com/apache/incubator-mxnet/pull/10251#issuecomment-376326165
 
 
   1. DataLoader has num_workers to allow parallelization. This is very 
important. We should introduce it to users.
   2. recordio is pretty hard to use. We are considering phasing it out. I 
don't think we should recommend it to users here
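
   A minimal sketch of the `num_workers` option mentioned in point 1, assuming an
   illustrative Gluon `ArrayDataset` built from random data (the dataset, sizes and
   worker count below are placeholders, not taken from the tutorial under review):

   ```python
   import mxnet as mx
   from mxnet.gluon.data import ArrayDataset, DataLoader

   # Illustrative dataset: 1000 samples of 32-dim features with scalar labels.
   X = mx.nd.random.uniform(shape=(1000, 32))
   y = mx.nd.random.uniform(shape=(1000, 1))
   dataset = ArrayDataset(X, y)

   # num_workers > 0 fetches batches in separate worker processes,
   # parallelizing data loading as described in point 1.
   loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

   for data, label in loader:
       pass  # feed the batch to the network here
   ```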


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #10039: [MXNET-103] Added tutorial on types of data augmentations.

2018-03-26 Thread GitBox
piiswrong commented on a change in pull request #10039: [MXNET-103] Added 
tutorial on types of data augmentations.
URL: https://github.com/apache/incubator-mxnet/pull/10039#discussion_r177243913
 
 

 ##
 File path: docs/tutorials/python/types_of_data_augmentation.md
 ##
 @@ -0,0 +1,380 @@
+
+# Types of Data Augmentation
+
+Data Augmentation is a regularization technique that's used to avoid 
overfitting when training Machine Learning models. Although the technique can 
be applied in a variety of domains, it's very common in Computer Vision, and 
this will be the focus of the tutorial.
+
+Adjustments are made to the original images in the training dataset before 
being used in training. Some example adjustments include translating, cropping, 
scaling, rotating, and changing brightness and contrast. We do this to reduce the 
dependence of the model on spurious characteristics; e.g. training data may 
only contain faces that fill 1/4 of the image, so the model trained without 
data augmentation might unhelpfully learn that faces can only be of this size.
+
+After defining some utility functions to visualise the example images, this 
tutorial details each different augmentation that can be used to adjust both 
the position and the colors of images. We discuss augmentations that are 
combined into single functions, and conclude with a FAQ section.
+
+
+```python
+%matplotlib inline
+from matplotlib.pyplot import imshow
+import mxnet as mx  # used version '1.0.0' at time of writing
+import numpy as np
+
+mx.random.seed(42) # set seed for repeatability
+```
+
+We define a utility function below, that will be used for visualising the 
augmentations in the tutorial.
+
+
+```python
+def plot_mx_array(array):
+"""
+Array expected to be height x width x 3 (channels), and values are floats 
between 0 and 255.
+"""
+assert array.shape[2] == 3, "RGB Channel should be last"
+imshow((array.clip(0, 255)/255).asnumpy())
+```
+
+We load an example image, this will be the target for our augmentations in the 
tutorial. 
+
+```python
+!wget 
https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/data_aug/inputs/0.jpg
+```
+
+```python
+example_image = mx.image.imread("./0.jpg")
+assert str(example_image.dtype) == "<class 'numpy.int8'>"
+```
+
+
+You'll notice that the image is loaded with the `numpy.int8` datatype. Some 
functions such as `swapaxes` don't work on `int` types, so we'll convert to 
`float32`, and visualize.
+
+
+```python
+example_image = example_image.astype("float32")
+plot_mx_array(example_image)
+```
+
+
+![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/data_aug/outputs/types_of/output_8_0.png)
+
+
+# Position Augmentation
+
+One form of augmentation affects the position of pixel values. Using 
combinations of slicing, scaling, translating, rotating and flipping, the values 
of the original image can be shifted to create new images. Some operations 
(like scaling and rotation) require interpolation, as pixels in the new image 
are combinations of pixels in the original image.
+
+### Crop
+
+You can use 
[`mxnet.image.RandomCropAug`](https://mxnet.incubator.apache.org/api/python/image/image.html?highlight=randomcropaug#mxnet.image.RandomCropAug)
 and 
[`mxnet.image.CenterCropAug`](https://mxnet.incubator.apache.org/api/python/image/image.html?highlight=centercropaug#mxnet.image.CenterCropAug)
 to create instances of the Augmenter class, which can be called just like a 
function.
+
+It's worth noting that the randomisation for `RandomCropAug` happens when 
calling the Augmenter, and not at the point of instantiation. You'll end up 
with different images each time you call the Augmenter, so it can't be used to 
apply the same augmentation to another image. You can use 
[`mxnet.random.seed`](https://mxnet.incubator.apache.org/api/python/symbol/random.html?highlight=seed#mxnet.random.seed)
 for random but repeatable augmentations.
+
+`CenterCropAug` is deterministic and just takes the most central crop of the 
given size.
+
+
+```python
+aug = mx.image.RandomCropAug(size=(100, 100))
+aug_image = aug(example_image)
+plot_mx_array(aug_image)
+
+assert example_image.shape == (427, 640, 3)
+assert aug_image.shape == (100, 100, 3)
+```
+
+![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/data_aug/outputs/types_of/output_13_1.png)
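
A minimal companion sketch of the deterministic `CenterCropAug` described above,
reusing `example_image` and `plot_mx_array` from earlier in this tutorial
(output image not shown; the crop size is illustrative):

```python
# CenterCropAug always takes the most central crop of the given size,
# so calling it twice on the same image returns the same result.
aug = mx.image.CenterCropAug(size=(100, 100))
aug_image = aug(example_image)
plot_mx_array(aug_image)

assert aug_image.shape == (100, 100, 3)
```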
+
+
+__*Watch Out!*__ Crops are a great way of adding diversity to your training 
examples, but be careful not to take it to the extreme. An example of this 
would be cropping out an object of interest from the image completely. 
Visualise a few examples after cropping to determine if this will be an issue.
+
+If you're training object detection models, it's recommended that you use the 
[`mxnet.image.DetRandomCropAug`](https://mxnet.incubator.apache.org/api/python/image/image.html?highlight=detrandomcropaug#mxnet.image.DetRandomCropAug)
 augmenter. Instead of the `size` parameter, it has parameters 

[GitHub] Ishitori commented on issue #10251: [MXNET-141] Add tutorial Gluon Datasets and DataLoaders

2018-03-26 Thread GitBox
Ishitori commented on issue #10251: [MXNET-141] Add tutorial Gluon Datasets and 
DataLoaders
URL: https://github.com/apache/incubator-mxnet/pull/10251#issuecomment-376320518
 
 
   Looks fine to me, @thomelane 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #10055: [MXNET-102] Added tutorial on how to use data augmenters.

2018-03-26 Thread GitBox
piiswrong commented on issue #10055: [MXNET-102] Added tutorial on how to use 
data augmenters.
URL: https://github.com/apache/incubator-mxnet/pull/10055#issuecomment-376319656
 
 
   Mixing gluon and mx.image is weird. We should have separate tutorials


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #10165: [MXNET-114] Add the ability to exclude specific lines in tutorial notebooks generated from .md

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #10165: [MXNET-114] Add the ability to exclude 
specific lines in tutorial notebooks generated from .md
URL: https://github.com/apache/incubator-mxnet/pull/10165#issuecomment-376319179
 
 
   @szha  @cjolivier01 do I need to rebase every time I commit to get the build 
to not show as failed?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #10165: [MXNET-114] Add the ability to exclude specific lines in tutorial notebooks generated from .md

2018-03-26 Thread GitBox
ThomasDelteil commented on issue #10165: [MXNET-114] Add the ability to exclude 
specific lines in tutorial notebooks generated from .md
URL: https://github.com/apache/incubator-mxnet/pull/10165#issuecomment-376317716
 
 
   They are showing on the website, but not in the generated notebook, which is 
what we want for the ``: 
   
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-10165/7/tutorials/onnx/inference_on_onnx_model.ipynb
   
   edit: I added the line in back ticks to make it clearer that this is the 
output of the cell above
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #10165: [MXNET-114] Add the ability to exclude specific lines in tutorial notebooks generated from .md

2018-03-26 Thread GitBox
szha commented on issue #10165: [MXNET-114] Add the ability to exclude specific 
lines in tutorial notebooks generated from .md
URL: https://github.com/apache/incubator-mxnet/pull/10165#issuecomment-376314500
 
 
   Seems that these two lines are still showing:
   
https://github.com/apache/incubator-mxnet/pull/10165/files#diff-cde6f9988a988026c382afae6b5234baR157
   
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-10165/7/tutorials/onnx/inference_on_onnx_model.html#test-using-sample-inputs-and-outputs


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-03-26 Thread GitBox
haojin2 commented on issue #10208: [MXNET-117] Sparse operator 
broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#issuecomment-376314422
 
 
   @sergeykolychev Hi, my PR is failing some tests because I've changed some 
interface in Python but I'm not familiar with Perl so I'm not able to make the 
same changes in Perl. I wonder if you could please give some help on how to 
make corresponding changes in Perl so that my builds could pass? Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on issue #10222: Fixed memory leak

2018-03-26 Thread GitBox
lebeg commented on issue #10222: Fixed memory leak
URL: https://github.com/apache/incubator-mxnet/pull/10222#issuecomment-376313788
 
 
   There is a use case explicitly for v1.0.0 on Raspberry Pi devices, where the 
whole unit test suite cannot be run at once since the device runs out of 
memory. This pull request fixes one of the memory leaks on the release 
branch.
   
   What makes you think we do not need it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini opened a new pull request #10252: adding context parameter to infer api- imageclassifier and objectdetector

2018-03-26 Thread GitBox
Roshrini opened a new pull request #10252: adding context parameter to infer 
api- imageclassifier and objectdetector
URL: https://github.com/apache/incubator-mxnet/pull/10252
 
 
   ## Description ##
   adding context parameter to infer api- imageclassifier and objectdetector
   @nswamy @lanking520 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #10241: Css and install fix

2018-03-26 Thread GitBox
aaronmarkham commented on issue #10241: Css and install fix
URL: https://github.com/apache/incubator-mxnet/pull/10241#issuecomment-376309858
 
 
   The install page is exhibiting bad behavior. For example, windows 
instructions are gone.
   
   I don't think we're ready to add the versions dropdown back to the install 
page...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane opened a new pull request #10251: [MXNET-141] Add tutorial Gluon Datasets and DataLoaders

2018-03-26 Thread GitBox
thomelane opened a new pull request #10251: [MXNET-141] Add tutorial Gluon 
Datasets and DataLoaders
URL: https://github.com/apache/incubator-mxnet/pull/10251
 
 
   ## Description ##
   Intro to Datasets and Dataloaders.
   Using own data with Included Dataset objects (including RecordIO format).
   Using own data with Custom Dataset objects.
   Wrappers for converting between DataLoader and DataIters.
   
   ## Checklist ##
   N/A. Added single markdown file.
   
   ## Comments ##
   
   https://issues.apache.org/jira/browse/MXNET-141


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #10118: [MXNET-106][ONNX-MXNET] Adding ONNX Model zoo tests.

2018-03-26 Thread GitBox
anirudhacharya commented on issue #10118: [MXNET-106][ONNX-MXNET] Adding ONNX 
Model zoo tests.
URL: https://github.com/apache/incubator-mxnet/pull/10118#issuecomment-375831579
 
 
   @nswamy @piiswrong @sandeep-krishnamurthy  Please review this PR. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

