[GitHub] [incubator-mxnet] xidulu opened a new issue #16162: [CI] Unix Cpu test failed on Clojure cases

2019-09-12 Thread GitBox
xidulu opened a new issue #16162: [CI] Unix Cpu test failed on Clojure cases
URL: https://github.com/apache/incubator-mxnet/issues/16162
 
 
   As titled
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-16152/1/pipeline
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Vikas-kum commented on issue #16138: julia: fix `mx.forward` kwargs checking

2019-09-12 Thread GitBox
Vikas-kum commented on issue #16138: julia: fix `mx.forward` kwargs checking
URL: https://github.com/apache/incubator-mxnet/pull/16138#issuecomment-531083097
 
 
   No issues, thanks for checking. We will check whether this is a one-off failure.
   We are making a best effort to keep master stable. The CI that runs on a PR 
covers only unit tests; several other tests run once a PR is merged into 
master, so CI can surface regressions after the merge, and there we would need 
help from PR authors to investigate the failures.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #15838: [numpy] nonzero

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #15838: [numpy] nonzero
URL: https://github.com/apache/incubator-mxnet/pull/15838#discussion_r324023157
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1744,6 +1744,44 @@ def test_indexing_mode(sampler, set_size, samples_size, replace, weight=None):
                 test_indexing_mode(test_choice_weighted, num_classes, num_classes // 2, replace, weight)
 
 
+@with_seed()
+@use_np
+def test_np_nonzero():
+    class TestNonzero(HybridBlock):
+        def __init__(self):
+            super(TestNonzero, self).__init__()
+
+        def hybrid_forward(self, F, x):
+            return F.npx.nonzero(x)
+
+    types = ['int32', 'int64', 'float64', 'float32', 'float16']
+    for hybridize in [True, False]:
+        for shape in [(),
+                      (1,),
+                      (1, 1),
+                      (1, 2, 3),
+                      (1, 0),
+                      (2, 0, 3)
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #15838: [numpy] nonzero

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #15838: [numpy] nonzero
URL: https://github.com/apache/incubator-mxnet/pull/15838#discussion_r324023180
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1744,6 +1744,44 @@ def test_indexing_mode(sampler, set_size, samples_size, replace, weight=None):
                 test_indexing_mode(test_choice_weighted, num_classes, num_classes // 2, replace, weight)
 
 
+@with_seed()
+@use_np
+def test_np_nonzero():
+    class TestNonzero(HybridBlock):
+        def __init__(self):
+            super(TestNonzero, self).__init__()
+
+        def hybrid_forward(self, F, x):
+            return F.npx.nonzero(x)
+
+    types = ['int32', 'int64', 'float64', 'float32', 'float16']
+    for hybridize in [True, False]:
+        for shape in [(),
+                      (1,),
+                      (1, 1),
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #15838: [numpy] nonzero

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #15838: [numpy] nonzero
URL: https://github.com/apache/incubator-mxnet/pull/15838#discussion_r324022978
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -100,6 +100,54 @@ def _np_cumsum(a, axis=None, dtype=None, out=None):
     >>> np.cumsum(a,axis=1)  # sum over columns for each of the 2 rows
     array([[ 1,  3,  6],
            [ 4,  9, 15]])
+    """
+    pass
+
+
+def _npx_nonzero(a):
 
 Review comment:
   Because Wu Jun told me to register nonzero with prefix _npx_.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #15838: [numpy] nonzero

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #15838: [numpy] nonzero
URL: https://github.com/apache/incubator-mxnet/pull/15838#discussion_r324023059
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1744,6 +1744,44 @@ def test_indexing_mode(sampler, set_size, samples_size, replace, weight=None):
                 test_indexing_mode(test_choice_weighted, num_classes, num_classes // 2, replace, weight)
 
 
+@with_seed()
+@use_np
+def test_np_nonzero():
+    class TestNonzero(HybridBlock):
+        def __init__(self):
+            super(TestNonzero, self).__init__()
+
+        def hybrid_forward(self, F, x):
+            return F.npx.nonzero(x)
+
+    types = ['int32', 'int64', 'float64', 'float32', 'float16']
+    for hybridize in [True, False]:
+        for shape in [(),
+                      (1,),
+                      (1, 1),
+                      (1, 2, 3),
+                      (1, 0),
+                      (2, 0, 3)
+                      ]:
+            for oneType in types:
+                rtol = 1e-3
+                atol = 1e-5
+                test_nonzero = TestNonzero()
+                if hybridize:
+                    test_nonzero.hybridize()
+                x = rand_ndarray(shape, dtype=oneType).as_np_ndarray()
+                np_out = _np.nonzero(x.asnumpy())
+                np_out = _np.transpose(np_out)
+                mx_out = test_nonzero(x)
+                assert mx_out.shape == np_out.shape
+                assert_almost_equal(mx_out.asnumpy(), np_out, rtol, atol)
+
+                # Test imperative once again
+                mx_out = npx.nonzero(x)
+                np_out = _np.nonzero(x.asnumpy())
+                np_out = _np.transpose(np_out)
+                assert_almost_equal(mx_out.asnumpy(), np_out, rtol, atol)
+
+
 
 Review comment:
   Added, thank you.
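
   (Editor's note: the test above compares `mx_out` against 
`np.transpose(np.nonzero(...))`, i.e. it expects `npx.nonzero` to return a 
single `(num_nonzero, ndim)` index array rather than NumPy's per-axis tuple. 
A minimal sketch of that expected layout, in plain NumPy and independent of 
MXNet:)

```python
import numpy as np

x = np.array([[1, 0],
              [0, 3]])

tup = np.nonzero(x)          # NumPy returns a tuple of per-axis index arrays
stacked = np.transpose(tup)  # shape (num_nonzero, ndim): one row per nonzero element

assert [a.tolist() for a in tup] == [[0, 1], [0, 1]]
assert stacked.tolist() == [[0, 0], [1, 1]]
```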




[GitHub] [incubator-mxnet] thomelane commented on issue #16161: Update train_gluon.md

2019-09-12 Thread GitBox
thomelane commented on issue #16161: Update train_gluon.md
URL: https://github.com/apache/incubator-mxnet/pull/16161#issuecomment-531082314
 
 
   thanks for the fix @sad-, looks good to me.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #15901: [Numpy] operator hypot

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #15901: [Numpy] operator 
hypot
URL: https://github.com/apache/incubator-mxnet/pull/15901#discussion_r324018604
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -2432,3 +2432,52 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
     else:
         raise ValueError("The dimensions must be sequence of ints")
 # pylint: enable=redefined-outer-name
+
+
+@set_module('mxnet.ndarray.numpy')
+def hypot(x1, x2, out=None):
+    r"""
+    hypot(x1, x2, out=None)
 
 Review comment:
   Done. Thank you.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #15901: [Numpy] operator hypot

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #15901: [Numpy] operator 
hypot
URL: https://github.com/apache/incubator-mxnet/pull/15901#discussion_r324018575
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -2748,4 +2748,39 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
 # pylint: enable=redefined-outer-name
 
 
+@set_module('mxnet.symbol.numpy')
+def hypot(x1, x2, out=None):
+    r"""
+    hypot(x1, x2, out=None)
 
 Review comment:
   Done. Thank you.




[GitHub] [incubator-mxnet] sad- opened a new pull request #16161: Update train_gluon.md

2019-09-12 Thread GitBox
sad- opened a new pull request #16161: Update train_gluon.md
URL: https://github.com/apache/incubator-mxnet/pull/16161
 
 
   Fix wrong naming for sparse tutorial
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r324017644
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1323,6 +1323,39 @@ def hybrid_forward(self, F, a, *args):
         assert same(mx_out.asnumpy(), np_out)
 
 
+@with_seed()
+@use_np
+def test_np_ravel():
+    class TestRavel(HybridBlock):
+        def __init__(self):
+            super(TestRavel, self).__init__()
+
+        def hybrid_forward(self, F, a):
+            return F.np.ravel(a)
+
+    types = ['float64', 'float32', 'float16', 'int64', 'int32', 'int8']
+    for oneType in types:
+        for hybridize in [True, False]:
+            for shape in [(), (2,), (2, 2), (1, 2, 3), (3, 0), (1, 0, 2)]:
+                test_ravel = TestRavel()
+                if hybridize:
+                    test_ravel.hybridize()
+                x = rand_ndarray(shape, dtype=oneType).as_np_ndarray()
+                x.attach_grad()
+                np_out = _np.ravel(x.asnumpy())
+                with mx.autograd.record():
+                    mx_out = test_ravel(x)
+                assert mx_out.shape == np_out.shape
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r324017499
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -3873,3 +3873,47 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
     """
     return _mx_nd_np.indices(dimensions=dimensions, dtype=dtype, ctx=ctx)
 # pylint: enable=redefined-outer-name
+
+
+@set_module('mxnet.numpy')
+def ravel(x, order='C'):
+    r"""
+    ravel(x)
+
+    Return a contiguous flattened array.
+    A 1-D array, containing the elements of the input, is returned.  A copy is
+    made only if needed.
+
+    Parameters
+    ----------
+    x : ndarray
+        Input array.  The elements in `x` are read in row-major, C-style order and
+        packed as a 1-D array.
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r324016975
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -2748,4 +2748,44 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
 # pylint: enable=redefined-outer-name
 
 
+@set_module('mxnet.symbol.numpy')
+def ravel(x, order='C'):
+    r"""
+    ravel(x)
+
+    Return a contiguous flattened array.
+    A 1-D array, containing the elements of the input, is returned.  A copy is
+    made only if needed.
+
+    Parameters
+    ----------
+    x : ndarray
+        Input array.  The elements in `x` are read in row-major, C-style order and
+        packed as a 1-D array.
+    out : ndarray or None, optional
+        A location into which the result is stored. If not provided or `None`,
+        a freshly-allocated array is returned.
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] sad- opened a new pull request #16160: Update custom_layer.md tutorial to fix breaking tests

2019-09-12 Thread GitBox
sad- opened a new pull request #16160: Update custom_layer.md tutorial to fix 
breaking tests
URL: https://github.com/apache/incubator-mxnet/pull/16160
 
 
   Serialization in the custom layer definition is no longer needed. Removed 
it to fix the notebook
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] iblis17 commented on issue #16138: julia: fix `mx.forward` kwargs checking

2019-09-12 Thread GitBox
iblis17 commented on issue #16138: julia: fix `mx.forward` kwargs checking
URL: https://github.com/apache/incubator-mxnet/pull/16138#issuecomment-531073601
 
 
   > 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/master/1034/pipeline
   
   hmm, I don't think it's related to this PR.
   
   > And make sure that, CI passes
   
   yeah, it passed
   https://github.com/apache/incubator-mxnet/pull/16138/commits
   
   > and there is at least one review(just another pair of eyes looking at 
changes)
   
   Okay, I will wait for someone to review it next time.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-12 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new fbf30fa  Bump the publish timestamp.
fbf30fa is described below

commit fbf30fa118c5acaae931d70b685122512bacbbb4
Author: mxnet-ci 
AuthorDate: Fri Sep 13 01:37:22 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..6059037
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Sep 13 01:37:22 UTC 2019



[GitHub] [incubator-mxnet] Vikas-kum opened a new pull request #16159: fixing test for model compatibility checker

2019-09-12 Thread GitBox
Vikas-kum opened a new pull request #16159: fixing test for model compatibility 
checker
URL: https://github.com/apache/incubator-mxnet/pull/16159
 
 
   ## Description ##
   adding libtvm.so to test
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] Vikas-kum closed pull request #16158: backward compatibilty checker test fix

2019-09-12 Thread GitBox
Vikas-kum closed pull request #16158: backward compatibilty checker test fix
URL: https://github.com/apache/incubator-mxnet/pull/16158
 
 
   




[GitHub] [incubator-mxnet] Vikas-kum opened a new pull request #16158: backward compatibilty checker test fix

2019-09-12 Thread GitBox
Vikas-kum opened a new pull request #16158: backward compatibilty checker test 
fix
URL: https://github.com/apache/incubator-mxnet/pull/16158
 
 
   ## Description ##
   adding tvm.so to backward compat checker.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] ShownX commented on issue #9686: [Discussion] MXNet 2.0 Roadmap (was: APIs that might be a good idea to break in 2.0)

2019-09-12 Thread GitBox
ShownX commented on issue #9686: [Discussion] MXNet 2.0 Roadmap (was: APIs that 
might be a good idea to break in 2.0)
URL: 
https://github.com/apache/incubator-mxnet/issues/9686#issuecomment-531062192
 
 
   Expect more image operations: adjust_colors (not random), rotate, and more




[GitHub] [incubator-mxnet] ThomasDelteil opened a new pull request #15885: [WIP] New Website: Remove Old Content [2/3]

2019-09-12 Thread GitBox
ThomasDelteil opened a new pull request #15885: [WIP] New Website: Remove Old 
Content [2/3]
URL: https://github.com/apache/incubator-mxnet/pull/15885
 
 
   This specific commit can be reviewed in isolation here for better 
readability: https://github.com/ThomasDelteil/incubator-mxnet/pull/5
   
   - [x] https://github.com/apache/incubator-mxnet/pull/15884 merged
   
   This removes the old website
   
   New website visible here: https://mxnet-beta.staged.apache.org/
   
   Follow-up PR to add CI for the new docs and website: 
https://github.com/apache/incubator-mxnet/pull/15883




[GitHub] [incubator-mxnet] aaronmarkham closed pull request #15885: [WIP] New Website: Remove Old Content [2/3]

2019-09-12 Thread GitBox
aaronmarkham closed pull request #15885: [WIP] New Website: Remove Old Content 
[2/3]
URL: https://github.com/apache/incubator-mxnet/pull/15885
 
 
   




[GitHub] [incubator-mxnet] ptrendx commented on issue #15657: Eliminate common expressions

2019-09-12 Thread GitBox
ptrendx commented on issue #15657: Eliminate common expressions
URL: https://github.com/apache/incubator-mxnet/pull/15657#issuecomment-531046183
 
 
   Thanks @DickJC123! Your thoroughness is amazing as always :-). I will think 
about the other points you made, but point 5 is especially interesting. I did 
not think about it before and you are right that this is a potential problem. I 
will introduce an additional check for that.




[incubator-mxnet] branch master updated (c5383f7 -> ccd24a8)

2019-09-12 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c5383f7  CD Fixes (#16127)
 add ccd24a8  avoid test relu at the origin due to discontinuous gradient 
(#16133)

No new revisions were added by this update.

Summary of changes:
 tests/python/mkl/test_mkldnn.py | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)
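
(Editor's note: the merged change skips testing relu exactly at the origin 
because the centered finite-difference gradient check is ill-defined where the 
derivative jumps. A small standalone illustration of the failure mode, in 
plain NumPy rather than the MXNet test itself:)

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def numeric_grad(f, x, eps=1e-4):
    # Centered finite difference, as a typical gradient checker would compute.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Away from 0 the estimate matches the analytic gradient (1 or 0)...
assert abs(numeric_grad(relu, 1.0) - 1.0) < 1e-6
assert abs(numeric_grad(relu, -1.0) - 0.0) < 1e-6
# ...but at the origin it yields 0.5, matching neither one-sided derivative,
# so sampling test points near 0 can make the check fail spuriously.
assert abs(numeric_grad(relu, 0.0) - 0.5) < 1e-6
```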



[GitHub] [incubator-mxnet] wkcn merged pull request #16133: avoid test relu at the origin due to discontinuous gradient

2019-09-12 Thread GitBox
wkcn merged pull request #16133: avoid test relu at the origin due to 
discontinuous gradient
URL: https://github.com/apache/incubator-mxnet/pull/16133
 
 
   




[GitHub] [incubator-mxnet] wkcn commented on issue #16133: avoid test relu at the origin due to discontinuous gradient

2019-09-12 Thread GitBox
wkcn commented on issue #16133: avoid test relu at the origin due to 
discontinuous gradient
URL: https://github.com/apache/incubator-mxnet/pull/16133#issuecomment-531039186
 
 
   Merged. Thank you : )




[GitHub] [incubator-mxnet] DickJC123 commented on issue #15657: Eliminate common expressions

2019-09-12 Thread GitBox
DickJC123 commented on issue #15657: Eliminate common expressions
URL: https://github.com/apache/incubator-mxnet/pull/15657#issuecomment-531030164
 
 
   This will be an awesome addition.  Some things to consider in polishing it:
   
   1.  The crux is in the definition of functionally equal nodes.  Do you think 
your method `bool NodeEqual(const Node * n, const Node * m)` belongs in the CSE 
code, or in the Node class?
   2.  It's probably best to be conservative with the NodeEqual, but by 
comparing the attrs.dict, you will miss some equivalent nodes.  To illustrate 
my point, I instrumented the reduce operator to print out the dict:
   ```
   >>> x = mx.nd.array([1,2,3,4])
   >>> mx.nd.sum(x)
   [21:32:38] src/operator/tensor/./broadcast_reduce_op.h:621: node attr dict = 
{}
   [10.] 
   >>> mx.nd.sum(x, axis=())
   [21:32:53] src/operator/tensor/./broadcast_reduce_op.h:621: node attr dict = 
{{axis,()}}
   [10.] 
   >>> mx.nd.sum(x, axis=0)
   [21:33:07] src/operator/tensor/./broadcast_reduce_op.h:621: node attr dict = 
{{axis,0}}
   [10.] 
   >>> mx.nd.sum(x, axis=(0,))
   [21:33:39] src/operator/tensor/./broadcast_reduce_op.h:621: node attr dict = 
{{axis,(0,)}}
   [10.] 
   ```
   All sum operators are functionally equivalent, but none would compare as 
equal with the approach that includes comparing the attrs.dict (the 
map from the python operator).
   Any chance you could move the equality to comparing the parameter struct?
   3.  I see you consider operators that have resources as never equal.  Might 
you maintain a 'white list' of resources that are OK for equal nodes to have, 
e.g. the commonly used tempspace resource?
   4.  I'd prefer to see a test for each of the reasons you deny node equality 
(e.g. having mutable inputs).  You could either count the nodes, to prove the 
CSE did not happen, or test against a golden copy since performing the CSE 
would break functionality.  I see you have a test for a case where CSE should 
happen, but it only looks for the reduced node count without testing 
functionality.
   5.  Since CSE combines output nodes, those output nodes should not be 
mutable inputs of downstream nodes, e.g.
   ```
   x = Variable('x')
   xcopy1 = x.copy()
   xcopy2 = x.copy()
   y = SomeOp(...,some_mutable_input=xcopy1,...)
   z = SomeOp(...,some_mutable_input=xcopy2,...)
   ```
   If you combine equal nodes xcopy1 and xcopy2, then the y and z nodes will 
start seeing the effects of each other, which they shouldn't as originally 
coded.
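   To make point 2 concrete, here is a minimal, hypothetical Python sketch 
(plain Python, not MXNet code) of canonicalizing a reduction node's `axis` 
attribute before comparison, so that the four spellings above collapse to one 
key; the function name and normalization rules are assumptions for illustration:

   ```python
   import ast

   def canonical_axis(attr_dict, ndim):
       """Map a sum node's raw 'axis' attribute string to a canonical tuple
       of axes, so functionally equal reductions compare equal.
       An absent axis or an empty tuple means 'reduce over all axes'."""
       raw = attr_dict.get("axis")
       if raw is None or raw == "()":
           return tuple(range(ndim))          # reduce everything
       value = ast.literal_eval(raw)          # "0" -> 0, "(0,)" -> (0,)
       if isinstance(value, int):
           value = (value,)
       return tuple(sorted(a % ndim for a in value))

   # All four spellings of "sum a 1-D array" collapse to the same key:
   variants = [{}, {"axis": "()"}, {"axis": "0"}, {"axis": "(0,)"}]
   keys = {canonical_axis(d, ndim=1) for d in variants}
   ```

   Comparing such canonicalized parameter values (rather than the raw dict 
strings) is the kind of parameter-struct comparison suggested above.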


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce opened a new pull request #16157: Fix remaining errors reported by D2L

2019-09-12 Thread GitBox
reminisce opened a new pull request #16157: Fix remaining errors reported by D2L
URL: https://github.com/apache/incubator-mxnet/pull/16157
 
 
   ## Description ##
   After this PR is merged, D2L can depend on MXNet master branch.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] adis300 commented on issue #15303: Fix amalgamation failure.

2019-09-12 Thread GitBox
adis300 commented on issue #15303: Fix amalgamation failure.
URL: https://github.com/apache/incubator-mxnet/pull/15303#issuecomment-531023979
 
 
   Thanks @marcoabreu .
   
   @TaoLv I will try to rebase this branch and test in the next few days. Since 
it is very behind now. 
   
   I hope the merge request can be handled responsively after rebasing, because 
I have already rebased it a few times and each rebase takes a decent amount of 
effort.




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya edited a comment on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-531011526
 
 
   Subsequent run (commenting the ones already tested)
   ```
   nosetests -s -v tests/nightly/test_large_vector.py
   test_large_vector.test_ndarray_random_negative_binomial ... ok
   test_large_vector.test_ndarray_random_normal ... ok
   test_large_vector.test_ndarray_random_poisson ... ok
   test_large_vector.test_ndarray_random_randn ... ok
   test_large_vector.test_ndarray_random_shuffle ... ok
   test_large_vector.test_exponent_logarithm_operators ... ok
   test_large_vector.test_power_operators ... ok
   test_large_vector.test_sequence_mask ... ERROR
   test_large_vector.test_sequence_reverse ... ok
   test_large_vector.test_sequence_last ... ok
   test_large_vector.test_layer_norm ... ok
   test_large_vector.test_batchnorm ... ok
   test_large_vector.test_add ... ERROR
   test_large_vector.test_sub ... ok
   test_large_vector.test_rsub ... ok
   test_large_vector.test_neg ... ok
   test_large_vector.test_mul ... ok
   test_large_vector.test_div ... ok
   test_large_vector.test_rdiv ... ok
   test_large_vector.test_mod ... ok
   test_large_vector.test_rmod ... ok
   test_large_vector.test_pow ... ok
   test_large_vector.test_rpow ... ok
   test_large_vector.test_shape ... ok
   test_large_vector.test_size ... ok
   test_large_vector.test_copy ... ok
   test_large_vector.test_copy_to ... ok
   test_large_vector.test_zeros_like ... ok
   test_large_vector.test_ones_like ... ok
   test_large_vector.test_concat ... ok
   test_large_vector.test_sum ... ERROR
   test_large_vector.test_prod ...
   ok
   test_large_vector.test_min ... ok
   test_large_vector.test_max ... ok
   test_large_vector.test_argmax ... ok
   test_large_vector.test_iadd ... ok
   test_large_vector.test_isub ... ok
   test_large_vector.test_imul ... ok
   test_large_vector.test_idiv ... ok
   test_large_vector.test_imod ... ok
   test_large_vector.test_eq ... ok
   test_large_vector.test_neq ... Killed
   ```
   
   Individually all the error tests work
   ```
   nosetests -s -v tests/nightly/test_large_vector.py:test_sequence_mask
   test_large_vector.test_sequence_mask ... ok
   
   --
   Ran 1 test in 93.567s
   
   OK
   ```
   ```
   nosetests -s -v tests/nightly/test_large_vector.py:test_add
   test_large_vector.test_add ... ok
   
   --
   Ran 1 test in 7.926s
   
   OK
   ```
   ```
   nosetests -s -v tests/nightly/test_large_vector.py:test_neq
   test_large_vector.test_neq ... ok
   
   --
   Ran 1 test in 10.074s
   
   OK
   ```
   ```
   nosetests -s -v tests/nightly/test_large_vector.py:test_sum
   test_large_vector.test_sum ... ok
   
   --
   Ran 1 test in 175.573s
   
   OK
   ```




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya edited a comment on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-531001265
 
 
   For C5.18xl instance (similar to the one used to run Nightly build)
   
   topk,argsort,sort - ERROR
   mxnet.base.MXNetError: [21:29:51] src/storage/./cpu_device_storage.h:75: 
Failed to allocate CPU Memory
   
   multinomial - fails due to memory shortage
   
   ```
   nosetests -s -v tests/nightly/test_large_vector.py
   test_large_vector.test_slice ... ok
   test_large_vector.test_ndarray_zeros ... ok
   test_large_vector.test_ndarray_ones ... ok
   test_large_vector.test_ndarray_random_uniform ... ok
   test_large_vector.test_ndarray_random_randint ... [INFO] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=1806641233 to reproduce.
   FAIL
   test_large_vector.test_ndarray_empty ... ok
   test_large_vector.test_elementwise ... ok
   test_large_vector.test_clip ... ok
   test_large_vector.test_argmin ... ok
   test_large_vector.test_take ... ok
   test_large_vector.test_slice_assign ... ok
   test_large_vector.test_expand_dims ... ok
   test_large_vector.test_squeeze ... ok
   test_large_vector.test_broadcast_div ... ok
   test_large_vector.test_Dense ... ok
   test_large_vector.test_argsort ... ERROR
   test_large_vector.test_sort ... ERROR
   test_large_vector.test_topk ... ERROR
   test_large_vector.test_mean ... ok
   test_large_vector.test_ndarray_random_exponential ... ok
   test_large_vector.test_ndarray_random_gamma ... ok
   test_large_vector.test_ndarray_random_generalized_negative_binomial ... ok
   test_large_vector.test_ndarray_random_multinomial ... *** Error in 
`/home/ubuntu/anaconda3/bin/python': double free or corruption (fasttop):
   ```
   
   Upon running multinomial individually it passes
   ```
   nosetests -s -v 
tests/nightly/test_large_vector.py:test_ndarray_random_multinomial
   test_large_vector.test_ndarray_random_multinomial ... ok
   
   --
   Ran 1 test in 50.371s
   
   OK
   ```




[GitHub] [incubator-mxnet] ThomasDelteil opened a new pull request #15885: [WIP] New Website: Remove Old Content [2/3]

2019-09-12 Thread GitBox
ThomasDelteil opened a new pull request #15885: [WIP] New Website: Remove Old 
Content [2/3]
URL: https://github.com/apache/incubator-mxnet/pull/15885
 
 
   This specific commit can be reviewed here in isolation here for better 
readability: https://github.com/ThomasDelteil/incubator-mxnet/pull/5
   
   - [x] https://github.com/apache/incubator-mxnet/pull/15884 merged
   
   This removes the old website
   
   New website visible here: https://mxnet-beta.staged.apache.org/
   
   Follow-up PR to add CI for the new docs and website: 
https://github.com/apache/incubator-mxnet/pull/15883




[GitHub] [incubator-mxnet] ThomasDelteil closed pull request #15885: [WIP] New Website: Remove Old Content [2/3]

2019-09-12 Thread GitBox
ThomasDelteil closed pull request #15885: [WIP] New Website: Remove Old Content 
[2/3]
URL: https://github.com/apache/incubator-mxnet/pull/15885
 
 
   




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya commented on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-531011526
 
 
   Subsequent run (commenting the ones already tested)
   ```
   nosetests -s -v tests/nightly/test_large_vector.py
   test_large_vector.test_ndarray_random_negative_binomial ... ok
   test_large_vector.test_ndarray_random_normal ... ok
   test_large_vector.test_ndarray_random_poisson ... ok
   test_large_vector.test_ndarray_random_randn ... ok
   test_large_vector.test_ndarray_random_shuffle ... ok
   test_large_vector.test_exponent_logarithm_operators ... ok
   test_large_vector.test_power_operators ... ok
   test_large_vector.test_sequence_mask ... ERROR
   test_large_vector.test_sequence_reverse ... ok
   test_large_vector.test_sequence_last ... ok
   test_large_vector.test_layer_norm ... ok
   test_large_vector.test_batchnorm ... ok
   test_large_vector.test_add ... ERROR
   test_large_vector.test_sub ... ok
   test_large_vector.test_rsub ... ok
   test_large_vector.test_neg ... ok
   test_large_vector.test_mul ... ok
   test_large_vector.test_div ... ok
   test_large_vector.test_rdiv ... ok
   test_large_vector.test_mod ... ok
   test_large_vector.test_rmod ... ok
   test_large_vector.test_pow ... ok
   test_large_vector.test_rpow ... ok
   test_large_vector.test_shape ... ok
   test_large_vector.test_size ... ok
   test_large_vector.test_copy ... ok
   test_large_vector.test_copy_to ... ok
   test_large_vector.test_zeros_like ... ok
   test_large_vector.test_ones_like ... ok
   test_large_vector.test_concat ... ok
   test_large_vector.test_sum ... ERROR
   test_large_vector.test_prod ...
   ok
   test_large_vector.test_min ... ok
   test_large_vector.test_max ... ok
   test_large_vector.test_argmax ... ok
   test_large_vector.test_iadd ... ok
   test_large_vector.test_isub ... ok
   test_large_vector.test_imul ... ok
   test_large_vector.test_idiv ... ok
   test_large_vector.test_imod ... ok
   test_large_vector.test_eq ... ok
   test_large_vector.test_neq ... Killed
   ```




[GitHub] [incubator-mxnet] szha closed issue #16135: Sampling fails on mxnet==1.5.0 on Linux?

2019-09-12 Thread GitBox
szha closed issue #16135: Sampling fails on mxnet==1.5.0 on Linux?
URL: https://github.com/apache/incubator-mxnet/issues/16135
 
 
   




[incubator-mxnet] branch master updated (b4b7bfb -> c5383f7)

2019-09-12 Thread zachgk
This is an automated email from the ASF dual-hosted git repository.

zachgk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b4b7bfb  Update env_var.md (#16145)
 add c5383f7  CD Fixes (#16127)

No new revisions were added by this update.

Summary of changes:
 cd/Jenkinsfile_cd_pipeline                    | 2 +-
 cd/README.md                                  | 4 ++--
 ci/docker/runtime_functions.sh                | 2 ++
 python/mxnet/test_utils.py                    | 5 +++++
 tests/python/unittest/test_library_loading.py | 3 ++-
 5 files changed, 12 insertions(+), 4 deletions(-)



[GitHub] [incubator-mxnet] zachgk merged pull request #16127: CD Fixes

2019-09-12 Thread GitBox
zachgk merged pull request #16127: CD Fixes
URL: https://github.com/apache/incubator-mxnet/pull/16127
 
 
   




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya edited a comment on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-531001265
 
 
   For C5.18xl instance (similar to the one used to run Nightly build)
   topk,argsort,sort - ERROR
   multinomial - fails due to memory shortage
   
   ```
   nosetests -s -v tests/nightly/test_large_vector.py
   test_large_vector.test_slice ... ok
   test_large_vector.test_ndarray_zeros ... ok
   test_large_vector.test_ndarray_ones ... ok
   test_large_vector.test_ndarray_random_uniform ... ok
   test_large_vector.test_ndarray_random_randint ... [INFO] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=1806641233 to reproduce.
   FAIL
   test_large_vector.test_ndarray_empty ... ok
   test_large_vector.test_elementwise ... ok
   test_large_vector.test_clip ... ok
   test_large_vector.test_argmin ... ok
   test_large_vector.test_take ... ok
   test_large_vector.test_slice_assign ... ok
   test_large_vector.test_expand_dims ... ok
   test_large_vector.test_squeeze ... ok
   test_large_vector.test_broadcast_div ... ok
   test_large_vector.test_Dense ... ok
   test_large_vector.test_argsort ... ERROR
   test_large_vector.test_sort ... ERROR
   test_large_vector.test_topk ... ERROR
   test_large_vector.test_mean ... ok
   test_large_vector.test_ndarray_random_exponential ... ok
   test_large_vector.test_ndarray_random_gamma ... ok
   test_large_vector.test_ndarray_random_generalized_negative_binomial ... ok
   test_large_vector.test_ndarray_random_multinomial ... *** Error in 
`/home/ubuntu/anaconda3/bin/python': double free or corruption (fasttop):
   ```
   
   Upon running multinomial individually it passes
   ```
   nosetests -s -v 
tests/nightly/test_large_vector.py:test_ndarray_random_multinomial
   test_large_vector.test_ndarray_random_multinomial ... ok
   
   --
   Ran 1 test in 50.371s
   
   OK
   ```




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya commented on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-531001265
 
 
   For C5 instance
   topk,argsort,sort - ERROR
   multinomial - fails due to memory shortage
   
   ```
   nosetests -s -v tests/nightly/test_large_vector.py
   test_large_vector.test_slice ... ok
   test_large_vector.test_ndarray_zeros ... ok
   test_large_vector.test_ndarray_ones ... ok
   test_large_vector.test_ndarray_random_uniform ... ok
   test_large_vector.test_ndarray_random_randint ... [INFO] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=1806641233 to reproduce.
   FAIL
   test_large_vector.test_ndarray_empty ... ok
   test_large_vector.test_elementwise ... ok
   test_large_vector.test_clip ... ok
   test_large_vector.test_argmin ... ok
   test_large_vector.test_take ... ok
   test_large_vector.test_slice_assign ... ok
   test_large_vector.test_expand_dims ... ok
   test_large_vector.test_squeeze ... ok
   test_large_vector.test_broadcast_div ... ok
   test_large_vector.test_Dense ... ok
   test_large_vector.test_argsort ... ERROR
   test_large_vector.test_sort ... ERROR
   test_large_vector.test_topk ... ERROR
   test_large_vector.test_mean ... ok
   test_large_vector.test_ndarray_random_exponential ... ok
   test_large_vector.test_ndarray_random_gamma ... ok
   test_large_vector.test_ndarray_random_generalized_negative_binomial ... ok
   test_large_vector.test_ndarray_random_multinomial ... *** Error in 
`/home/ubuntu/anaconda3/bin/python': double free or corruption (fasttop):
   ```
   
   Upon running multinomial individually it passes
   ```
   nosetests -s -v 
tests/nightly/test_large_vector.py:test_ndarray_random_multinomial
   test_large_vector.test_ndarray_random_multinomial ... ok
   
   --
   Ran 1 test in 50.371s
   
   OK
   ```




[GitHub] [incubator-mxnet] access2rohit edited a comment on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
access2rohit edited a comment on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-530979994
 
 
   @ChaiBapchya Can you also run the full suite of tests again and paste the 
results here once they are done




[GitHub] [incubator-mxnet] access2rohit edited a comment on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
access2rohit edited a comment on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-530979994
 
 
   @ChaiBapchya Can you also run the full suite of tests again and paste the 
results here once they are done




[GitHub] [incubator-mxnet] access2rohit commented on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
access2rohit commented on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-530979994
 
 
   @ChaiBapchya Can you also run the full suite again and paste the results too 




[GitHub] [incubator-mxnet] igolan closed issue #16130: Imperative execution in MXNET with multiple GPUs does not run in parallel

2019-09-12 Thread GitBox
igolan closed issue #16130: Imperative execution in MXNET with multiple GPUs 
does not run in parallel
URL: https://github.com/apache/incubator-mxnet/issues/16130
 
 
   




[GitHub] [incubator-mxnet] igolan commented on issue #16130: Imperative execution in MXNET with multiple GPUs does not run in parallel

2019-09-12 Thread GitBox
igolan commented on issue #16130: Imperative execution in MXNET with multiple 
GPUs does not run in parallel
URL: 
https://github.com/apache/incubator-mxnet/issues/16130#issuecomment-530972213
 
 
   Hi,
   If I use a larger model (cifar_wideresnet40_8) it does run in parallel.
   This issue can be closed.




[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #15901: [Numpy] operator hypot

2019-09-12 Thread GitBox
Laurawly commented on a change in pull request #15901: [Numpy] operator hypot
URL: https://github.com/apache/incubator-mxnet/pull/15901#discussion_r323904977
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -2748,4 +2748,39 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
 # pylint: enable=redefined-outer-name
 
 
+@set_module('mxnet.symbol.numpy')
+def hypot(x1, x2, out=None):
+r"""
+hypot(x1, x2, out=None)
 
 Review comment:
   Same here




[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #15901: [Numpy] operator hypot

2019-09-12 Thread GitBox
Laurawly commented on a change in pull request #15901: [Numpy] operator hypot
URL: https://github.com/apache/incubator-mxnet/pull/15901#discussion_r323903524
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -2432,3 +2432,52 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
 else:
 raise ValueError("The dimensions must be sequence of ints")
 # pylint: enable=redefined-outer-name
+
+
+@set_module('mxnet.ndarray.numpy')
+def hypot(x1, x2, out=None):
+r"""
+hypot(x1, x2, out=None)
 
 Review comment:
   Remove this line.




[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #15816: Numpy Tril (Lower triangle) operator

2019-09-12 Thread GitBox
Laurawly commented on a change in pull request #15816: Numpy Tril (Lower 
triangle) operator
URL: https://github.com/apache/incubator-mxnet/pull/15816#discussion_r323902000
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -92,6 +92,65 @@ def is_int(dtype):
 assert_almost_equal(mx_out.asnumpy(), np_out, 
rtol=1e-3, atol=1e-5)
 
 
+@with_seed()
+@use_np
+def test_np_tril():
+config = [
+((4, 2), 3),
+((4, 2), 9),
+((4, 2), 0),
+((4, 2), 2),
+((4, 5, 6), 0),
+((4, 5, 6), 5),
+((4, 5, 6), 2),
+((4, 5, 6), 20),
+((7, 8, 10, 9), 0),
+((7, 8, 10, 9), 5),
+((7, 8, 10, 9), 9),
+((7, 8, 10, 9), 13),
+((4, 0), 0),
+((4, 0), 2),
+((4, 0), 4),
+((4, 0), 7),
+((3, ), 0),
+((3, ), 2),
+((3, ), 5)
 
 Review comment:
   Can we add test cases when k < 0?
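    For reference, a quick NumPy check of the `k < 0` behavior asked about 
above (illustrative only, not part of this PR's test file): with `k = -1`, 
`tril` keeps only elements strictly below the main diagonal.

    ```python
    import numpy as np

    x = np.ones((3, 3))
    # k = -1 zeroes the main diagonal and everything above it,
    # keeping only elements with row index > column index.
    below = np.tril(x, k=-1)
    ```

    Sufficiently negative `k` (here, `k <= -3` for a 3x3 input) zeroes the 
whole array, which is a useful edge case for such tests.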





[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #15816: Numpy Tril (Lower triangle) operator

2019-09-12 Thread GitBox
Laurawly commented on a change in pull request #15816: Numpy Tril (Lower 
triangle) operator
URL: https://github.com/apache/incubator-mxnet/pull/15816#discussion_r323901845
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -92,6 +92,65 @@ def is_int(dtype):
 assert_almost_equal(mx_out.asnumpy(), np_out, 
rtol=1e-3, atol=1e-5)
 
 
+@with_seed()
+@use_np
+def test_np_tril():
+config = [
 
 Review comment:
   Can we add an empty shape in the test?




[GitHub] [incubator-mxnet] marcoabreu commented on issue #15303: Fix amalgamation failure.

2019-09-12 Thread GitBox
marcoabreu commented on issue #15303: Fix amalgamation failure.
URL: https://github.com/apache/incubator-mxnet/pull/15303#issuecomment-530960961
 
 
   Hey @TaoLv since we have an approval, could you assist the author to bring 
this PR to completion and assist them with CI and other questions?




[GitHub] [incubator-mxnet] marcoabreu commented on issue #15303: Fix amalgamation failure.

2019-09-12 Thread GitBox
marcoabreu commented on issue #15303: Fix amalgamation failure.
URL: https://github.com/apache/incubator-mxnet/pull/15303#issuecomment-530960352
 
 
   I think we're still running Amalgamation tests: 
https://github.com/apache/incubator-mxnet/blob/master/tests/nightly/Jenkinsfile#L76
 
   
   




[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #15816: Numpy Tril (Lower triangle) operator

2019-09-12 Thread GitBox
Laurawly commented on a change in pull request #15816: Numpy Tril (Lower 
triangle) operator
URL: https://github.com/apache/incubator-mxnet/pull/15816#discussion_r323899143
 
 

 ##
 File path: src/operator/numpy/np_tril_op.cc
 ##
 @@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+* Copyright (c) 2019 by Contributors
+* \file np_tril_op.cc
+* \brief CPU implementation of numpy tril operator
+*/
+
+#include "./np_tril_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+DMLC_REGISTER_PARAMETER(TrilParam);
+
+NNVM_REGISTER_OP(_npi_tril)
+.set_attr_parser(ParamParser<TrilParam>)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr<nnvm::FListInputNames>("FListInputNames",
+  [](const NodeAttrs& attrs) {
+    return std::vector<std::string>{"data"};
+  })
+.set_attr<mxnet::FInferShape>("FInferShape", TrilOpShape)
+.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<1, 1>)
+.set_attr<FCompute>("FCompute", TrilOpForward<cpu>)
+.set_attr<nnvm::FInplaceOption>("FInplaceOption",
+  [](const NodeAttrs& attrs) {
+    return std::vector<std::pair<int, int> >{{0, 0}};
+  })
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseNone{"_backward_tril"})
+.add_argument("data", "NDArray-or-Symbol", "Input ndarray")
+.add_arguments(TrilParam::__FIELDS__());
+
+
+NNVM_REGISTER_OP(_backward_tril)
+.set_attr_parser(ParamParser<TrilParam>)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr<nnvm::TIsBackward>("TIsBackward", true)
+.set_attr<FCompute>("FCompute", TrilOpBackward<cpu>);
+
+
 
 Review comment:
   Remove this blank line.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
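As context for the review above, the operator mirrors NumPy's `tril`, which keeps entries on or below the k-th diagonal and zeroes the rest. A minimal pure-Python sketch of that rule (illustrative only, not the MXNet kernel under review):

```python
def tril(mat, k=0):
    # Keep entries with j - i <= k (on or below the k-th diagonal); zero the rest.
    return [[v if j - i <= k else 0 for j, v in enumerate(row)]
            for i, row in enumerate(mat)]

print(tril([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# [[1, 0, 0], [4, 5, 0], [7, 8, 9]]
```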


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #15568: julia: rename build env var `MXNET_HOME` to `MXNET_ROOT`

2019-09-12 Thread GitBox
aaronmarkham commented on issue #15568: julia: rename build env var 
`MXNET_HOME` to `MXNET_ROOT`
URL: https://github.com/apache/incubator-mxnet/pull/15568#issuecomment-530958976
 
 
   > oh, @aaronmarkham what are the recent changes on the website builds?
   
   The [code isn't merged 
yet](https://github.com/apache/incubator-mxnet/pull/15883), but we're close. 
Info is here: 
https://cwiki.apache.org/confluence/display/MXNET/Building+the+New+Website
   
   Basically, with the new flow, after you've generated the MXNet binary (with 
a similar single command), you can run this to test the Julia docs build. Then 
you don't have to worry about Sphinx or any dependency other than the MXNet 
binary, and can just focus on Julia.
   ```
   ci/build.py --docker-registry mxnetci --platform ubuntu_cpu_julia 
/work/runtime_functions.sh build_julia_docs
   ```
   




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya commented on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-530953036
 
 
   ```
   nosetests -s -v tests/nightly/test_large_vector.py:test_sequence_last
   test_large_vector.test_sequence_last ... ok
   
   ----------------------------------------------------------------------
   Ran 1 test in 5.177s
   
   OK
   ```




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya commented on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-530952767
 
 
   @access2rohit @apeforest take a look. Thanks.




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya commented on issue #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156#issuecomment-530952650
 
 
   @mxnet-label-bot add [pr-awaiting-review]




[GitHub] [incubator-mxnet] marcoabreu commented on issue #16134: Incorrect subtraction

2019-09-12 Thread GitBox
marcoabreu commented on issue #16134: Incorrect subtraction
URL: 
https://github.com/apache/incubator-mxnet/issues/16134#issuecomment-530952395
 
 
   You could maybe provide a Dockerfile that recreates the environment? That 
would make it easier to replicate and debug the error.




[GitHub] [incubator-mxnet] marcoabreu commented on issue #16154: Deprecated MXNET website is still online ?

2019-09-12 Thread GitBox
marcoabreu commented on issue #16154: Deprecated MXNET website is still online ?
URL: 
https://github.com/apache/incubator-mxnet/issues/16154#issuecomment-530950875
 
 
   
![image](https://user-images.githubusercontent.com/18629099/64810866-7bca9800-d59c-11e9-8276-2b3835771f5d.png)
   
   You're right, thanks!
   
   @aaronmarkham 




[GitHub] [incubator-mxnet] HahTK commented on a change in pull request #16131: Fix for duplicate subgraph inputs/outputs

2019-09-12 Thread GitBox
HahTK commented on a change in pull request #16131: Fix for duplicate subgraph 
inputs/outputs
URL: https://github.com/apache/incubator-mxnet/pull/16131#discussion_r323889132
 
 

 ##
 File path: src/operator/subgraph/subgraph_property.h
 ##
 @@ -296,8 +296,20 @@ class SubgraphProperty {
*/
   virtual void ConnectSubgraphOutputs(const nnvm::NodePtr subgraph_node,
   std::vector* 
output_entries) const {
+// Collapse output_entries pointing to same NodeEntry
 
 Review comment:
   The pseudo code looks functionally correct. However, we probably do not want 
to do it that way. The reason it is not desirable becomes clearer if I were to 
make a more abstracted version of that same pseudo code
   
   *Highly* Abstracted pseudo code of @ZhennanQin 's proposal -
   ```
   split_output_entries_into_dupes_and_unique() //also maps them to each other
   subg_prop->ConnectSubgraphUNIQUEOutputs() //but the function name is kept as 
ConnectSubgraphOutputs() 
   connect_dupe_output_entries()
   ```
   
   So it basically -
   - hobbles ConnectSubgraphOutputs() because it no longer connects all output 
entries and must depend on some external, undefined helper code to be used 
correctly. (If the outputs are not unique then it will do the wrong thing by 
duping the connections, so we also keep the bug.)
   - creates a misnomer because ConnectSubgraphOutputs() is really 
ConnectSubgraphUNIQUEOutputs()
   - the effort to split the unique and dupe entries is not cheaper than doing 
it in subgraph_property as per the original proposal. Possibly more expensive
   - the dupe work involved in setting sym.outputs is still there and not 
eliminated by this proposal.
   
   


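The split-then-reconnect step in the abstracted pseudo code above can be sketched generically: first occurrences form the unique output list, and each duplicate records the unique slot it must be rewired to afterwards. (Hypothetical helper for illustration, not the actual `subgraph_property.h` code:)

```python
def split_outputs(entries):
    # Map each entry to the slot of its first occurrence; duplicates
    # remember that slot so they can be reconnected after the unique
    # outputs have been wired up.
    first_slot = {}
    unique, dupe_map = [], {}
    for idx, entry in enumerate(entries):
        if entry in first_slot:
            dupe_map[idx] = first_slot[entry]   # duplicate -> unique slot
        else:
            first_slot[entry] = len(unique)
            unique.append(entry)
    return unique, dupe_map

unique, dupes = split_outputs(["conv0", "relu0", "conv0"])
# unique == ["conv0", "relu0"], dupes == {2: 0}
```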


[GitHub] [incubator-mxnet] Vikas-kum commented on issue #15837: Numpy add numpy op indices

2019-09-12 Thread GitBox
Vikas-kum commented on issue #15837: Numpy add numpy op indices
URL: https://github.com/apache/incubator-mxnet/pull/15837#issuecomment-530935264
 
 
   backward compatibility check is failing for this PR: 
   Can you check this and make sure that it passes - 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/restricted-backwards-compatibility-checker/detail/restricted-backwards-compatibility-checker/778/pipeline




[GitHub] [incubator-mxnet] Vikas-kum commented on issue #16138: julia: fix `mx.forward` kwargs checking

2019-09-12 Thread GitBox
Vikas-kum commented on issue #16138: julia: fix `mx.forward` kwargs checking
URL: https://github.com/apache/incubator-mxnet/pull/16138#issuecomment-530934646
 
 
   @iblis17 the ci is failing on this PR. Please take a look at this - 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/master/1034/pipeline
 
   
   And make sure that CI passes and there is at least one review (just another 
pair of eyes looking at the changes)
   




[GitHub] [incubator-mxnet] zhreshold commented on issue #16114: improve dataloader signals and messages

2019-09-12 Thread GitBox
zhreshold commented on issue #16114: improve dataloader signals and messages
URL: https://github.com/apache/incubator-mxnet/pull/16114#issuecomment-530934136
 
 
   @leezu the timeout is for dataloader workers, not including the network 
training on the main thread. Is there any use case where each batch on cpu can 
take up to 2min?




[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #16156: Sequence last fix

2019-09-12 Thread GitBox
ChaiBapchya opened a new pull request #16156: Sequence last fix
URL: https://github.com/apache/incubator-mxnet/pull/16156
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] apeforest commented on issue #16155: set fixed seed for profiler

2019-09-12 Thread GitBox
apeforest commented on issue #16155: set fixed seed for profiler
URL: https://github.com/apache/incubator-mxnet/pull/16155#issuecomment-530918599
 
 
   @ChaiBapchya review please




[GitHub] [incubator-mxnet] apeforest opened a new pull request #16155: set fixed seed for profiler

2019-09-12 Thread GitBox
apeforest opened a new pull request #16155: set fixed seed for profiler
URL: https://github.com/apache/incubator-mxnet/pull/16155
 
 
   ## Description ##
   Use a fixed seed for profiler to reduce run-to-run variation.


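For context, pinning the seed means every profiler run generates identical input data, so run-to-run timing differences come from the code rather than the data. A minimal stdlib illustration of the idea (not the actual profiler script):

```python
import random

def make_workload(seed, n=4):
    # With a fixed seed the generated "input data" is identical every run.
    random.seed(seed)
    return [random.random() for _ in range(n)]

# Two runs with the same seed produce identical workloads.
assert make_workload(42) == make_workload(42)
```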


[GitHub] [incubator-mxnet] matteosal edited a comment on issue #16143: Failure of MKL-DNN Convolution from C API

2019-09-12 Thread GitBox
matteosal edited a comment on issue #16143: Failure of MKL-DNN Convolution from 
C API
URL: 
https://github.com/apache/incubator-mxnet/issues/16143#issuecomment-530901486
 
 
   I've discovered an example which is not fixed by the above patch. This 
time it involves a more complex symbol with multiple ops, and it's not 
reproducible by making it simpler. Again, this doesn't happen in python:
   ```
    #include <stdio.h>
   
   #include "mxnet/c_api.h"
   #include "nnvm/c_api.h"
   
   int main() {
   
 SymbolHandle sym;
 char json[] = 
 "{\"nodes\":[{\"op\":\"null\",\"name\":\"input\",\"inputs\":[]},{\"op\
   \":\"null\",\"name\":\"w1\",\"inputs\":[]},{\"op\":\"null\",\"name\":\
   \"b1\",\"inputs\":[]},{\"op\":\"Convolution\",\"name\":\"conv1\",\"\
   attrs\":{\"cudnn_off\":\"0\",\"dilate\":\"(1, 1)\",\"kernel\":\"(1, \
   1)\",\"layout\":\"None\",\"no_bias\":\"False\",\"num_filter\":\"1\",\"\
   num_group\":\"1\",\"pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[0,0,0],[1,0,0],[2,0,0]]},{\"op\":\"null\",\"name\":\
   \"w2\",\"inputs\":[]},{\"op\":\"null\",\"name\":\"b2\",\"inputs\":[]},\
   {\"op\":\"Deconvolution\",\"name\":\"deconv\",\"attrs\":{\"dilate\":\"\
   (1, 1)\",\"kernel\":\"(1, \
   1)\",\"no_bias\":\"False\",\"num_filter\":\"8\",\"num_group\":\"1\",\"\
   pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[3,0,0],[4,0,0],[5,0,0]]},{\"op\":\"null\",\"name\":\
   \"w3\",\"inputs\":[]},{\"op\":\"null\",\"name\":\"b3\",\"inputs\":[]},\
   {\"op\":\"Convolution\",\"name\":\"conv2\",\"attrs\":{\"cudnn_off\":\"\
   0\",\"dilate\":\"(1, 1)\",\"kernel\":\"(1, \
   1)\",\"layout\":\"None\",\"no_bias\":\"False\",\"num_filter\":\"8\",\"\
   num_group\":\"1\",\"pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[6,0,0],[7,0,0],[8,0,0]]},{\"op\":\"_copy\",\"name\"\
   :\"out\",\"inputs\":[[9,0,0]]}],\"arg_nodes\":[0,1,2,4,5,7,8],\"node_\
   row_ptr\":[0,1,2,3,4,5,6,7,8,9,10,11],\"heads\":[[10,0,0]],\"attrs\":{\
   \"mxnet_version\":[\"int\",10500]}}";
 
  MXSymbolCreateFromJSON(json, &sym);

  /* Create NDArrays for arguments */
  int dev_type = 1;
  int dev_id = 0;

  mx_uint in_shape[4] = {1, 3, 10, 10};
  NDArrayHandle in_arg_arr;
  MXNDArrayCreateEx(in_shape, 4, dev_type, dev_id, 0, 0, &in_arg_arr);
  mx_uint w1_shape[4] = {1, 3, 1, 1};
  NDArrayHandle w1_arg_arr, w1_grad_arr;
  MXNDArrayCreateEx(w1_shape, 4, dev_type, dev_id, 0, 0, &w1_arg_arr);
  MXNDArrayCreateEx(w1_shape, 4, dev_type, dev_id, 0, 0, &w1_grad_arr);
  mx_uint b1_shape[1] = {1};
  NDArrayHandle b1_arg_arr;
  MXNDArrayCreateEx(b1_shape, 1, dev_type, dev_id, 0, 0, &b1_arg_arr);
  mx_uint w2_shape[4] = {1, 8, 1, 1};
  NDArrayHandle w2_arg_arr;
  MXNDArrayCreateEx(w2_shape, 4, dev_type, dev_id, 0, 0, &w2_arg_arr);
  mx_uint b2_shape[1] = {8};
  NDArrayHandle b2_arg_arr;
  MXNDArrayCreateEx(b2_shape, 1, dev_type, dev_id, 0, 0, &b2_arg_arr);
  mx_uint w3_shape[4] = {8, 8, 1, 1};
  NDArrayHandle w3_arg_arr;
  MXNDArrayCreateEx(w3_shape, 4, dev_type, dev_id, 0, 0, &w3_arg_arr);
  mx_uint b3_shape[1] = {8};
  NDArrayHandle b3_arg_arr;
  MXNDArrayCreateEx(b3_shape, 1, dev_type, dev_id, 0, 0, &b3_arg_arr);

  mx_uint outgrad_shape[4] = {1, 8, 10, 10};
  NDArrayHandle outgrad_arr;
  MXNDArrayCreateEx(outgrad_shape, 4, dev_type, dev_id, 0, 0, &outgrad_arr);

  /* Create and bind executor */
  ExecutorHandle ex;
  NDArrayHandle arg[7] = {in_arg_arr, w1_arg_arr, b1_arg_arr, w2_arg_arr,
                          b2_arg_arr, w3_arg_arr, b3_arg_arr};
  NDArrayHandle grad[7] = {NULL, w1_grad_arr, NULL, NULL, NULL, NULL, NULL};
  NDArrayHandle *aux = NULL;
  mx_uint req[7] = {0, 1, 0, 0, 0, 0, 0};
  MXExecutorBind(sym, dev_type, dev_id, 7, arg, grad, req, 0, aux, &ex);
 
 /* Forward, backward */
 NDArrayHandle outgrad_vec[1] = {outgrad_arr};
 MXExecutorForward(ex, 1);
 MXExecutorBackward(ex, 1, outgrad_vec);
 
 /* Read output */
 void *data;
 if(MXNDArrayWaitToRead(w1_grad_arr) != 0)
   printf("%s\n", MXGetLastError());
 else
   printf("Ok!\n");
 return 0;
   }
   ```
   It fails with the same error, but at `MXNDArrayWaitToRead` instead of 
`MXNDArrayGetData`. Complete error message:
   ```
   [18:21:33] src/ndarray/ndarray.cc:757: Check failed: !IsMKLDNNData(): We 
can't generate TBlob for MKLDNN data. Please use Reorder2Default() to generate 
a new NDArray first
   Stack trace:
 [bt] (0) libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x43) 
[0x7f3feefb6ac3]
 [bt] (1) libmxnet.so(mxnet::NDArray::SetTBlob() const+0x2fc) 
[0x7f3ff16c9f4c]
  [bt] (2) libmxnet.so(mxnet::op::MKLDNNDeconvolutionBackward(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::NDArray> const&, std::vector<mxnet::OpReqType> const&, std::vector<mxnet::NDArray> const&)+0x5e7) [0x7f3fef0a9b67]
 [bt] (3) libmxnet.so(+0x26f2022) [0x7f3ff10cf022]
 [bt] (4) 
libmxnet.so(mxnet::exec::FComputeExExecutor::Run(mxnet::RunContext, 
bool)+0x2d1) 

[GitHub] [incubator-mxnet] comaniac commented on a change in pull request #15815: Numpy add numpy op hanning, hamming, blackman

2019-09-12 Thread GitBox
comaniac commented on a change in pull request #15815: Numpy add numpy op 
hanning, hamming, blackman
URL: https://github.com/apache/incubator-mxnet/pull/15815#discussion_r323835532
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -2145,3 +2145,275 @@ def argmax(a, axis=None, out=None):
 array([2., 2.])
 """
 return _npi.argmax(a, axis=axis, keepdims=False, out=out)
+
+
+@set_module('mxnet.ndarray.numpy')
+def hanning(M, dtype=_np.float64, ctx=None):
+r"""Return the Hanning window.
+
+The Hanning window is a taper formed by using a weighted cosine.
+
+Parameters
+----------
+M : int
+Number of points in the output window. If zero or less, an
+empty array is returned.
+dtype : str or numpy.dtype, optional
+An optional value type. Default is `numpy.float64`. Note that you need to
+select numpy.float32 or float64 in this operator.
+ctx : Context, optional
+An optional device context (default is the current default context).
+
+Returns
+---
+out : ndarray, shape(M,)
+The window, with the maximum value normalized to one (the value
+one appears only if `M` is odd).
+
+See Also
+--------
+blackman, hamming
+
+Notes
+-----
+The Hanning window is defined as
+
+.. math::  w(n) = 0.5 - 0.5cos\left(\frac{2\pi{n}}{M-1}\right)
+   \qquad 0 \leq n \leq M-1
+
+The Hanning was named for Julius von Hann, an Austrian meteorologist.
+It is also known as the Cosine Bell. Some authors prefer that it be
+called a Hann window, to help avoid confusion with the very similar
+Hamming window.
+
+Most references to the Hanning window come from the signal processing
+literature, where it is used as one of many windowing functions for
+smoothing values.  It is also known as an apodization (which means
+"removing the foot", i.e. smoothing discontinuities at the beginning
+and end of the sampled signal) or tapering function.
+
+References
+----------
+.. [1] Blackman, R.B. and Tukey, J.W., (1958) The measurement of power
+   spectra, Dover Publications, New York.
+.. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics",
+   The University of Alberta Press, 1975, pp. 106-108.
+.. [3] Wikipedia, "Window function",
+   http://en.wikipedia.org/wiki/Window_function
+.. [4] W.H. Press,  B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling,
+   "Numerical Recipes", Cambridge University Press, 1986, page 425.
+
+Examples
+--------
+>>> np.hanning(12)
+array([0.e+00, 7.93732437e-02, 2.92292528e-01, 5.71157416e-01,
+   8.27430424e-01, 9.79746513e-01, 9.79746489e-01, 8.27430268e-01,
+   5.71157270e-01, 2.92292448e-01, 7.93731320e-02, 1.06192832e-13], 
dtype=float64)
+
+Plot the window and its frequency response:
+
+>>> import matplotlib.pyplot as plt
+>>> window = np.hanning(51)
+>>> plt.plot(window.asnumpy())
+[]
+>>> plt.title("Hann window")
+Text(0.5, 1.0, 'Hann window')
+>>> plt.ylabel("Amplitude")
+Text(0, 0.5, 'Amplitude')
+>>> plt.xlabel("Sample")
+Text(0.5, 0, 'Sample')
+>>> plt.show()
+"""
+if dtype is None:
+dtype = _np.float64
+if ctx is None:
+ctx = current_context()
+return _npi.hanning(M, dtype=dtype, ctx=ctx)
+
+
+@set_module('mxnet.ndarray.numpy')
+def hamming(M, dtype=_np.float64, ctx=None):
 
 Review comment:
   Yes, the dtype parameter is required for this API, and as far as I know our 
default type is float32. Please double-confirm with @haojin2. 


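The docstring's defining formula, w(n) = 0.5 - 0.5*cos(2*pi*n/(M-1)) for 0 <= n <= M-1, can be checked directly with a small pure-Python evaluation (illustrative only, independent of the operator under review):

```python
import math

def hanning(M):
    # w(n) = 0.5 - 0.5*cos(2*pi*n/(M-1)), with the usual M <= 1 edge cases.
    if M < 1:
        return []
    if M == 1:
        return [1.0]
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / (M - 1)) for n in range(M)]

print(hanning(3))  # endpoints ~0.0, peak 1.0 in the middle
```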


[GitHub] [incubator-mxnet] matteosal edited a comment on issue #16143: Failure of MKL-DNN Convolution from C API

2019-09-12 Thread GitBox
matteosal edited a comment on issue #16143: Failure of MKL-DNN Convolution from 
C API
URL: 
https://github.com/apache/incubator-mxnet/issues/16143#issuecomment-530901486
 
 
   I've discovered an example which is not fixed by the above patch. Again, I 
couldn't reproduce this problem using python:
   ```
    #include <stdio.h>
   
   #include "mxnet/c_api.h"
   #include "nnvm/c_api.h"
   
   int main() {
   
 SymbolHandle sym;
 char json[] = 
 "{\"nodes\":[{\"op\":\"null\",\"name\":\"input\",\"inputs\":[]},{\"op\
   \":\"null\",\"name\":\"w1\",\"inputs\":[]},{\"op\":\"null\",\"name\":\
   \"b1\",\"inputs\":[]},{\"op\":\"Convolution\",\"name\":\"conv1\",\"\
   attrs\":{\"cudnn_off\":\"0\",\"dilate\":\"(1, 1)\",\"kernel\":\"(1, \
   1)\",\"layout\":\"None\",\"no_bias\":\"False\",\"num_filter\":\"1\",\"\
   num_group\":\"1\",\"pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[0,0,0],[1,0,0],[2,0,0]]},{\"op\":\"null\",\"name\":\
   \"w2\",\"inputs\":[]},{\"op\":\"null\",\"name\":\"b2\",\"inputs\":[]},\
   {\"op\":\"Deconvolution\",\"name\":\"deconv\",\"attrs\":{\"dilate\":\"\
   (1, 1)\",\"kernel\":\"(1, \
   1)\",\"no_bias\":\"False\",\"num_filter\":\"8\",\"num_group\":\"1\",\"\
   pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[3,0,0],[4,0,0],[5,0,0]]},{\"op\":\"null\",\"name\":\
   \"w3\",\"inputs\":[]},{\"op\":\"null\",\"name\":\"b3\",\"inputs\":[]},\
   {\"op\":\"Convolution\",\"name\":\"conv2\",\"attrs\":{\"cudnn_off\":\"\
   0\",\"dilate\":\"(1, 1)\",\"kernel\":\"(1, \
   1)\",\"layout\":\"None\",\"no_bias\":\"False\",\"num_filter\":\"8\",\"\
   num_group\":\"1\",\"pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[6,0,0],[7,0,0],[8,0,0]]},{\"op\":\"_copy\",\"name\"\
   :\"out\",\"inputs\":[[9,0,0]]}],\"arg_nodes\":[0,1,2,4,5,7,8],\"node_\
   row_ptr\":[0,1,2,3,4,5,6,7,8,9,10,11],\"heads\":[[10,0,0]],\"attrs\":{\
   \"mxnet_version\":[\"int\",10500]}}";
 
  MXSymbolCreateFromJSON(json, &sym);

  /* Create NDArrays for arguments */
  int dev_type = 1;
  int dev_id = 0;

  mx_uint in_shape[4] = {1, 3, 10, 10};
  NDArrayHandle in_arg_arr;
  MXNDArrayCreateEx(in_shape, 4, dev_type, dev_id, 0, 0, &in_arg_arr);
  mx_uint w1_shape[4] = {1, 3, 1, 1};
  NDArrayHandle w1_arg_arr, w1_grad_arr;
  MXNDArrayCreateEx(w1_shape, 4, dev_type, dev_id, 0, 0, &w1_arg_arr);
  MXNDArrayCreateEx(w1_shape, 4, dev_type, dev_id, 0, 0, &w1_grad_arr);
  mx_uint b1_shape[1] = {1};
  NDArrayHandle b1_arg_arr;
  MXNDArrayCreateEx(b1_shape, 1, dev_type, dev_id, 0, 0, &b1_arg_arr);
  mx_uint w2_shape[4] = {1, 8, 1, 1};
  NDArrayHandle w2_arg_arr;
  MXNDArrayCreateEx(w2_shape, 4, dev_type, dev_id, 0, 0, &w2_arg_arr);
  mx_uint b2_shape[1] = {8};
  NDArrayHandle b2_arg_arr;
  MXNDArrayCreateEx(b2_shape, 1, dev_type, dev_id, 0, 0, &b2_arg_arr);
  mx_uint w3_shape[4] = {8, 8, 1, 1};
  NDArrayHandle w3_arg_arr;
  MXNDArrayCreateEx(w3_shape, 4, dev_type, dev_id, 0, 0, &w3_arg_arr);
  mx_uint b3_shape[1] = {8};
  NDArrayHandle b3_arg_arr;
  MXNDArrayCreateEx(b3_shape, 1, dev_type, dev_id, 0, 0, &b3_arg_arr);

  mx_uint outgrad_shape[4] = {1, 8, 10, 10};
  NDArrayHandle outgrad_arr;
  MXNDArrayCreateEx(outgrad_shape, 4, dev_type, dev_id, 0, 0, &outgrad_arr);

  /* Create and bind executor */
  ExecutorHandle ex;
  NDArrayHandle arg[7] = {in_arg_arr, w1_arg_arr, b1_arg_arr, w2_arg_arr,
                          b2_arg_arr, w3_arg_arr, b3_arg_arr};
  NDArrayHandle grad[7] = {NULL, w1_grad_arr, NULL, NULL, NULL, NULL, NULL};
  NDArrayHandle *aux = NULL;
  mx_uint req[7] = {0, 1, 0, 0, 0, 0, 0};
  MXExecutorBind(sym, dev_type, dev_id, 7, arg, grad, req, 0, aux, &ex);
 
 /* Forward, backward */
 NDArrayHandle outgrad_vec[1] = {outgrad_arr};
 MXExecutorForward(ex, 1);
 MXExecutorBackward(ex, 1, outgrad_vec);
 
 /* Read output */
 void *data;
 if(MXNDArrayWaitToRead(w1_grad_arr) != 0)
   printf("%s\n", MXGetLastError());
 else
   printf("Ok!\n");
 return 0;
   }
   ```
   It fails with the same error, but at `MXNDArrayWaitToRead` instead of 
`MXNDArrayGetData`. Complete error message:
   ```
   [18:21:33] src/ndarray/ndarray.cc:757: Check failed: !IsMKLDNNData(): We 
can't generate TBlob for MKLDNN data. Please use Reorder2Default() to generate 
a new NDArray first
   Stack trace:
 [bt] (0) libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x43) 
[0x7f3feefb6ac3]
 [bt] (1) libmxnet.so(mxnet::NDArray::SetTBlob() const+0x2fc) 
[0x7f3ff16c9f4c]
  [bt] (2) libmxnet.so(mxnet::op::MKLDNNDeconvolutionBackward(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::NDArray> const&, std::vector<mxnet::OpReqType> const&, std::vector<mxnet::NDArray> const&)+0x5e7) [0x7f3fef0a9b67]
 [bt] (3) libmxnet.so(+0x26f2022) [0x7f3ff10cf022]
 [bt] (4) 
libmxnet.so(mxnet::exec::FComputeExExecutor::Run(mxnet::RunContext, 
bool)+0x2d1) [0x7f3ff151f4d1]
 [bt] (5) libmxnet.so(+0x2aff246) [0x7f3ff14dc246]
 [bt] (6) 

[GitHub] [incubator-mxnet] matteosal commented on issue #16143: Failure of MKL-DNN Convolution from C API

2019-09-12 Thread GitBox
matteosal commented on issue #16143: Failure of MKL-DNN Convolution from C API
URL: 
https://github.com/apache/incubator-mxnet/issues/16143#issuecomment-530901486
 
 
   I've discovered an example which is not fixed by the above patch. Again, I 
couldn't reproduce this problem using python:
   ```
    #include <stdio.h>
   
   #include "mxnet/c_api.h"
   #include "nnvm/c_api.h"
   
   int main() {
   
 SymbolHandle sym;
 char json[] = 
 "{\"nodes\":[{\"op\":\"null\",\"name\":\"input\",\"inputs\":[]},{\"op\
   \":\"null\",\"name\":\"w1\",\"inputs\":[]},{\"op\":\"null\",\"name\":\
   \"b1\",\"inputs\":[]},{\"op\":\"Convolution\",\"name\":\"conv1\",\"\
   attrs\":{\"cudnn_off\":\"0\",\"dilate\":\"(1, 1)\",\"kernel\":\"(1, \
   1)\",\"layout\":\"None\",\"no_bias\":\"False\",\"num_filter\":\"1\",\"\
   num_group\":\"1\",\"pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[0,0,0],[1,0,0],[2,0,0]]},{\"op\":\"null\",\"name\":\
   \"w2\",\"inputs\":[]},{\"op\":\"null\",\"name\":\"b2\",\"inputs\":[]},\
   {\"op\":\"Deconvolution\",\"name\":\"deconv\",\"attrs\":{\"dilate\":\"\
   (1, 1)\",\"kernel\":\"(1, \
   1)\",\"no_bias\":\"False\",\"num_filter\":\"8\",\"num_group\":\"1\",\"\
   pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[3,0,0],[4,0,0],[5,0,0]]},{\"op\":\"null\",\"name\":\
   \"w3\",\"inputs\":[]},{\"op\":\"null\",\"name\":\"b3\",\"inputs\":[]},\
   {\"op\":\"Convolution\",\"name\":\"conv2\",\"attrs\":{\"cudnn_off\":\"\
   0\",\"dilate\":\"(1, 1)\",\"kernel\":\"(1, \
   1)\",\"layout\":\"None\",\"no_bias\":\"False\",\"num_filter\":\"8\",\"\
   num_group\":\"1\",\"pad\":\"(0, 0)\",\"stride\":\"(1, \
   1)\"},\"inputs\":[[6,0,0],[7,0,0],[8,0,0]]},{\"op\":\"_copy\",\"name\"\
   :\"out\",\"inputs\":[[9,0,0]]}],\"arg_nodes\":[0,1,2,4,5,7,8],\"node_\
   row_ptr\":[0,1,2,3,4,5,6,7,8,9,10,11],\"heads\":[[10,0,0]],\"attrs\":{\
   \"mxnet_version\":[\"int\",10500]}}";
 
  MXSymbolCreateFromJSON(json, &sym);

  /* Create NDArrays for arguments */
  int dev_type = 1;
  int dev_id = 0;

  mx_uint in_shape[4] = {1, 3, 10, 10};
  NDArrayHandle in_arg_arr;
  MXNDArrayCreateEx(in_shape, 4, dev_type, dev_id, 0, 0, &in_arg_arr);
  mx_uint w1_shape[4] = {1, 3, 1, 1};
  NDArrayHandle w1_arg_arr, w1_grad_arr;
  MXNDArrayCreateEx(w1_shape, 4, dev_type, dev_id, 0, 0, &w1_arg_arr);
  MXNDArrayCreateEx(w1_shape, 4, dev_type, dev_id, 0, 0, &w1_grad_arr);
  mx_uint b1_shape[1] = {1};
  NDArrayHandle b1_arg_arr;
  MXNDArrayCreateEx(b1_shape, 1, dev_type, dev_id, 0, 0, &b1_arg_arr);
  mx_uint w2_shape[4] = {1, 8, 1, 1};
  NDArrayHandle w2_arg_arr;
  MXNDArrayCreateEx(w2_shape, 4, dev_type, dev_id, 0, 0, &w2_arg_arr);
  mx_uint b2_shape[1] = {8};
  NDArrayHandle b2_arg_arr;
  MXNDArrayCreateEx(b2_shape, 1, dev_type, dev_id, 0, 0, &b2_arg_arr);
  mx_uint w3_shape[4] = {8, 8, 1, 1};
  NDArrayHandle w3_arg_arr;
  MXNDArrayCreateEx(w3_shape, 4, dev_type, dev_id, 0, 0, &w3_arg_arr);
  mx_uint b3_shape[1] = {8};
  NDArrayHandle b3_arg_arr;
  MXNDArrayCreateEx(b3_shape, 1, dev_type, dev_id, 0, 0, &b3_arg_arr);

  mx_uint outgrad_shape[4] = {1, 8, 10, 10};
  NDArrayHandle outgrad_arr;
  MXNDArrayCreateEx(outgrad_shape, 4, dev_type, dev_id, 0, 0, &outgrad_arr);

  /* Create and bind executor */
  ExecutorHandle ex;
  NDArrayHandle arg[7] = {in_arg_arr, w1_arg_arr, b1_arg_arr, w2_arg_arr,
                          b2_arg_arr, w3_arg_arr, b3_arg_arr};
  NDArrayHandle grad[7] = {NULL, w1_grad_arr, NULL, NULL, NULL, NULL, NULL};
  NDArrayHandle *aux = NULL;
  mx_uint req[7] = {0, 1, 0, 0, 0, 0, 0};
  MXExecutorBind(sym, dev_type, dev_id, 7, arg, grad, req, 0, aux, &ex);
 
 /* Forward, backward */
 NDArrayHandle outgrad_vec[1] = {outgrad_arr};
 MXExecutorForward(ex, 1);
 MXExecutorBackward(ex, 1, outgrad_vec);
 
 /* Read output */
 void *data;
 if(MXNDArrayWaitToRead(w1_grad_arr) != 0)
   printf("%s\n", MXGetLastError());
 else
   printf("Ok!\n");
 return 0;
   }
   ```




[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
sxjscience commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323832234
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -3873,3 +3873,47 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
 """
 return _mx_nd_np.indices(dimensions=dimensions, dtype=dtype, ctx=ctx)
 # pylint: enable=redefined-outer-name
+
+
+@set_module('mxnet.numpy')
+def ravel(x, order='C'):
+r"""
+ravel(x)
+
+Return a contiguous flattened array.
+A 1-D array, containing the elements of the input, is returned.  A copy is
+made only if needed.
+
+Parameters
+--
+x : ndarray
+Input array.  The elements in `x` are read in row-major, C-style order 
and
+packed as a 1-D array.
 
 Review comment:
   Add docstring for order




[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
sxjscience commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323832082
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1323,6 +1323,39 @@ def hybrid_forward(self, F, a, *args):
 assert same(mx_out.asnumpy(), np_out)
 
 
+@with_seed()
+@use_np
+def test_np_ravel():
+    class TestRavel(HybridBlock):
+        def __init__(self):
+            super(TestRavel, self).__init__()
+
+        def hybrid_forward(self, F, a):
+            return F.np.ravel(a)
+
+    types = ['float64', 'float32', 'float16', 'int64', 'int32', 'int8']
+    for oneType in types:
+        for hybridize in [True, False]:
+            for shape in [(), (2,), (2, 2), (1, 2, 3), (3, 0), (1, 0, 2)]:
+                test_ravel = TestRavel()
+                if hybridize:
+                    test_ravel.hybridize()
+                x = rand_ndarray(shape, dtype=oneType).as_np_ndarray()
+                x.attach_grad()
+                np_out = _np.ravel(x.asnumpy())
+                with mx.autograd.record():
+                    mx_out = test_ravel(x)
+                assert mx_out.shape == np_out.shape
 
 Review comment:
   I find that there are two `test_np_ravel`s in this file. It's safe to remove 
this one.
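A quick illustration of why a duplicated test function name is worth removing: in Python, a later definition with the same name silently shadows the earlier one, so only the second `test_np_ravel` would ever be collected and run.

```python
# The later definition silently replaces the earlier one.
def test_np_ravel():
    return "first"

def test_np_ravel():
    return "second"

print(test_np_ravel())  # second
```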




[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
sxjscience commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323831244
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -2748,4 +2748,44 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
 # pylint: enable=redefined-outer-name
 
 
+@set_module('mxnet.symbol.numpy')
+def ravel(x, order='C'):
+r"""
+ravel(x)
+
+Return a contiguous flattened array.
+A 1-D array, containing the elements of the input, is returned.  A copy is
+made only if needed.
+
+Parameters
+--
+x : ndarray
+Input array.  The elements in `x` are read in row-major, C-style order 
and
+packed as a 1-D array.
+out : ndarray or None, optional
+A location into which the result is stored. If not provided or `None`,
+a freshly-allocated array is returned.
 
 Review comment:
   Fix docstring




[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-12 Thread GitBox
sxjscience commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r323830449
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -149,6 +149,66 @@ def hybrid_forward(self, F, a):
 assert same(a.grad.asnumpy(), expected_grad)
 
 
+@with_seed()
+@use_np
+def test_np_copysign():
+    class TestCopysign(HybridBlock):
+        def __init__(self):
+            super(TestCopysign, self).__init__()
+
+        def hybrid_forward(self, F, a1, a2):
+            return F.np.copysign(a1, a2)
+
+    def get_grad(a1, a2):
+        sign = _np.logical_or(_np.logical_and(a1 < 0, a2 < 0),
+                              _np.logical_and(a1 >= 0, a2 >= 0))
+        sign = 2 * sign.astype(int) - 1
+        sign = sign.reshape(-1, *a1.shape)
+        sign = _np.sum(sign, axis=0)
+        return sign, _np.zeros_like(a2)
+
+    shapes = [
+        (),
+        (1),
+        (2, 1),
+        (3, 2, 1),
+        (4, 3, 2, 1),
+        (2, 4, 3, 2, 1)
+    ]
+    types = ['float16', 'float32', 'float64', 'int8', 'int32', 'int64']
+    for a1shape in shapes:
+        for a2shape in shapes:
+            for hybridize in [True, False]:
+                for dtype in types:
+                    test_copysign = TestCopysign()
+                    if hybridize:
+                        test_copysign.hybridize()
+                    rtol = 1e-3
+                    atol = 1e-5
+                    a1_np = _np.array(_np.random.uniform(-1.0, 1.0, a1shape), dtype=dtype)
+                    a2_np = _np.array(_np.random.uniform(-1.0, 1.0, a2shape), dtype=dtype)
+                    a1 = mx.nd.array(a1_np).as_np_ndarray()
 
 Review comment:
   Currently the dtype of the original numpy array will not be passed to 
construct the mxnet numpy array.
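A pure-NumPy illustration of the dtype point raised above (NumPy itself preserves dtype when re-wrapping an array; per the comment, the reviewed `mx.nd.array(a1_np)` call would need the dtype forwarded explicitly — e.g. a hypothetical `mx.nd.array(a1_np, dtype=a1_np.dtype)`):

```python
import numpy as np

a1_np = np.array([1.0, -2.0], dtype='float16')
# Re-wrapping with np.array keeps the original dtype...
b = np.array(a1_np)
print(b.dtype)  # float16
# ...whereas a constructor that defaults to float32 would silently upcast,
# which is the concern raised in the review comment.
```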




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16154: Deprecated MXNET website is still online ?

2019-09-12 Thread GitBox
mxnet-label-bot commented on issue #16154: Deprecated MXNET website is still 
online ?
URL: 
https://github.com/apache/incubator-mxnet/issues/16154#issuecomment-530892870
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Doc




[GitHub] [incubator-mxnet] igolan opened a new issue #16154: Deprecated MXNET website is still online ?

2019-09-12 Thread GitBox
igolan opened a new issue #16154: Deprecated MXNET website is still online ?
URL: https://github.com/apache/incubator-mxnet/issues/16154
 
 
   Hi,
   I searched for "install mxnet" in Google, and the first result was this link:
   http://mxnet.incubator.apache.org/test/get_started/install.html
   (note the /test/ )
   which looks like a deprecated version of MXNET website (if you go to the 
homepage, you'll find news about MXNET0.1 release).
   
   Although it's nice to go back in time, I find it very confusing.
   If that website is intentionally online, I think it's worth adding a big 
notice that this is an old version of MXNET, otherwise, maybe it's worth 
redirecting this url to the current version.
   
   (I know that Google results differ per user, but it also came up first on my 
friend's computer; in any case, that link should redirect to the latest 
version.)




[GitHub] [incubator-mxnet] chongruo opened a new pull request #16153: Add operator for Modulated Deformable Convolution

2019-09-12 Thread GitBox
chongruo opened a new pull request #16153: Add operator for Modulated 
Deformable Convolution
URL: https://github.com/apache/incubator-mxnet/pull/16153
 
 
   ## Description ##
   As title
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - Changes are complete (i.e. I finished coding on this PR)
   - All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   
   




[incubator-mxnet] branch master updated (d9364ba -> b4b7bfb)

2019-09-12 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d9364ba  julia: fix `mx.forward` kwargs checking (#16138)
 add b4b7bfb  Update env_var.md (#16145)

No new revisions were added by this update.

Summary of changes:
 docs/faq/env_var.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[GitHub] [incubator-mxnet] TaoLv merged pull request #16145: [DOC] Update env_var.md

2019-09-12 Thread GitBox
TaoLv merged pull request #16145: [DOC] Update env_var.md
URL: https://github.com/apache/incubator-mxnet/pull/16145
 
 
   




[incubator-mxnet] branch ib/backport-jl updated (aa3677d -> 6559b80)

2019-09-12 Thread iblis
This is an automated email from the ASF dual-hosted git repository.

iblis pushed a change to branch ib/backport-jl
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from aa3677d  update julia install doc (#15609)
 add 6559b80  julia: fix `mx.forward` kwargs checking (#16138)

No new revisions were added by this update.

Summary of changes:
 julia/src/executor.jl   |  2 +-
 julia/test/unittest/bind.jl | 15 +++
 2 files changed, 16 insertions(+), 1 deletion(-)



[incubator-mxnet] branch master updated (287e3b5 -> d9364ba)

2019-09-12 Thread iblis
This is an automated email from the ASF dual-hosted git repository.

iblis pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 287e3b5  Numpy add numpy op indices (#15837)
 add d9364ba  julia: fix `mx.forward` kwargs checking (#16138)

No new revisions were added by this update.

Summary of changes:
 julia/src/executor.jl   |  2 +-
 julia/test/unittest/bind.jl | 15 +++
 2 files changed, 16 insertions(+), 1 deletion(-)



[incubator-mxnet] branch ib/fix-forward deleted (was bb8a6f4)

2019-09-12 Thread iblis
This is an automated email from the ASF dual-hosted git repository.

iblis pushed a change to branch ib/fix-forward
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 was bb8a6f4  julia: fix `mx.forward` kwargs checking

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[GitHub] [incubator-mxnet] iblis17 merged pull request #16138: julia: fix `mx.forward` kwargs checking

2019-09-12 Thread GitBox
iblis17 merged pull request #16138: julia: fix `mx.forward` kwargs checking
URL: https://github.com/apache/incubator-mxnet/pull/16138
 
 
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-12 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 7490aec  Bump the publish timestamp.
7490aec is described below

commit 7490aec5931d966f9e19136e0e828c6b9fd039ca
Author: mxnet-ci 
AuthorDate: Thu Sep 12 13:33:44 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..6a4d512
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Sep 12 13:33:44 UTC 2019



[GitHub] [incubator-mxnet] iblis17 commented on issue #15568: julia: rename build env var `MXNET_HOME` to `MXNET_ROOT`

2019-09-12 Thread GitBox
iblis17 commented on issue #15568: julia: rename build env var `MXNET_HOME` to 
`MXNET_ROOT`
URL: https://github.com/apache/incubator-mxnet/pull/15568#issuecomment-530812290
 
 
   oh, @aaronmarkham what are the recent changes on the website builds?




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16126: [numpy] operator around

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16126: [numpy] operator 
around
URL: https://github.com/apache/incubator-mxnet/pull/16126#discussion_r323716956
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1647,6 +1647,40 @@ def hybrid_forward(self, F, a):
 assert_almost_equal(mx_out.asnumpy(), np_out, 
rtol=1e-3, atol=1e-5)
 
 
+@with_seed()
+@use_np
+def test_np_around():
+    class TestAround(HybridBlock):
+        def __init__(self, decimals):
+            super(TestAround, self).__init__()
+            # necessary initializations
+            self.decimals = decimals
+
+        def hybrid_forward(self, F, x):
+            return F.np.around(x, self.decimals)
+
+    shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test_shapes, remember to include zero-dim shape and zero-size shapes
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16126: [numpy] operator around

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16126: [numpy] operator 
around
URL: https://github.com/apache/incubator-mxnet/pull/16126#discussion_r323716994
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1647,6 +1647,40 @@ def hybrid_forward(self, F, a):
 assert_almost_equal(mx_out.asnumpy(), np_out, 
rtol=1e-3, atol=1e-5)
 
 
+@with_seed()
+@use_np
+def test_np_around():
+    class TestAround(HybridBlock):
+        def __init__(self, decimals):
+            super(TestAround, self).__init__()
+            # necessary initializations
+            self.decimals = decimals
+
+        def hybrid_forward(self, F, x):
+            return F.np.around(x, self.decimals)
+
+    shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test_shapes, remember to include zero-dim shape and zero-size shapes
+    types = ['int32', 'int64', 'float32', 'double']
+    for hybridize in [True, False]:
+        for oneType in types:
+            rtol = 1e-3
+            atol = 1e-5
+            for shape in shapes:
+                for d in range(-10, 11):
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16126: [numpy] operator around

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16126: [numpy] operator 
around
URL: https://github.com/apache/incubator-mxnet/pull/16126#discussion_r323716861
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1647,6 +1647,40 @@ def hybrid_forward(self, F, a):
 assert_almost_equal(mx_out.asnumpy(), np_out, 
rtol=1e-3, atol=1e-5)
 
 
+@with_seed()
+@use_np
+def test_np_around():
+    class TestAround(HybridBlock):
+        def __init__(self, decimals):
+            super(TestAround, self).__init__()
+            # necessary initializations
+            self.decimals = decimals
+
+        def hybrid_forward(self, F, x):
+            return F.np.around(x, self.decimals)
+
+    shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test_shapes, remember to include zero-dim shape and zero-size shapes
+    types = ['int32', 'int64', 'float32', 'double']
+    for hybridize in [True, False]:
+        for oneType in types:
+            rtol = 1e-3
+            atol = 1e-5
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16126: [numpy] operator around

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16126: [numpy] operator 
around
URL: https://github.com/apache/incubator-mxnet/pull/16126#discussion_r323716913
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1647,6 +1647,40 @@ def hybrid_forward(self, F, a):
 assert_almost_equal(mx_out.asnumpy(), np_out, 
rtol=1e-3, atol=1e-5)
 
 
+@with_seed()
+@use_np
+def test_np_around():
+    class TestAround(HybridBlock):
+        def __init__(self, decimals):
+            super(TestAround, self).__init__()
+            # necessary initializations
+            self.decimals = decimals
+
+        def hybrid_forward(self, F, x):
+            return F.np.around(x, self.decimals)
+
+    shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test_shapes, remember to include zero-dim shape and zero-size shapes
+    types = ['int32', 'int64', 'float32', 'double']
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16126: [numpy] operator around

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16126: [numpy] operator 
around
URL: https://github.com/apache/incubator-mxnet/pull/16126#discussion_r323710250
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op.h
 ##
 @@ -560,6 +560,101 @@ struct ReshapeLikeParam : public dmlc::Parameter<ReshapeLikeParam> {
   }
 };
 
+struct AroundParam : public dmlc::Parameter<AroundParam> {
+  int decimals;
+  DMLC_DECLARE_PARAMETER(AroundParam) {
+    DMLC_DECLARE_FIELD(decimals)
+      .set_default(0)
+      .describe("Number of decimal places to round to.");
+  }
+};
+
+template<int req>
+struct around_forwardint {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, DType* out_data, const DType* in_data,
+                                  const int decimals) {
+    KERNEL_ASSIGN(out_data[i], req, in_data[i]);
+  }
+};
+
+template<int req>
+struct around_forward {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, DType* out_data, const DType* in_data,
+                                  const int decimals) {
+    int d = 0;
+    DType temp = in_data[i];
+    DType roundtemp;
+    while (d != decimals) {
+      if (decimals > 0) {
+        d++;
+        temp *= 10;
+      } else {
+        d--;
+        temp /= 10;
+      }
+    }
+    roundtemp = (DType)round(static_cast<double>(temp));
+    // If temp is x.5 and roundtemp is odd number, decrease or increase roundtemp by 1.
+    // For example, in numpy, around(0.5) should be 0 but in c, round(0.5) is 1.
+    if (roundtemp - temp == 0.5 && (static_cast<int>(roundtemp)) % 2 != 0) {
+      roundtemp -= 1;
+    } else if (temp - roundtemp == 0.5 && (static_cast<int>(roundtemp)) % 2 != 0) {
+      roundtemp += 1;
+    }
+    while (d != 0) {
+      if (roundtemp == 0) {
+        break;
+      }
+      if (decimals > 0) {
+        d--;
+        roundtemp /= 10;
+      } else {
+        d++;
+        roundtemp *= 10;
+      }
+    }
+    KERNEL_ASSIGN(out_data[i], req, roundtemp);
+  }
+};
+
+template<typename xpu>
+void AroundOpForward(const nnvm::NodeAttrs& attrs,
+                     const OpContext& ctx,
+                     const std::vector<TBlob>& inputs,
+                     const std::vector<OpReqType>& req,
+                     const std::vector<TBlob>& outputs) {
+  CHECK_EQ(inputs.size(), 1U);
+  CHECK_EQ(outputs.size(), 1U);
+  CHECK_EQ(req.size(), 1U);
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  const TBlob& in_data = inputs[0];
+  const TBlob& out_data = outputs[0];
+  const AroundParam& param = nnvm::get<AroundParam>(attrs.parsed);
+  using namespace mxnet_op;
+  // if the type is uint8, int8, int32 or int64 and decimals is greater than 0
+  // we simply return the number back.
+  if (in_data.type_flag_ >= mshadow::kUint8 && in_data.type_flag_ <= mshadow::kInt64 \
+      && param.decimals > 0) {
+    MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+      MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+        Kernel<around_forwardint<req_type>, xpu>::Launch(
 
 Review comment:
   Done
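For reference, the half-to-even behavior the kernel above reproduces matches `numpy.around`. A minimal pure-Python sketch of the same decimal-scaled, round-half-to-even idea (illustrative only, not the PR's kernel; `np_around_sketch` is a hypothetical name):

```python
def np_around_sketch(x, decimals=0):
    # Scale by 10**decimals, round half to even (banker's rounding,
    # which Python's built-in round() already does), then scale back.
    scale = 10.0 ** decimals
    return round(x * scale) / scale

print(np_around_sketch(0.5))       # 0.0  (C's round() would give 1.0)
print(np_around_sketch(1.5))       # 2.0
print(np_around_sketch(0.125, 2))  # 0.12 (12.5 rounds half to even -> 12)
```

Note that the floating-point scaling itself can introduce error for values that are not exactly representable, which is also a documented caveat of `numpy.around`.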




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16126: [numpy] operator around

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16126: [numpy] operator 
around
URL: https://github.com/apache/incubator-mxnet/pull/16126#discussion_r323703197
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op.h
 ##
 @@ -560,6 +560,101 @@ struct ReshapeLikeParam : public dmlc::Parameter<ReshapeLikeParam> {
   }
 };
 
+struct AroundParam : public dmlc::Parameter<AroundParam> {
+  int decimals;
+  DMLC_DECLARE_PARAMETER(AroundParam) {
+    DMLC_DECLARE_FIELD(decimals)
+      .set_default(0)
+      .describe("Number of decimal places to round to.");
+  }
+};
+
+template<int req>
+struct around_forwardint {
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323694953
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -2363,3 +2363,55 @@ def var(a, axis=None, dtype=None, out=None, ddof=0, 
keepdims=False):  # pylint:
 0.2025
 """
 return _npi.var(a, axis=axis, dtype=dtype, ddof=ddof, keepdims=keepdims, 
out=out)
+
+
+@set_module('mxnet.ndarray.numpy')
+def ravel(x, out=None):
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323690352
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1323,6 +1323,39 @@ def hybrid_forward(self, F, a, *args):
 assert same(mx_out.asnumpy(), np_out)
 
 
+@with_seed()
+@use_np
+def test_np_ravel():
+    class TestRavel(HybridBlock):
+        def __init__(self):
+            super(TestRavel, self).__init__()
+
+        def hybrid_forward(self, F, a):
+            return F.np.ravel(a)
+
+    types = ['float64', 'float32', 'float16', 'int64', 'int32', 'int8']
+    for oneType in types:
+        for hybridize in [True, False]:
+            for shape in [(), (2,), (2, 2), (1, 2, 3), (3, 0), (1, 0, 2)]:
+                test_ravel = TestRavel()
+                if hybridize:
+                    test_ravel.hybridize()
+                x = rand_ndarray(shape, dtype=oneType).as_np_ndarray()
+                x.attach_grad()
+                np_out = _np.ravel(x.asnumpy())
+                with mx.autograd.record():
+                    mx_out = test_ravel(x)
+                assert mx_out.shape == np_out.shape
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323690459
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1744,6 +1777,45 @@ def test_indexing_mode(sampler, set_size, samples_size, 
replace, weight=None):
 test_indexing_mode(test_choice_weighted, num_classes, num_classes 
// 2, replace, weight)
 
 
+@with_seed()
+@use_np
+def test_np_ravel():
+    class TestRavel(HybridBlock):
+        def __init__(self):
+            super(TestRavel, self).__init__()
+
+        def hybrid_forward(self, F, a):
+            return F.np.ravel(a)
+
+    types = ['float64', 'float32', 'float16', 'int64', 'int32', 'int8']
+    for oneType in types:
+        for hybridize in [True, False]:
+            for shape in [(),
+                          (2,),
+                          (2, 2),
+                          (1, 2, 3),
+                          (3, 0),
+                          (1, 0, 2)
+                          ]:
+                test_ravel = TestRavel()
+                if hybridize:
+                    test_ravel.hybridize()
+                x = rand_ndarray(shape, dtype=oneType).as_np_ndarray()
+                x.attach_grad()
+                np_out = _np.ravel(x.asnumpy())
+                with mx.autograd.record():
+                    mx_out = test_ravel(x)
+                assert mx_out.shape == np_out.shape
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323689966
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -2678,4 +2678,42 @@ def var(a, axis=None, dtype=None, out=None, ddof=0, 
keepdims=False):  # pylint:
 return _npi.var(a, axis=axis, dtype=dtype, ddof=ddof, keepdims=keepdims, 
out=out)
 
 
+@set_module('mxnet.symbol.numpy')
+def ravel(x, out=None):
+r"""
+ravel(x, out=None)
+
+Return a contiguous flattened array.
+A 1-D array, containing the elements of the input, is returned.  A copy is
+made only if needed.
+
+Parameters
+--
+x : ndarray
+Input array.  The elements in `x` are read in row-major, C-style order 
and
+packed as a 1-D array.
+out : ndarray or None, optional
+A location into which the result is stored. If not provided or `None`,
+a freshly-allocated array is returned.
+
+Returns
+---
+y : ndarray
+y is an array of the same subtype as `x`, with shape ``(x.size,)``.
+Note that matrices are special cased for backward compatibility, if `x`
+is a matrix, then y is a 1-D ndarray.
+
+Notes
+-
+This function differs from the original numpy.ravel in the following aspects:
+- Only support row-major, C-style order.
+"""
+if isinstance(x, numeric_types):
+    return _np.reshape(x, -1)
 
 Review comment:
   Removed out.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #16016: [numpy] operator 
ravel, derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323689923
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -2363,3 +2363,55 @@ def var(a, axis=None, dtype=None, out=None, ddof=0, 
keepdims=False):  # pylint:
 0.2025
 """
 return _npi.var(a, axis=axis, dtype=dtype, ddof=ddof, keepdims=keepdims, 
out=out)
+
+
+@set_module('mxnet.ndarray.numpy')
+def ravel(x, out=None):
+r"""
+ravel(x, out=None)
+
+Return a contiguous flattened array.
+A 1-D array, containing the elements of the input, is returned.  A copy is
+made only if needed.
+
+Parameters
+--
+x : ndarray
+Input array.  The elements in `x` are read in row-major, C-style order 
and
+packed as a 1-D array.
+out : ndarray or None, optional
+A location into which the result is stored. If not provided or `None`,
+a freshly-allocated array is returned.
 
 Review comment:
   Removed out.




[GitHub] [incubator-mxnet] QueensGambit commented on a change in pull request #16144: added -DCMAKE_BUILD_TYPE=Release to docs for building from source

2019-09-12 Thread GitBox
QueensGambit commented on a change in pull request #16144: added 
-DCMAKE_BUILD_TYPE=Release to docs for building from source
URL: https://github.com/apache/incubator-mxnet/pull/16144#discussion_r323675640
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -182,6 +182,8 @@ There is a configuration file for make,
 
 **NOTE:** When a certain set of build flags is set, the MXNet archive increases 
to more than 4 GB. Since MXNet uses archive internally, archive runs into a bug 
("File Truncated": 
[bug report](https://sourceware.org/bugzilla/show_bug.cgi?id=14625)) for 
archives greater than 4 GB. Please use ar version 2.27 or greater to overcome 
this bug. Please see https://github.com/apache/incubator-mxnet/issues/15084 for 
more details.
 
+You can specify different cmake compiler configurations with the option 
`CMAKE_BUILD_TYPE`. In most cases you should set this option to `Release` for a 
smaller and faster binary compared to `Debug`. Alternatively, if you are 
interested in building the smallest binary, you can set the option to 
`MinSizeRel`. If you are developing MXNet, you might choose `Debug` instead.
+
 
 Review comment:
   Good point! I just added `RelWithDebInfo`.




[GitHub] [incubator-mxnet] xidulu opened a new pull request #16152: [Numpy] Random.gamma() implemented

2019-09-12 Thread GitBox
xidulu opened a new pull request #16152: [Numpy] Random.gamma() implemented
URL: https://github.com/apache/incubator-mxnet/pull/16152
 
 
   ## Description ##
   As title, implementation detail is described in 
https://github.com/apache/incubator-mxnet/issues/15928
   
   The current implementation outperforms `ndarray.random.gamma()` on GPU. 
However, there exist some performance issues when the context is `cpu`; more 
specifically, it is slower than native NumPy.
   Writing a separate version for the CPU backend seems necessary.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, the expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Feature1, tests, (and when applicable, API doc)
   - [x] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #15819: [Numpy]flip

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #15819: [Numpy]flip
URL: https://github.com/apache/incubator-mxnet/pull/15819#discussion_r323651440
 
 

 ##
 File path: src/operator/numpy/np_matrix_op-inl.h
 ##
 @@ -60,6 +60,81 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
   }
 }
 
+struct FlipParam : public dmlc::Parameter<FlipParam> {
+  mxnet::Tuple<int> axis;
+  DMLC_DECLARE_PARAMETER(FlipParam) {
+    DMLC_DECLARE_FIELD(axis)
+      .describe("The axis which to flip elements.");
+  }
+};
+
+struct flip0dim_shared_kernel {
 
 Review comment:
   Done.




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #15819: [Numpy]flip

2019-09-12 Thread GitBox
tingying2020 commented on a change in pull request #15819: [Numpy]flip
URL: https://github.com/apache/incubator-mxnet/pull/15819#discussion_r323651406
 
 

 ##
 File path: src/operator/numpy/np_matrix_op-inl.h
 ##
 @@ -60,6 +60,81 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
   }
 }
 
+struct FlipParam : public dmlc::Parameter<FlipParam> {
+  mxnet::Tuple<int> axis;
+  DMLC_DECLARE_PARAMETER(FlipParam) {
+    DMLC_DECLARE_FIELD(axis)
+      .describe("The axis which to flip elements.");
+  }
+};
+
+struct flip0dim_shared_kernel {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i,
+                                  DType* out_data,
+                                  const DType* in_data) {
+    out_data[i] = in_data[i];
+  }
+};
+
+#define FLIP_MAX_DIM 10
+#define FLIP_MIN_DIM -1
+
+template<typename xpu>
+void NumpyFlipForwardImpl(const OpContext& ctx,
+                          const std::vector<TBlob>& inputs,
+                          const std::vector<TBlob>& outputs,
+                          const std::vector<index_t>& stride_,
+                          const std::vector<index_t>& trailing_,
+                          const index_t& flip_index);
+
+template<typename xpu>
+void NumpyFlipForward(const nnvm::NodeAttrs& attrs,
+                      const OpContext& ctx,
+                      const std::vector<TBlob>& inputs,
+                      const std::vector<OpReqType>& req,
+                      const std::vector<TBlob>& outputs) {
+  const FlipParam& param = nnvm::get<FlipParam>(attrs.parsed);
+  mxnet::Tuple<int> axistemp;
+  CHECK_EQ(inputs[0].type_flag_, outputs[0].type_flag_);
+  CHECK_LT(param.axis.ndim(), FLIP_MAX_DIM);
+  CHECK_GE(param.axis.ndim(), FLIP_MIN_DIM);
+  if (param.axis.ndim() == FLIP_MIN_DIM) {
+    if (inputs[0].shape_.ndim() == 0) {
+      mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        mxnet_op::Kernel<flip0dim_shared_kernel, xpu>::Launch(s, inputs[0].Size(),
 
 Review comment:
   Done.
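   For reference, the NumPy semantics this operator mirrors can be checked 
with plain numpy; reading `FLIP_MIN_DIM == -1` as the `axis=None` sentinel is 
my assumption from the snippet above:

```python
import numpy as np

a = np.arange(8).reshape(2, 2, 2)
# a single axis reverses element order along that axis
assert (np.flip(a, 0) == a[::-1, :, :]).all()
# a tuple of axes flips along each listed axis
assert (np.flip(a, (0, 2)) == a[::-1, :, ::-1]).all()
# axis=None flips along all axes
assert (np.flip(a) == a[::-1, ::-1, ::-1]).all()
```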




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-12 Thread GitBox
haojin2 commented on a change in pull request #16016: [numpy] operator ravel, 
derive from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#discussion_r323648765
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1323,6 +1323,39 @@ def hybrid_forward(self, F, a, *args):
 assert same(mx_out.asnumpy(), np_out)
 
 
+@with_seed()
+@use_np
+def test_np_ravel():
+    class TestRavel(HybridBlock):
+        def __init__(self):
+            super(TestRavel, self).__init__()
+
+        def hybrid_forward(self, F, a):
+            return F.np.ravel(a)
+
+    types = ['float64', 'float32', 'float16', 'int64', 'int32', 'int8']
+    for oneType in types:
+        for hybridize in [True, False]:
+            for shape in [(), (2,), (2, 2), (1, 2, 3), (3, 0), (1, 0, 2)]:
+                test_ravel = TestRavel()
+                if hybridize:
+                    test_ravel.hybridize()
+                x = rand_ndarray(shape, dtype=oneType).as_np_ndarray()
+                x.attach_grad()
+                np_out = _np.ravel(x.asnumpy())
+                with mx.autograd.record():
+                    mx_out = test_ravel(x)
+                assert mx_out.shape == np_out.shape
 
 Review comment:
   Okay, @tingying2020 please add the type check
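   The requested check is a one-line dtype assertion against `np_out`. A 
standalone NumPy sketch of the property being asserted (no MXNet needed; the 
loop below is my illustration, not the PR's test code):

```python
# ravel preserves the input dtype for every dtype the test exercises,
# so the added check should compare output dtype against np.ravel's.
import numpy as np

for dt in ['float64', 'float32', 'float16', 'int64', 'int32', 'int8']:
    x = np.zeros((2, 3), dtype=dt)
    out = np.ravel(x)
    assert out.dtype == np.dtype(dt)  # the type check being requested
    assert out.shape == (6,)
```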



