[GitHub] [incubator-mxnet] TaoLv commented on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
TaoLv commented on issue #15420: [R] MKL-DNN support:  "Unknown exception" in 
mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507530866
 
 
   Seems it's another error now. I just checked the code. Do you have 
`USE_OPENCV=1` in your make command line when building the package with MKL-DNN?
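
   (For illustration: a build command with OpenCV enabled might look like `make -j USE_OPENCV=1 USE_MKLDNN=1`; every flag here other than `USE_OPENCV=1` is a placeholder for whatever your existing build already uses.)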




[GitHub] [incubator-mxnet] Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support:  "Unknown 
exception" in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507528863
 
 
   Yes, and it's strange. I installed side by side: R 3.6.0 built from source and the downloaded AWS build for 3.5.3.
   
   > Error in mx.varg.io.ImageRecordIter(list(...)) : 
   >   [07:47:26] c:\incubator-mxnet\src\io\iter_image_recordio_2.cc:254: ImageRec need opencv to process
   
   Previously, with the same network on R 3.5.3 and no custom MKL-DNN build, there were no problems, except speed. ;-)
   
   > use 4 threads for decoding..
   Start training with 1 devices
   Batch [17] Speed: 0.623502075662353 samples/sec Train-accuracy=0.988970588235294
   Batch [34] Speed: 0.608181449057359 samples/sec Train-accuracy=0.988970588235294
   Batch [51] Speed: 0.671458334887314 samples/sec Train-accuracy=0.990196078431373
   Batch [68] Speed: 0.649002161683901 samples/sec Train-accuracy=0.990349264705882
   Batch [85] Speed: 0.701097383006278 samples/sec Train-accuracy=0.991176470588235
   Batch [102] Speed: 0.673339123131478 samples/sec Train-accuracy=0.991115196078431
   Batch [119] Speed: 0.673430530321397 samples/sec Train-accuracy=0.990808823529412
   Batch [136] Speed: 0.652799241039701 samples/sec Train-accuracy=0.991038602941177
   [16] Train-accuracy=0.990875912408759
   




[GitHub] [incubator-mxnet] Crunchy9 commented on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
Crunchy9 commented on issue #15420: [R] MKL-DNN support:  "Unknown exception" 
in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507528863
 
 
   Yes, and it's strange. I installed side by side: R 3.6.0 built from source and the downloaded AWS build for 3.5.3.
   
   > Error in mx.varg.io.ImageRecordIter(list(...)) : 
   >   [07:47:26] c:\incubator-mxnet\src\io\iter_image_recordio_2.cc:254: ImageRec need opencv to process
   




[GitHub] [incubator-mxnet] PierceXiao413 commented on a change in pull request #15387: Add docs for 7 ops

2019-07-01 Thread GitBox
PierceXiao413 commented on a change in pull request #15387: Add docs for 7 ops
URL: https://github.com/apache/incubator-mxnet/pull/15387#discussion_r299310147
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -882,3 +902,123 @@ def sqrt(x, out=None, **kwargs):
     This function only supports input type of float.
     """
     return _unary_func_helper(x, _npi.sqrt, _np.sqrt, out=out, **kwargs)
+
+@set_module('mxnet.ndarray.numpy')
+def arctanh(x, out=None, where=True, **kwargs):
+    r"""
+    arctanh(x, out=None, where=True)
+
+    Inverse hyperbolic tangent element-wise.
+
+    Parameters
+    ----------
+    x : ndarray
+        Input array.
+    out : ndarray, None
+        A location into which the result is stored. If provided,
+        it must have a shape that the inputs broadcast to.
+        If not provided or None, a freshly-allocated array is returned.
+        A tuple (possible only as a keyword argument)
+        must have length equal to the number of outputs.
+    where : ndarray, optional
+        Values of True indicate to calculate the ufunc at that position,
+        values of False indicate to leave the value in the output alone.
+
+    Returns
+    -------
+    out : ndarray or scalar
+        ndarray of the same shape as x. This is a scalar if x is a scalar.
+
+    Examples
+    --------
+    >>> np.arctanh(0.7)
+    0.8673005276940531
+    """
+    return _unary_func_helper(x, _npi.arctanh, _np.arctanh, out=out, **kwargs)
+
+@set_module('mxnet.ndarray.numpy')
+def tan(x, out=None, where=True, **kwargs):
+    r"""
+    tan(x, out=None, where=True, **kwargs)
+
+    Compute tangent element-wise.
+    Equivalent to np.sin(x)/np.cos(x) element-wise.
+
+    Parameters
+    ----------
+    x : array_like
+        Input array.
+    out : ndarray, None, or tuple of ndarray and None, optional
+        A location into which the result is stored. If provided,
+        it must have a shape that the inputs broadcast to. If not provided or None,
+        a freshly-allocated array is returned. A tuple (possible only as a keyword argument)
+        must have length equal to the number of outputs.
+    where : ndarray, optional
+        Values of True indicate to calculate the ufunc at that position,
+        values of False indicate to leave the value in the output alone.
+
+    Returns
+    -------
+    y : ndarray
+        The corresponding tangent values. This is a scalar if x is a scalar.
+
+    Examples
+    --------
+    >>> np.tan(0.5)
+    0.5463024898437905
+    """
+    return _unary_func_helper(x, _npi.tan, _np.tan, out=out, **kwargs)
+
+@set_module('mxnet.ndarray.numpy')
+def fix(x, out=None):
+    r"""
+    Round an array of floats element-wise to the nearest integer towards zero.
+    The rounded values are returned as floats.
+
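
For reference, `fix` truncates toward zero, matching NumPy behavior; a quick plain-NumPy illustration (the docstring above is the PR's draft, so this is illustrative only):

```python
>>> import numpy as np
>>> np.fix(3.7)     # rounds toward zero
3.0
>>> np.fix(-3.7)    # negative values also move toward zero
-3.0
```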
 
 Review comment:
   ok




[GitHub] [incubator-mxnet] sandeep-krishnamurthy commented on a change in pull request #15403: Updating profiler tutorial to include new custom operator profiling

2019-07-01 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #15403: Updating 
profiler tutorial to include new custom operator profiling
URL: https://github.com/apache/incubator-mxnet/pull/15403#discussion_r299308343
 
 

 ##
 File path: docs/tutorials/python/profiler.md
 ##
 @@ -206,6 +206,81 @@ Let's zoom in to check the time taken by operators
 
 The above picture visualizes the sequence in which the operators were executed 
and the time taken by each operator.
 
+### Profiling Custom Operators
+Should the existing NDArray operators fail to meet all your model's needs, MXNet supports [Custom Operators](https://mxnet.incubator.apache.org/versions/master/tutorials/gluon/customop.html) that you can define in Python. In `forward()` and `backward()` of a custom operator, there are two kinds of code: "pure Python" code (NumPy operators included) and "sub-operators" (NDArray operators called within `forward()` and `backward()`). With that said, MXNet can profile the execution time of both kinds without additional setup. Specifically, the MXNet profiler will break a single custom operator call into a pure Python event and several sub-operator events, if there are any. Furthermore, all of those events will have a prefix in their names, which is, conveniently, the name of the custom operator you called.
+
+Let's try profiling custom operators with the following code example:
+
+```python
+
+import mxnet as mx
+from mxnet import nd
+from mxnet import profiler
+
+class MyAddOne(mx.operator.CustomOp):
+    def forward(self, is_train, req, in_data, out_data, aux):
+        self.assign(out_data[0], req[0], in_data[0] + 1)
+
+    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
+        self.assign(in_grad[0], req[0], out_grad[0])
+
+@mx.operator.register('MyAddOne')
+class CustomAddOneProp(mx.operator.CustomOpProp):
+    def __init__(self):
+        super(CustomAddOneProp, self).__init__(need_top_grad=True)
+
+    def list_arguments(self):
+        return ['data']
+
+    def list_outputs(self):
+        return ['output']
+
+    def infer_shape(self, in_shape):
+        return [in_shape[0]], [in_shape[0]], []
+
+    def create_operator(self, ctx, shapes, dtypes):
+        return MyAddOne()
+
+
+inp = mx.nd.zeros(shape=(500, 500))
+
+profiler.set_config(profile_all=True, continuous_dump=True)
+profiler.set_state('run')
+
+w = nd.Custom(inp, op_type="MyAddOne")
+
+mx.nd.waitall()
+
+profiler.set_state('stop')
+profiler.dump()
+```
+
+Here, we have created a custom operator called `MyAddOne`, and within its `forward()` function, we simply add one to the input. We can visualize the dump file in `chrome://tracing/`:
+
+![Custom Operator Profiling Screenshot](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_output_custom_operator_chrome.png)
+
+As shown in the screenshot, in the **Custom Operator** domain, into which all the custom operator-related events fall, we can easily visualize the execution time of each segment of `MyAddOne`. We can tell that `MyAddOne::pure_python` is executed first. We also know that `CopyCPU2CPU` and `_plus_scalar` are two "sub-operators" of `MyAddOne`, as well as the sequence in which they are executed.
+
+Please note: to be able to see the previously described information, you need to set `profile_imperative` to `True` even when you are using custom operators in [symbolic mode](https://mxnet.incubator.apache.org/versions/master/tutorials/basic/symbol.html) (refer to the code snippet below, which is the symbolic-mode equivalent of the code example above). The reason is that within custom operators, pure Python code and sub-operators are still called imperatively.
+
+```python
+# Set profile_all to True
+profiler.set_config(profile_all=True, aggregate_stats=True, continuous_dump=True)
+# OR, explicitly set profile_symbolic and profile_imperative to True
+profiler.set_config(profile_symbolic = False, profile_imperative = False, \
 
 Review comment:
   True right?




[GitHub] [incubator-mxnet] zoeygxy commented on issue #15371: numpy compatible dsplit operator

2019-07-01 Thread GitBox
zoeygxy commented on issue #15371: numpy compatible dsplit operator
URL: https://github.com/apache/incubator-mxnet/pull/15371#issuecomment-507524882
 
 
   Please resolve the sanity check failure.




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
ckt624 commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299305422
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -1549,10 +1613,161 @@ def sqrt(x, out=None, **kwargs):
     square-root of each element in `x`. This is a scalar if `x` is a scalar.
 
     Notes
-    ----
+    -----
     This function only supports input type of float.
     """
     return _unary_func_helper(x, _npi.sqrt, _np.sqrt, out=out, **kwargs)
 
 
+@set_module('mxnet.symbol.numpy')
+def ceil(x, out=None, **kwargs):
+    r"""
+    Return the ceiling of the input, element-wise.
+
+    The ceil of the ndarray `x` is the smallest integer `i`, such that
+    `i >= x`.  It is often denoted as :math:`\lceil x \rceil`.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs fill into. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output and input must be the same.
+
+    Returns
+    -------
+    y : _Symbol or scalar
+        The ceiling of each element in `x`, with `float` dtype.
+        This is a scalar if `x` is a scalar.
+
+    Examples
+    --------
+    >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
+    >>> np.ceil(a)
+    array([-1., -1., -0.,  1.,  2.,  2.,  2.])
+
+    >>> #if you use parameter out, x and out must be ndarray. if not, you will get an error!
+    >>> a = np.array(1)
+    >>> np.ceil(np.array(3.5), a)
+    array(4.)
+    >>> a
+    array(4.)
+
+    """
+    return _unary_func_helper(x, _npi.ceil, _np.ceil, out=out, **kwargs)
+
+
+@set_module('mxnet.symbol.numpy')
+def log1p(x, out=None, **kwargs):
+    """
+    Return the natural logarithm of one plus the input array, element-wise.
+
+    Calculates ``log(1 + x)``.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs fill into. If not provided
 
 Review comment:
   Based on templates:
   out : _Symbol or None
   Dummy parameter to keep the consistency with the ndarray counterpart.




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
ckt624 commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299305445
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -1549,10 +1613,161 @@ def sqrt(x, out=None, **kwargs):
     square-root of each element in `x`. This is a scalar if `x` is a scalar.
 
     Notes
-    ----
+    -----
     This function only supports input type of float.
     """
     return _unary_func_helper(x, _npi.sqrt, _np.sqrt, out=out, **kwargs)
 
 
+@set_module('mxnet.symbol.numpy')
+def ceil(x, out=None, **kwargs):
+    r"""
+    Return the ceiling of the input, element-wise.
+
+    The ceil of the ndarray `x` is the smallest integer `i`, such that
+    `i >= x`.  It is often denoted as :math:`\lceil x \rceil`.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs fill into. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output and input must be the same.
+
+    Returns
+    -------
+    y : _Symbol or scalar
+        The ceiling of each element in `x`, with `float` dtype.
+        This is a scalar if `x` is a scalar.
+
+    Examples
+    --------
+    >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
+    >>> np.ceil(a)
+    array([-1., -1., -0.,  1.,  2.,  2.,  2.])
+
+    >>> #if you use parameter out, x and out must be ndarray. if not, you will get an error!
+    >>> a = np.array(1)
+    >>> np.ceil(np.array(3.5), a)
+    array(4.)
+    >>> a
+    array(4.)
+
+    """
+    return _unary_func_helper(x, _npi.ceil, _np.ceil, out=out, **kwargs)
+
+
+@set_module('mxnet.symbol.numpy')
+def log1p(x, out=None, **kwargs):
+    """
+    Return the natural logarithm of one plus the input array, element-wise.
+
+    Calculates ``log(1 + x)``.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs fill into. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output and input must be the same.
+
+    Returns
+    -------
+    y : _Symbol or scalar
+        Natural logarithm of 1 + x, element-wise. This is a scalar
+        if x is a scalar.
+
+    Notes
+    -----
+    For real-valued input, `log1p` is accurate also for `x` so small
+    that `1 + x == 1` in floating-point accuracy.
+
+    Logarithm is a multivalued function: for each `x` there is an infinite
+    number of `z` such that `exp(z) = 1 + x`. The convention is to return
+    the `z` whose imaginary part lies in `[-pi, pi]`.
+
+    For real-valued input data types, `log1p` always returns real output.
+    For each value that cannot be expressed as a real number or infinity,
+    it yields ``nan`` and sets the `invalid` floating point error flag.
+
+    cannot support complex-valued input.
+
+    Examples
+    --------
+    >>> np.log1p(1e-99)
+    1e-99
+    >>> a = np.array([3, 4, 5])
+    >>> np.log1p(a)
+    array([1.3862944, 1.609438 , 1.7917595])
+
+    """
+    return _unary_func_helper(x, _npi.log1p, _np.log1p, out=out, **kwargs)
+
+
+@set_module('mxnet.symbol.numpy')
+def tanh(x, out=None, **kwargs):
+    """
+    Compute hyperbolic tangent element-wise.
+
+    Equivalent to ``np.sinh(x)/np.cosh(x)``.
+
+    Parameters
+    ----------
+    x : _Symbol
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs fill into. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output and input must be the same.
 
 Review comment:
   Based on templates:
   out : _Symbol or None
   Dummy parameter to keep the consistency with the ndarray counterpart.




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
ckt624 commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299305345
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -1549,10 +1613,161 @@ def sqrt(x, out=None, **kwargs):
     square-root of each element in `x`. This is a scalar if `x` is a scalar.
 
     Notes
-    ----
+    -----
     This function only supports input type of float.
     """
     return _unary_func_helper(x, _npi.sqrt, _np.sqrt, out=out, **kwargs)
 
 
+@set_module('mxnet.symbol.numpy')
+def ceil(x, out=None, **kwargs):
+    r"""
+    Return the ceiling of the input, element-wise.
+
+    The ceil of the ndarray `x` is the smallest integer `i`, such that
+    `i >= x`.  It is often denoted as :math:`\lceil x \rceil`.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
 
 Review comment:
   Based on templates:
   out : _Symbol or None
   Dummy parameter to keep the consistency with the ndarray counterpart.




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15427: [TUTORIAL] Gluon performance tips and tricks

2019-07-01 Thread GitBox
pengzhao-intel commented on issue #15427: [TUTORIAL] Gluon performance tips and 
tricks
URL: https://github.com/apache/incubator-mxnet/pull/15427#issuecomment-507521440
 
 
   > Would be great to get CPU specific tricks and tips @pengzhao-intel. Given 
the length of this tutorial already let's add those in another tutorial as a 
follow up. @xinyu-intel are you able to write about some performance tricks for 
CPU and follow up in another PR? When we have that we can think about splitting 
this up into generic tips, GPU specific and CPU specific tricks.
   
   Agree. I see that lots of tips and tricks are common to both CPU and GPU, so we can make the user aware of this :)




[GitHub] [incubator-mxnet] hzfan commented on issue #15258: Numpy Trace

2019-07-01 Thread GitBox
hzfan commented on issue #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#issuecomment-507520994
 
 
   docs added, conflicts resolved. @haojin2




[GitHub] [incubator-mxnet] TaoLv commented on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
TaoLv commented on issue #15420: [R] MKL-DNN support:  "Unknown exception" in 
mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507520541
 
 
   @Crunchy9 We notice that the document for Windows R setup says:
   
   > Note: packages for 3.6.x are not yet available. Install 3.5.x of R from 
CRAN.
   
   
http://mxnet.incubator.apache.org/versions/master/install/windows_setup.html#install-the-mxnet-package-for-r
   
   Have you ever tried a 3.5.x version of R?




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
gyshi commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303636
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -1318,25 +1318,59 @@ def array(object, dtype=None, ctx=None):
 @set_module('mxnet.numpy')
 def zeros(shape, dtype=_np.float32, **kwargs):
     """Return a new array of given shape and type, filled with zeros.
-    This function currently only supports storing multi-dimensional data
-    in row-major (C-style).
 
     Parameters
     ----------
-    shape : int or tuple of int
-        The shape of the empty array.
-    dtype : str or numpy.dtype, optional
-        An optional value type (default is `numpy.float32`). Note that this
-        behavior is different from NumPy's `ones` function where `float64`
-        is the default value, because `float32` is considered as the default
-        data type in deep learning.
+    shape : int, tuple of ints or list of ints
+        Shape of the new array, e.g., ``(2, 3)`` or ``2``.
+    dtype : data-type, optional
+        The desired data-type for the array, e.g., `numpy.int8`.  Default is
+        `numpy.float32`.
     ctx : Context, optional
         An optional device context (default is the current default context).
 
     Returns
     -------
     out : ndarray
-        Array of zeros with the given shape, dtype, and ctx.
+        Array of zeros with the given shape, dtype, and order.
+
+    Notes
+    -----
+    - Not support ndarray type
+    - Not support custom dtype
 
 Review comment:
   resolve




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
gyshi commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303603
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -680,7 +739,7 @@ def linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis
     axis could only be 0 now.
     """
     if isinstance(start, (list, _np.ndarray, NDArray)) or \
-       isinstance(stop, (list, _np.ndarray, NDArray)):
+            isinstance(stop, (list, _np.ndarray, NDArray)):
 
 Review comment:
   resolve




[GitHub] [incubator-mxnet] TaoLv commented on issue #15424: fixed config.mk and Makefile bugs for installing mkl

2019-07-01 Thread GitBox
TaoLv commented on issue #15424: fixed config.mk and Makefile bugs for 
installing mkl
URL: https://github.com/apache/incubator-mxnet/pull/15424#issuecomment-507519413
 
 
   @nuslq Have you ever tried `USE_BLAS=mkl` in the make command line? I would 
expect MKL to be statically linked in this case.




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
gyshi commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303363
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -173,3 +212,73 @@ def _np_cumsum(a, axis=None, dtype=None, out=None):
     `axis` is not None or `a` is a 1-d array.
     """
     pass
+
+
+def _np_max(axis=None, keepdims=False, initial=None, out=None):
+    """
+    Return the maximum of an array or maximum along an axis.
+
+    Parameters
+    ----------
+    a : ndarray
+        Input data.
+    axis : None or int or tuple of ints, optional
+        Axis or axes along which to operate.  By default, flattened input is
+        used.
+
+        If this is a tuple of ints, the maximum is selected over multiple axes,
+        instead of a single axis or all the axes as before.
+
+    keepdims : bool, optional
+        If this is set to True, the axes which are reduced are left
+        in the result as dimensions with size one. With this option,
+        the result will broadcast correctly against the input array.
+
+        If the default value is passed, then `keepdims` will not be
+        passed through to the `amax` method of sub-classes of
+        `ndarray`, however any non-default value will be.  If the
+        sub-class' method does not implement `keepdims` any
+        exceptions will be raised.
+
+    initial :
+        Parameter initial is not supported yet, we will support it in the future.
+        now it must be None.
+
+    out : ndarray, optional
 
 Review comment:
   resolve,




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
gyshi commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303390
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -28,31 +28,65 @@
 __all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'arange', 'argmax',
            'add', 'subtract', 'multiply', 'divide', 'mod', 'power', 'concatenate',
            'clip', 'split', 'swapaxes', 'expand_dims', 'tile', 'linspace',
-           'sin', 'cos', 'sinh', 'cosh', 'log10', 'sqrt']
+           'sin', 'cos', 'sinh', 'cosh', 'log10', 'sqrt', 'ceil', 'log1p', 'tanh']
 
 
 @set_module('mxnet.ndarray.numpy')
 def zeros(shape, dtype=_np.float32, **kwargs):
     """Return a new array of given shape and type, filled with zeros.
-    This function currently only supports storing multi-dimensional data
-    in row-major (C-style).
 
     Parameters
     ----------
-    shape : int or tuple of int
-        The shape of the empty array.
-    dtype : str or numpy.dtype, optional
-        An optional value type. Default is `numpy.float32`. Note that this
-        behavior is different from NumPy's `ones` function where `float64`
-        is the default value, because `float32` is considered as the default
-        data type in deep learning.
+    shape : int, tuple of ints or list of ints
+        Shape of the new array, e.g., ``(2, 3)`` or ``2``.
+    dtype : data-type, optional
+        The desired data-type for the array, e.g., `numpy.int8`.  Default is
+        `numpy.float32`.
     ctx : Context, optional
         An optional device context (default is the current default context).
 
     Returns
     -------
     out : ndarray
-        Array of zeros with the given shape, dtype, and ctx.
+        Array of zeros with the given shape, dtype, and order.
+
+    Notes
+    -----
+    - Not support ndarray type
+    - Not support custom dtype
+
+    >>> np.zeros((2,), dtype=[('x', 'i4'), ('y', 'i4')]) # custom dtype
+    mxnet.base.MXNetError: Invalid Input.
+
+    See Also
+    --------
+    zeros_like : Return an array of zeros with shape and type of input.
+    ones : Return a new array setting values to one.
+
+    Examples
+    --------
+    >>> np.zeros(5)
+    array([0., 0., 0., 0., 0.])
+
+    >>> np.zeros((5,), dtype=int)
+    array([0, 0, 0, 0, 0])
+
+    >>> np.zeros((2, 1))
+    array([[0.],
+           [0.]])
+
+    >>> s = (2,2)
+    >>> np.zeros(s)
+    array([[0., 0.],
+           [0., 0.]])
+
+    if you install mxnet gpu version, you can use gpu.
 Review comment:
   resolve




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
gyshi commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303440
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -1555,4 +1613,156 @@ def sqrt(x, out=None, **kwargs):
     return _unary_func_helper(x, _npi.sqrt, _np.sqrt, out=out, **kwargs)
 
 
+@set_module('mxnet.symbol.numpy')
+def ceil(x, out=None, **kwargs):
+    r"""
+    Return the ceiling of the input, element-wise.
+
+    The ceil of the ndarray `x` is the smallest integer `i`, such that
+    `i >= x`.  It is often denoted as :math:`\lceil x \rceil`.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs broadcast to. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output is the same as that of the input if the input is an ndarray.
+
+    Returns
+    -------
+    y : _Symbol or scalar
+        The ceiling of each element in `x`, with `float` dtype.
+        This is a scalar if `x` is a scalar.
+
+    Examples
+    --------
+    >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
+    >>> np.ceil(a)
+    array([-1., -1., -0.,  1.,  2.,  2.,  2.])
+
+    >>> #if you use parameter out, x and out must be ndarray. if not, you will get an error!
+    >>> a = np.array(1)
+    >>> np.ceil(np.array(3.5), a)
+    array(4.)
+    >>> a
+    array(4.)
+
+    """
+    return _unary_func_helper(x, _npi.ceil, _np.ceil, out=out, **kwargs)
+
+
+@set_module('mxnet.symbol.numpy')
+def log1p(x, out=None, **kwargs):
+    """
+    Return the natural logarithm of one plus the input array, element-wise.
+
+    Calculates ``log(1 + x)``.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs broadcast to. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output is the same as that of the input if the input is an ndarray.
+
+    Returns
+    -------
+    y : _Symbol or scalar
+        Natural logarithm of 1 + x, element-wise. This is a scalar
+        if x is a scalar.
+
+    Notes
+    -----
+    For real-valued input, `log1p` is accurate also for `x` so small
+    that `1 + x == 1` in floating-point accuracy.
+
+    Logarithm is a multivalued function: for each `x` there is an infinite
+    number of `z` such that `exp(z) = 1 + x`. The convention is to return
+    the `z` whose imaginary part lies in `[-pi, pi]`.
+
+    For real-valued input data types, `log1p` always returns real output.
+    For each value that cannot be expressed as a real number or infinity,
+    it yields ``nan`` and sets the `invalid` floating point error flag.
+
+    For complex-valued input, `log1p` is a complex analytical function that
+    has a branch cut `[-inf, -1]` and is continuous from above on it.
+    `log1p` handles the floating-point negative zero as an infinitesimal
+    negative number, conforming to the C99 standard.
+
+    Examples
+    --------
+    >>> np.log1p(1e-99)
+    1e-99
+
+    """
+    return _unary_func_helper(x, _npi.log1p, _np.log1p, out=out, **kwargs)
+
+
+@set_module('mxnet.symbol.numpy')
+def tanh(x, out=None, **kwargs):
+    """
+    Compute hyperbolic tangent element-wise.
+
+    Equivalent to ``np.sinh(x)/np.cosh(x)``.
+
+    Parameters
+    ----------
+    x : _Symbol
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs broadcast to. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output is the same as that of the input if the input is an ndarray.
+
+    Returns
+    -------
+    y : _Symbol
+        The corresponding hyperbolic tangent values.
+
+    Notes
+    -----
+    If `out` is provided, the function writes the result into it,
+    and returns a reference to `out`.  (See Examples)
+
+    - Not support complex computation (like imaginary number)
+
+    >>> np.tanh(np.pi*1j)
+    TypeError: type  not supported
+
+    Examples
+    --------
+    >>> np.tanh(np.array[0, np.pi]))
 
 Review comment:
   resolve




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
zoeygxy commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303415
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -1555,4 +1613,156 @@ def sqrt(x, out=None, **kwargs):
     return _unary_func_helper(x, _npi.sqrt, _np.sqrt, out=out, **kwargs)
 
 
+@set_module('mxnet.symbol.numpy')
+def ceil(x, out=None, **kwargs):
+    r"""
+    Return the ceiling of the input, element-wise.
+
+    The ceil of the ndarray `x` is the smallest integer `i`, such that
+    `i >= x`.  It is often denoted as :math:`\lceil x \rceil`.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs broadcast to. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output is the same as that of the input if the input is an ndarray.
+
+    Returns
+    -------
+    y : _Symbol or scalar
+        The ceiling of each element in `x`, with `float` dtype.
+        This is a scalar if `x` is a scalar.
+
+    Examples
+    --------
+    >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
+    >>> np.ceil(a)
+    array([-1., -1., -0.,  1.,  2.,  2.,  2.])
+
+    >>> #if you use parameter out, x and out must be ndarray. if not, you will get an error!
+    >>> a = np.array(1)
+    >>> np.ceil(np.array(3.5), a)
+    array(4.)
+    >>> a
+    array(4.)
+
+    """
+    return _unary_func_helper(x, _npi.ceil, _np.ceil, out=out, **kwargs)
+
+
+@set_module('mxnet.symbol.numpy')
+def log1p(x, out=None, **kwargs):
+    """
+    Return the natural logarithm of one plus the input array, element-wise.
+
+    Calculates ``log(1 + x)``.
+
+    Parameters
+    ----------
+    x : _Symbol or scalar
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs broadcast to. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output is the same as that of the input if the input is an ndarray.
+
+    Returns
+    -------
+    y : _Symbol or scalar
+        Natural logarithm of 1 + x, element-wise. This is a scalar
+        if x is a scalar.
+
+    Notes
+    -----
+    For real-valued input, `log1p` is accurate also for `x` so small
+    that `1 + x == 1` in floating-point accuracy.
+
+    Logarithm is a multivalued function: for each `x` there is an infinite
+    number of `z` such that `exp(z) = 1 + x`. The convention is to return
+    the `z` whose imaginary part lies in `[-pi, pi]`.
+
+    For real-valued input data types, `log1p` always returns real output.
+    For each value that cannot be expressed as a real number or infinity,
+    it yields ``nan`` and sets the `invalid` floating point error flag.
+
+    For complex-valued input, `log1p` is a complex analytical function that
+    has a branch cut `[-inf, -1]` and is continuous from above on it.
+    `log1p` handles the floating-point negative zero as an infinitesimal
+    negative number, conforming to the C99 standard.
+
+    Examples
+    --------
+    >>> np.log1p(1e-99)
+    1e-99
+
+    """
+    return _unary_func_helper(x, _npi.log1p, _np.log1p, out=out, **kwargs)
+
+
+@set_module('mxnet.symbol.numpy')
+def tanh(x, out=None, **kwargs):
+    """
+    Compute hyperbolic tangent element-wise.
+
+    Equivalent to ``np.sinh(x)/np.cosh(x)``.
+
+    Parameters
+    ----------
+    x : _Symbol
+        Input array.
+    out : _Symbol or None
+        A location into which the result is stored. If provided, it
+        must have a shape that the inputs broadcast to. If not provided
+        or None, a freshly-allocated array is returned. The dtype of the
+        output is the same as that of the input if the input is an ndarray.
+
+    Returns
+    -------
+    y : _Symbol
+        The corresponding hyperbolic tangent values.
+
+    Notes
+    -----
+    If `out` is provided, the function writes the result into it,
+    and returns a reference to `out`.  (See Examples)
+
+    - Not support complex computation (like imaginary number)
+
+    >>> np.tanh(np.pi*1j)
+    TypeError: type  not supported
+
+    Examples
+    --------
+    >>> np.tanh(np.array[0, np.pi]))
 
 Review comment:
   @gyshi You have omitted one left parenthesis here
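   (That is, the example should read `np.tanh(np.array([0, np.pi]))`.)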




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
gyshi commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303483
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -864,7 +925,7 @@ def sqrt(x, out=None, **kwargs):
 
     Parameters
     ----------
-    x : ndarray or scalar
+    x : ndarray
 
 Review comment:
   resolve




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
zoeygxy commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303324
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -173,3 +212,73 @@ def _np_cumsum(a, axis=None, dtype=None, out=None):
     `axis` is not None or `a` is a 1-d array.
     """
     pass
+
+
+def _np_max(axis=None, keepdims=False, initial=None, out=None):
+    """
+    Return the maximum of an array or maximum along an axis.
+
+    Parameters
+    ----------
+    a : ndarray
+        Input data.
+    axis : None or int or tuple of ints, optional
+        Axis or axes along which to operate.  By default, flattened input is
+        used.
+
+        If this is a tuple of ints, the maximum is selected over multiple axes,
+        instead of a single axis or all the axes as before.
+
+    keepdims : bool, optional
+        If this is set to True, the axes which are reduced are left
+        in the result as dimensions with size one. With this option,
+        the result will broadcast correctly against the input array.
+
+        If the default value is passed, then `keepdims` will not be
+        passed through to the `amax` method of sub-classes of
+        `ndarray`, however any non-default value will be.  If the
+        sub-class' method does not implement `keepdims` any
+        exceptions will be raised.
+
+    initial :
+        Parameter initial is not supported yet, we will support it in the future.
+        now it must be None.
+
+    out : ndarray, optional
+        Alternative output array in which to place the result.  Must
+        be of the same shape and buffer length as the expected output.
+
+    Returns
+    -------
+    amax : ndarray or scalar
+        Maximum of `a`. If `axis` is None, the result is a scalar value.
+        If `axis` is given, the result is an array of dimension
+        ``a.ndim - 1``.
+
+    See Also
+    --------
+    amax : equivalent function
+
+    Notes
+    -----
+    - Not support axis < 0.
 
 Review comment:
   @gyshi 




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
gyshi commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303212
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -32,24 +33,70 @@ def _np_reshape(a, newshape, order='C'):
     an integer, then the result will be a 1-D array of that length.
     One shape dimension can be -1. In this case, the value is
     inferred from the length of the array and remaining dimensions.
-    order : {'C'}, optional
+    order : {'C', 'F', 'A'}, optional
         Read the elements of `a` using this index order, and place the
         elements into the reshaped array using this index order.  'C'
         means to read / write the elements using C-like index order,
         with the last axis index changing fastest, back to the first
-        axis index changing slowest. Other order types such as 'F'/'A'
-        may be added in the future.
+        axis index changing slowest. 'F' means to read / write the
+        elements using Fortran-like index order, with the first index
+        changing fastest, and the last index changing slowest. Note that
+        the 'C' and 'F' options take no account of the memory layout of
+        the underlying array, and only refer to the order of indexing.
+        'A' means to read / write the elements in Fortran-like index
+        order if `a` is Fortran *contiguous* in memory, C-like order
+        otherwise.
 
     Returns
     -------
     reshaped_array : ndarray
-        It will be always a copy of the original array. This behavior is different
-        from the official NumPy package where views of the original array may be
-        generated.
+        This will be a new view object if possible; otherwise, it will
+        be a copy.  Note there is no guarantee of the *memory layout* (C- or
+        Fortran- contiguous) of the returned array.
 
-    See Also
+
+    Notes
+    -----
+    It is not always possible to change the shape of an array without
+    copying the data. If you want an error to be raised when the data is copied,
+    you should assign the new shape to the shape attribute of the array::
+
+        >>> a = np.zeros((10, 2))
+        # A transpose makes the array non-contiguous
+        >>> b = a.T
+        # Taking a view makes it possible to modify the shape without modifying
+        # the initial object.
+
+    >>> a = np.arange(6).reshape((3, 2))
+    >>> a
+    array([[0., 1.],
+           [2., 3.],
+           [4., 5.]])
+
+    You can think of reshaping as first raveling the array (using the given
+    index order), then inserting the elements from the raveled array into the
+    new array using the same kind of index ordering as was used for the
+    raveling.
+
+    >>> np.reshape(a, (2, 3)) # C-like index ordering
+    array([[0., 1., 2.],
+           [3., 4., 5.]])
+
+    - order only support C-order
+    - input not support scalar
+    - not support zero-size shape
+
+    Examples
     --------
-    ndarray.reshape : Equivalent method.
+    >>> a = np.array([[1,2,3], [4,5,6]])
+    >>> np.reshape(a, 6)
+    array([1., 2., 3., 4., 5., 6.])
+
+    >>> np.reshape(a, (3,-1))   # the unspecified value is inferred to be 2
 
 Review comment:
   resolve




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
gyshi commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303191
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -21,7 +21,8 @@
 
 
 def _np_reshape(a, newshape, order='C'):
 
 Review comment:
   resolve




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
zoeygxy commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299303052
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -173,3 +212,73 @@ def _np_cumsum(a, axis=None, dtype=None, out=None):
     `axis` is not None or `a` is a 1-d array.
     """
     pass
+
+
+def _np_max(axis=None, keepdims=False, initial=None, out=None):
+    """
+    Return the maximum of an array or maximum along an axis.
+
+    Parameters
+    ----------
+    a : ndarray
+        Input data.
+    axis : None or int or tuple of ints, optional
+        Axis or axes along which to operate.  By default, flattened input is
+        used.
+
+        If this is a tuple of ints, the maximum is selected over multiple axes,
+        instead of a single axis or all the axes as before.
+
+    keepdims : bool, optional
+        If this is set to True, the axes which are reduced are left
+        in the result as dimensions with size one. With this option,
+        the result will broadcast correctly against the input array.
+
+        If the default value is passed, then `keepdims` will not be
+        passed through to the `amax` method of sub-classes of
+        `ndarray`, however any non-default value will be.  If the
+        sub-class' method does not implement `keepdims` any
+        exceptions will be raised.
+
+    initial :
+        Parameter initial is not supported yet, we will support it in the future.
+        now it must be None.
 
 Review comment:
   @gyshi Please capitalize the first word in a sentence
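
   For context, the `keepdims` semantics quoted above mirror NumPy's; a quick plain-NumPy illustration (not part of the PR):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
print(np.max(a, axis=1))                 # [2 5]: the reduced axis is dropped
print(np.max(a, axis=1, keepdims=True))  # shape (2, 1), so it broadcasts against a
```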




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
zoeygxy commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299302943
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -32,24 +33,70 @@ def _np_reshape(a, newshape, order='C'):
     an integer, then the result will be a 1-D array of that length.
     One shape dimension can be -1. In this case, the value is
     inferred from the length of the array and remaining dimensions.
-    order : {'C'}, optional
+    order : {'C', 'F', 'A'}, optional
         Read the elements of `a` using this index order, and place the
         elements into the reshaped array using this index order.  'C'
         means to read / write the elements using C-like index order,
         with the last axis index changing fastest, back to the first
-        axis index changing slowest. Other order types such as 'F'/'A'
-        may be added in the future.
+        axis index changing slowest. 'F' means to read / write the
+        elements using Fortran-like index order, with the first index
+        changing fastest, and the last index changing slowest. Note that
+        the 'C' and 'F' options take no account of the memory layout of
+        the underlying array, and only refer to the order of indexing.
+        'A' means to read / write the elements in Fortran-like index
+        order if `a` is Fortran *contiguous* in memory, C-like order
+        otherwise.
 
     Returns
     -------
     reshaped_array : ndarray
-        It will be always a copy of the original array. This behavior is different
-        from the official NumPy package where views of the original array may be
-        generated.
+        This will be a new view object if possible; otherwise, it will
+        be a copy.  Note there is no guarantee of the *memory layout* (C- or
+        Fortran- contiguous) of the returned array.
 
-    See Also
+
+    Notes
+    -----
+    It is not always possible to change the shape of an array without
+    copying the data. If you want an error to be raised when the data is copied,
+    you should assign the new shape to the shape attribute of the array::
+
+        >>> a = np.zeros((10, 2))
+        # A transpose makes the array non-contiguous
+        >>> b = a.T
+        # Taking a view makes it possible to modify the shape without modifying
+        # the initial object.
+
+    >>> a = np.arange(6).reshape((3, 2))
+    >>> a
+    array([[0., 1.],
+           [2., 3.],
+           [4., 5.]])
+
+    You can think of reshaping as first raveling the array (using the given
+    index order), then inserting the elements from the raveled array into the
+    new array using the same kind of index ordering as was used for the
+    raveling.
+
+    >>> np.reshape(a, (2, 3)) # C-like index ordering
+    array([[0., 1., 2.],
+           [3., 4., 5.]])
+
+    - order only support C-order
+    - input not support scalar
+    - not support zero-size shape
+
+    Examples
     --------
-    ndarray.reshape : Equivalent method.
+    >>> a = np.array([[1,2,3], [4,5,6]])
+    >>> np.reshape(a, 6)
+    array([1., 2., 3., 4., 5., 6.])
+
+    >>> np.reshape(a, (3,-1))   # the unspecified value is inferred to be 2
 
 Review comment:
   @gyshi 




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299302800
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -708,7 +739,26 @@ The storage type of ``abs`` output depends upon the input storage type:
 )code" ADD_FILELINE)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_abs"});
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_abs, unary_bwd<cpu, mshadow_op::sign>);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_abs, unary_bwd<cpu, mshadow_op::sign>)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+    // ograds[0]: dL/dxgrad
+    // inputs[0]: dL/dy
+    // inputs[1]: y
 
 Review comment:
   Shouldn't this term be x? _backward_abs is using ElemwiseGradUseIn




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15390: [Numpy fix-doc]modify numpy doc

2019-07-01 Thread GitBox
zoeygxy commented on a change in pull request #15390: [Numpy  fix-doc]modify 
numpy doc
URL: https://github.com/apache/incubator-mxnet/pull/15390#discussion_r299302785
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -21,7 +21,8 @@
 
 
 def _np_reshape(a, newshape, order='C'):
 
 Review comment:
   @gyshi 




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299218314
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -689,7 +689,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr<FCompute>("FCompute",
-  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad>>);
+  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad>>)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+    // ograds[0]: dL/dxgrad
+    // inputs[0]: dL/dy
+    // inputs[1]: x
+    // f(x) = y = 1/x
+    // f'(x) = -1/x^2
+    // f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+    const std::unordered_map<std::string, std::string> args = {{"scalar", "-2.0"}};
+
+    auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+    auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
+                         {dydx_mul_dldy, n->inputs[0]}, nullptr, &n);
+    auto fx = MakeNode("reciprocal", n->attrs.name + "_fx",
+                       {n->inputs[1]}, nullptr, &n);
+
+    auto d2ydx2_mid = MakeNode("elemwise_mul", n->attrs.name + "_d2ydx2_mid",
+                               {dydx_mul_dldy, nnvm::NodeEntry{fx}}, nullptr, &n);
+
+    auto d2ydx2 = MakeNode("_mul_scalar", n->attrs.name + "_d2ydx2",
+                           {nnvm::NodeEntry{d2ydx2_mid}}, &args, &n);
+
+    std::vector<nnvm::NodeEntry> ret;
+
+    ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
+                              {ograds[0], nnvm::NodeEntry{dydx}}, nullptr, &n));
+    ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad_inp",
 
 Review comment:
   This term is actually `head_grad_grads` * `head_grads` * `f''(x)` = `head_grad_grads` * `head_grads` * -2 * `f'(x)` * `f(x)`, right? Then we don't need the explicit dydx node?
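
   For intuition, the identity in question, f''(x) = -2 * f'(x) * f(x) for f(x) = 1/x, is easy to verify numerically; a standalone plain-Python check (not MXNet code):

```python
# Verify f''(x) == -2 * f'(x) * f(x) for f(x) = 1/x at a sample point.
x = 0.7
f = 1.0 / x          # f(x)   = 1/x
fp = -1.0 / x ** 2   # f'(x)  = -1/x^2
fpp = 2.0 / x ** 3   # f''(x) = 2/x^3
assert abs(fpp - (-2.0 * fp * f)) < 1e-12
```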




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299302321
 
 

 ##
 File path: tests/python/unittest/test_higher_order_grad.py
 ##
 @@ -106,6 +106,35 @@ def grad_grad_op(x):
     check_second_order_unary(array, log10, grad_grad_op)
 
 
+@with_seed()
+def test_reciprocal():
+    def reciprocal(x):
+        return nd.reciprocal(x)
+
+    def grad_grad_op(x):
+        return 2/x**3
+
+    for dim in range(1, 5):
+        shape = rand_shape_nd(dim)
+        array = random_arrays(shape)
+        check_second_order_unary(array, reciprocal, grad_grad_op)
+
+
+@with_seed()
+def test_abs():
+    def abs(x):
+        return nd.abs(x)
+
+    def grad_grad_op(x):
+        return nd.zeros_like(x)
+
+    for dim in range(1, 5):
+        shape = rand_shape_nd(dim)
+        array = random_arrays(shape)
+        check_second_order_unary(array, abs, grad_grad_op)
+
 
 Review comment:
   nit: please remove extra line
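
   As background for `grad_grad_op` returning `nd.zeros_like(x)` in `test_abs` above: for f(x) = |x|, the first derivative is sign(x) away from zero, so the second derivative is zero almost everywhere. A quick plain-NumPy sanity check (standalone, not part of the test suite):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.5, 2.0])  # avoid 0, where |x| is not differentiable
eps = 1e-6
# Central finite difference of |x| approximates the first derivative, sign(x).
first = (np.abs(x + eps) - np.abs(x - eps)) / (2 * eps)
assert np.allclose(first, np.sign(x))
```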




[GitHub] [incubator-mxnet] Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support:  "Unknown 
exception" in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507500789
 
 
   This will be hard, but... `traceback()` 
   
   >  9: stop(list(message = "Unknown exception", call = 
mx.nd.internal.as.array(nd), 
   >cppstack = list(file = "", line = -1L, stack = "C++ stack not 
available on this system")))
   > 8: .External(list(name = "InternalFunction_invoke", address = , 
   >dll = list(name = "Rcpp", path = 
".../R/win-library/3.6/Rcpp/libs/x64/Rcpp.dll", 
   >dynamicLookup = TRUE, handle = , 
   >info = ), numParameters = -1L), 
   >, ...)
   > 7: mx.nd.internal.as.array(nd)
   > 6: as.array.MXNDArray(res)
   > 5: as.array(res)
   > 4: feval(label, pred)
   > 3: metric$update(label = labels[[i]], pred = preds[[i]], state = 
train.metric)
   > 2: mx.model.train(symbol, ctx, input.shape, output.shape, 
params$arg.params, 
   >params$aux.params, begin.round, num.round, optimizer = optimizer, 
   >train.data = X, eval.data = eval.data, metric = eval.metric, 
   >epoch.end.callback = epoch.end.callback, batch.end.callback = 
batch.end.callback, 
   >kvstore = kvstore, fixed.param = fixed.param, verbose = verbose, 
   >metric_cpu = metric_cpu)
   > 1: mx.model.FeedForward.create(softmax, initializer = 
mx.init.Xavier(factor_type = "in", 
   >magnitude = 2), X = dane, ctx = devices, num.round = 300, 
   >begin.round = epoch + 1, eval.data = NULL, optimizer = 
mx.opt.create("sgd", 
   >learning.rate = 0.005, momentum = 0.9, wd = 0, lr_scheduler = 
fs), 
   >eval.metric = mx.metric.accuracy, batch.end.callback = 
mx.callback.log.speedometer(...)), epoch.end.callback = 
mx.callback.save.checkpoint(paste0("...", 
   >...), 1))
   
   Surprisingly, examples from the [tutorial](https://mxnet.incubator.apache.org/versions/master/tutorials/r/fiveMinutesNeuralNetwork.html) work fine.
   The problem occurs when I use `eval.metric` with the `mx.metric.accuracy` arg. If it's changed to `mx.metric.logloss`:
   
   > Start training with 1 devices
   Error in mx.nd.internal.dispatch.Ops(.Generic, e1, e2) : 
 [06:03:51] 
c:\build_mxnet\with_mkldnn\incubator-mxnet\src\operator\tensor\../elemwise_op_common.h:135:
 Check failed: assign(, vec.at(i)): Incompatible attr in node  at 1-th 
input: expected [8], got [16]
   Calls: mx.model.FeedForward.create ... Ops.MXNDArray -> 
mx.nd.internal.dispatch.Ops -> .External
   Execution halted
   
   If I set it to NULL, the network learns, but saving results is impossible:
   
   > Start training with 1 devices
   Error in mx.nd.internal.save(ndarray, filename) : Unknown exception
   
   The only thing that may differ in my network:
   ```
   InstanceBatchNorm <- function(data, name, eps = 2e-5) {
     data_split <- mx.symbol.split(data = data, num_outputs = 2, axis = 1,
                                   name = paste0(name, "_split"))
     data_split_in1 <- mx.symbol.InstanceNorm(data = data_split[[1]], eps = eps,
                                              name = paste0(name, "_split_in1"))
     data_split_bn2 <- mx.symbol.BatchNorm(data = data_split[[2]], eps = eps,
                                           fix.gamma = FALSE, name = paste0(name, "_split_bn2"))
     con <- mx.symbol.concat(list(data_split_in1, data_split_bn2),
                             num.args = 2, dim = 1, name = paste0(name, "_ibn"))
     return(con)
   }
   ```
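
   For readers following along in Python, a rough Gluon analogue of the block above (an illustrative sketch based on my reading of the R symbol, not code from this issue):

   ```
   from mxnet import nd
   from mxnet.gluon import nn

   class InstanceBatchNorm(nn.HybridBlock):
       # split channels in half: InstanceNorm on one half, BatchNorm on the other, then concat
       def __init__(self, eps=2e-5, **kwargs):
           super(InstanceBatchNorm, self).__init__(**kwargs)
           with self.name_scope():
               self.inorm = nn.InstanceNorm(epsilon=eps)
               self.bnorm = nn.BatchNorm(epsilon=eps)

       def hybrid_forward(self, F, x):
           parts = F.split(x, num_outputs=2, axis=1)
           return F.concat(self.inorm(parts[0]), self.bnorm(parts[1]), dim=1)

   ibn = InstanceBatchNorm()
   ibn.initialize()
   y = ibn(nd.ones((2, 8, 4, 4)))  # 8 channels split 4/4
   ```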
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15288: [MXNET-978] Higher order gradient for sigmoid

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15288: [MXNET-978] Higher 
order gradient for sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288#discussion_r299301226
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -121,7 +121,30 @@ The storage type of ``sigmoid`` output is always dense
 .set_attr("FGradient", 
ElemwiseGradUseOut{"_backward_sigmoid"});
 
 MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_sigmoid,
-   
unary_bwd);
+   
unary_bwd)
+.set_attr("FGradient",
+[](const nnvm::NodePtr& n, const std::vector& ograds) {
+  // n->inputs[0] : y_grad
+  // n->inputs[1] : f(x) = sigmoid(x)
+  // ograds[0] : head_grads
+  // f''(x) = f'(x) * (1 - 2*f(x))
+  auto ones = MakeNode("ones_like", n->attrs.name + "_grad_ones", 
{n->inputs[1]}, nullptr, );
+  const std::unordered_map args = {{"scalar", 
"2.0"}};
+  auto two_y = MakeNode("_mul_scalar", n->attrs.name + "_mul_two", 
{n->inputs[1]}, , );
+  auto one_minus_two_y = MakeNode("elemwise_sub", n->attrs.name + 
"_grad_sub",
+{nnvm::NodeEntry{ones}, 
nnvm::NodeEntry{two_y}}, nullptr, );
+  auto grad_grad_mid = MakeNode("elemwise_mul", n->attrs.name + 
"_grad_mul",
+{n->inputs[0], 
nnvm::NodeEntry{one_minus_two_y}}, nullptr, );
+  // when building gradient graph, the backward node of n->inputs[1] will 
be
+  // added to the graph again, therefore f`(x) will be multiplied
 
 Review comment:
   @kshitij12345 I have fixed the issue. The result can pass your test now. 
Please review again. Thanks!
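
   As a side note, the identity in the code comment, f''(x) = f'(x) * (1 - 2*f(x)), can be confirmed symbolically. A minimal sketch (not part of the PR):

   ```
   import sympy as sp

   x = sp.symbols('x')
   f = 1 / (1 + sp.exp(-x))                  # sigmoid
   f1, f2 = sp.diff(f, x), sp.diff(f, x, 2)
   assert sp.simplify(f2 - f1 * (1 - 2 * f)) == 0
   ```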


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on issue #15288: [MXNET-978] Higher order gradient for sigmoid

2019-07-01 Thread GitBox
apeforest commented on issue #15288: [MXNET-978] Higher order gradient for 
sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288#issuecomment-507516653
 
 
   I verified that the result is the same as PyTorch's:
   
   ```
   import torch
   import numpy as np
   import math

   op = lambda x: torch.sigmoid(x)
   grad_op = lambda x: op(x) * (1 - op(x))
   grad_grad_op = lambda x: grad_op(x) * (1 - 2 * op(x))
   grad_grad_grad_op = lambda x: grad_grad_op(x) - 2 * (grad_op(x)**2 + grad_grad_op(x) * op(x))

   x = torch.tensor(np.array([1, 2, 3]), dtype=torch.float32)
   head_grads = torch.tensor(np.array([1, 1, 1]), dtype=torch.float32) * 0.5
   head_grad_grads = torch.tensor(np.array([1, 1, 1]), dtype=torch.float32) * 0.6
   head_grad_grad_grads = torch.tensor(np.array([1, 1, 1]), dtype=torch.float32) * 0.7
   x.requires_grad = True
   head_grads.requires_grad = True

   y = op(x)
   x_grad = torch.autograd.grad(y, x, grad_outputs=head_grads, create_graph=True, retain_graph=True)[0]
   expected_grad_x = (grad_op(x) * head_grads).detach().numpy()
   print('expected_grad_x = {}'.format(expected_grad_x))
   print('grad_x  = {}'.format(x_grad.detach().numpy()))
   x_grad_grad = torch.autograd.grad(x_grad, x, grad_outputs=head_grad_grads, create_graph=True, retain_graph=True)[0]
   x_grad_grad.backward(head_grad_grad_grads)

   expected_grad_grad_x = (grad_grad_op(x) * head_grads * head_grad_grads).detach().numpy()
   expected_head_grad = (grad_op(x) * head_grad_grads).detach().numpy()
   expected_grad_grad_grad_x = (grad_grad_grad_op(x) * head_grads * head_grad_grads * head_grad_grad_grads).detach().numpy()

   print('expected_grad_grad_x = {}'.format(expected_grad_grad_x))
   print('grad_grad_x  = {}'.format(x_grad_grad.detach().numpy()))
   print('expected_grad_grad_grad_x = {}'.format(expected_grad_grad_grad_x))
   print('grad_grad_grad_x  = {}'.format(x.grad.detach().numpy()))
   ```
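
   The analogous second-order check can also be written on the MXNet side with `mx.autograd`. A minimal sketch, assuming a build that includes this PR:

   ```
   import mxnet as mx
   from mxnet import autograd, nd

   x = nd.array([1.0, 2.0, 3.0])
   x.attach_grad()
   with autograd.record():
       y = nd.sigmoid(x)
       # keep the first-order graph differentiable so we can backprop through it
       x_grad = autograd.grad(y, x, create_graph=True, retain_graph=True)[0]
   x_grad.backward()

   s = nd.sigmoid(x)
   expected = s * (1 - s) * (1 - 2 * s)   # f''(x) = f'(x) * (1 - 2*f(x))
   print(x.grad)    # second-order gradient computed by autograd
   print(expected)  # analytic value, should match
   ```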


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hgt312 commented on a change in pull request #15387: Add docs for 7 ops

2019-07-01 Thread GitBox
hgt312 commented on a change in pull request #15387: Add docs for 7 ops
URL: https://github.com/apache/incubator-mxnet/pull/15387#discussion_r299300689
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -882,3 +902,123 @@ def sqrt(x, out=None, **kwargs):
     This function only supports input type of float.
     """
     return _unary_func_helper(x, _npi.sqrt, _np.sqrt, out=out, **kwargs)
+
+@set_module('mxnet.ndarray.numpy')
+def arctanh(x, out=None, where=True, **kwargs):
+    r"""
+    arctanh(x, out=None, where=True)
+
+    Inverse hyperbolic tangent element-wise.
+
+    Parameters
+    ----------
+    x : ndarray
+        Input array.
+    out : ndarray, None
+        A location into which the result is stored. If provided,
+        it must have a shape that the inputs broadcast to.
+        If not provided or None, a freshly-allocated array is returned.
+        A tuple (possible only as a keyword argument)
+        must have length equal to the number of outputs.
+    where : ndarray, optional
+        Values of True indicate to calculate the ufunc at that position,
+        values of False indicate to leave the value in the output alone.
+
+    Returns
+    -------
+    out : ndarray or scalar
+        ndarray of the same shape as x. This is a scalar if x is a scalar.
+
+    Examples
+    --------
+    >>> np.arctanh(0.7)
+    0.8673005276940531
+    """
+    return _unary_func_helper(x, _npi.arctanh, _np.arctanh, out=out, **kwargs)
+
+@set_module('mxnet.ndarray.numpy')
+def tan(x, out=None, where=True, **kwargs):
+    r"""
+    tan(x, out=None, where=True, **kwargs)
+
+    Compute tangent element-wise.
+    Equivalent to np.sin(x)/np.cos(x) element-wise.
+
+    Parameters
+    ----------
+    x : array_like
+        Input array.
+    out : ndarray, None, or tuple of ndarray and None, optional
+        A location into which the result is stored. If provided,
+        it must have a shape that the inputs broadcast to. If not provided or None,
+        a freshly-allocated array is returned. A tuple (possible only as a keyword argument)
+        must have length equal to the number of outputs.
+    where : ndarray, optional
+        Values of True indicate to calculate the ufunc at that position,
+        values of False indicate to leave the value in the output alone.
+
+    Returns
+    -------
+    y : ndarray
+        The corresponding tangent values. This is a scalar if x is a scalar.
+
+    Examples
+    --------
+    >>> np.tan(0.5)
+    0.5463024898437905
+    """
+    return _unary_func_helper(x, _npi.tan, _np.tan, out=out, **kwargs)
+
+@set_module('mxnet.ndarray.numpy')
+def fix(x, out=None):
+    r"""
+    Round an array of floats element-wise to nearest integer towards zero.
+    The rounded values are returned as floats.
+
 
 Review comment:
   remove whitespaces


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support:  "Unknown 
exception" in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507500789
 
 
   This will be hard, but... `traceback()` 
   
   >  9: stop(list(message = "Unknown exception", call = 
mx.nd.internal.as.array(nd), 
   >cppstack = list(file = "", line = -1L, stack = "C++ stack not 
available on this system")))
   > 8: .External(list(name = "InternalFunction_invoke", address = , 
   >dll = list(name = "Rcpp", path = 
".../R/win-library/3.6/Rcpp/libs/x64/Rcpp.dll", 
   >dynamicLookup = TRUE, handle = , 
   >info = ), numParameters = -1L), 
   >, ...)
   > 7: mx.nd.internal.as.array(nd)
   > 6: as.array.MXNDArray(res)
   > 5: as.array(res)
   > 4: feval(label, pred)
   > 3: metric$update(label = labels[[i]], pred = preds[[i]], state = 
train.metric)
   > 2: mx.model.train(symbol, ctx, input.shape, output.shape, 
params$arg.params, 
   >params$aux.params, begin.round, num.round, optimizer = optimizer, 
   >train.data = X, eval.data = eval.data, metric = eval.metric, 
   >epoch.end.callback = epoch.end.callback, batch.end.callback = 
batch.end.callback, 
   >kvstore = kvstore, fixed.param = fixed.param, verbose = verbose, 
   >metric_cpu = metric_cpu)
   > 1: mx.model.FeedForward.create(softmax, initializer = 
mx.init.Xavier(factor_type = "in", 
   >magnitude = 2), X = dane, ctx = devices, num.round = 300, 
   >begin.round = epoch + 1, eval.data = NULL, optimizer = 
mx.opt.create("sgd", 
   >learning.rate = 0.005, momentum = 0.9, wd = 0, lr_scheduler = 
fs), 
   >eval.metric = mx.metric.accuracy, batch.end.callback = 
mx.callback.log.speedometer(...)), epoch.end.callback = 
mx.callback.save.checkpoint(paste0("...", 
   >...), 1))
   
   Surprisingly, examples from the [tutorial](https://mxnet.incubator.apache.org/versions/master/tutorials/r/fiveMinutesNeuralNetwork.html) work fine.
   The problem occurs when I use `eval.metric` with the `mx.metric.accuracy` arg. If it's changed to `mx.metric.logloss`:
   
   > Start training with 1 devices
   Error in mx.nd.internal.dispatch.Ops(.Generic, e1, e2) : 
 [06:03:51] 
c:\build_mxnet\with_mkldnn\incubator-mxnet\src\operator\tensor\../elemwise_op_common.h:135:
 Check failed: assign(, vec.at(i)): Incompatible attr in node  at 1-th 
input: expected [8], got [16]
   Calls: mx.model.FeedForward.create ... Ops.MXNDArray -> 
mx.nd.internal.dispatch.Ops -> .External
   Execution halted
   
   If I set it to NULL, the network learns, but saving results is impossible:
   
   > Start training with 1 devices
   Error in mx.nd.internal.save(ndarray, filename) : Unknown exception
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] thomelane commented on issue #15427: [TUTORIAL] Gluon performance tips and tricks

2019-07-01 Thread GitBox
thomelane commented on issue #15427: [TUTORIAL] Gluon performance tips and 
tricks
URL: https://github.com/apache/incubator-mxnet/pull/15427#issuecomment-507514091
 
 
   Would be great to get CPU-specific tricks and tips, @pengzhao-intel. Given the length of this tutorial already, let's add those in another tutorial as a follow-up. @xinyu-intel are you able to write about some performance tricks for CPU and follow up in another PR? When we have that, we can think about splitting this up into generic tips, GPU-specific tricks, and CPU-specific tricks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support:  "Unknown 
exception" in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507500789
 
 
   This will be hard, but... `traceback()` 
   
   >  9: stop(list(message = "Unknown exception", call = 
mx.nd.internal.as.array(nd), 
   >cppstack = list(file = "", line = -1L, stack = "C++ stack not 
available on this system")))
   > 8: .External(list(name = "InternalFunction_invoke", address = , 
   >dll = list(name = "Rcpp", path = 
".../R/win-library/3.6/Rcpp/libs/x64/Rcpp.dll", 
   >dynamicLookup = TRUE, handle = , 
   >info = ), numParameters = -1L), 
   >, ...)
   > 7: mx.nd.internal.as.array(nd)
   > 6: as.array.MXNDArray(res)
   > 5: as.array(res)
   > 4: feval(label, pred)
   > 3: metric$update(label = labels[[i]], pred = preds[[i]], state = 
train.metric)
   > 2: mx.model.train(symbol, ctx, input.shape, output.shape, 
params$arg.params, 
   >params$aux.params, begin.round, num.round, optimizer = optimizer, 
   >train.data = X, eval.data = eval.data, metric = eval.metric, 
   >epoch.end.callback = epoch.end.callback, batch.end.callback = 
batch.end.callback, 
   >kvstore = kvstore, fixed.param = fixed.param, verbose = verbose, 
   >metric_cpu = metric_cpu)
   > 1: mx.model.FeedForward.create(softmax, initializer = 
mx.init.Xavier(factor_type = "in", 
   >magnitude = 2), X = dane, ctx = devices, num.round = 300, 
   >begin.round = epoch + 1, eval.data = NULL, optimizer = 
mx.opt.create("sgd", 
   >learning.rate = 0.005, momentum = 0.9, wd = 0, lr_scheduler = 
fs), 
   >eval.metric = mx.metric.accuracy, batch.end.callback = 
mx.callback.log.speedometer(...)), epoch.end.callback = 
mx.callback.save.checkpoint(paste0("...", 
   >...), 1))
   
   Surprisingly, examples from the [tutorial](https://mxnet.incubator.apache.org/versions/master/tutorials/r/fiveMinutesNeuralNetwork.html) work fine.
   The problem occurs when I use `eval.metric` with the `mx.metric.accuracy` arg. If it's changed to `mx.metric.logloss`:
   
   > Start training with 1 devices
   Error in mx.nd.internal.dispatch.Ops(.Generic, e1, e2) : 
 [06:03:51] 
c:\build_mxnet\with_mkldnn\incubator-mxnet\src\operator\tensor\../elemwise_op_common.h:135:
 Check failed: assign(, vec.at(i)): Incompatible attr in node  at 1-th 
input: expected [8], got [16]
   
   If I set it to NULL, the network learns, but saving results is impossible:
   
   > Start training with 1 devices
   Error in mx.nd.internal.save(ndarray, filename) : Unknown exception
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15427: [TUTORIAL] Gluon performance tips and tricks

2019-07-01 Thread GitBox
pengzhao-intel commented on issue #15427: [TUTORIAL] Gluon performance tips and 
tricks
URL: https://github.com/apache/incubator-mxnet/pull/15427#issuecomment-507511281
 
 
   It's great to have these documents. I think we still need to cover CPU 
performance.
   @xinyu-intel will help to provide the additional information about CPU


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299297193
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -689,7 +689,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr("FCompute",
-  ElemwiseBinaryOp::Compute >);
+  ElemwiseBinaryOp::Compute >)
+.set_attr("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector& ograds) {
+// ograds[0]: dL/dxgrad
+// inputs[0]: dL/dy
+// inputs[1]: x
+// f(x) = y = 1/x
+// f'(x) = -1/x^2
+// f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+const std::unordered_map args = {{"scalar", 
"-2.0"}};
+
+auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
 
 Review comment:
   Now I see that you need this node for the first output "_backward_grad_grad"


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] pengzhao-intel edited a comment on issue #15427: [TUTORIAL] Gluon performance tips and tricks

2019-07-01 Thread GitBox
pengzhao-intel edited a comment on issue #15427: [TUTORIAL] Gluon performance 
tips and tricks
URL: https://github.com/apache/incubator-mxnet/pull/15427#issuecomment-507511281
 
 
   It's great to have these documents. I think we still need to cover CPU 
performance.
   @xinyu-intel will help to provide additional information about CPU


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15381: [Numpy] Add Documentations

2019-07-01 Thread GitBox
ckt624 commented on a change in pull request #15381: [Numpy] Add Documentations
URL: https://github.com/apache/incubator-mxnet/pull/15381#discussion_r299296105
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -31,13 +31,95 @@
 
 __all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'concatenate', 
'arange', 'argmax',
'clip', 'add', 'subtract', 'multiply', 'divide', 'mod', 'power', 
'split', 'swapaxes',
-   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt']
+   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt', 
+   'absolute', 'cbrt', 'arccos']
 
+@set_module('mxnet.symbol.numpy')
 
 Review comment:
   Thx. It also works.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support:  "Unknown 
exception" in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507500789
 
 
   This will be hard, but... `traceback()` 
   
   >  9: stop(list(message = "Unknown exception", call = 
mx.nd.internal.as.array(nd), 
   >cppstack = list(file = "", line = -1L, stack = "C++ stack not 
available on this system")))
   > 8: .External(list(name = "InternalFunction_invoke", address = , 
   >dll = list(name = "Rcpp", path = 
".../R/win-library/3.6/Rcpp/libs/x64/Rcpp.dll", 
   >dynamicLookup = TRUE, handle = , 
   >info = ), numParameters = -1L), 
   >, ...)
   > 7: mx.nd.internal.as.array(nd)
   > 6: as.array.MXNDArray(res)
   > 5: as.array(res)
   > 4: feval(label, pred)
   > 3: metric$update(label = labels[[i]], pred = preds[[i]], state = 
train.metric)
   > 2: mx.model.train(symbol, ctx, input.shape, output.shape, 
params$arg.params, 
   >params$aux.params, begin.round, num.round, optimizer = optimizer, 
   >train.data = X, eval.data = eval.data, metric = eval.metric, 
   >epoch.end.callback = epoch.end.callback, batch.end.callback = 
batch.end.callback, 
   >kvstore = kvstore, fixed.param = fixed.param, verbose = verbose, 
   >metric_cpu = metric_cpu)
   > 1: mx.model.FeedForward.create(softmax, initializer = 
mx.init.Xavier(factor_type = "in", 
   >magnitude = 2), X = dane, ctx = devices, num.round = 300, 
   >begin.round = epoch + 1, eval.data = NULL, optimizer = 
mx.opt.create("sgd", 
   >learning.rate = 0.005, momentum = 0.9, wd = 0, lr_scheduler = 
fs), 
   >eval.metric = mx.metric.accuracy, batch.end.callback = 
mx.callback.log.speedometer(...)), epoch.end.callback = 
mx.callback.save.checkpoint(paste0("...", 
   >...), 1))
   
   Surprisingly, examples from the [tutorial](https://mxnet.incubator.apache.org/versions/master/tutorials/r/fiveMinutesNeuralNetwork.html) work fine.
   The problem occurs when I use `eval.metric` with the `mx.metric.accuracy` arg. If it's changed to `mx.metric.logloss`:
   
   > Start training with 1 devices
   Error in mx.nd.internal.dispatch.Ops(.Generic, e1, e2) : 
 [06:03:51] 
c:\build_mxnet\with_mkldnn\incubator-mxnet\src\operator\tensor\../elemwise_op_common.h:135:
 Check failed: assign(, vec.at(i)): Incompatible attr in node  at 1-th 
input: expected [8], got [16]
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15381: [Numpy] Add Documentations

2019-07-01 Thread GitBox
ckt624 commented on a change in pull request #15381: [Numpy] Add Documentations
URL: https://github.com/apache/incubator-mxnet/pull/15381#discussion_r299295648
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -31,13 +31,95 @@
 
 __all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'concatenate', 
'arange', 'argmax',
'clip', 'add', 'subtract', 'multiply', 'divide', 'mod', 'power', 
'split', 'swapaxes',
-   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt']
+   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt', 
+   'absolute', 'cbrt', 'arccos']
 
+@set_module('mxnet.symbol.numpy')
+def absolute(x, out=None, **kwargs):
+r"""
+Calculate the absolute value element-wise.
+np.abs is a shorthand for this function.
+
+Parameters
+--
+x : _Symbol
+Input array.
 
 Review comment:
   Changed. Thx.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15381: [Numpy] Add Documentations

2019-07-01 Thread GitBox
ckt624 commented on a change in pull request #15381: [Numpy] Add Documentations
URL: https://github.com/apache/incubator-mxnet/pull/15381#discussion_r299295628
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -31,13 +31,95 @@
 
 __all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'concatenate', 
'arange', 'argmax',
'clip', 'add', 'subtract', 'multiply', 'divide', 'mod', 'power', 
'split', 'swapaxes',
-   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt']
+   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt', 
+   'absolute', 'cbrt', 'arccos']
 
+@set_module('mxnet.symbol.numpy')
+def absolute(x, out=None, **kwargs):
+r"""
+Calculate the absolute value element-wise.
+np.abs is a shorthand for this function.
+
+Parameters
+--
+x : _Symbol
+Input array.
+
+out : _Symbol or None
+Dummy parameter to keep the consistency with the ndarray counterpart.
 
 Review comment:
   Changed. Thx.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hgt312 commented on a change in pull request #15381: [Numpy] Add Documentations

2019-07-01 Thread GitBox
hgt312 commented on a change in pull request #15381: [Numpy] Add Documentations
URL: https://github.com/apache/incubator-mxnet/pull/15381#discussion_r299294116
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -31,13 +31,95 @@
 
 __all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'concatenate', 
'arange', 'argmax',
'clip', 'add', 'subtract', 'multiply', 'divide', 'mod', 'power', 
'split', 'swapaxes',
-   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt']
+   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt', 
+   'absolute', 'cbrt', 'arccos']
 
+@set_module('mxnet.symbol.numpy')
+def absolute(x, out=None, **kwargs):
+r"""
+Calculate the absolute value element-wise.
+np.abs is a shorthand for this function.
+
+Parameters
+--
+x : _Symbol
+Input array.
+
+out : _Symbol or None
+Dummy parameter to keep the consistency with the ndarray counterpart.
 
 Review comment:
   indent here
   ```
   out : _Symbol or None
       Dummy parameter to keep the consistency with the ndarray counterpart.
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hgt312 commented on a change in pull request #15381: [Numpy] Add Documentations

2019-07-01 Thread GitBox
hgt312 commented on a change in pull request #15381: [Numpy] Add Documentations
URL: https://github.com/apache/incubator-mxnet/pull/15381#discussion_r299294268
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -31,13 +31,95 @@
 
 __all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'concatenate', 
'arange', 'argmax',
'clip', 'add', 'subtract', 'multiply', 'divide', 'mod', 'power', 
'split', 'swapaxes',
-   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt']
+   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt', 
+   'absolute', 'cbrt', 'arccos']
 
+@set_module('mxnet.symbol.numpy')
 
 Review comment:
   These unary ops may need to be moved to the bottom of this file


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hgt312 commented on a change in pull request #15381: [Numpy] Add Documentations

2019-07-01 Thread GitBox
hgt312 commented on a change in pull request #15381: [Numpy] Add Documentations
URL: https://github.com/apache/incubator-mxnet/pull/15381#discussion_r299294051
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -31,13 +31,95 @@
 
 __all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'concatenate', 
'arange', 'argmax',
'clip', 'add', 'subtract', 'multiply', 'divide', 'mod', 'power', 
'split', 'swapaxes',
-   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt']
+   'expand_dims', 'tile', 'linspace', 'sin', 'cos', 'sinh', 'cosh', 
'log10', 'sqrt', 
+   'absolute', 'cbrt', 'arccos']
 
+@set_module('mxnet.symbol.numpy')
+def absolute(x, out=None, **kwargs):
+r"""
+Calculate the absolute value element-wise.
+np.abs is a shorthand for this function.
+
+Parameters
+--
+x : _Symbol
+Input array.
 
 Review comment:
   indent here
   ```
   x : _Symbol
       Input array.
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 edited a comment on issue #15369: Fix build with system's openmp

2019-07-01 Thread GitBox
cjolivier01 edited a comment on issue #15369: Fix build with system's openmp
URL: https://github.com/apache/incubator-mxnet/pull/15369#issuecomment-507501848
 
 
   Using cmake is another subject altogether.  I think you're missing the point.
   How is this PR not basically what was already blocked?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 edited a comment on issue #15369: Fix build with system's openmp

2019-07-01 Thread GitBox
cjolivier01 edited a comment on issue #15369: Fix build with system's openmp
URL: https://github.com/apache/incubator-mxnet/pull/15369#issuecomment-507501977
 
 
   This is basically a duplicate of turning off llvm openmp.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 closed pull request #15369: Fix build with system's openmp

2019-07-01 Thread GitBox
cjolivier01 closed pull request #15369: Fix build with system's openmp
URL: https://github.com/apache/incubator-mxnet/pull/15369
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 commented on issue #15369: Fix build with system's openmp

2019-07-01 Thread GitBox
cjolivier01 commented on issue #15369: Fix build with system's openmp
URL: https://github.com/apache/incubator-mxnet/pull/15369#issuecomment-507501977
 
 
   This is basically a duplicate of turning on llvm openmp.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 commented on issue #15369: Fix build with system's openmp

2019-07-01 Thread GitBox
cjolivier01 commented on issue #15369: Fix build with system's openmp
URL: https://github.com/apache/incubator-mxnet/pull/15369#issuecomment-507501848
 
 
   Using cmake is another subject altogether.
   How is this PR not basically what was already blocked?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hubutui commented on issue #15369: Fix build with system's openmp

2019-07-01 Thread GitBox
hubutui commented on issue #15369: Fix build with system's openmp
URL: https://github.com/apache/incubator-mxnet/pull/15369#issuecomment-507501230
 
 
   @cjolivier01 #9686 suggests transitioning fully to cmake; it would help a lot. Maybe drop the Makefile someday?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
Crunchy9 edited a comment on issue #15420: [R] MKL-DNN support:  "Unknown 
exception" in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507500789
 
 
   This will be hard, but... `traceback()` 
   
   > 9: stop(list(message = "Unknown exception", call = 
mx.nd.internal.as.array(nd), 
  cppstack = list(file = "", line = -1L, stack = "C++ stack not 
available on this system")))
   8: .External(list(name = "InternalFunction_invoke", address = , 
  dll = list(name = "Rcpp", path = 
".../R/win-library/3.6/Rcpp/libs/x64/Rcpp.dll", 
  dynamicLookup = TRUE, handle = , 
  info = ), numParameters = -1L), 
  , ...)
   7: mx.nd.internal.as.array(nd)
   6: as.array.MXNDArray(res)
   5: as.array(res)
   4: feval(label, pred)
   3: metric$update(label = labels[[i]], pred = preds[[i]], state = 
train.metric)
   2: mx.model.train(symbol, ctx, input.shape, output.shape, params$arg.params, 
  params$aux.params, begin.round, num.round, optimizer = optimizer, 
  train.data = X, eval.data = eval.data, metric = eval.metric, 
  epoch.end.callback = epoch.end.callback, batch.end.callback = 
batch.end.callback, 
  kvstore = kvstore, fixed.param = fixed.param, verbose = verbose, 
  metric_cpu = metric_cpu)
   1: mx.model.FeedForward.create(softmax, initializer = 
mx.init.Xavier(factor_type = "in", 
  magnitude = 2), X = dane, ctx = devices, num.round = 300, 
  begin.round = epoch + 1, eval.data = NULL, optimizer = 
mx.opt.create("sgd", 
  learning.rate = 0.005, momentum = 0.9, wd = 0, lr_scheduler = 
fs), 
  eval.metric = mx.metric.accuracy, batch.end.callback = 
mx.callback.log.speedometer(...)), epoch.end.callback = 
mx.callback.save.checkpoint(paste0("...", 
  ...), 1))
   Surprisingly, examples from the [tutorial](https://mxnet.incubator.apache.org/versions/master/tutorials/r/fiveMinutesNeuralNetwork.html) work fine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Crunchy9 commented on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
Crunchy9 commented on issue #15420: [R] MKL-DNN support:  "Unknown exception" 
in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507500789
 
 
   This will be hard, but... `traceback()` 
   
   > 9: stop(list(message = "Unknown exception", call = 
mx.nd.internal.as.array(nd), 
  cppstack = list(file = "", line = -1L, stack = "C++ stack not 
available on this system")))
   8: .External(list(name = "InternalFunction_invoke", address = , 
  dll = list(name = "Rcpp", path = 
".../R/win-library/3.6/Rcpp/libs/x64/Rcpp.dll", 
  dynamicLookup = TRUE, handle = , 
  info = ), numParameters = -1L), 
  , ...)
   7: mx.nd.internal.as.array(nd)
   6: as.array.MXNDArray(res)
   5: as.array(res)
   4: feval(label, pred)
   3: metric$update(label = labels[[i]], pred = preds[[i]], state = 
train.metric)
   2: mx.model.train(symbol, ctx, input.shape, output.shape, params$arg.params, 
  params$aux.params, begin.round, num.round, optimizer = optimizer, 
  train.data = X, eval.data = eval.data, metric = eval.metric, 
  epoch.end.callback = epoch.end.callback, batch.end.callback = 
batch.end.callback, 
  kvstore = kvstore, fixed.param = fixed.param, verbose = verbose, 
  metric_cpu = metric_cpu)
   1: mx.model.FeedForward.create(softmax, initializer = 
mx.init.Xavier(factor_type = "in", 
  magnitude = 2), X = dane, ctx = devices, num.round = 300, 
  begin.round = epoch + 1, eval.data = NULL, optimizer = 
mx.opt.create("sgd", 
  learning.rate = 0.005, momentum = 0.9, wd = 0, lr_scheduler = 
fs), 
  eval.metric = mx.metric.accuracy, batch.end.callback = 
mx.callback.log.speedometer(...)), epoch.end.callback = 
mx.callback.save.checkpoint(paste0("...", 
  ...), 1))
   Surprisingly, examples from the [tutorial](https://mxnet.incubator.apache.org/versions/master/tutorials/r/fiveMinutesNeuralNetwork.html) work fine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Soonhwan-Kwon edited a comment on issue #15372: language Model

2019-07-01 Thread GitBox
Soonhwan-Kwon edited a comment on issue #15372: language Model
URL: 
https://github.com/apache/incubator-mxnet/issues/15372#issuecomment-507498181
 
 
   No, it does not, but we implemented KenLM-based decoding in a similar project before, so it is possible to do.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Soonhwan-Kwon edited a comment on issue #15372: language Model

2019-07-01 Thread GitBox
Soonhwan-Kwon edited a comment on issue #15372: language Model
URL: 
https://github.com/apache/incubator-mxnet/issues/15372#issuecomment-507498181
 
 
   No, it does not, but we implemented KenLM-based decoding in a similar project before.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Soonhwan-Kwon commented on issue #15372: language Model

2019-07-01 Thread GitBox
Soonhwan-Kwon commented on issue #15372: language Model
URL: 
https://github.com/apache/incubator-mxnet/issues/15372#issuecomment-507498181
 
 
   No, it does not, but we implemented KenLM-based decoding in a similar project.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15322: Can Mxnet support Atlas 200 DK AI in forward process?

2019-07-01 Thread GitBox
pengzhao-intel commented on issue #15322: Can Mxnet support Atlas 200 DK AI in 
forward process? 
URL: 
https://github.com/apache/incubator-mxnet/issues/15322#issuecomment-507494442
 
 
   Do you know how it supports Caffe?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15420: [R] MKL-DNN support: "Unknown exception" in mx.nd.internal.as.array

2019-07-01 Thread GitBox
pengzhao-intel commented on issue #15420: [R] MKL-DNN support:  "Unknown 
exception" in mx.nd.internal.as.array
URL: 
https://github.com/apache/incubator-mxnet/issues/15420#issuecomment-507492651
 
 
   @Crunchy9 could you provide a reproducible case? 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] thomelane commented on issue #15427: [TUTORIAL] Gluon performance tips and tricks

2019-07-01 Thread GitBox
thomelane commented on issue #15427: [TUTORIAL] Gluon performance tips and 
tricks
URL: https://github.com/apache/incubator-mxnet/pull/15427#issuecomment-507491626
 
 
   @aaronmarkham it would be great if you could review this new tutorial too, cheers! I'll make adjustments to my sparse tutorial tomorrow.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] thomelane opened a new pull request #15427: [TUTORIAL] Gluon performance tips and tricks

2019-07-01 Thread GitBox
thomelane opened a new pull request #15427: [TUTORIAL] Gluon performance tips 
and tricks
URL: https://github.com/apache/incubator-mxnet/pull/15427
 
 
   Summary of performance features provided in MXNet that are relevant for 
Gluon users. Useful for a beginner audience who might not know what features to 
search for in the first place.
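
   As an example of the kind of tip covered, hybridization is usually the first one (an illustrative sketch, not an excerpt from the tutorial):

   ```
   import mxnet as mx
   from mxnet.gluon import nn

   net = nn.HybridSequential()
   net.add(nn.Dense(128, activation='relu'), nn.Dense(10))
   net.initialize()
   # compile the imperative blocks into a static graph for speed
   net.hybridize(static_alloc=True, static_shape=True)
   out = net(mx.nd.ones((32, 64)))
   ```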
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mikemwx commented on a change in pull request #15277: [Numpy] Numpy argsort

2019-07-01 Thread GitBox
mikemwx commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r299279487
 
 

 ##
 File path: src/operator/tensor/ordering_op-inl.h
 ##
 @@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs& attrs,
              const std::vector<OpReqType>& req,
              const std::vector<TBlob>& outputs) {
   const ArgSortParam& param = nnvm::get<ArgSortParam>(attrs.parsed);
-  TopKParam topk_param;
-  topk_param.axis = param.axis;
-  topk_param.is_ascend = param.is_ascend;
-  topk_param.k = 0;
-  topk_param.dtype = param.dtype;
-  topk_param.ret_typ = topk_enum::kReturnIndices;
-  MXNET_NO_FLOAT16_TYPE_SWITCH(inputs[0].type_flag_, DType, {
-    MSHADOW_TYPE_SWITCH(param.dtype, IDType, {
-      TopKImpl<xpu, DType, IDType>(ctx.run_ctx,
-                                   ctx.requested[0], req, inputs[0], outputs, topk_param);
+
+  if (inputs[0].shape_.ndim() == 0) {
+    // Scalar tensor only accepts axis of value 0, -1 or None
+    CHECK(!static_cast<bool>(param.axis) || param.axis.value() == -1 || param.axis.value() == 0)
+      << "Axis can only be -1 or 0 for scalar tensor";
+    MSHADOW_TYPE_SWITCH(param.dtype, DType, {
+      Stream<xpu> *s = ctx.get_stream<xpu>();
+      Tensor<xpu, 1, DType> outdata = outputs[0].get_with_shape<xpu, 1, DType>(Shape1(1), s);
+      ASSIGN_DISPATCH(outdata, OpReqType::kWriteTo, 0);
     });
-  });
+  } else if (inputs[0].shape_.Size() == 0) {
+    if (static_cast<bool>(param.axis)) {
+      int axis = param.axis.value();
+      if (axis < 0) axis += inputs[0].shape_.ndim();
+      CHECK(axis >= 0 && axis < inputs[0].shape_.ndim())
+        << "Axis must be within the range of input tensor's dimension";
 
 Review comment:
   > What happens if all checks pass inside this else if()?
   
   This `else if` branch is targeting the zero-size tensor.
   When the input is of zero-size, no actual computation needs to be done. The zero-size output is prepared during shape inference, so in the forward pass, we only need to check whether the axis falls within the provided shape. If yes (CHECK passed), we are finished.
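   
   For intuition, this mirrors what stock NumPy does for zero-size inputs (an illustrative check, not from the PR):
   
   ```
   import numpy as np
   
   a = np.zeros((2, 0, 3))
   print(np.argsort(a, axis=1).shape)  # (2, 0, 3): zero-size output, nothing to sort
   try:
       np.argsort(a, axis=5)
   except np.AxisError as e:
       print(e)  # an out-of-range axis is still rejected, matching the CHECK above
   ```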


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15382: [numpy][doc-fix] sum, copy, tile, argmax, sign, log, degrees

2019-07-01 Thread GitBox
zoeygxy commented on a change in pull request #15382: [numpy][doc-fix] sum, 
copy, tile, argmax, sign, log, degrees
URL: https://github.com/apache/incubator-mxnet/pull/15382#discussion_r299279034
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -900,6 +977,66 @@ def sqrt(x, out=None, **kwargs):
 return _unary_func_helper(x, _npi.sqrt, _np.sqrt, out=out, **kwargs)
 
 
+@set_module('mxnet.ndarray.numpy')
+def sign(x, out=None, **kwargs):
+r"""
 
 Review comment:
   The contents for rendering come from multiarray.py and are all overridden already, but I will try to fix _op.py too. Thx!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mikemwx commented on a change in pull request #15370: [numpy][doc-fix] mean, transpose, stack, split, log2, rint and radians

2019-07-01 Thread GitBox
mikemwx commented on a change in pull request #15370: [numpy][doc-fix] mean, 
transpose, stack, split, log2, rint and radians
URL: https://github.com/apache/incubator-mxnet/pull/15370#discussion_r299273610
 
 

 ##
 File path: python/mxnet/_numpy_op_doc.py
 ##
 @@ -223,3 +223,46 @@ def _np_dot(a, b, out=None):
 array(29884.)
 """
 pass
+
+
+def _np_transpose(a, axes=None):
+    """
+    transpose(a, axes=None)
+
+    Permute the dimensions of an array.
+
+    Parameters
+    ----------
+    a : ndarray
+        Input array.
+    axes : list of ints, optional
+        By default, reverse the dimensions,
+        otherwise permute the axes according to the values given.
+
+    Returns
+    -------
+    p : ndarray
+        a with its axes permuted.
+
+    Notes
+    -----
+    This function differs from the original `numpy.transpose
+    <https://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html>`_ in
+    the following way(s):
+
+    - only ndarray is accepted as valid input, python iterables are not supported
+
+    Examples
+    --------
+    >>> x = np.arange(4).reshape((2,2))
+    >>> x
+    array([[0., 1.],
 
 Review comment:
   > dtype=float32 is the default type and will not appear in the up to date 
version.
   
   Thank you for the reminder. Will fix that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mikemwx commented on a change in pull request #15370: [numpy][doc-fix] mean, transpose, stack, split, log2, rint and radians

2019-07-01 Thread GitBox
mikemwx commented on a change in pull request #15370: [numpy][doc-fix] mean, 
transpose, stack, split, log2, rint and radians
URL: https://github.com/apache/incubator-mxnet/pull/15370#discussion_r299273164
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -178,6 +180,68 @@ def minimum(x1, x2, out=None):
 return _ufunc_helper(x1, x2, _npi.minimum, _np.minimum, 
_npi.minimum_scalar, None, out)
 
 
+@set_module('mxnet.ndarray.numpy')
+def mean(a, axis=None, dtype=None, out=None, keepdims=False):  # pylint: 
disable=arguments-differ
+"""
+mean(a, axis=None, dtype=None, out=None, keepdims=None)
 
 Review comment:
   I have wrapped this function to only expose the above signature


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 commented on issue #15369: Fix build with system's openmp

2019-07-01 Thread GitBox
cjolivier01 commented on issue #15369: Fix build with system's openmp
URL: https://github.com/apache/incubator-mxnet/pull/15369#issuecomment-507481585
 
 
   I don't think this is a good idea. We should standardize on one
   implementation.  Like I have pointed out, the default make build uses the
   llvm/intel version, and cmake builds use the llvm/intel version.  If libgomp is
   still being included, removing it should be investigated.  Trying to
   flip-flop between them on a whim is asking for trouble, and currently I
   know of no advantage to using libgomp.
   
   On Mon, Jul 1, 2019 at 5:33 PM Pedro Larroy wrote:
   
   > @hubutui  would it be possible to add a CMake
   > option to choose the OpenMP implementation that one wishes for? There's
   > three options, MKL, 3rdparty OpenMP and the one found in the platform. So I
   > think the pr goes in the good direction but we would need a commandline
   > option.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-07-01 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a497676  Bump the publish timestamp.
a497676 is described below

commit a49767665027cc32c1a5c0b1e1623357144180a8
Author: mxnet-ci 
AuthorDate: Tue Jul 2 01:16:13 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..09d9f38
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Jul  2 01:16:13 UTC 2019



[GitHub] [incubator-mxnet] larroy opened a new pull request #15426: Add -R option to ci/build.py to avoid rebuilding containers

2019-07-01 Thread GitBox
larroy opened a new pull request #15426: Add -R option to ci/build.py to avoid 
rebuilding containers
URL: https://github.com/apache/incubator-mxnet/pull/15426
 
 
   There are a couple of problems that trigger continuous rebuilds of the 
containers and that have no easy solutions; this option enables a workaround.
   https://github.com/docker/docker.github.io/issues/8886
   
   @marcoabreu 
   
   
   On one EC2 machine:
   
   ```
   Sending build context to Docker daemon  264.2kB
   Step 1/41 : FROM ubuntu:16.04
---> 13c9f1285025  
   Step 2/41 : WORKDIR /work/deps
---> Using cache 
---> fd89693ec7d8  
   Step 3/41 : COPY install/ubuntu_core.sh /work/
---> Using cache 
---> acc84b2e4ad2
   Step 4/41 : RUN /work/ubuntu_core.sh
---> Using cache 
---> 97d47815b79e  
   Step 5/41 : COPY install/deb_ubuntu_ccache.sh /work/
---> Using cache 
---> 458bcdc2b9e2
   ```
   
   
   Different machine:
   
   ```
   build.py: 2019-07-02 01:05:05,098Z INFO Running command: 'docker build -f 
docker/Dockerfile.build.ubuntu_cpu --build-arg USER_ID=624416249 --build-arg 
GROUP_ID=600260513 --cache-from mxnetci/build.ubuntu_cpu -t 
mxnetci/build.ubuntu_cpu docker'
   Sending build context to Docker daemon  264.2kB
   Step 1/41 : FROM ubuntu:16.04
---> 13c9f1285025
   Step 2/41 : WORKDIR /work/deps
---> Using cache
---> 44a97319ecd8
   Step 3/41 : COPY install/ubuntu_core.sh /work/
---> d0284928a471
   Step 4/41 : RUN /work/ubuntu_core.sh
---> Running in cc6d4d02a210
   + apt-get update
   Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
   Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
   Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
   Get:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]
   Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]
   ```
   
   I haven't found a solution for this; I think it is related to differing 
user IDs and problems with Docker caching.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #15403: Updating profiler tutorial to include new custom operator profiling

2019-07-01 Thread GitBox
Zha0q1 commented on a change in pull request #15403: Updating profiler tutorial 
to include new custom operator profiling
URL: https://github.com/apache/incubator-mxnet/pull/15403#discussion_r299266300
 
 

 ##
 File path: docs/tutorials/python/profiler.md
 ##
 @@ -206,6 +206,63 @@ Let's zoom in to check the time taken by operators
 
 The above picture visualizes the sequence in which the operators were executed 
and the time taken by each operator.
 
+### Profiling Custom Operators
+Should the existing NDArray operators fail to meet all your model's needs, 
MXNet supports [Custom 
Operators](https://mxnet.incubator.apache.org/versions/master/tutorials/gluon/customop.html)
 that you can define in Python. In `forward()` and `backward()` of a custom 
operator, there are two kinds of code: "pure Python" code (NumPy operators 
included) and "sub-operators" (NDArray operators called within `forward()` and 
`backward()`). With that said, MXNet can profile the execution time of both 
kinds without additional setup. Specifically, the MXNet profiler will break a 
single custom operator call into a pure Python event and several sub-operator 
events if there are any. Furthermore, all of those events will have a prefix in 
their names, which is, conveniently, the name of the custom operator you called.
+
+Let's try profiling custom operators with the following code example:
+
+```python
+
+import mxnet as mx
+from mxnet import nd
+from mxnet import profiler
+
+class MyAddOne(mx.operator.CustomOp):
+def forward(self, is_train, req, in_data, out_data, aux):  
+self.assign(out_data[0], req[0], in_data[0]+1)
+
+def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
+self.assign(in_grad[0], req[0], out_grad[0])
+
+@mx.operator.register('MyAddOne')
+class CustomAddOneProp(mx.operator.CustomOpProp):
+def __init__(self):
+super(CustomAddOneProp, self).__init__(need_top_grad=True)
+
+def list_arguments(self):
+return ['data']
+
+def list_outputs(self):
+return ['output']
+
+def infer_shape(self, in_shape):
+return [in_shape[0]], [in_shape[0]], []
+
+def create_operator(self, ctx, shapes, dtypes):
+return MyAddOne()
+
+
+inp = mx.nd.zeros(shape=(500, 500))
+
+profiler.set_config(profile_all=True, continuous_dump = True)
+profiler.set_state('run')
+
+w = nd.Custom(inp, op_type="MyAddOne")
+
+mx.nd.waitall()
+
+profiler.set_state('stop')
+profiler.dump()
+```
+
+Here, we have created a custom operator called `MyAddOne`, and within its 
`foward()` function, we simply add one to the input. We can visualize the dump 
file in `chrome://tracing/`:
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] lanking520 commented on issue #15415: documentation link is broken, goes to spam site

2019-07-01 Thread GitBox
lanking520 commented on issue #15415: documentation link is broken, goes to 
spam site
URL: 
https://github.com/apache/incubator-mxnet/issues/15415#issuecomment-507473697
 
 
   Found it! 
   ```
   julia/README.md
   77:[documentation](https://dmlc.github.io/MXNet.jl/latest) and 
[examples](examples).
   ```
   @aaronmarkham let's get rid of that


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #15369: Fix build with system's openmp

2019-07-01 Thread GitBox
larroy commented on issue #15369: Fix build with system's openmp
URL: https://github.com/apache/incubator-mxnet/pull/15369#issuecomment-507472965
 
 
   @hubutui would it be possible to add a CMake option to choose the OpenMP 
implementation that one wishes for? There are three options: MKL, 3rdparty 
OpenMP, and the one found in the platform. So I think the PR goes in the right 
direction, but we would need a command-line option.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Zha0q1 commented on issue #15403: Updating profiler tutorial to include new custom operator profiling

2019-07-01 Thread GitBox
Zha0q1 commented on issue #15403: Updating profiler tutorial to include new 
custom operator profiling
URL: https://github.com/apache/incubator-mxnet/pull/15403#issuecomment-507468692
 
 
   https://github.com/apache/incubator-mxnet/pull/15403#discussion_r299256224
   
   OK, I can add another code snippet and make sure it includes this line:
   `profiler.set_config(profile_symbolic=False, aggregate_stats=True, profile_imperative=False)`
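   
   For instance, something along these lines (a minimal sketch that assumes the `MyAddOne` operator from the tutorial is already registered; the shapes are arbitrary):
   
   ```python
   import mxnet as mx
   from mxnet import profiler
   
   # Custom-operator internals always run imperatively, so their events
   # only appear when profile_imperative is True, even in symbolic mode.
   profiler.set_config(profile_symbolic=True, profile_imperative=True,
                       aggregate_stats=True)
   profiler.set_state('run')
   
   a = mx.sym.Variable('a')
   b = mx.sym.Custom(a, op_type='MyAddOne')
   exe = b.simple_bind(mx.cpu(), a=(100, 100))
   exe.forward(a=mx.nd.ones((100, 100)))
   mx.nd.waitall()
   
   profiler.set_state('stop')
   print(profiler.dumps())
   ```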


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sandeep-krishnamurthy commented on a change in pull request #15403: Updating profiler tutorial to include new custom operator profiling

2019-07-01 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #15403: Updating 
profiler tutorial to include new custom operator profiling
URL: https://github.com/apache/incubator-mxnet/pull/15403#discussion_r299256224
 
 

 ##
 File path: docs/tutorials/python/profiler.md
 ##
 @@ -206,6 +206,15 @@ Let's zoom in to check the time taken by operators
 
 The above picture visualizes the sequence in which the operators were executed 
and the time taken by each operator.
 
+### Profiling Custom Operators
+Should the existing NDArray operators fail to meet all your model's needs, 
MXNet supports [Custom 
Operators](https://mxnet.incubator.apache.org/versions/master/tutorials/gluon/customop.html)
 that you can define in Python. In `forward()` and `backward()` of a custom 
operator, there are two kinds of code: "pure Python" code (NumPy operators 
included) and "sub-operators" (NDArray operators called within `forward()` and 
`backward()`). With that said, MXNet can profile the execution time of both 
kinds without additional setup. Specifically, the MXNet profiler will break a 
single custom operator call into a pure Python event and several sub-operator 
events if there are any. Furthermore, all of those events will have a prefix in 
their names, which is, conveniently, the name of the custom operator you called.
+
+![Custom Operator Profiling 
Screenshot](https://cwiki.apache.org/confluence/download/attachments/118172065/image2019-6-14_15-23-42.png?version=1=1560551022000=v2)
+
+As shown by the screenshot, in the **Custom Operator** domain where all the 
custom operator-related events fall into, you can easily visualize the 
execution time of each segment of your custom operator. For example, we know 
that `CustomAddTwo::sqrt` is a sub-operator of custom operator `CustomAddTwo`, 
and we also know when it is executed accurately.
+
+Please note that: to be able to see the previously described information, you 
need to set `profile_imperative` to `True` even when you are using custom 
operators in [symbolic 
mode](https://mxnet.incubator.apache.org/versions/master/tutorials/basic/symbol.html).
 The reason is that within custom operators, pure python code and sub-operators 
are still called imperatively.  
 
 Review comment:
   Can we please have a code example showing the `profile_imperative` option 
when used in symbolic mode? It might not be clear to users where to set 
`profile_imperative` to True.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sandeep-krishnamurthy commented on a change in pull request #15403: Updating profiler tutorial to include new custom operator profiling

2019-07-01 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #15403: Updating 
profiler tutorial to include new custom operator profiling
URL: https://github.com/apache/incubator-mxnet/pull/15403#discussion_r299255870
 
 

 ##
 File path: docs/tutorials/python/profiler.md
 ##
 @@ -206,6 +206,63 @@ Let's zoom in to check the time taken by operators
 
 The above picture visualizes the sequence in which the operators were executed 
and the time taken by each operator.
 
+### Profiling Custom Operators
+Should the existing NDArray operators fail to meet all your model's needs, 
MXNet supports [Custom 
Operators](https://mxnet.incubator.apache.org/versions/master/tutorials/gluon/customop.html)
 that you can define in Python. In `forward()` and `backward()` of a custom 
operator, there are two kinds of code: "pure Python" code (NumPy operators 
included) and "sub-operators" (NDArray operators called within `forward()` and 
`backward()`). With that said, MXNet can profile the execution time of both 
kinds without additional setup. Specifically, the MXNet profiler will break a 
single custom operator call into a pure Python event and several sub-operator 
events if there are any. Furthermore, all of those events will have a prefix in 
their names, which is, conveniently, the name of the custom operator you called.
+
+Let's try profiling custom operators with the following code example:
+
+```python
+
+import mxnet as mx
+from mxnet import nd
+from mxnet import profiler
+
+class MyAddOne(mx.operator.CustomOp):
+def forward(self, is_train, req, in_data, out_data, aux):  
+self.assign(out_data[0], req[0], in_data[0]+1)
+
+def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
+self.assign(in_grad[0], req[0], out_grad[0])
+
+@mx.operator.register('MyAddOne')
+class CustomAddOneProp(mx.operator.CustomOpProp):
+def __init__(self):
+super(CustomAddOneProp, self).__init__(need_top_grad=True)
+
+def list_arguments(self):
+return ['data']
+
+def list_outputs(self):
+return ['output']
+
+def infer_shape(self, in_shape):
+return [in_shape[0]], [in_shape[0]], []
+
+def create_operator(self, ctx, shapes, dtypes):
+return MyAddOne()
+
+
+inp = mx.nd.zeros(shape=(500, 500))
+
+profiler.set_config(profile_all=True, continuous_dump = True)
+profiler.set_state('run')
+
+w = nd.Custom(inp, op_type="MyAddOne")
+
+mx.nd.waitall()
+
+profiler.set_state('stop')
+profiler.dump()
+```
+
+Here, we have created a custom operator called `MyAddOne`, and within its 
`foward()` function, we simply add one to the input. We can visualize the dump 
file in `chrome://tracing/`:
+
+![Custom Operator Profiling 
Screenshot](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_output_custom_operator_chrome.png)
+
+As shown by the screenshot, in the **Custom Operator** domain where all the 
custom operator-related events fall into, we can easily visualize the execution 
time of each segment of `MyAddOne`. We can tell that `MyAddOne::pure_python` is 
executed first. We also know that `CopyCPU2CPU` and `_plus_scalr` are two 
"sub-operators" of `MyAddOne` and the sequence in which they are exectued.
 
 Review comment:
   nit: spell check. 'exectued' -> 'executed'


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sandeep-krishnamurthy commented on a change in pull request #15403: Updating profiler tutorial to include new custom operator profiling

2019-07-01 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #15403: Updating 
profiler tutorial to include new custom operator profiling
URL: https://github.com/apache/incubator-mxnet/pull/15403#discussion_r299255615
 
 

 ##
 File path: docs/tutorials/python/profiler.md
 ##
 @@ -206,6 +206,63 @@ Let's zoom in to check the time taken by operators
 
 The above picture visualizes the sequence in which the operators were executed 
and the time taken by each operator.
 
+### Profiling Custom Operators
+Should the existing NDArray operators fail to meet all your model's needs, 
MXNet supports [Custom 
Operators](https://mxnet.incubator.apache.org/versions/master/tutorials/gluon/customop.html)
 that you can define in Python. In `forward()` and `backward()` of a custom 
operator, there are two kinds of code: "pure Python" code (NumPy operators 
included) and "sub-operators" (NDArray operators called within `forward()` and 
`backward()`). With that said, MXNet can profile the execution time of both 
kinds without additional setup. Specifically, the MXNet profiler will break a 
single custom operator call into a pure Python event and several sub-operator 
events if there are any. Furthermore, all of those events will have a prefix in 
their names, which is, conveniently, the name of the custom operator you called.
+
+Let's try profiling custom operators with the following code example:
+
+```python
+
+import mxnet as mx
+from mxnet import nd
+from mxnet import profiler
+
+class MyAddOne(mx.operator.CustomOp):
+def forward(self, is_train, req, in_data, out_data, aux):  
+self.assign(out_data[0], req[0], in_data[0]+1)
+
+def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
+self.assign(in_grad[0], req[0], out_grad[0])
+
+@mx.operator.register('MyAddOne')
+class CustomAddOneProp(mx.operator.CustomOpProp):
+def __init__(self):
+super(CustomAddOneProp, self).__init__(need_top_grad=True)
+
+def list_arguments(self):
+return ['data']
+
+def list_outputs(self):
+return ['output']
+
+def infer_shape(self, in_shape):
+return [in_shape[0]], [in_shape[0]], []
+
+def create_operator(self, ctx, shapes, dtypes):
+return MyAddOne()
+
+
+inp = mx.nd.zeros(shape=(500, 500))
+
+profiler.set_config(profile_all=True, continuous_dump = True)
+profiler.set_state('run')
+
+w = nd.Custom(inp, op_type="MyAddOne")
+
+mx.nd.waitall()
+
+profiler.set_state('stop')
+profiler.dump()
+```
+
+Here, we have created a custom operator called `MyAddOne`, and within its 
`foward()` function, we simply add one to the input. We can visualize the dump 
file in `chrome://tracing/`:
 
 Review comment:
nit: spell check. 'foward()' -> 'forward()'


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on issue #15411: Need register_backward_hook() function in mxnet

2019-07-01 Thread GitBox
anirudh2290 commented on issue #15411: Need register_backward_hook() function 
in mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/15411#issuecomment-507463667
 
 
   Does register_forward_hook work for hybridized models?
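   
   For context, a minimal sketch of the hook API in question (Gluon's `Block.register_forward_hook`; whether the hook still fires after `hybridize()` is exactly the open question):
   
   ```python
   import mxnet as mx
   from mxnet.gluon import nn
   
   net = nn.Dense(4)
   net.initialize()
   
   def forward_hook(block, inputs, outputs):
       # Called after the block's forward pass, with its inputs and outputs.
       print('forward hook fired for', block.name)
   
   handle = net.register_forward_hook(forward_hook)
   net(mx.nd.ones((2, 8)))   # fires in imperative mode
   net.hybridize()
   net(mx.nd.ones((2, 8)))   # does it still fire once the cached graph kicks in?
   handle.detach()
   ```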


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on issue #9686: [Discussion] MXNet 2.0 Roadmap (was: APIs that might be a good idea to break in 2.0)

2019-07-01 Thread GitBox
apeforest commented on issue #9686: [Discussion] MXNet 2.0 Roadmap (was: APIs 
that might be a good idea to break in 2.0)
URL: 
https://github.com/apache/incubator-mxnet/issues/9686#issuecomment-507462646
 
 
   Remove deprecated operators from the code base (or at least hide them from 
users). 
   
   Some operators, such as `SoftmaxActivation`, have been deprecated for a few 
minor releases. Ideally, we should have a process for removing deprecated 
operators in the next major release, given that users have had sufficient time 
to migrate to the newer versions during the minor releases. 
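   
   Such a process usually pairs with a loud warning during the interim minor releases, e.g. (a generic sketch, not MXNet's actual deprecation mechanism; `softmax` here is a stand-in for the replacement operator):
   
   ```python
   import warnings
   import numpy as np
   
   def softmax(x, axis=-1):
       e = np.exp(x - x.max(axis=axis, keepdims=True))
       return e / e.sum(axis=axis, keepdims=True)
   
   def SoftmaxActivation(data):
       """Deprecated alias kept through the minor releases, removed in the next major one."""
       warnings.warn('SoftmaxActivation is deprecated; use softmax instead.',
                     DeprecationWarning, stacklevel=2)
       return softmax(data)
   
   print(SoftmaxActivation(np.array([1.0, 2.0, 3.0])))
   ```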


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #14855: [WIP] Jetson nano & TX installation updates

2019-07-01 Thread GitBox
aaronmarkham commented on issue #14855: [WIP] Jetson nano & TX installation 
updates
URL: https://github.com/apache/incubator-mxnet/pull/14855#issuecomment-507460156
 
 
   Closing in favor of #15117  - although at some point nano should be added to 
CI.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham closed pull request #14855: [WIP] Jetson nano & TX installation updates

2019-07-01 Thread GitBox
aaronmarkham closed pull request #14855: [WIP] Jetson nano & TX installation 
updates
URL: https://github.com/apache/incubator-mxnet/pull/14855
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham edited a comment on issue #15117: nano instructions

2019-07-01 Thread GitBox
aaronmarkham edited a comment on issue #15117: nano instructions
URL: https://github.com/apache/incubator-mxnet/pull/15117#issuecomment-507459856
 
 
   @larroy - I have some working wheels, which are now in the guide. The 
other PR related to this I'll close, but we really need to get CI going and 
hopefully automate the generation of wheels for this arch. Right now we only 
have 1.4.0 wheels... but that's better than nothing!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #15117: nano instructions

2019-07-01 Thread GitBox
aaronmarkham commented on issue #15117: nano instructions
URL: https://github.com/apache/incubator-mxnet/pull/15117#issuecomment-507459856
 
 
   @larroy - I have some working wheels, which are now in the guide. The 
other PR related to this I'll close, but we really need to get CI going and 
hopefully automate the generation of wheels for this arch. Right now we only 
have 1.4.0 wheels... but that's better than nothing!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (d74b993 -> 7210cc4)

2019-07-01 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d74b993  Revert default return type for indices in argsort() and 
topk() back to float32 (#15360)
 add 7210cc4  nano instructions (#15117)

No new revisions were added by this update.

Summary of changes:
 docs/install/index.md  |  85 +---
 docs/install/install-jetson.md | 289 +
 2 files changed, 292 insertions(+), 82 deletions(-)
 create mode 100644 docs/install/install-jetson.md



[GitHub] [incubator-mxnet] aaronmarkham merged pull request #15117: nano instructions

2019-07-01 Thread GitBox
aaronmarkham merged pull request #15117: nano instructions
URL: https://github.com/apache/incubator-mxnet/pull/15117
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on issue #13143: [MXNET-1206] Support NDArray indexing with None and Ellipsis

2019-07-01 Thread GitBox
reminisce commented on issue #13143: [MXNET-1206] Support NDArray indexing with 
None and Ellipsis
URL: https://github.com/apache/incubator-mxnet/pull/13143#issuecomment-507458141
 
 
   You can run some simple performance tests involving integer indexing and 
first-axis slicing. For example, you can measure the time to "split" a batch of 
images of shape (256, 3, 224, 224) into four sub-batches, each of shape 
(64, 3, 224, 224).
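   
   Something like the following would do (a minimal sketch; the run count and the `wait_to_read` synchronization points are assumptions):
   
   ```python
   import time
   import mxnet as mx
   
   x = mx.nd.random.uniform(shape=(256, 3, 224, 224))
   mx.nd.waitall()  # exclude setup cost from the measurement
   
   runs = 100
   start = time.time()
   for _ in range(runs):
       sub_batches = [x[i * 64:(i + 1) * 64] for i in range(4)]  # first-axis slicing
       for sub in sub_batches:
           sub.wait_to_read()  # force each async slice to complete
   print('avg split time: %.6f s' % ((time.time() - start) / runs))
   ```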


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] meixitu opened a new issue #15425: in mxnet1.4.1-cuda10.0, depthwise conv training is very very slow

2019-07-01 Thread GitBox
meixitu opened a new issue #15425: in mxnet1.4.1-cuda10.0, depthwise conv  
training is very very slow
URL: https://github.com/apache/incubator-mxnet/issues/15425
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues and bug reports. For non-technical 
issues and feature requests, feel free to present the information in what you 
believe is the best form.
   
   For Q & A and discussion, please start a discussion thread at 
https://discuss.mxnet.io 
   
   ## Description
   Depthwise convolution is very slow with Python 3.6 and mxnet-cu100 1.4.1.
   If I set num_group=1, the training speed improves about 20x.
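   
   For reference, the kind of layer in question (a minimal Gluon sketch, not the reporter's model): a depthwise convolution sets the group count equal to the number of input channels, while num_group=1 (groups=1 in Gluon) is the fast path mentioned above:
   
   ```python
   import mxnet as mx
   from mxnet.gluon import nn
   
   channels = 32
   depthwise = nn.Conv2D(channels=channels, kernel_size=3, padding=1,
                         groups=channels)  # depthwise: one filter group per channel
   depthwise.initialize()
   
   x = mx.nd.ones((1, channels, 112, 112))
   print(depthwise(x).shape)  # (1, 32, 112, 112)
   ```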
   
   
   ## Environment info (Required)
   
![image](https://user-images.githubusercontent.com/32910309/60470843-d9467e00-9c16-11e9-8aab-7c86d677db5c.png)
   
![image](https://user-images.githubusercontent.com/32910309/60470857-e5cad680-9c16-11e9-85fb-153fc7a303ea.png)
   
   
   
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   0. python=3.6.0
   1. pip3 install mxnet-cu100==1.4.1
   2. use this model, 
https://github.com/mnikitin/EfficientNet/blob/master/efficientnet_model.py
   3. set the batch_size=128, 4GPU, efficientnet-b6, input_size=112x112x3
   
   ## What have you tried to solve it?
   
   1. I tried using mx.sym.Convolution instead of the Gluon API; the training 
speed is the same.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #15425: in mxnet1.4.1-cuda10.0, depthwise conv training is very very slow

2019-07-01 Thread GitBox
mxnet-label-bot commented on issue #15425: in mxnet1.4.1-cuda10.0, depthwise 
conv  training is very very slow
URL: 
https://github.com/apache/incubator-mxnet/issues/15425#issuecomment-507452370
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Performance


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15418: [numpy] fix cython

2019-07-01 Thread GitBox
anirudhacharya commented on issue #15418: [numpy] fix cython
URL: https://github.com/apache/incubator-mxnet/pull/15418#issuecomment-507448343
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15424: fixed config.mk and Makefile bugs for installing mkl

2019-07-01 Thread GitBox
anirudhacharya commented on issue #15424: fixed config.mk and Makefile bugs for 
installing mkl
URL: https://github.com/apache/incubator-mxnet/pull/15424#issuecomment-507448369
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15422: Upgrade MKL-DNN submodule to v0.20 release

2019-07-01 Thread GitBox
anirudhacharya commented on issue #15422: Upgrade MKL-DNN submodule to v0.20 
release
URL: https://github.com/apache/incubator-mxnet/pull/15422#issuecomment-507448357
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15417: update ratcheck for apache-rat 0.13 release

2019-07-01 Thread GitBox
anirudhacharya commented on issue #15417: update ratcheck for apache-rat 0.13 
release
URL: https://github.com/apache/incubator-mxnet/pull/15417#issuecomment-507448327
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
anirudhacharya commented on issue #15413: [MXNET-978] Higher Order Gradient 
Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#issuecomment-507448300
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15412: [MXNET-978] Higher Order Gradient Support `sinh`, `cosh`.

2019-07-01 Thread GitBox
anirudhacharya commented on issue #15412: [MXNET-978] Higher Order Gradient 
Support `sinh`, `cosh`.
URL: https://github.com/apache/incubator-mxnet/pull/15412#issuecomment-507448276
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15416: [MXNET-978] Higher Order Gradient Support `logp1`, `expm1`, `square`.

2019-07-01 Thread GitBox
anirudhacharya commented on issue #15416: [MXNET-978] Higher Order Gradient 
Support `logp1`, `expm1`, `square`.
URL: https://github.com/apache/incubator-mxnet/pull/15416#issuecomment-507448313
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #15418: [numpy] fix cython

2019-07-01 Thread GitBox
reminisce commented on a change in pull request #15418: [numpy] fix cython
URL: https://github.com/apache/incubator-mxnet/pull/15418#discussion_r299237059
 
 

 ##
 File path: python/mxnet/cython/ndarray.pyx
 ##
 @@ -64,21 +64,29 @@ cdef class NDArrayBase:
 
 
 _ndarray_cls = None
+_np_ndarray_cls = None
 
 def _set_ndarray_class(cls):
 global _ndarray_cls
 _ndarray_cls = cls
 
 
-cdef NewArray(NDArrayHandle handle, int stype=-1):
+def _set_np_ndarray_class(cls):
+global _np_ndarray_cls
+_np_ndarray_cls = cls
+
+
+cdef NewArray(NDArrayHandle handle, int is_np_op, int stype=-1):
 
 Review comment:
   1. Rename `is_np_op` to `is_np_array` for better readability.
   2. Better not to change the original API. You can switch `is_np_array` with 
`stype` and default `is_np_array` to 0.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15109: [DOC] refine autograd docs

2019-07-01 Thread GitBox
larroy commented on a change in pull request #15109: [DOC] refine autograd docs
URL: https://github.com/apache/incubator-mxnet/pull/15109#discussion_r299239500
 
 

 ##
 File path: docs/api/python/autograd/autograd.md
 ##
 @@ -76,7 +82,63 @@ Detailed tutorials are available in Part 1 of
 [the MXNet gluon book](http://gluon.mxnet.io/).
 
 
+# Higher order gradient
+
+Some operators support higher order gradients, meaning that you can calculate
+the gradient of the gradient. For this, the operator's backward must be
+differentiable as well. Some operators support differentiating arbitrarily
+many times, others only twice, and most just once.
+
+For calculating higher order gradients, we can use the `mx.autograd.grad`
+function while recording and then call backward, or call `mx.autograd.grad`
+two times. If we do the latter, it is important that the first call uses
+`create_graph=True` and `retain_graph=True` and the second call uses
+`create_graph=False` and `retain_graph=True`. Otherwise we will not get the
+results that we want. If we were to recreate the graph in the second call, we
+would end up with a graph of just the backward nodes, not the full initial
+graph that includes the forward nodes.
+
+The pattern to calculate higher order gradients is the following:
+
+```python
+from mxnet import ndarray as nd
+from mxnet import autograd as ag
+x = nd.array([1,2,3])
+x.attach_grad()
+def f(x):
+# Any function which supports higher order gradients
+return nd.log(x)
+```
+
+If the operators used in `f` don't support higher order gradients you will get 
an error like
+`operator ... is non-differentiable because it didn't register FGradient 
attribute.`. This means
+that it doesn't support getting the gradient of the gradient. Which is, 
running backward on
+the backward graph.
+
+Using mxnet.autograd.grad multiple times:
+
+```python
+with ag.record():
+y = f(x)
+x_grad = ag.grad(heads=y, variables=x, create_graph=True, 
retain_graph=True)[0]
+x_grad_grad = ag.grad(heads=x_grad, variables=x, create_graph=False, 
retain_graph=False)[0]
+print(f"dL/dx: {x_grad}")
+print(f"d2L/dx2: {x_grad_grad}")
 
 Review comment:
   you are right, I will revisit. Can you suggest how to name it instead?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #15418: [numpy] fix cython

2019-07-01 Thread GitBox
reminisce commented on a change in pull request #15418: [numpy] fix cython
URL: https://github.com/apache/incubator-mxnet/pull/15418#discussion_r299237553
 
 

 ##
 File path: python/mxnet/cython/ndarray.pyx
 ##
 @@ -64,21 +64,29 @@ cdef class NDArrayBase:
 
 
 _ndarray_cls = None
+_np_ndarray_cls = None
 
 def _set_ndarray_class(cls):
 global _ndarray_cls
 _ndarray_cls = cls
 
 
-cdef NewArray(NDArrayHandle handle, int stype=-1):
+def _set_np_ndarray_class(cls):
+global _np_ndarray_cls
+_np_ndarray_cls = cls
+
+
+cdef NewArray(NDArrayHandle handle, int is_np_op, int stype=-1):
 """Create a new array given handle"""
-return _ndarray_cls(_ctypes.cast(handle, 
_ctypes.c_void_p), stype=stype)
+if is_np_op:
 
 Review comment:
   These four lines can be simplified into two:
   ```python
   create_array_fn = _np_ndarray_cls if is_np_array else _ndarray_cls
   return create_array_fn(_ctypes.cast(handle, 
_ctypes.c_void_p), stype=stype)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #15418: [numpy] fix cython

2019-07-01 Thread GitBox
reminisce commented on a change in pull request #15418: [numpy] fix cython
URL: https://github.com/apache/incubator-mxnet/pull/15418#discussion_r299238803
 
 

 ##
 File path: python/mxnet/cython/symbol.pyx
 ##
 @@ -84,19 +84,29 @@ cdef SymbolSetAttr(SymbolHandle handle, dict kwargs):
 
 
 _symbol_cls = SymbolBase
+_np_symbol_cls = None
 
 def _set_symbol_class(cls):
 global _symbol_cls
 _symbol_cls = cls
 
-cdef NewSymbol(SymbolHandle handle):
+
+def _set_np_symbol_class(cls):
+global _np_symbol_cls
+_np_symbol_cls = cls
+
+
+cdef NewSymbol(SymbolHandle handle, int is_np_op):
 
 Review comment:
   `is_np_op` --> `is_np_sym`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #15424: fixed config.mk and Makefile bugs for installing mkl

2019-07-01 Thread GitBox
larroy commented on issue #15424: fixed config.mk and Makefile bugs for 
installing mkl
URL: https://github.com/apache/incubator-mxnet/pull/15424#issuecomment-507446664
 
 
   One question I have about your description: where do we define 
USE_STATIC_MKL? I see it defined ad hoc in various files, but not in the 
Makefile or config.mk. Could you edit the description and clarify where 
USE_STATIC_MKL is set, or add a comment in the Makefile, config.mk, or the 
documentation? I didn't see it in the patch; maybe I missed something.
   
   ```
   $ ag USE_STATIC_MKL |less
   make/readthedocs.mk:54:USE_STATIC_MKL = NONE
   make/readthedocs.mk:74: USE_STATIC_MKL = 1
   make/maven/maven_linux_mkl.mk:124:USE_STATIC_MKL = 1
   make/maven/maven_linux_mkl.mk:126:USE_STATIC_MKL = NONE
   make/maven/maven_linux_cu90mkl.mk:127:USE_STATIC_MKL = 1
   make/maven/maven_linux_cu90mkl.mk:129:USE_STATIC_MKL = NONE
   make/maven/maven_linux_cu92mkl.mk:127:USE_STATIC_MKL = 1
   make/maven/maven_linux_cu92mkl.mk:129:USE_STATIC_MKL = NONE
   make/maven/maven_darwin_mkl.mk:124:USE_STATIC_MKL = 1
   make/maven/maven_darwin_mkl.mk:126:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu91.mk:127:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu91.mk:129:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu80mkl.mk:111:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu80mkl.mk:113:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu80.mk:127:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu80.mk:129:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu90.mk:127:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu90.mk:129:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu91mkl.mk:111:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu91mkl.mk:113:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu101.mk:127:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu101.mk:129:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu75mkl.mk:108:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu75mkl.mk:110:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu100mkl.mk:111:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu100mkl.mk:113:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cpu.mk:124:USE_STATIC_MKL = 1
   make/pip/pip_linux_cpu.mk:126:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu75.mk:124:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu75.mk:126:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu100.mk:127:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu100.mk:129:USE_STATIC_MKL = NONE
   make/pip/pip_darwin_mkl.mk:108:USE_STATIC_MKL = 1
   make/pip/pip_darwin_mkl.mk:110:USE_STATIC_MKL = NONE
   make/pip/pip_linux_mkl.mk:108:USE_STATIC_MKL = 1
   make/pip/pip_linux_mkl.mk:110:USE_STATIC_MKL = NONE
   make/pip/pip_darwin_cpu.mk:124:USE_STATIC_MKL = 1
   make/pip/pip_darwin_cpu.mk:126:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu92.mk:127:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu92.mk:129:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu92mkl.mk:111:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu92mkl.mk:113:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu90mkl.mk:111:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu90mkl.mk:113:USE_STATIC_MKL = NONE
   make/pip/pip_linux_cu101mkl.mk:111:USE_STATIC_MKL = 1
   make/pip/pip_linux_cu101mkl.mk:113:USE_STATIC_MKL = NONE
   make/crosscompile.jetson.mk:130:USE_STATIC_MKL = 1
   make/crosscompile.jetson.mk:132:USE_STATIC_MKL = NONE
   3rdparty/mshadow/make/mshadow.mk:87:ifneq ($(USE_STATIC_MKL), NONE)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15109: [DOC] refine autograd docs

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15109: [DOC] refine autograd 
docs
URL: https://github.com/apache/incubator-mxnet/pull/15109#discussion_r299238908
 
 

 ##
 File path: docs/api/python/autograd/autograd.md
 ##
 @@ -76,7 +82,63 @@ Detailed tutorials are available in Part 1 of
 [the MXNet gluon book](http://gluon.mxnet.io/).
 
 
+# Higher order gradient
+
+Some operators support higher order gradients, meaning that you can calculate
+the gradient of the gradient. For this, the operator's backward must be
+differentiable as well. Some operators support differentiating arbitrarily
+many times, others only twice, and most just once.
+
+For calculating higher order gradients, we can use the `mx.autograd.grad`
+function while recording and then call backward, or call `mx.autograd.grad`
+two times. If we do the latter, it is important that the first call uses
+`create_graph=True` and `retain_graph=True` and the second call uses
+`create_graph=False` and `retain_graph=True`. Otherwise we will not get the
+results that we want. If we were to recreate the graph in the second call, we
+would end up with a graph of just the backward nodes, not the full initial
+graph that includes the forward nodes.
+
+The pattern to calculate higher order gradients is the following:
+
+```python
+from mxnet import ndarray as nd
+from mxnet import autograd as ag
+x = nd.array([1,2,3])
+x.attach_grad()
+def f(x):
+# Any function which supports higher order gradients
+return nd.log(x)
+```
+
+If the operators used in `f` don't support higher order gradients you will get 
an error like
+`operator ... is non-differentiable because it didn't register FGradient 
attribute.`. This means
+that it doesn't support getting the gradient of the gradient. Which is, 
running backward on
+the backward graph.
+
+Using mxnet.autograd.grad multiple times:
+
+```python
+with ag.record():
+y = f(x)
+x_grad = ag.grad(heads=y, variables=x, create_graph=True, 
retain_graph=True)[0]
+x_grad_grad = ag.grad(heads=x_grad, variables=x, create_graph=False, 
retain_graph=False)[0]
+print(f"dL/dx: {x_grad}")
+print(f"d2L/dx2: {x_grad_grad}")
 
 Review comment:
   As we discussed with @sxjscience, this may not be `d2L/dx2`, because the 
head of the second gradient computation does not necessarily have to be the 
loss function `L` from the first-order call. 
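   
   For a concrete sanity check (a minimal sketch following the doc's own pattern; the neutral names `first_grad` and `second_grad` are placeholders, not a naming proposal): with f(x) = log(x), the first gradient is 1/x, and differentiating that again gives -1/x^2:
   
   ```python
   from mxnet import ndarray as nd
   from mxnet import autograd as ag
   
   x = nd.array([1, 2, 3])
   x.attach_grad()
   with ag.record():
       y = nd.log(x)
       first_grad = ag.grad(heads=y, variables=x, create_graph=True, retain_graph=True)[0]
       second_grad = ag.grad(heads=first_grad, variables=x, create_graph=False, retain_graph=False)[0]
   print(first_grad)   # [1.     0.5    0.3333]  == 1/x
   print(second_grad)  # [-1.    -0.25  -0.1111] == -1/x^2
   ```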


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

