[GitHub] wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209416817
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -737,6 +741,57 @@ MXNET_DLL int MXNDArrayGetShape(NDArrayHandle handle,
  */
 MXNET_DLL int MXNDArrayGetData(NDArrayHandle handle,
void **out_pdata);
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForRead(NDArrayHandle handle,
+   DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending reads/writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForWrite(NDArrayHandle handle,
+DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a NDArray backed by a dlpack tensor.
+*
+* This allows us to create a NDArray using the memory
+* allocated by an external deep learning framework
+* that is DLPack compatible.
+*
+* The memory is retained until the NDArray went out of scope.
+*
+* \param dlpack the pointer of the input DLManagedTensor
+* \param out_handle pointer holder to get pointer of NDArray
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayFromDLPack(DLManagedTensorHandle dlpack,
+  NDArrayHandle *out_handle);
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack the pointer of the input DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArrayCallDLPackDeleter(DLManagedTensorHandle dlpack);
+
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack_capsule the pointer of a PyCapsule storing DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL void MXNDArrayCallDLPackCapsuleDeleter(PyObjectHandle dlpack_capsule);
 
 Review comment:
   Thank you!
   I will test it on Windows.
   If it works, I will update the PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209416606
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -737,6 +741,57 @@ MXNET_DLL int MXNDArrayGetShape(NDArrayHandle handle,
  */
 MXNET_DLL int MXNDArrayGetData(NDArrayHandle handle,
void **out_pdata);
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForRead(NDArrayHandle handle,
+   DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending reads/writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForWrite(NDArrayHandle handle,
+DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a NDArray backed by a dlpack tensor.
+*
+* This allows us to create a NDArray using the memory
+* allocated by an external deep learning framework
+* that is DLPack compatible.
+*
+* The memory is retained until the NDArray went out of scope.
+*
+* \param dlpack the pointer of the input DLManagedTensor
+* \param out_handle pointer holder to get pointer of NDArray
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayFromDLPack(DLManagedTensorHandle dlpack,
+  NDArrayHandle *out_handle);
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack the pointer of the input DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArrayCallDLPackDeleter(DLManagedTensorHandle dlpack);
+
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack_capsule the pointer of a PyCapsule storing DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL void MXNDArrayCallDLPackCapsuleDeleter(PyObjectHandle 
dlpack_capsule);
 
 Review comment:
   I see a few problems in the particular gist you pasted:
   
   - The destructor needs to be declared at global scope (instead of being constructed while it is passed as an argument).
   - The C string needs to outlive the capsule (construct a global string).
   - The function needs to outlive the capsule (construct the C function and keep it at global/module scope).
   
   ```python
   cfunc = ctypes.CFUNCTYPE(None, ctypes.c_void_p)

   def dfunc(dltensor):
       pycaps = ctypes.cast(dltensor, ctypes.py_object)

   c_destructor = cfunc(dfunc)
   c_str_dltensor = ctypes.c_char_p(b"dltensor")

   def test():
       a = ctypes.pythonapi.PyCapsule_New(1, c_str_dltensor, c_destructor)

   test()
   ```
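   For illustration only (not part of the original review): a self-contained sketch along those lines, with the callback wrapper and the name string kept at module scope so they outlive the capsule, shows the capsule destructor actually firing. The dummy pointer value `1` and the `destroyed` list are illustrative assumptions, not part of the gist.

   ```python
   import ctypes

   CFUNC = ctypes.CFUNCTYPE(None, ctypes.c_void_p)

   destroyed = []  # records each destructor invocation

   def _dfunc(capsule_ptr):
       # Called by CPython when the capsule is garbage-collected.
       destroyed.append(capsule_ptr)

   # Both objects live at module scope, so they outlive any capsule using them.
   c_destructor = CFUNC(_dfunc)
   c_str_dltensor = ctypes.c_char_p(b"dltensor")

   PyCapsule_New = ctypes.pythonapi.PyCapsule_New
   PyCapsule_New.restype = ctypes.py_object   # ctypes takes over the new reference
   PyCapsule_New.argtypes = [ctypes.c_void_p, ctypes.c_char_p, CFUNC]

   cap = PyCapsule_New(1, c_str_dltensor, c_destructor)  # 1 is a dummy non-NULL pointer
   del cap  # the destructor runs here, because the callback is still alive
   ```

   If `c_destructor` were instead a temporary created inside the call, it could be collected before the capsule, and the destructor call would crash or be silently lost.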


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-10 Thread GitBox
haojin2 commented on issue #12121: Broken link in 
test_gluon_model_zoo.test_models
URL: 
https://github.com/apache/incubator-mxnet/issues/12121#issuecomment-412251074
 
 
   Not really sure, but it seems there are occasional build failures caused by broken links.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hetong007 commented on issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-10 Thread GitBox
hetong007 commented on issue #12121: Broken link in 
test_gluon_model_zoo.test_models
URL: 
https://github.com/apache/incubator-mxnet/issues/12121#issuecomment-412251011
 
 
   @haojin2 Do we have an idea of how often this kind of error shows up?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] junrushao1994 commented on issue #12106: [MXNET-795] Fix a bug that CutSubgraph works only when each subgraph has its distinct name

2018-08-10 Thread GitBox
junrushao1994 commented on issue #12106: [MXNET-795] Fix a bug that CutSubgraph 
works only when each subgraph has its distinct name
URL: https://github.com/apache/incubator-mxnet/pull/12106#issuecomment-412250763
 
 
   @zheng-da Could we get this fix merged?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209416184
 
 

 ##
 File path: python/mxnet/_ctypes/ndarray.py
 ##
 @@ -31,21 +31,24 @@
 
 class NDArrayBase(object):
 """Base data structure for ndarray"""
-__slots__ = ["handle", "writable"]
+__slots__ = ["handle", "writable", "dlpack"]
 # pylint: disable= no-member
 
-def __init__(self, handle, writable=True):
+def __init__(self, handle, writable=True, dlpack=None):
 
 Review comment:
   Thanks.
   But I think the new NDArray object `b` can't hold the same `shared_ptr` as the original NDArray `a`: `b` only gets the raw data pointer from DLPack, not a `shared_ptr`.
   
   Consider another case:
   ```python
   from torch.utils import dlpack
   a = torch.tensor([1, 2, 3])
   pack = dlpack.to_dlpack(a)
   b = mx.nd.from_dlpack(pack)
   del a, pack
   ```
   When `dlpack.to_dlpack` is called, PyTorch allocates an `ATenDLMTensor`, which increases the refcount of the Torch tensor ([code](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/DLConvertor.cpp#L129)).
   After the variables `a` and `pack` are released, the `ATenDLMTensor` still exists.
   I think the deleter should be called by the new NDArray `b` when `b` is released; refer to [PyTorch FromDLPack](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/DLConvertor.cpp#L170). However, NDArray doesn't have an explicit deleter parameter.
   
   In my PR, `from_dlpack` copies the dlpack object.
   When the old dlpack `pack` is released, it doesn't call the deleter.
   The new dlpack `b.dlpack` becomes a member of the new NDArray `b`, as in `NDArray(handle=handle, dlpack=dlpack_copy)`.
   When the new NDArray `b` is released, `b.dlpack` is released as well and calls the deleter, which releases the `NDArrayDLManager` or `ATenDLMTensor`; the refcount of the original NDArray `a` then decreases by 1.
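   To make that ownership story concrete, here is a minimal, hypothetical sketch (the class name mirrors the PR's `NDArrayBase`, but `FakeCapsule` is a stand-in for the copied DLPack capsule, not real MXNet code) showing how holding `dlpack` on the new NDArray ties the deleter to the NDArray's lifetime:

   ```python
   class NDArrayBase(object):
       """Sketch of the PR's idea: keep the DLPack capsule alive on the NDArray."""
       __slots__ = ["handle", "writable", "dlpack"]

       def __init__(self, handle, writable=True, dlpack=None):
           self.handle = handle
           self.writable = writable
           self.dlpack = dlpack  # the capsule lives exactly as long as this NDArray


   class FakeCapsule(object):
       """Stand-in for the copied DLPack capsule; records when its deleter runs."""
       deleted = []

       def __del__(self):
           # In the real code this is where DLManagedTensor->deleter would be
           # invoked, releasing NDArrayDLManager / ATenDLMTensor.
           FakeCapsule.deleted.append(True)


   b = NDArrayBase(handle=object(), dlpack=FakeCapsule())
   assert not FakeCapsule.deleted  # deleter not called while b is alive
   del b                           # releasing b releases the capsule...
   assert FakeCapsule.deleted == [True]  # ...which triggers the deleter
   ```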


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209415677
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -737,6 +741,57 @@ MXNET_DLL int MXNDArrayGetShape(NDArrayHandle handle,
  */
 MXNET_DLL int MXNDArrayGetData(NDArrayHandle handle,
void **out_pdata);
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForRead(NDArrayHandle handle,
+   DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending reads/writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForWrite(NDArrayHandle handle,
+DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a NDArray backed by a dlpack tensor.
+*
+* This allows us to create a NDArray using the memory
+* allocated by an external deep learning framework
+* that is DLPack compatible.
+*
+* The memory is retained until the NDArray went out of scope.
+*
+* \param dlpack the pointer of the input DLManagedTensor
+* \param out_handle pointer holder to get pointer of NDArray
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayFromDLPack(DLManagedTensorHandle dlpack,
+  NDArrayHandle *out_handle);
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack the pointer of the input DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArrayCallDLPackDeleter(DLManagedTensorHandle dlpack);
+
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack_capsule the pointer of a PyCapsule storing DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL void MXNDArrayCallDLPackCapsuleDeleter(PyObjectHandle dlpack_capsule);
 
 Review comment:
   Yes.
   In [the test code](https://gist.github.com/wkcn/501cd118b8ceb1608bd6c8fa968e7912), it works on Linux but fails on Windows.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] andrewfayres commented on a change in pull request #12110: [MXNET-730][WIP] Scala test in nightly

2018-08-10 Thread GitBox
andrewfayres commented on a change in pull request #12110: [MXNET-730][WIP] 
Scala test in nightly
URL: https://github.com/apache/incubator-mxnet/pull/12110#discussion_r209415600
 
 

 ##
 File path: 
scala-package/examples/src/main/scala/org/apache/mxnetexamples/Util.scala
 ##
 @@ -42,4 +48,30 @@ object Util {
 }
if (!success) throw new Exception(s"$url Download failed!")
   }
+
+  /**
+* This Util is designed to manage the tests in CI
+* @param name the name of the test
+* @return runTest and number of epoch
+*/
+  def testManager(name: String) : (Boolean, Int) = {
 
 Review comment:
   Yes, you're right. I meant to say we should avoid having it in the src/main folder.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #12129: update dmlc-core for security reason

2018-08-10 Thread GitBox
haojin2 commented on issue #12129: update dmlc-core for security reason
URL: https://github.com/apache/incubator-mxnet/pull/12129#issuecomment-412248681
 
 
   @szha @eric-haibin-lin @anirudh2290 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #12027: [MXNET-768] Partially enable flaky test for norm operator

2018-08-10 Thread GitBox
anirudhacharya commented on a change in pull request #12027: [MXNET-768] 
Partially enable flaky test for norm operator
URL: https://github.com/apache/incubator-mxnet/pull/12027#discussion_r209415485
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -3121,20 +3121,22 @@ def l2norm(input_data, axis=0, keepdims=True):
 atol=1e-2 if dtype is np.float16 else 
1e-5, ctx=ctx)
 # Disable numeric gradient 
https://github.com/apache/incubator-mxnet/issues/11509
 # # check gradient
-# check_numeric_gradient(norm_sym, [in_data], 
numeric_eps=epsilon, rtol=1e-2, atol=1e-3)
-# if i < in_data_dim-1:
-# norm_sym = mx.symbol.norm(data=data, ord=order, axis=(i, 
i+1), keepdims=True)
-# npy_out = l1norm(in_data, (i, i+1)) if order is 1 else 
l2norm(in_data, (i, i+1))
-# npy_out_backward = np.sign(in_data) if order is 1 else 
in_data/npy_out
-# check_symbolic_forward(norm_sym, [in_data], [npy_out],
-#rtol=1e-2 if dtype is np.float16 
else 1e-5,
-#atol=1e-2 if dtype is np.float16 
else 1e-5, ctx=ctx)
-# check_symbolic_backward(norm_sym, [in_data], 
[np.ones(npy_out.shape)],
-# [npy_out_backward],
-# rtol=1e-2 if dtype is np.float16 
else 1e-5,
-# atol=1e-2 if dtype is np.float16 
else 1e-5, ctx=ctx)
-# # check gradient
-# check_numeric_gradient(norm_sym, [in_data], 
numeric_eps=epsilon, rtol=1e-2, atol=1e-3)
+# if dtype is not np.float16:
+# check_numeric_gradient(norm_sym, [in_data], 
numeric_eps=epsilon, rtol=1e-1, atol=1e-3)
+if i < in_data_dim-1:
+norm_sym = mx.symbol.norm(data=data, ord=order, axis=(i, 
i+1), keepdims=True)
+npy_out = l1norm(in_data, (i, i+1)) if order is 1 else 
l2norm(in_data, (i, i+1))
+npy_out_backward = np.sign(in_data) if order is 1 else 
in_data/npy_out
+check_symbolic_forward(norm_sym, [in_data], [npy_out],
+   rtol=1e-2 if dtype is np.float16 
else 1e-5,
+   atol=1e-2 if dtype is np.float16 
else 1e-5, ctx=ctx)
+check_symbolic_backward(norm_sym, [in_data], 
[np.ones(npy_out.shape)],
+[npy_out_backward],
+rtol=1e-2 if dtype is np.float16 
else 1e-5,
+atol=1e-2 if dtype is np.float16 
else 1e-5, ctx=ctx)
+# # check gradient
+# if dtype is not np.float16:
+# check_numeric_gradient(norm_sym, [in_data], 
numeric_eps=epsilon, rtol=1e-1, atol=1e-3)
 
 Review comment:
   It is there on line 3122.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #6925: Layer Specific Learning Rate For R

2018-08-10 Thread GitBox
anirudhacharya commented on issue #6925: Layer Specific Learning Rate For R
URL: 
https://github.com/apache/incubator-mxnet/issues/6925#issuecomment-412247668
 
 
   @mxnet-label-bot please add [Feature Request] to this issue.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ifeherva commented on issue #11984: Generalized broadcast_like operator

2018-08-10 Thread GitBox
ifeherva commented on issue #11984: Generalized broadcast_like operator
URL: https://github.com/apache/incubator-mxnet/pull/11984#issuecomment-412247572
 
 
   I think so.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] safrooze commented on issue #11493: Fix MXPredReshape in the c_predict_api

2018-08-10 Thread GitBox
safrooze commented on issue #11493: Fix MXPredReshape in the c_predict_api
URL: https://github.com/apache/incubator-mxnet/pull/11493#issuecomment-412245838
 
 
   What tolerance values are recommended for random data in mxnet?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha closed pull request #11928: Generalized reshape_like operator

2018-08-10 Thread GitBox
szha closed pull request #11928: Generalized reshape_like operator
URL: https://github.com/apache/incubator-mxnet/pull/11928
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/elemwise_unary_op.h 
b/src/operator/tensor/elemwise_unary_op.h
index 0c37a941fb6..e09a6cccddb 100644
--- a/src/operator/tensor/elemwise_unary_op.h
+++ b/src/operator/tensor/elemwise_unary_op.h
@@ -476,6 +476,34 @@ void HardSigmoidBackward(const nnvm::NodeAttrs& attrs,
   });
 }
 
+struct ReshapeLikeParam : public dmlc::Parameter<ReshapeLikeParam> {
+  dmlc::optional<int> lhs_begin, rhs_begin, lhs_end, rhs_end;
+  DMLC_DECLARE_PARAMETER(ReshapeLikeParam) {
+    DMLC_DECLARE_FIELD(lhs_begin)
+        .set_default(dmlc::optional<int>())
+        .describe(
+            "Defaults to 0. "
+            "The beginning index along which the lhs dimensions are to be "
+            "reshaped. Supports negative indices.");
+    DMLC_DECLARE_FIELD(lhs_end)
+        .set_default(dmlc::optional<int>())
+        .describe("Defaults to None. "
+                  "The ending index along which the lhs dimensions are to be "
+                  "used for reshaping. Supports negative indices.");
+    DMLC_DECLARE_FIELD(rhs_begin)
+        .set_default(dmlc::optional<int>())
+        .describe("Defaults to 0. "
+                  "The beginning index along which the rhs dimensions are to "
+                  "be used for "
+                  "reshaping. Supports negative indices.");
+    DMLC_DECLARE_FIELD(rhs_end)
+        .set_default(dmlc::optional<int>())
+        .describe("Defaults to None. "
+                  "The ending index along which the rhs dimensions are to be "
+                  "used for reshaping. Supports negative indices.");
+  }
+};
+
 /*! \brief Unary compute */
 #define MXNET_OPERATOR_REGISTER_UNARY(__name$)  \
   NNVM_REGISTER_OP(__name$) \
diff --git a/src/operator/tensor/elemwise_unary_op_basic.cc 
b/src/operator/tensor/elemwise_unary_op_basic.cc
index 929bc7426d5..f7f21f9076a 100644
--- a/src/operator/tensor/elemwise_unary_op_basic.cc
+++ b/src/operator/tensor/elemwise_unary_op_basic.cc
@@ -350,10 +350,109 @@ NNVM_REGISTER_OP(_identity_with_attr_like_rhs)
 .add_argument("lhs", "NDArray-or-Symbol", "First input.")
 .add_argument("rhs", "NDArray-or-Symbol", "Second input.");
 
+void ReshapeLikeRangeCanonicalize(int ndims, const char *side,
+                                  const dmlc::optional<int> &begin,
+                                  const dmlc::optional<int> &end, int *cbegin,
+                                  int *cend) {
+  *cbegin = begin.has_value() ? begin.value() : 0;
+  if (*cbegin < 0)
+    *cbegin += ndims;
+
+  if (!end.has_value()) {
+    *cend = ndims;
+  } else {
+    *cend = end.value();
+    if (*cend < 0) {
+      *cend += ndims;
+    }
+  }
+  CHECK(*cend <= ndims) << "Invalid end for " << side << "_end=" << end
+                        << " as dimension number is " << ndims;
+  CHECK((*cbegin < *cend)) << "Invalid begin, end, get " << side
+                           << "_begin=" << begin << ", " << side
+                           << "_end=" << end;
+
+  CHECK(*cend >= 0) << "Invalid end for " << side << "_end=" << end;
+  CHECK(*cbegin >= 0) << "Invalid begin for " << side << "_begin=" << begin;
+}
+
+void GetReshapeLikeParams(const ReshapeLikeParam &param, const TShape &lshape,
+                          const TShape &rshape, int *lhs_begin, int *lhs_end,
+                          int *rhs_begin, int *rhs_end) {
+  // LHS params
+  ReshapeLikeRangeCanonicalize(lshape.ndim(), "lhs", param.lhs_begin,
+                               param.lhs_end, lhs_begin, lhs_end);
+  // RHS params
+  ReshapeLikeRangeCanonicalize(rshape.ndim(), "rhs", param.rhs_begin,
+                               param.rhs_end, rhs_begin, rhs_end);
+}
 
+bool ReshapeLikeShapeCompute(const nnvm::NodeAttrs &attrs,
+                             std::vector<TShape> *in_attrs,
+                             std::vector<TShape> *out_attrs) {
+  const ReshapeLikeParam &param = nnvm::get<ReshapeLikeParam>(attrs.parsed);
+  const TShape &lshape = (*in_attrs)[0];
+  const TShape &rshape = (*in_attrs)[1];
+  int lhs_begin, lhs_end, rhs_begin, rhs_end;
+  GetReshapeLikeParams(param, lshape, rshape, &lhs_begin, &lhs_end, &rhs_begin,
+                       &rhs_end);
+
+  int lhsrank = static_cast<int>(lshape.ndim());
+  int orank = lhsrank + (rhs_end - rhs_begin) - (lhs_end - lhs_begin);
+  TShape oshape(orank);
+
+  for (int i = 0; i < lhs_begin; ++i)
+    oshape[i] = lshape[i];
+
+  int opos = lhs_begin;
+  for (int i = rhs_begin; i < rhs_end; ++i) {
+    oshape[opos] = rshape[i];
+    opos += 1;
+  }
+
+  for (int i = lhs_end; i < lhsrank; ++i) {
+    oshape[opos] = lshape[i];
+    opos += 1;
+  }
+
+  CHECK_EQ((*in_attrs)[0].Size(), oshape.Size())
+      << "Cannot reshape lhs with 

[incubator-mxnet] branch master updated: Generalized reshape_like operator (#11928)

2018-08-10 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new c44f16b  Generalized reshape_like operator (#11928)
c44f16b is described below

commit c44f16b0909d94c9beaf9c5fc0773855bbc91807
Author: Sebastian Bodenstein 
AuthorDate: Sat Aug 11 04:35:32 2018 +0200

Generalized reshape_like operator (#11928)

* first commit

* fix documentation

* changed static_cast(end) to end.has_value()
fixed documentation issues

* change begin from int to optional

* test None as lhs
---
 src/operator/tensor/elemwise_unary_op.h|  28 ++
 src/operator/tensor/elemwise_unary_op_basic.cc | 118 +
 tests/python/unittest/test_operator.py |  53 +++
 3 files changed, 184 insertions(+), 15 deletions(-)

diff --git a/src/operator/tensor/elemwise_unary_op.h 
b/src/operator/tensor/elemwise_unary_op.h
index 0c37a94..e09a6cc 100644
--- a/src/operator/tensor/elemwise_unary_op.h
+++ b/src/operator/tensor/elemwise_unary_op.h
@@ -476,6 +476,34 @@ void HardSigmoidBackward(const nnvm::NodeAttrs& attrs,
   });
 }
 
+struct ReshapeLikeParam : public dmlc::Parameter<ReshapeLikeParam> {
+  dmlc::optional<int> lhs_begin, rhs_begin, lhs_end, rhs_end;
+  DMLC_DECLARE_PARAMETER(ReshapeLikeParam) {
+    DMLC_DECLARE_FIELD(lhs_begin)
+        .set_default(dmlc::optional<int>())
+        .describe(
+            "Defaults to 0. "
+            "The beginning index along which the lhs dimensions are to be "
+            "reshaped. Supports negative indices.");
+    DMLC_DECLARE_FIELD(lhs_end)
+        .set_default(dmlc::optional<int>())
+        .describe("Defaults to None. "
+                  "The ending index along which the lhs dimensions are to be "
+                  "used for reshaping. Supports negative indices.");
+    DMLC_DECLARE_FIELD(rhs_begin)
+        .set_default(dmlc::optional<int>())
+        .describe("Defaults to 0. "
+                  "The beginning index along which the rhs dimensions are to "
+                  "be used for "
+                  "reshaping. Supports negative indices.");
+    DMLC_DECLARE_FIELD(rhs_end)
+        .set_default(dmlc::optional<int>())
+        .describe("Defaults to None. "
+                  "The ending index along which the rhs dimensions are to be "
+                  "used for reshaping. Supports negative indices.");
+  }
+};
+
 /*! \brief Unary compute */
 #define MXNET_OPERATOR_REGISTER_UNARY(__name$)  \
   NNVM_REGISTER_OP(__name$) \
diff --git a/src/operator/tensor/elemwise_unary_op_basic.cc 
b/src/operator/tensor/elemwise_unary_op_basic.cc
index 929bc74..f7f21f9 100644
--- a/src/operator/tensor/elemwise_unary_op_basic.cc
+++ b/src/operator/tensor/elemwise_unary_op_basic.cc
@@ -350,10 +350,109 @@ NNVM_REGISTER_OP(_identity_with_attr_like_rhs)
 .add_argument("lhs", "NDArray-or-Symbol", "First input.")
 .add_argument("rhs", "NDArray-or-Symbol", "Second input.");
 
+void ReshapeLikeRangeCanonicalize(int ndims, const char *side,
+                                  const dmlc::optional<int> &begin,
+                                  const dmlc::optional<int> &end, int *cbegin,
+                                  int *cend) {
+  *cbegin = begin.has_value() ? begin.value() : 0;
+  if (*cbegin < 0)
+    *cbegin += ndims;
+
+  if (!end.has_value()) {
+    *cend = ndims;
+  } else {
+    *cend = end.value();
+    if (*cend < 0) {
+      *cend += ndims;
+    }
+  }
+  CHECK(*cend <= ndims) << "Invalid end for " << side << "_end=" << end
+                        << " as dimension number is " << ndims;
+  CHECK((*cbegin < *cend)) << "Invalid begin, end, get " << side
+                           << "_begin=" << begin << ", " << side
+                           << "_end=" << end;
+
+  CHECK(*cend >= 0) << "Invalid end for " << side << "_end=" << end;
+  CHECK(*cbegin >= 0) << "Invalid begin for " << side << "_begin=" << begin;
+}
+
+void GetReshapeLikeParams(const ReshapeLikeParam &param, const TShape &lshape,
+                          const TShape &rshape, int *lhs_begin, int *lhs_end,
+                          int *rhs_begin, int *rhs_end) {
+  // LHS params
+  ReshapeLikeRangeCanonicalize(lshape.ndim(), "lhs", param.lhs_begin,
+                               param.lhs_end, lhs_begin, lhs_end);
+  // RHS params
+  ReshapeLikeRangeCanonicalize(rshape.ndim(), "rhs", param.rhs_begin,
+                               param.rhs_end, rhs_begin, rhs_end);
+}
 
+bool ReshapeLikeShapeCompute(const nnvm::NodeAttrs &attrs,
+                             std::vector<TShape> *in_attrs,
+                             std::vector<TShape> *out_attrs) {
+  const ReshapeLikeParam &param = nnvm::get<ReshapeLikeParam>(attrs.parsed);
+  const TShape &lshape = (*in_attrs)[0];
+  const TShape &rshape = (*in_attrs)[1];
+  int lhs_begin, lhs_end, rhs_begin, rhs_end;
+  GetReshapeLikeParams(param, lshape, rshape, &lhs_begin, 

[GitHub] tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209413382
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -737,6 +741,57 @@ MXNET_DLL int MXNDArrayGetShape(NDArrayHandle handle,
  */
 MXNET_DLL int MXNDArrayGetData(NDArrayHandle handle,
void **out_pdata);
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForRead(NDArrayHandle handle,
+   DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending reads/writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForWrite(NDArrayHandle handle,
+DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a NDArray backed by a dlpack tensor.
+*
+* This allows us to create a NDArray using the memory
+* allocated by an external deep learning framework
+* that is DLPack compatible.
+*
+* The memory is retained until the NDArray went out of scope.
+*
+* \param dlpack the pointer of the input DLManagedTensor
+* \param out_handle pointer holder to get pointer of NDArray
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayFromDLPack(DLManagedTensorHandle dlpack,
+  NDArrayHandle *out_handle);
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack the pointer of the input DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArrayCallDLPackDeleter(DLManagedTensorHandle dlpack);
+
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack_capsule the pointer of a PyCapsule storing DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL void MXNDArrayCallDLPackCapsuleDeleter(PyObjectHandle dlpack_capsule);
 
 Review comment:
   This is strange, as the destructor itself sits in the global scope and should be 
destructed after the dltensors (which have a local scope).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209412721
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -494,6 +494,57 @@ int MXNDArrayGetData(NDArrayHandle handle,
   API_END();
 }
 
+int MXNDArrayToDLPack(NDArrayHandle handle,
+  DLManagedTensorHandle *out_dlpack) {
+  API_BEGIN();
+  NDArray *arr = static_cast<NDArray*>(handle);
+  *out_dlpack = arr->ToDLPack();
+  API_END();
+}
+
+int MXNDArrayFromDLPack(DLManagedTensorHandle dlpack,
+NDArrayHandle *out_handle) {
+  API_BEGIN();
+  NDArray *pdata = new NDArray();
+  *pdata = NDArray::FromDLPack(
+   static_cast<DLManagedTensor*>(dlpack));
+  *out_handle = pdata;
+  API_END();
+}
+
+int MXNDArrayCallDLPackDeleter(DLManagedTensorHandle dlpack) {
+  API_BEGIN();
+  if (dlpack) {
+DLManagedTensor *p_dlpack = static_cast<DLManagedTensor*>(dlpack);
+p_dlpack->deleter(p_dlpack);
+  }
+  API_END();
+}
+
+
+typedef struct {
+char py_object[16];
 
 Review comment:
   Yes, it's dangerous.
   I want to call the function `PyCapsule_GetPointer` in c_api.cc,
   but MXNet doesn't include the Python.h header file.
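For context, the CPython capsule API can be driven from pure Python through ctypes, without compiling anything against Python.h. The sketch below is not the MXNet implementation; the `dltensor` payload is a dummy buffer standing in for a DLManagedTensor:

```python
import ctypes

_c_str_dltensor = b"dltensor"

# Declare the CPython capsule API via ctypes, so no Python.h is needed.
ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object
ctypes.pythonapi.PyCapsule_New.argtypes = [
    ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p]
ctypes.pythonapi.PyCapsule_IsValid.restype = ctypes.c_int
ctypes.pythonapi.PyCapsule_IsValid.argtypes = [
    ctypes.py_object, ctypes.c_char_p]
ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [
    ctypes.py_object, ctypes.c_char_p]

payload = ctypes.create_string_buffer(16)   # dummy DLManagedTensor payload
addr = ctypes.cast(payload, ctypes.c_void_p).value

# Wrap the pointer in a named capsule (NULL destructor for this sketch),
# then recover it by name -- the operation wkcn wants in c_api.cc.
cap = ctypes.pythonapi.PyCapsule_New(addr, _c_str_dltensor, None)
assert ctypes.pythonapi.PyCapsule_IsValid(cap, _c_str_dltensor) == 1
ptr = ctypes.pythonapi.PyCapsule_GetPointer(cap, _c_str_dltensor)
```

A deleter installed this way lives on the Python side, which sidesteps the fixed-size `py_object[16]` struct in the C++ code at the cost of depending on Python GC ordering, as discussed elsewhere in this thread.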


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold commented on issue #12118: fix potential floating number overflow, enable float16

2018-08-10 Thread GitBox
zhreshold commented on issue #12118: fix potential floating number overflow, 
enable float16
URL: https://github.com/apache/incubator-mxnet/pull/12118#issuecomment-412241265
 
 
   @larroy Negative numbers are required to mark some indices for special purposes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
wkcn commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209411964
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -737,6 +741,57 @@ MXNET_DLL int MXNDArrayGetShape(NDArrayHandle handle,
  */
 MXNET_DLL int MXNDArrayGetData(NDArrayHandle handle,
void **out_pdata);
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForRead(NDArrayHandle handle,
+   DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a reference view of NDArray that
+*  represents as DLManagedTensor until
+*  all the pending reads/writes with respect NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForWrite(NDArrayHandle handle,
+DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a NDArray backed by a dlpack tensor.
+*
+* This allows us to create a NDArray using the memory
+* allocated by an external deep learning framework
+* that is DLPack compatible.
+*
+* The memory is retained until the NDArray went out of scope.
+*
+* \param dlpack the pointer of the input DLManagedTensor
+* \param out_handle pointer holder to get pointer of NDArray
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayFromDLPack(DLManagedTensorHandle dlpack,
+  NDArrayHandle *out_handle);
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack the pointer of the input DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArrayCallDLPackDeleter(DLManagedTensorHandle dlpack);
+
+/*!
+ * \brief Delete a dlpack tensor
+ * \param dlpack_capsule the pointer of a PyCapsule storing DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL void MXNDArrayCallDLPackCapsuleDeleter(PyObjectHandle dlpack_capsule);
 
 Review comment:
   Yes. I knew the trick and tried it in my previous PR, but it failed in the 
Windows test.
   [Related 
CI](http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-12047/8/pipeline)
   
   It seems that the CI of TVM doesn't have a Windows test, so its CI passes.
   The reason is that the destructor is released by the Python GC before it is 
called, and the GC release order differs between Linux and Windows.
   
   On Linux, the destructor is called first and released afterwards, so it works.
   On Windows, however, the destructor is released before it is called, so it 
doesn't work. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services



[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-08-10 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c82e93b  Bump the publish timestamp.
c82e93b is described below

commit c82e93bb3b621bffc6d7821ca29205e056c79429
Author: mxnet-ci 
AuthorDate: Sat Aug 11 00:46:35 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..456abda
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Aug 11 00:46:35 UTC 2018



[GitHub] larroy commented on a change in pull request #12110: [MXNET-730][WIP] Scala test in nightly

2018-08-10 Thread GitBox
larroy commented on a change in pull request #12110: [MXNET-730][WIP] Scala 
test in nightly
URL: https://github.com/apache/incubator-mxnet/pull/12110#discussion_r209410971
 
 

 ##
 File path: 
scala-package/examples/src/main/scala/org/apache/mxnetexamples/Util.scala
 ##
 @@ -42,4 +48,30 @@ object Util {
 }
if (!success) throw new Exception(s"$url Download failed!")
   }
+
+  /**
+* This Util is designed to manage the tests in CI
+* @param name the name of the test
+* @return runTest and number of epoch
+*/
+  def testManager(name: String) : (Boolean, Int) = {
 
 Review comment:
   isn't that why we have src/test/scala in the standard layout?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on issue #12118: fix potential floating number overflow, enable float16

2018-08-10 Thread GitBox
larroy commented on issue #12118: fix potential floating number overflow, 
enable float16
URL: https://github.com/apache/incubator-mxnet/pull/12118#issuecomment-412238211
 
 
   Why don't you use size_t instead of int32_t?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] alexmosc opened a new issue #12002: How do I properly dimensionalize my array and tune `rnn.graph.unroll` to make the LSTM work for this multidimensional sequence

2018-08-10 Thread GitBox
alexmosc opened a new issue #12002: How do I properly dimensionalize my array 
and tune `rnn.graph.unroll` to make the LSTM work for this multidimensional 
sequence
URL: https://github.com/apache/incubator-mxnet/issues/12002
 
 
   It is essentially a call for help rather than code-related issue.
   
   Assume a matrix with 5 rows and 20 columns. Each column is a sample of a 
multivariate timeseries. Each row is one dimension of the multivariate 
timeseries.
   
   I have also a vector of 20 output values.
   
   I am trying to build an LSTM model with sequence length = 20 which would 
iterate over samples 1 to 20 and regress output values associated.
   
   I get all sorts of "shape mismatch" and "You are trying to split the 0-th 
axis of input tensor with shape" error messages. 
   
   The question is how I properly dimensionalize my array of input data and 
tune `rnn.graph.unroll` to make the LSTM work for this multidimensional 
sequence.
   
   
   ```
   library(mxnet)
   
   rm(symbol)
   
   symbol <- rnn.graph.unroll(seq_len = 20, 
  num_rnn_layer =  1, 
  num_hidden = 50,
  input_size = NULL,
  num_embed = NULL, 
  num_decode = 1,
  masking = F, 
  loss_output = "linear",
  dropout = 0.2, 
  ignore_label = -1,
  cell_type = "lstm",
  output_last_state = F,
  config = "seq-to-one")
   
   #graph.viz(symbol, type = "graph", direction = "LR", graph.height.px = 600, 
graph.width.px = 800)
   
   # train.data <- mx.io.arrayiter(
   #   data = matrix(rnorm(100, 0, 1), ncol = 20)
   #   , label = rnorm(20, 0, 1)
   #   , batch.size = 20
   #   , shuffle = F
   #  )
   
   train.x <- array(
  t(matrix(rnorm(100, 0, 1), nrow = 1))
  , dim = c(5, 20)
   )
   
   train.y <- matrix(rnorm(20, 0, 1), nrow = 1)
   
   nn_model <- mx.model.FeedForward.create(
symbol,
X = train.x,
y = train.y,
ctx = mx.cpu(),
begin.round = 1,
num.round = 1000,
optimizer = "sgd",
learning.rate = 0.01,
initializer = mx.init.uniform(0.01),
eval.metric = mx.metric.mse,
array.batch.size = 1,
array.layout = 'colmajor'
   )
   ```
   
   Alexey


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #12059: Support selu activation function

2018-08-10 Thread GitBox
haojin2 commented on issue #12059: Support selu activation function
URL: https://github.com/apache/incubator-mxnet/pull/12059#issuecomment-412236399
 
 
   @apeforest All comments addressed already.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #12080: Fix MKLDNNSum cpp test failure

2018-08-10 Thread GitBox
haojin2 commented on issue #12080: Fix MKLDNNSum cpp test failure
URL: https://github.com/apache/incubator-mxnet/pull/12080#issuecomment-412235495
 
 
   @zheng-da @azai91 @mseth10 Please give a review when you have time.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11493: Fix MXPredReshape in the c_predict_api

2018-08-10 Thread GitBox
marcoabreu commented on issue #11493: Fix MXPredReshape in the c_predict_api
URL: https://github.com/apache/incubator-mxnet/pull/11493#issuecomment-412233719
 
 
   The default is atol=1e-20 and rtol=1e-5. I don't know how much deviation we 
can expect here.
   
   I think the default values are reasonable. The only case when they can cause 
problems is when we are working with randomized input data. Since we got both 
cases, I'd prefer to not increase the default tolerances but instead rather 
have them increased specifically if we use random data.
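The combined check these tolerances feed into has the numpy.allclose shape, `|actual - desired| <= atol + rtol * |desired|`; a tiny sketch (the helper name is mine):

```python
# atol dominates near zero, rtol dominates for large magnitudes.
def within_tol(actual, desired, rtol=1e-5, atol=1e-20):
    return abs(actual - desired) <= atol + rtol * abs(desired)

print(within_tol(1.000001, 1.0))   # relative error 1e-6 < rtol, passes
print(within_tol(1.001, 1.0))      # relative error 1e-3 > rtol, fails
```

With atol=1e-20, randomized inputs whose expected values land near zero are judged almost purely by rtol, which is why random data is the case most likely to need looser tolerances.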


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] stu1130 opened a new pull request #12131: [MXNET-737][WIP] Add last batch handle for imageiter

2018-08-10 Thread GitBox
stu1130 opened a new pull request #12131: [MXNET-737][WIP] Add last batch 
handle for imageiter
URL: https://github.com/apache/incubator-mxnet/pull/12131
 
 
   ## Description ##
   
   Add `last_batch_handle` parameter to ImageIter based on #11883
   * `last_batch_handle` support `pad`(default), `discard`, `roll_over`
   
   Note that reading record files (.rec) without shuffle (i.e. sequential read) 
doesn't support 'discard'.
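   The three policies can be sketched over a toy index list (illustrative only; 
'roll_over' is noted but not modeled, since it carries the remainder into the 
next epoch):

```python
def epoch_batches(n, batch_size, last_batch_handle):
    # Split indices 0..n-1 into batches, applying the last-batch policy.
    idx = list(range(n))
    out = []
    while idx:
        batch, idx = idx[:batch_size], idx[batch_size:]
        if len(batch) < batch_size:
            if last_batch_handle == 'discard':
                break  # drop the incomplete final batch
            if last_batch_handle == 'pad':
                # reuse leading samples to fill the final batch
                batch = batch + list(range(batch_size - len(batch)))
            # 'roll_over' would instead keep `batch` for the next epoch
        out.append(batch)
    return out

print(epoch_batches(10, 4, 'pad'))      # [[0,1,2,3], [4,5,6,7], [8,9,0,1]]
print(epoch_batches(10, 4, 'discard'))  # [[0,1,2,3], [4,5,6,7]]
```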

   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
* Add last_batch_handle feature to ImageIter and adjust test_imageiter to 
test last_batch_handle parameter
* Change the shuffle behavior to the same as NDArrayIter where shuffling 
the data only happen during the iterator initialization
   
   ## Comments ##
   N/A
   @zhreshold 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #12125: CI scripts refinements. Separate Py2 and Py3 installs cripts. Fix perms.

2018-08-10 Thread GitBox
larroy commented on a change in pull request #12125: CI scripts refinements. 
Separate Py2 and Py3 installs cripts. Fix perms.
URL: https://github.com/apache/incubator-mxnet/pull/12125#discussion_r209401440
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -1005,6 +1005,7 @@ broken_link_checker() {
 ./tests/nightly/broken_link_checker_test/broken_link_checker.sh
 }
 
+
 
 Review comment:
   Added two blank lines for separation. Not important.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vishaalkapoor commented on issue #12068: [MXAPPS-805] Notebook execution failures in CI.

2018-08-10 Thread GitBox
vishaalkapoor commented on issue #12068: [MXAPPS-805] Notebook execution 
failures in CI.
URL: https://github.com/apache/incubator-mxnet/pull/12068#issuecomment-412224612
 
 
   Thanks everyone for reviews! :-)
   
   On Fri, Aug 10, 2018, 1:27 PM Marco de Abreu 
   wrote:
   
   > Merged #12068  into
   > master.
   >
   > —
   > You are receiving this because you authored the thread.
   > Reply to this email directly, view it on GitHub
   > ,
   > or mute the thread
   > 

   > .
   >
   




[GitHub] haojin2 commented on a change in pull request #12125: CI scripts refinements. Separate Py2 and Py3 install scripts. Fix perms.

2018-08-10 Thread GitBox
haojin2 commented on a change in pull request #12125: CI scripts refinements. 
Separate Py2 and Py3 install scripts. Fix perms.
URL: https://github.com/apache/incubator-mxnet/pull/12125#discussion_r209397540
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -1005,6 +1005,7 @@ broken_link_checker() {
 ./tests/nightly/broken_link_checker_test/broken_link_checker.sh
 }
 
+
 
 Review comment:
   Why is there an extra blank line here? Is this needed or just a typo?




[GitHub] reminisce commented on issue #12104: [DO NOT REVIEW] Subgraph API

2018-08-10 Thread GitBox
reminisce commented on issue #12104: [DO NOT REVIEW] Subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/12104#issuecomment-412217837
 
 
   @ZhennanQin Yes, I'm fine with that.




[incubator-mxnet] branch master updated: Document MXNET_LIBRARY_PATH environment variable which was not documented explicitly. (#12074)

2018-08-10 Thread nswamy
This is an automated email from the ASF dual-hosted git repository.

nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1f8debb  Document MXNET_LIBRARY_PATH environment variable which was 
not documented explicitly. (#12074)
1f8debb is described below

commit 1f8debb7dd614272092423f35cf49444f18498af
Author: Pedro Larroy <928489+lar...@users.noreply.github.com>
AuthorDate: Fri Aug 10 23:23:32 2018 +0200

Document MXNET_LIBRARY_PATH environment variable which was not documented 
explicitly. (#12074)
---
 docs/faq/env_var.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/docs/faq/env_var.md b/docs/faq/env_var.md
index 6e9a359..15ba225 100644
--- a/docs/faq/env_var.md
+++ b/docs/faq/env_var.md
@@ -8,6 +8,18 @@ For example, you can set these environment variables in Linux 
or macOS as follow
 export MXNET_GPU_WORKER_NTHREADS=3
 ```
 
+Or in powershell:
+```
+$env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
+```
+
+## Variables controlling the execution environment
+
+* MXNET_LIBRARY_PATH
+Absolute path indicating where the mxnet dynamic library is to be located, 
this would be the absolute
+path to `libmxnet.so` or `libmxnet.dll` depending on the platform. The 
logic for loading the
+library is in `python/mxnet/libinfo.py`
+
 ## Set the Number of Threads
 
 * MXNET_GPU_WORKER_NTHREADS



[GitHub] nswamy closed pull request #12074: Document MXNET_LIBRARY_PATH environment variable which was not docume…

2018-08-10 Thread GitBox
nswamy closed pull request #12074: Document MXNET_LIBRARY_PATH environment 
variable which was not docume…
URL: https://github.com/apache/incubator-mxnet/pull/12074
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/faq/env_var.md b/docs/faq/env_var.md
index 6e9a3594168..15ba225ea86 100644
--- a/docs/faq/env_var.md
+++ b/docs/faq/env_var.md
@@ -8,6 +8,18 @@ For example, you can set these environment variables in Linux 
or macOS as follow
 export MXNET_GPU_WORKER_NTHREADS=3
 ```
 
+Or in powershell:
+```
+$env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
+```
+
+## Variables controlling the execution environment
+
+* MXNET_LIBRARY_PATH
+Absolute path indicating where the mxnet dynamic library is to be located, 
this would be the absolute
+path to `libmxnet.so` or `libmxnet.dll` depending on the platform. The 
logic for loading the
+library is in `python/mxnet/libinfo.py`
+
 ## Set the Number of Threads
 
 * MXNET_GPU_WORKER_NTHREADS


 




[GitHub] aalexandrov opened a new pull request #12130: Do not show "needs to register block" warning for registered blocks.

2018-08-10 Thread GitBox
aalexandrov opened a new pull request #12130: Do not show "needs to register 
block" warning for registered blocks.
URL: https://github.com/apache/incubator-mxnet/pull/12130
 
 
   ## Description ##
   
   This slightly modifies the semantics of the `_check_container_with_block` 
function in `block.py` in order to suppress the `"{name}" is an unregistered 
container with Blocks.` warning if all blocks nested inside containers are 
already registered.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   




[GitHub] vandanavk commented on issue #12099: Fix precision issue of test case test_rnnrelu_bidirectional

2018-08-10 Thread GitBox
vandanavk commented on issue #12099: Fix precision issue of test case 
test_rnnrelu_bidirectional
URL: https://github.com/apache/incubator-mxnet/pull/12099#issuecomment-412204952
 
 
   @haojin2 




[GitHub] xcgoner commented on issue #12062: DMLC_PS_ROOT_URI using hostname failed in distributed training

2018-08-10 Thread GitBox
xcgoner commented on issue #12062: DMLC_PS_ROOT_URI using hostname failed in 
distributed training
URL: 
https://github.com/apache/incubator-mxnet/issues/12062#issuecomment-412204886
 
 
   I think this is actually an issue of ZeroMQ, which is used in ps-lite. ZMQ can only take 
an IP address to find the network interface; using a hostname will fail. 
   Please take a look at the discussion here:
   
https://stackoverflow.com/questions/6024003/why-doesnt-zeromq-work-on-localhost
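
   A hedged workaround sketch of the point above (the helper name and port are illustrative, not part of ps-lite): resolve the scheduler hostname to an IPv4 address before exporting DMLC_PS_ROOT_URI, so ZeroMQ receives an IP it can bind to:

```python
import os
import socket

def set_root_uri(hostname, port):
    """Hypothetical helper: export DMLC_PS_ROOT_URI as an IP, not a hostname."""
    ip = socket.gethostbyname(hostname)  # e.g. 'localhost' -> '127.0.0.1'
    os.environ['DMLC_PS_ROOT_URI'] = ip
    os.environ['DMLC_PS_ROOT_PORT'] = str(port)
    return ip

# A launcher would then spawn scheduler/server/worker processes with this env.
print(set_root_uri('localhost', 9091))
```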




[GitHub] eric-haibin-lin closed pull request #12098: [MXNET-794] Remove Wrong InferType for AdaptiveAvgPool and BilinearReisze2D

2018-08-10 Thread GitBox
eric-haibin-lin closed pull request #12098: [MXNET-794] Remove Wrong InferType 
for AdaptiveAvgPool and BilinearReisze2D
URL: https://github.com/apache/incubator-mxnet/pull/12098
 
 
   


diff --git a/src/operator/contrib/adaptive_avg_pooling-inl.h 
b/src/operator/contrib/adaptive_avg_pooling-inl.h
index 7331c7bd47a..12284d9d85d 100644
--- a/src/operator/contrib/adaptive_avg_pooling-inl.h
+++ b/src/operator/contrib/adaptive_avg_pooling-inl.h
@@ -144,41 +144,6 @@ static bool AdaptiveAvgPoolOpInferShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-static bool AdaptiveAvgPoolOpInferType(const nnvm::NodeAttrs& attrs,
-                                       std::vector<int> *in_type,
-                                       std::vector<int> *out_type) {
-  using namespace mshadow;
-  CHECK_EQ(in_type->size(), 1U);
-  int dtype = (*in_type)[0];
-  CHECK_NE(dtype, -1) << "First input must have specified type";
-  // For float16 input type beta, gamma, mean, and average are stored in float32.
-  // For other input types, these parameters have the same type as input
-  // NOTE: This requirement is from cuDNN (v. 4 and 5)
-  int dtype_param = 0;
-  MSHADOW_REAL_TYPE_SWITCH_EX(dtype, DTypeX, AccRealX, {
-      dtype_param = mshadow::DataType<AccRealX>::kFlag; });
-  out_type->clear();
-  out_type->push_back(dtype_param);
-  return true;
-}
-
-static inline bool AdaptiveAvgPoolOpStorageType(const nnvm::NodeAttrs &attrs,
-                                                const int dev_mask,
-                                                DispatchMode *dispatch_mode,
-                                                std::vector<int> *in_attrs,
-                                                std::vector<int> *out_attrs) {
-  CHECK_EQ(in_attrs->size(), 1);
-  CHECK_EQ(out_attrs->size(), 1);
-  *dispatch_mode = DispatchMode::kFCompute;
-  for (int& v : *in_attrs) {
-    if (v == -1) v = kDefaultStorage;
-  }
-  for (size_t i = 0; i < out_attrs->size(); i++) {
-    (*out_attrs)[i] = kDefaultStorage;
-  }
-  return true;
-}
-
 using namespace mshadow;
 template<typename xpu, int Dim, typename DType>
 MSHADOW_XINLINE int get_stride(Tensor<xpu, Dim, DType> tensor, int idx) {
diff --git a/src/operator/contrib/adaptive_avg_pooling.cc 
b/src/operator/contrib/adaptive_avg_pooling.cc
index 079571177cb..00ab36605bf 100644
--- a/src/operator/contrib/adaptive_avg_pooling.cc
+++ b/src/operator/contrib/adaptive_avg_pooling.cc
@@ -216,8 +216,6 @@ The pooling kernel and stride sizes are automatically 
chosen for desired output
 .set_num_inputs(1)
 .set_num_outputs(1)
 .set_attr<nnvm::FInferShape>("FInferShape", AdaptiveAvgPoolOpInferShape)
-.set_attr<nnvm::FInferType>("FInferType", AdaptiveAvgPoolOpInferType)
-.set_attr<FInferStorageType>("FInferStorageType", AdaptiveAvgPoolOpStorageType)
 .set_attr<FCompute>("FCompute<cpu>", AdaptiveAvgPoolOpForward)
 .set_attr<nnvm::FGradient>("FGradient",
   ElemwiseGradUseNone{"_backward_contrib_AdaptiveAvgPooling2D"})
@@ -229,7 +227,6 @@ NNVM_REGISTER_OP(_backward_contrib_AdaptiveAvgPooling2D)
 .set_num_inputs(1)
 .set_num_outputs(1)
 .set_attr<nnvm::TIsBackward>("TIsBackward", true)
-.set_attr<FInferStorageType>("FInferStorageType", AdaptiveAvgPoolOpStorageType)
 .set_attr<FCompute>("FCompute<cpu>", AdaptiveAvgPoolOpBackward);
 
 
diff --git a/src/operator/contrib/bilinear_resize-inl.h 
b/src/operator/contrib/bilinear_resize-inl.h
index c096f014975..ff3f794d167 100644
--- a/src/operator/contrib/bilinear_resize-inl.h
+++ b/src/operator/contrib/bilinear_resize-inl.h
@@ -136,42 +136,6 @@ static bool BilinearSampleOpInferShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-static bool BilinearSampleOpInferType(const nnvm::NodeAttrs& attrs,
-                                      std::vector<int> *in_type,
-                                      std::vector<int> *out_type) {
-  using namespace mshadow;
-  CHECK_EQ(in_type->size(), 1U);
-  int dtype = (*in_type)[0];
-  CHECK_NE(dtype, -1) << "First input must have specified type";
-  // For float16 input type beta, gamma, mean, and average are stored in float32.
-  // For other input types, these parameters have the same type as input
-  // NOTE: This requirement is from cuDNN (v. 4 and 5)
-  int dtype_param = 0;
-  MSHADOW_REAL_TYPE_SWITCH_EX(dtype, DTypeX, AccRealX, {
-      dtype_param = mshadow::DataType<AccRealX>::kFlag; });
-  out_type->clear();
-  out_type->push_back(dtype_param);
-  return true;
-}
-
-static inline bool BilinearSampleOpStorageType(const nnvm::NodeAttrs &attrs,
-                                               const int dev_mask,
-                                               DispatchMode *dispatch_mode,
-                                               std::vector<int> *in_attrs,
-                                               std::vector<int> *out_attrs) {
-  CHECK_EQ(in_attrs->size(), 1);
-  CHECK_EQ(out_attrs->size(), 1);
-  *dispatch_mode = DispatchMode::kFCompute;
-  for (int& v : *in_attrs) {
-    if (v == -1) v = 

[incubator-mxnet] branch master updated: rm wrong infertype for AdaptiveAvgPool and BilinearReisze2D (#12098)

2018-08-10 Thread haibin

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new f7211b2  rm wrong infertype for AdaptiveAvgPool and BilinearReisze2D 
(#12098)
f7211b2 is described below

commit f7211b227c912abed58435bcd288ced7d28d9ef0
Author: Hang Zhang <8041160+zhanghang1...@users.noreply.github.com>
AuthorDate: Fri Aug 10 13:56:29 2018 -0700

rm wrong infertype for AdaptiveAvgPool and BilinearReisze2D (#12098)
---
 src/operator/contrib/adaptive_avg_pooling-inl.h | 35 -----------------
 src/operator/contrib/adaptive_avg_pooling.cc    |  3 ---
 src/operator/contrib/bilinear_resize-inl.h      | 36 ------------------
 src/operator/contrib/bilinear_resize.cc         |  3 ---
 4 files changed, 77 deletions(-)

diff --git a/src/operator/contrib/adaptive_avg_pooling-inl.h 
b/src/operator/contrib/adaptive_avg_pooling-inl.h
index 7331c7b..12284d9 100644
--- a/src/operator/contrib/adaptive_avg_pooling-inl.h
+++ b/src/operator/contrib/adaptive_avg_pooling-inl.h
@@ -144,41 +144,6 @@ static bool AdaptiveAvgPoolOpInferShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-static bool AdaptiveAvgPoolOpInferType(const nnvm::NodeAttrs& attrs,
-                                       std::vector<int> *in_type,
-                                       std::vector<int> *out_type) {
-  using namespace mshadow;
-  CHECK_EQ(in_type->size(), 1U);
-  int dtype = (*in_type)[0];
-  CHECK_NE(dtype, -1) << "First input must have specified type";
-  // For float16 input type beta, gamma, mean, and average are stored in float32.
-  // For other input types, these parameters have the same type as input
-  // NOTE: This requirement is from cuDNN (v. 4 and 5)
-  int dtype_param = 0;
-  MSHADOW_REAL_TYPE_SWITCH_EX(dtype, DTypeX, AccRealX, {
-      dtype_param = mshadow::DataType<AccRealX>::kFlag; });
-  out_type->clear();
-  out_type->push_back(dtype_param);
-  return true;
-}
-
-static inline bool AdaptiveAvgPoolOpStorageType(const nnvm::NodeAttrs &attrs,
-                                                const int dev_mask,
-                                                DispatchMode *dispatch_mode,
-                                                std::vector<int> *in_attrs,
-                                                std::vector<int> *out_attrs) {
-  CHECK_EQ(in_attrs->size(), 1);
-  CHECK_EQ(out_attrs->size(), 1);
-  *dispatch_mode = DispatchMode::kFCompute;
-  for (int& v : *in_attrs) {
-    if (v == -1) v = kDefaultStorage;
-  }
-  for (size_t i = 0; i < out_attrs->size(); i++) {
-    (*out_attrs)[i] = kDefaultStorage;
-  }
-  return true;
-}
-
 using namespace mshadow;
 template<typename xpu, int Dim, typename DType>
 MSHADOW_XINLINE int get_stride(Tensor<xpu, Dim, DType> tensor, int idx) {
diff --git a/src/operator/contrib/adaptive_avg_pooling.cc 
b/src/operator/contrib/adaptive_avg_pooling.cc
index 0795711..00ab366 100644
--- a/src/operator/contrib/adaptive_avg_pooling.cc
+++ b/src/operator/contrib/adaptive_avg_pooling.cc
@@ -216,8 +216,6 @@ The pooling kernel and stride sizes are automatically 
chosen for desired output
 .set_num_inputs(1)
 .set_num_outputs(1)
 .set_attr<nnvm::FInferShape>("FInferShape", AdaptiveAvgPoolOpInferShape)
-.set_attr<nnvm::FInferType>("FInferType", AdaptiveAvgPoolOpInferType)
-.set_attr<FInferStorageType>("FInferStorageType", AdaptiveAvgPoolOpStorageType)
 .set_attr<FCompute>("FCompute<cpu>", AdaptiveAvgPoolOpForward)
 .set_attr<nnvm::FGradient>("FGradient",
   ElemwiseGradUseNone{"_backward_contrib_AdaptiveAvgPooling2D"})
@@ -229,7 +227,6 @@ NNVM_REGISTER_OP(_backward_contrib_AdaptiveAvgPooling2D)
 .set_num_inputs(1)
 .set_num_outputs(1)
 .set_attr<nnvm::TIsBackward>("TIsBackward", true)
-.set_attr<FInferStorageType>("FInferStorageType", AdaptiveAvgPoolOpStorageType)
 .set_attr<FCompute>("FCompute<cpu>", AdaptiveAvgPoolOpBackward);
 
 
diff --git a/src/operator/contrib/bilinear_resize-inl.h 
b/src/operator/contrib/bilinear_resize-inl.h
index c096f01..ff3f794 100644
--- a/src/operator/contrib/bilinear_resize-inl.h
+++ b/src/operator/contrib/bilinear_resize-inl.h
@@ -136,42 +136,6 @@ static bool BilinearSampleOpInferShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-static bool BilinearSampleOpInferType(const nnvm::NodeAttrs& attrs,
-                                      std::vector<int> *in_type,
-                                      std::vector<int> *out_type) {
-  using namespace mshadow;
-  CHECK_EQ(in_type->size(), 1U);
-  int dtype = (*in_type)[0];
-  CHECK_NE(dtype, -1) << "First input must have specified type";
-  // For float16 input type beta, gamma, mean, and average are stored in float32.
-  // For other input types, these parameters have the same type as input
-  // NOTE: This requirement is from cuDNN (v. 4 and 5)
-  int dtype_param = 0;
-  MSHADOW_REAL_TYPE_SWITCH_EX(dtype, DTypeX, AccRealX, {
-      dtype_param = mshadow::DataType<AccRealX>::kFlag; });
-  out_type->clear();
-  out_type->push_back(dtype_param);
-  return true;
-}
-
-static inline bool BilinearSampleOpStorageType(const nnvm::NodeAttrs &attrs,
-  

[GitHub] haojin2 commented on issue #12090: [MXNET-791][WIP] Pick with negative indices

2018-08-10 Thread GitBox
haojin2 commented on issue #12090: [MXNET-791][WIP] Pick with negative indices
URL: https://github.com/apache/incubator-mxnet/pull/12090#issuecomment-412196959
 
 
   @perdasilva So if there's no direct equivalence of this one in numpy then we 
may not need to match the behavior with numpy, please simply ensure we have 
enough documentation to document the behavior.




[incubator-mxnet] branch master updated: [MXAPPS-805] Notebook execution failures in CI. (#12068)

2018-08-10 Thread marcoabreu

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 89717d4  [MXAPPS-805] Notebook execution failures in CI. (#12068)
89717d4 is described below

commit 89717d4fff3bca37796f54e4c7d324cee90c60fb
Author: Vishaal Kapoor <40836875+vishaalkap...@users.noreply.github.com>
AuthorDate: Fri Aug 10 13:26:17 2018 -0700

[MXAPPS-805] Notebook execution failures in CI. (#12068)

* [MXAPPS-805] Notebook execution failures in CI.

* Add a retry policy when starting a notebook executor to handle the 
failure to
 start a notebook executor (due to a port collision, kernel taking too
 long to start, etc.).

* Change logging level for tests to INFO so that we have more
 informative test output.

* Make retry logic for Jupyter notebook execution specific to the error
message we are looking for to prevent false positives in the retry logic.
---
 .../straight_dope/test_notebooks_multi_gpu.py  |  2 ++
 .../straight_dope/test_notebooks_single_gpu.py |  3 ++-
 tests/utils/notebook_test/__init__.py          | 26 +++++++++++++++++++-----
 3 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/tests/nightly/straight_dope/test_notebooks_multi_gpu.py 
b/tests/nightly/straight_dope/test_notebooks_multi_gpu.py
index 2038ada..ef07550 100644
--- a/tests/nightly/straight_dope/test_notebooks_multi_gpu.py
+++ b/tests/nightly/straight_dope/test_notebooks_multi_gpu.py
@@ -20,6 +20,7 @@
 This file tests that the notebooks requiring multi GPUs run without
 warning or exception.
 """
+import logging
 import unittest
 from straight_dope_test_utils import _test_notebook
 from straight_dope_test_utils import _download_straight_dope_notebooks
@@ -27,6 +28,7 @@ from straight_dope_test_utils import _download_straight_dope_notebooks
 class StraightDopeMultiGpuTests(unittest.TestCase):
     @classmethod
     def setUpClass(self):
+        logging.basicConfig(level=logging.INFO)
         assert _download_straight_dope_notebooks()
 
 # Chapter 7
diff --git a/tests/nightly/straight_dope/test_notebooks_single_gpu.py 
b/tests/nightly/straight_dope/test_notebooks_single_gpu.py
index 06ced96..fca49f4 100644
--- a/tests/nightly/straight_dope/test_notebooks_single_gpu.py
+++ b/tests/nightly/straight_dope/test_notebooks_single_gpu.py
@@ -21,6 +21,7 @@
 warning or exception.
 """
 import glob
+import logging
 import re
 import os
 import unittest
@@ -51,9 +52,9 @@ NOTEBOOKS_WHITELIST = [
 class StraightDopeSingleGpuTests(unittest.TestCase):
     @classmethod
     def setUpClass(self):
+        logging.basicConfig(level=logging.INFO)
         assert _download_straight_dope_notebooks()
 
-
     def test_completeness(self):
         """
         Make sure that every tutorial that isn't in the whitelist is considered for testing by this
diff --git a/tests/utils/notebook_test/__init__.py 
b/tests/utils/notebook_test/__init__.py
index 2cdb613..25e96ab 100644
--- a/tests/utils/notebook_test/__init__.py
+++ b/tests/utils/notebook_test/__init__.py
@@ -32,6 +32,9 @@ import nbformat
 
 IPYTHON_VERSION = 4  # Pin to ipython version 4.
 TIME_OUT = 10*60  # Maximum 10 mins/test. Reaching timeout causes test failure.
+RETRIES = 8
+KERNEL_ERROR_MSG = 'Kernel died before replying to kernel_info'
+
 
 def run_notebook(notebook, notebook_dir, kernel=None, no_cache=False, 
temp_dir='tmp_notebook'):
 """Run tutorial Jupyter notebook to catch any execution error.
@@ -72,15 +75,28 @@ def run_notebook(notebook, notebook_dir, kernel=None, no_cache=False, temp_dir='tmp_notebook'):
     os.makedirs(working_dir)
     try:
         notebook = nbformat.read(notebook_path + '.ipynb', as_version=IPYTHON_VERSION)
-        # Adding a small delay to allow time for sockets to be freed
-        # stop-gap measure to battle the 1000ms linger of socket hard coded
-        # in the kernel API code
-        time.sleep(1.1)
         if kernel is not None:
             eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
         else:
             eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
-        nb, _ = eprocessor.preprocess(notebook, {'metadata': {'path': working_dir}})
+
+        # There is a low (< 1%) chance that starting a notebook executor will fail due to the kernel
+        # taking to long to start, or a port collision, etc.
+        for i in range(RETRIES):
+            try:
+                nb, _ = eprocessor.preprocess(notebook, {'metadata': {'path': working_dir}})
+            except RuntimeError as rte:
+                # We check if the exception has to do with the Jupyter kernel failing to start. If
+                # not, we rethrow to prevent the notebook from erring RETRIES times. It is not ideal
+                # to inspect the exception message, but necessary 

[GitHub] marcoabreu closed pull request #12068: [MXAPPS-805] Notebook execution failures in CI.

2018-08-10 Thread GitBox
marcoabreu closed pull request #12068: [MXAPPS-805] Notebook execution failures 
in CI.
URL: https://github.com/apache/incubator-mxnet/pull/12068
 
 
   


diff --git a/tests/nightly/straight_dope/test_notebooks_multi_gpu.py 
b/tests/nightly/straight_dope/test_notebooks_multi_gpu.py
index 2038ada3a8b..ef07550bdf7 100644
--- a/tests/nightly/straight_dope/test_notebooks_multi_gpu.py
+++ b/tests/nightly/straight_dope/test_notebooks_multi_gpu.py
@@ -20,6 +20,7 @@
 This file tests that the notebooks requiring multi GPUs run without
 warning or exception.
 """
+import logging
 import unittest
 from straight_dope_test_utils import _test_notebook
 from straight_dope_test_utils import _download_straight_dope_notebooks
@@ -27,6 +28,7 @@
 class StraightDopeMultiGpuTests(unittest.TestCase):
     @classmethod
     def setUpClass(self):
+        logging.basicConfig(level=logging.INFO)
         assert _download_straight_dope_notebooks()
 
 # Chapter 7
diff --git a/tests/nightly/straight_dope/test_notebooks_single_gpu.py 
b/tests/nightly/straight_dope/test_notebooks_single_gpu.py
index ee7c94c80af..a9db85398bf 100644
--- a/tests/nightly/straight_dope/test_notebooks_single_gpu.py
+++ b/tests/nightly/straight_dope/test_notebooks_single_gpu.py
@@ -21,6 +21,7 @@
 warning or exception.
 """
 import glob
+import logging
 import re
 import os
 import unittest
@@ -51,9 +52,9 @@
 class StraightDopeSingleGpuTests(unittest.TestCase):
     @classmethod
     def setUpClass(self):
+        logging.basicConfig(level=logging.INFO)
         assert _download_straight_dope_notebooks()
 
-
     def test_completeness(self):
         """
         Make sure that every tutorial that isn't in the whitelist is considered for testing by this
diff --git a/tests/utils/notebook_test/__init__.py 
b/tests/utils/notebook_test/__init__.py
index 2cdb6134a60..25e96ab0fc5 100644
--- a/tests/utils/notebook_test/__init__.py
+++ b/tests/utils/notebook_test/__init__.py
@@ -32,6 +32,9 @@
 
 IPYTHON_VERSION = 4  # Pin to ipython version 4.
 TIME_OUT = 10*60  # Maximum 10 mins/test. Reaching timeout causes test failure.
+RETRIES = 8
+KERNEL_ERROR_MSG = 'Kernel died before replying to kernel_info'
+
 
 def run_notebook(notebook, notebook_dir, kernel=None, no_cache=False, 
temp_dir='tmp_notebook'):
 """Run tutorial Jupyter notebook to catch any execution error.
@@ -72,15 +75,28 @@ def run_notebook(notebook, notebook_dir, kernel=None, no_cache=False, temp_dir='tmp_notebook'):
     os.makedirs(working_dir)
     try:
         notebook = nbformat.read(notebook_path + '.ipynb', as_version=IPYTHON_VERSION)
-        # Adding a small delay to allow time for sockets to be freed
-        # stop-gap measure to battle the 1000ms linger of socket hard coded
-        # in the kernel API code
-        time.sleep(1.1)
         if kernel is not None:
             eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
         else:
             eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
-        nb, _ = eprocessor.preprocess(notebook, {'metadata': {'path': working_dir}})
+
+        # There is a low (< 1%) chance that starting a notebook executor will fail due to the kernel
+        # taking to long to start, or a port collision, etc.
+        for i in range(RETRIES):
+            try:
+                nb, _ = eprocessor.preprocess(notebook, {'metadata': {'path': working_dir}})
+            except RuntimeError as rte:
+                # We check if the exception has to do with the Jupyter kernel failing to start. If
+                # not, we rethrow to prevent the notebook from erring RETRIES times. It is not ideal
+                # to inspect the exception message, but necessary for retry logic, as Jupyter client
+                # throws the generic RuntimeError that can be confused with other Runtime errors.
+                if str(rte) != KERNEL_ERROR_MSG:
+                    raise rte
+
+                logging.info("Error starting preprocessor: {}. Attempt {}/{}".format(str(rte), i+1, RETRIES))
+                time.sleep(1)
+                continue
+            break
 except Exception as err:
 err_msg = str(err)
 errors.append(err_msg)


 




[GitHub] haojin2 commented on issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-10 Thread GitBox
haojin2 commented on issue #12121: Broken link in 
test_gluon_model_zoo.test_models
URL: 
https://github.com/apache/incubator-mxnet/issues/12121#issuecomment-412194561
 
 
   @hetong007 maybe we can add retry logic in this test?
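
   A minimal sketch of such retry logic (the function names and retry parameters are illustrative, not the actual test code):

```python
import time

def with_retries(fn, retries=3, delay=0.1, exceptions=(OSError,)):
    """Call fn(), retrying transient failures such as a broken download link."""
    for attempt in range(retries):
        try:
            return fn()
        except exceptions:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(delay)

# Demo: a flaky callable that fails twice before succeeding.
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise OSError('transient download failure')
    return 'ok'

print(with_retries(flaky))  # prints 'ok' after two retries
```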




[GitHub] sandeep-krishnamurthy commented on issue #12117: [MXNET-782] Fix Custom Metric Creation in R tutorial

2018-08-10 Thread GitBox
sandeep-krishnamurthy commented on issue #12117: [MXNET-782] Fix Custom Metric 
Creation in R tutorial
URL: https://github.com/apache/incubator-mxnet/pull/12117#issuecomment-412194408
 
 
   @hetong007 - Thanks for your review. I see your comments are addressed. 
Merging.




[incubator-mxnet] branch master updated: [MXNET-782] Fix Custom Metric Creation in R tutorial (#12117)

2018-08-10 Thread skm

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new f499dc4  [MXNET-782] Fix Custom Metric Creation in R tutorial (#12117)
f499dc4 is described below

commit f499dc4e57e64e221a49e27abeeca124051d2c58
Author: Anirudh 
AuthorDate: Fri Aug 10 13:17:39 2018 -0700

[MXNET-782] Fix Custom Metric Creation in R tutorial (#12117)

* fix tutorial

* install instructions

* fix typo
---
 docs/tutorials/r/fiveMinutesNeuralNetwork.md | 119 +++++++++++++++++----------------
 1 file changed, 61 insertions(+), 58 deletions(-)

diff --git a/docs/tutorials/r/fiveMinutesNeuralNetwork.md 
b/docs/tutorials/r/fiveMinutesNeuralNetwork.md
index 9104e8f..a2ce5ec 100644
--- a/docs/tutorials/r/fiveMinutesNeuralNetwork.md
+++ b/docs/tutorials/r/fiveMinutesNeuralNetwork.md
@@ -1,18 +1,21 @@
 Develop a Neural Network with MXNet in Five Minutes
 =
 
-This tutorial is designed for new users of the `mxnet` package for R. It shows 
how to construct a neural network to do regression in 5 minutes. It shows how 
to perform classification and regression tasks, respectively. The data we use 
is in the `mlbench` package.
+This tutorial is designed for new users of the `mxnet` package for R. It shows 
how to construct a neural network to do regression in 5 minutes. It shows how 
to perform classification and regression tasks, respectively. The data we use 
is in the `mlbench` package. Instructions to install R and MXNet's R package in 
different environments can be found 
[here](http://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=R&processor=CPU).
 
 
 ## Classification
 
-
-
+ ```
+## Loading required package: mlbench
+ ```
  ```r
-require(mlbench)
+if (!require(mlbench)) {
+  install.packages('mlbench')
+}
  ```
 
  ```
-## Loading required package: mlbench
+## Loading required package: mxnet
  ```
 
  ```r
@@ -20,8 +23,7 @@ This tutorial is designed for new users of the `mxnet` 
package for R. It shows h
  ```
 
  ```
-## Loading required package: mxnet
-## Loading required package: methods
+## Loading required datasets
  ```
 
  ```r
@@ -235,7 +237,8 @@ Currently, we have four predefined metrics: "accuracy", 
"rmse", "mae", and "rmsl
 
  ```r
 demo.metric.mae <- mx.metric.custom("mae", function(label, pred) {
-  res <- mean(abs(label-pred))
+  pred <- mx.nd.reshape(pred, shape = 0)
+  res <- mx.nd.mean(mx.nd.abs(label-pred))
   return(res)
 })
  ```
@@ -253,56 +256,56 @@ This is an example of the mean absolute error metric. 
Simply plug it into the tr
  ```
 ## Auto detect layout of input matrix, use rowmajor.
 ## Start training with 1 devices
-## [1] Train-mae=13.1889538083225
-## [2] Train-mae=9.81431959337658
-## [3] Train-mae=9.21576419870059
-## [4] Train-mae=8.38071537613869
-## [5] Train-mae=7.45462437611487
-## [6] Train-mae=6.93423301743136
-## [7] Train-mae=6.91432357016537
-## [8] Train-mae=7.02742733055105
-## [9] Train-mae=7.00618194618469
-## [10] Train-mae=6.92541576984028
-## [11] Train-mae=6.87530243690643
-## [12] Train-mae=6.84757369098564
-## [13] Train-mae=6.82966501611388
-## [14] Train-mae=6.81151759574811
-## [15] Train-mae=6.78394182841811
-## [16] Train-mae=6.75914719419347
-## [17] Train-mae=6.74180388773481
-## [18] Train-mae=6.725853071279
-## [19] Train-mae=6.70932178215848
-## [20] Train-mae=6.6928868798746
-## [21] Train-mae=6.6769521329138
-## [22] Train-mae=6.66184809505939
-## [23] Train-mae=6.64754504809777
-## [24] Train-mae=6.63358514060577
-## [25] Train-mae=6.62027640889088
-## [26] Train-mae=6.60738245232238
-## [27] Train-mae=6.59505546771818
-## [28] Train-mae=6.58346195800437
-## [29] Train-mae=6.57285477783945
-## [30] Train-mae=6.56259003960424
-## [31] Train-mae=6.5527790788975
-## [32] Train-mae=6.54353428422991
-## [33] Train-mae=6.5344172368447
-## [34] Train-mae=6.52557652526432
-## [35] Train-mae=6.51697905850079
-## [36] Train-mae=6.50847898812758
-## [37] Train-mae=6.50014844106303
-## [38] Train-mae=6.49207674844397
-## [39] Train-mae=6.48412070125341
-## [40] Train-mae=6.47650500999557
-## [41] Train-mae=6.46893867486053
-## [42] Train-mae=6.46142131653097
-## [43] Train-mae=6.45395035048326
-## [44] Train-mae=6.44652914123403
-## [45] Train-mae=6.43916216409869
-## [46] Train-mae=6.43183777381976
-## [47] Train-mae=6.42455544223388
-## [48] Train-mae=6.41731406417158
-## [49] Train-mae=6.41011292926139
-## [50] Train-mae=6.40312503493494
+## [1] Train-mae=14.953625731998
+## [2] Train-mae=11.4802955521478
+## [3] 

[GitHub] sandeep-krishnamurthy closed pull request #12117: [MXNET-782] Fix Custom Metric Creation in R tutorial

2018-08-10 Thread GitBox
sandeep-krishnamurthy closed pull request #12117: [MXNET-782] Fix Custom Metric 
Creation in R tutorial
URL: https://github.com/apache/incubator-mxnet/pull/12117
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/tutorials/r/fiveMinutesNeuralNetwork.md 
b/docs/tutorials/r/fiveMinutesNeuralNetwork.md
index 9104e8f05c2..a2ce5ecd376 100644
--- a/docs/tutorials/r/fiveMinutesNeuralNetwork.md
+++ b/docs/tutorials/r/fiveMinutesNeuralNetwork.md
@@ -1,18 +1,21 @@
 Develop a Neural Network with MXNet in Five Minutes
 ===================================================
 
-This tutorial is designed for new users of the `mxnet` package for R. It shows 
how to construct a neural network to do regression in 5 minutes. It shows how 
to perform classification and regression tasks, respectively. The data we use 
is in the `mlbench` package.
+This tutorial is designed for new users of the `mxnet` package for R. It shows 
how to construct a neural network to do regression in 5 minutes. It shows how 
to perform classification and regression tasks, respectively. The data we use 
is in the `mlbench` package. Instructions to install R and MXNet's R package in 
different environments can be found 
[here](http://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=R&processor=CPU).
 
 
 ## Classification
 
-
-
+ ```
+## Loading required package: mlbench
+ ```
  ```r
-require(mlbench)
+if (!require(mlbench)) {
+  install.packages('mlbench')
+}
  ```
 
  ```
-## Loading required package: mlbench
+## Loading required package: mxnet
  ```
 
  ```r
@@ -20,8 +23,7 @@ This tutorial is designed for new users of the `mxnet` 
package for R. It shows h
  ```
 
  ```
-## Loading required package: mxnet
-## Loading required package: methods
+## Loading required datasets
  ```
 
  ```r
@@ -235,7 +237,8 @@ Currently, we have four predefined metrics: "accuracy", 
"rmse", "mae", and "rmsl
 
  ```r
 demo.metric.mae <- mx.metric.custom("mae", function(label, pred) {
-  res <- mean(abs(label-pred))
+  pred <- mx.nd.reshape(pred, shape = 0)
+  res <- mx.nd.mean(mx.nd.abs(label-pred))
   return(res)
 })
  ```
@@ -253,56 +256,56 @@ This is an example of the mean absolute error metric. 
Simply plug it into the tr
  ```
 ## Auto detect layout of input matrix, use rowmajor.
 ## Start training with 1 devices
-## [1] Train-mae=13.1889538083225
-## [2] Train-mae=9.81431959337658
-## [3] Train-mae=9.21576419870059
-## [4] Train-mae=8.38071537613869
-## [5] Train-mae=7.45462437611487
-## [6] Train-mae=6.93423301743136
-## [7] Train-mae=6.91432357016537
-## [8] Train-mae=7.02742733055105
-## [9] Train-mae=7.00618194618469
-## [10] Train-mae=6.92541576984028
-## [11] Train-mae=6.87530243690643
-## [12] Train-mae=6.84757369098564
-## [13] Train-mae=6.82966501611388
-## [14] Train-mae=6.81151759574811
-## [15] Train-mae=6.78394182841811
-## [16] Train-mae=6.75914719419347
-## [17] Train-mae=6.74180388773481
-## [18] Train-mae=6.725853071279
-## [19] Train-mae=6.70932178215848
-## [20] Train-mae=6.6928868798746
-## [21] Train-mae=6.6769521329138
-## [22] Train-mae=6.66184809505939
-## [23] Train-mae=6.64754504809777
-## [24] Train-mae=6.63358514060577
-## [25] Train-mae=6.62027640889088
-## [26] Train-mae=6.60738245232238
-## [27] Train-mae=6.59505546771818
-## [28] Train-mae=6.58346195800437
-## [29] Train-mae=6.57285477783945
-## [30] Train-mae=6.56259003960424
-## [31] Train-mae=6.5527790788975
-## [32] Train-mae=6.54353428422991
-## [33] Train-mae=6.5344172368447
-## [34] Train-mae=6.52557652526432
-## [35] Train-mae=6.51697905850079
-## [36] Train-mae=6.50847898812758
-## [37] Train-mae=6.50014844106303
-## [38] Train-mae=6.49207674844397
-## [39] Train-mae=6.48412070125341
-## [40] Train-mae=6.47650500999557
-## [41] Train-mae=6.46893867486053
-## [42] Train-mae=6.46142131653097
-## [43] Train-mae=6.45395035048326
-## [44] Train-mae=6.44652914123403
-## [45] Train-mae=6.43916216409869
-## [46] Train-mae=6.43183777381976
-## [47] Train-mae=6.42455544223388
-## [48] Train-mae=6.41731406417158
-## [49] Train-mae=6.41011292926139
-## [50] Train-mae=6.40312503493494
+## [1] Train-mae=14.953625731998
+## [2] Train-mae=11.4802955521478
+## [3] Train-mae=8.50700579749213
+## [4] Train-mae=7.30591265360514
+## [5] Train-mae=7.38049803839789
+## [6] Train-mae=7.36036252975464
+## [7] Train-mae=7.0651959521
+## [8] Train-mae=6.9962231847975
+## [9] Train-mae=6.96296903822157
+## [10] Train-mae=6.9046172036065
+## [11] 
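The corrected metric in the diff above reshapes `pred` with NDArray ops before taking the absolute error, so an (N, 1) prediction matrix lines up with an (N,) label vector. The same shape fix can be sketched in plain NumPy (an illustration only, not MXNet code):

```python
import numpy as np

def mae(label, pred):
    # Flatten predictions first, analogous to mx.nd.reshape(pred, shape = 0):
    # an (N, 1) prediction matrix must be collapsed to (N,) before it can be
    # compared element-wise with an (N,) label vector.
    pred = np.asarray(pred).reshape(-1)
    label = np.asarray(label).reshape(-1)
    return float(np.mean(np.abs(label - pred)))

print(mae([1.0, 2.0], [[1.5], [2.5]]))  # 0.5
```

Without the reshape, broadcasting an (N, 1) array against an (N,) array silently produces an (N, N) error matrix, which is the bug the PR fixes.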

[GitHub] sandeep-krishnamurthy closed issue #11616: Flaky test test_gluon.test_export

2018-08-10 Thread GitBox
sandeep-krishnamurthy closed issue #11616: Flaky test test_gluon.test_export
URL: https://github.com/apache/incubator-mxnet/issues/11616
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha closed pull request #12093: take custom dataset into consideration for multi worker data loader

2018-08-10 Thread GitBox
szha closed pull request #12093: take custom dataset into consideration for 
multi worker data loader
URL: https://github.com/apache/incubator-mxnet/pull/12093
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/gluon/data/dataloader.py 
b/python/mxnet/gluon/data/dataloader.py
index 13ab544a03d..e0b6aec294a 100644
--- a/python/mxnet/gluon/data/dataloader.py
+++ b/python/mxnet/gluon/data/dataloader.py
@@ -160,7 +160,8 @@ def _as_in_context(data, ctx):
 
 def worker_loop(dataset, key_queue, data_queue, batchify_fn):
     """Worker loop for multiprocessing DataLoader."""
-    dataset._fork()
+    if hasattr(dataset, '_fork') and callable(dataset._fork):
+        dataset._fork()
     while True:
         idx, samples = key_queue.get()
         if idx is None:
diff --git a/tests/python/unittest/test_gluon_data.py 
b/tests/python/unittest/test_gluon_data.py
index 4dc4f3ac881..53ce600629c 100644
--- a/tests/python/unittest/test_gluon_data.py
+++ b/tests/python/unittest/test_gluon_data.py
@@ -116,6 +116,13 @@ def test_image_folder_dataset():
 assert dataset.synsets == ['test_images']
 assert len(dataset.items) == 16
 
+@with_seed()
+def test_list_dataset():
+    for num_worker in range(0, 3):
+        data = mx.gluon.data.DataLoader([([1,2], 0), ([3, 4], 1)],
+                                        batch_size=1, num_workers=num_worker)
+        for d, l in data:
+            pass
+
 
 class Dataset(gluon.data.Dataset):
     def __len__(self):
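The guard added to `worker_loop` is plain duck typing: call the optional `_fork()` hook only when the dataset actually defines it, so arbitrary sequence-like objects (such as the plain Python list in the new test) keep working. A minimal standalone sketch of that pattern (the class and function names here are illustrative, not MXNet's):

```python
def maybe_fork(dataset):
    # Mirror of the worker_loop guard: invoke _fork() only if the dataset
    # provides a callable attribute by that name; otherwise do nothing.
    if hasattr(dataset, '_fork') and callable(dataset._fork):
        dataset._fork()

class ForkAwareDataset:
    """Hypothetical dataset that must reset state in each worker process."""
    def __init__(self):
        self.forked = False
    def _fork(self):
        self.forked = True

ds = ForkAwareDataset()
maybe_fork(ds)          # hook exists, so it is called
maybe_fork([1, 2, 3])   # plain list: no _fork attribute, guard skips it
print(ds.forked)        # True
```

This keeps `_fork` an internal protocol of MXNet's own datasets without forcing every user-supplied dataset to implement it.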


 




[incubator-mxnet] branch master updated: take custom dataset into consideration (#12093)

2018-08-10 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 2fc4248  take custom dataset into consideration (#12093)
2fc4248 is described below

commit 2fc4248550c325b02a76f67b1cec32161a32dc4f
Author: Joshua Z. Zhang 
AuthorDate: Fri Aug 10 13:03:11 2018 -0700

take custom dataset into consideration (#12093)
---
 python/mxnet/gluon/data/dataloader.py| 3 ++-
 tests/python/unittest/test_gluon_data.py | 7 +++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/python/mxnet/gluon/data/dataloader.py 
b/python/mxnet/gluon/data/dataloader.py
index 13ab544..e0b6aec 100644
--- a/python/mxnet/gluon/data/dataloader.py
+++ b/python/mxnet/gluon/data/dataloader.py
@@ -160,7 +160,8 @@ def _as_in_context(data, ctx):
 
 def worker_loop(dataset, key_queue, data_queue, batchify_fn):
     """Worker loop for multiprocessing DataLoader."""
-    dataset._fork()
+    if hasattr(dataset, '_fork') and callable(dataset._fork):
+        dataset._fork()
     while True:
         idx, samples = key_queue.get()
         if idx is None:
diff --git a/tests/python/unittest/test_gluon_data.py 
b/tests/python/unittest/test_gluon_data.py
index 4dc4f3a..53ce600 100644
--- a/tests/python/unittest/test_gluon_data.py
+++ b/tests/python/unittest/test_gluon_data.py
@@ -116,6 +116,13 @@ def test_image_folder_dataset():
 assert dataset.synsets == ['test_images']
 assert len(dataset.items) == 16
 
+@with_seed()
+def test_list_dataset():
+    for num_worker in range(0, 3):
+        data = mx.gluon.data.DataLoader([([1,2], 0), ([3, 4], 1)],
+                                        batch_size=1, num_workers=num_worker)
+        for d, l in data:
+            pass
+
 
 class Dataset(gluon.data.Dataset):
     def __len__(self):



[GitHub] szha closed issue #12087: Python list as a gluon Dataset

2018-08-10 Thread GitBox
szha closed issue #12087: Python list as a gluon Dataset
URL: https://github.com/apache/incubator-mxnet/issues/12087
 
 
   




[GitHub] haojin2 opened a new pull request #12129: update dmlc-core for security reason

2018-08-10 Thread GitBox
haojin2 opened a new pull request #12129: update dmlc-core for security reason
URL: https://github.com/apache/incubator-mxnet/pull/12129
 
 
   ## Description ##
   Update dmlc-core due to security problem with yaml.load: 
https://github.com/dmlc/dmlc-core/pull/449
   
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Update dmlc-core
   
   ## Comments ##
   @szha 




[GitHub] ankkhedia commented on issue #12117: [MXNET-782] Fix Custom Metric Creation in R tutorial

2018-08-10 Thread GitBox
ankkhedia commented on issue #12117: [MXNET-782] Fix Custom Metric Creation in 
R tutorial
URL: https://github.com/apache/incubator-mxnet/pull/12117#issuecomment-412178081
 
 
   Looks good!




[GitHub] hetong007 commented on issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-10 Thread GitBox
hetong007 commented on issue #12121: Broken link in 
test_gluon_model_zoo.test_models
URL: 
https://github.com/apache/incubator-mxnet/issues/12121#issuecomment-412173388
 
 
   Cannot reproduce with the latest (1.3.0b20180810) pip package. It may be due 
to an unstable connection to the S3 bucket for pretrained models.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-08-10 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d0b6979  Bump the publish timestamp.
d0b6979 is described below

commit d0b69798e09c22d993501bc9608f80cdc9259c1a
Author: mxnet-ci 
AuthorDate: Fri Aug 10 18:48:20 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..d786364
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Aug 10 18:48:20 UTC 2018



[GitHub] sandeep-krishnamurthy commented on issue #11773: Update PyPI version number

2018-08-10 Thread GitBox
sandeep-krishnamurthy commented on issue #11773: Update PyPI version number
URL: https://github.com/apache/incubator-mxnet/pull/11773#issuecomment-412171546
 
 
   @apeforest - ping.




[GitHub] sandeep-krishnamurthy commented on issue #11858: Update contribute.md (Fix links to subscribe for users and contributors)

2018-08-10 Thread GitBox
sandeep-krishnamurthy commented on issue #11858: Update contribute.md (Fix 
links to subscribe for users and contributors)
URL: https://github.com/apache/incubator-mxnet/pull/11858#issuecomment-412171377
 
 
   @sad- Can you please retrigger CI? (with empty commit)
   Changes look good, thanks for your contributions.




[GitHub] sandeep-krishnamurthy commented on issue #12070: Fix flaky test - test_deformable_convolution and psroipooling with_type

2018-08-10 Thread GitBox
sandeep-krishnamurthy commented on issue #12070: Fix flaky test - 
test_deformable_convolution and psroipooling with_type
URL: https://github.com/apache/incubator-mxnet/pull/12070#issuecomment-412170621
 
 
   @haojin2 - Thank you for reviewing. Updated the description.
   @eric-haibin-lin - Can you please take a look and merge if things look good?




[GitHub] piiswrong opened a new issue #12128: NaiveEngine is not threadsafe and crashes when called from multiple threads.

2018-08-10 Thread GitBox
piiswrong opened a new issue #12128: NaiveEngine is not threadsafe and crashes 
when called from multiple threads.
URL: https://github.com/apache/incubator-mxnet/issues/12128
 
 
   MXNet with NaiveEngine crashes when called from multiple threads. 
ThreadedEngine works fine.




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12027: [MXNET-768] Partially enable flaky test for norm operator

2018-08-10 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12027: [MXNET-768] 
Partially enable flaky test for norm operator
URL: https://github.com/apache/incubator-mxnet/pull/12027#discussion_r209350471
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -3121,20 +3121,22 @@ def l2norm(input_data, axis=0, keepdims=True):
 atol=1e-2 if dtype is np.float16 else 
1e-5, ctx=ctx)
 # Disable numeric gradient 
https://github.com/apache/incubator-mxnet/issues/11509
 # # check gradient
-# check_numeric_gradient(norm_sym, [in_data], 
numeric_eps=epsilon, rtol=1e-2, atol=1e-3)
-# if i < in_data_dim-1:
-# norm_sym = mx.symbol.norm(data=data, ord=order, axis=(i, 
i+1), keepdims=True)
-# npy_out = l1norm(in_data, (i, i+1)) if order is 1 else 
l2norm(in_data, (i, i+1))
-# npy_out_backward = np.sign(in_data) if order is 1 else 
in_data/npy_out
-# check_symbolic_forward(norm_sym, [in_data], [npy_out],
-#rtol=1e-2 if dtype is np.float16 
else 1e-5,
-#atol=1e-2 if dtype is np.float16 
else 1e-5, ctx=ctx)
-# check_symbolic_backward(norm_sym, [in_data], 
[np.ones(npy_out.shape)],
-# [npy_out_backward],
-# rtol=1e-2 if dtype is np.float16 
else 1e-5,
-# atol=1e-2 if dtype is np.float16 
else 1e-5, ctx=ctx)
-# # check gradient
-# check_numeric_gradient(norm_sym, [in_data], 
numeric_eps=epsilon, rtol=1e-2, atol=1e-3)
+# if dtype is not np.float16:
+# check_numeric_gradient(norm_sym, [in_data], 
numeric_eps=epsilon, rtol=1e-1, atol=1e-3)
+if i < in_data_dim-1:
+norm_sym = mx.symbol.norm(data=data, ord=order, axis=(i, 
i+1), keepdims=True)
+npy_out = l1norm(in_data, (i, i+1)) if order is 1 else 
l2norm(in_data, (i, i+1))
+npy_out_backward = np.sign(in_data) if order is 1 else 
in_data/npy_out
+check_symbolic_forward(norm_sym, [in_data], [npy_out],
+   rtol=1e-2 if dtype is np.float16 
else 1e-5,
+   atol=1e-2 if dtype is np.float16 
else 1e-5, ctx=ctx)
+check_symbolic_backward(norm_sym, [in_data], 
[np.ones(npy_out.shape)],
+[npy_out_backward],
+rtol=1e-2 if dtype is np.float16 
else 1e-5,
+atol=1e-2 if dtype is np.float16 
else 1e-5, ctx=ctx)
+# # check gradient
+# if dtype is not np.float16:
+# check_numeric_gradient(norm_sym, [in_data], 
numeric_eps=epsilon, rtol=1e-1, atol=1e-3)
 
 Review comment:
   Please add Github link here to track enabling the numeric_gradient.




[GitHub] prathik-naidu opened a new issue #12127: LSTM split0 Operator Error

2018-08-10 Thread GitBox
prathik-naidu opened a new issue #12127: LSTM split0 Operator Error
URL: https://github.com/apache/incubator-mxnet/issues/12127
 
 
   ## Description
   Getting an error in the split0 operator when training an image captioning 
network in mxnet.
   
   ## Environment info (Required)
   
   ```
   --Python Info--
   Version  : 3.6.4
   Compiler : GCC 7.2.0
   Build: ('default', 'Mar 13 2018 01:15:57')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 9.0.1
   Directory: 
/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/pip
   --MXNet Info---
   Version  : 1.1.0
   Directory: 
/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet
   Commit Hash   : 07a83a0325a3d782513a04f47d711710972cb144
   --System Info--
   Platform : Linux-4.4.0-1062-aws-x86_64-with-debian-stretch-sid
   system   : Linux
   node : ip-172-31-5-176
   release  : 4.4.0-1062-aws
   version  : #71-Ubuntu SMP Fri Jun 15 10:07:39 UTC 2018
   --Hardware Info--
   machine  : x86_64
   processor: x86_64
   Architecture:  x86_64
   CPU op-mode(s):32-bit, 64-bit
   Byte Order:Little Endian
   CPU(s):4
   On-line CPU(s) list:   0-3
   Thread(s) per core:2
   Core(s) per socket:2
   Socket(s): 1
   NUMA node(s):  1
   Vendor ID: GenuineIntel
   CPU family:6
   Model: 79
   Model name:Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
   Stepping:  1
   CPU MHz:   2699.804
   CPU max MHz:   3000.
   CPU min MHz:   1200.
   BogoMIPS:  4600.09
   Hypervisor vendor: Xen
   Virtualization type:   full
   L1d cache: 32K
   L1i cache: 32K
   L2 cache:  256K
   L3 cache:  46080K
   NUMA node0 CPU(s): 0-3
   Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm 
constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni 
pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt 
tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 
3dnowprefetch invpcid_single kaiser fsgsbase bmi1 hle avx2 smep bmi2 erms 
invpcid rtm rdseed adx xsaveopt
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0060 
sec, LOAD: 0.4613 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0701 sec, LOAD: 
0.4250 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1166 sec, LOAD: 
0.3497 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0044 sec, LOAD: 0.1650 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0039 sec, LOAD: 
0.0744 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0035 sec, 
LOAD: 0.1032 sec.
   
   ```
   
   Package used (Python/R/Scala/Julia): I'm using Python
   
   ## Error Message:
   ```
   ---INFO---
   vocab_size:663
   sentence_length:46
   -
   
   Creating Iterators...
   Initiating Training...
   INFO:root:Epoch[0] Train-perplexity=655.513238
   INFO:root:Epoch[0] Time cost=1.261
   infer_shape error. Arguments:
 image_feature: (50, 1024)
 word_data: (50, 77)
 softmax_label: (50,)
   Traceback (most recent call last):
  File "2_train_val.py", line 102, in <module>
   epoch_end_callback=mx.callback.do_checkpoint(checkpoints_prefix, 
period=10)
 File 
"/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/base_module.py",
 line 528, in fit
   batch_end_callback=eval_batch_end_callback, epoch=epoch)
 File 
"/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/base_module.py",
 line 244, in score
   self.forward(eval_batch, is_train=False)
 File 
"/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/module.py",
 line 608, in forward
   self.reshape(new_dshape, new_lshape)
 File 
"/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/module.py",
 line 470, in reshape
   self._exec_group.reshape(self._data_shapes, self._label_shapes)
 File 
"/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/executor_group.py",
 line 381, in reshape
   self.bind_exec(data_shapes, label_shapes, reshape=True)
 File 
"/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/executor_group.py",
 line 357, in bind_exec
   allow_up_sizing=True, **dict(data_shapes_i + label_shapes_i))
 File 

[GitHub] sandeep-krishnamurthy commented on issue #12067: [MXNET-788] Fix for issue #11733

2018-08-10 Thread GitBox
sandeep-krishnamurthy commented on issue #12067: [MXNET-788] Fix for issue 
#11733
URL: https://github.com/apache/incubator-mxnet/pull/12067#issuecomment-412168436
 
 
   Thank you @samskalicky for your contributions!
   @apeforest - Your comments are addressed. 
   This PR LGTM. Shall we merge this?




[GitHub] sandeep-krishnamurthy closed pull request #12083: GPU Memory Query to C API

2018-08-10 Thread GitBox
sandeep-krishnamurthy closed pull request #12083: GPU Memory Query to C API
URL: https://github.com/apache/incubator-mxnet/pull/12083
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/include/mxnet/base.h b/include/mxnet/base.h
index a652fe5b707..75784a391b4 100644
--- a/include/mxnet/base.h
+++ b/include/mxnet/base.h
@@ -222,6 +222,14 @@ struct Context {
* \return The number of GPUs that are available.
*/
   inline static int32_t GetGPUCount();
+  /*!
+   * \brief get the free and total available memory on a GPU
+   * \param dev the GPU number to query
+   * \param free_mem pointer to the integer holding free GPU memory
+   * \param total_mem pointer to the integer holding total GPU memory
+   * \return No return value
+   */
+  inline static void GetGPUMemoryInformation(int dev, int *free, int *total);
   /*!
* Create a pinned CPU context.
* \param dev_id the device id for corresponding GPU.
@@ -326,6 +334,35 @@ inline int32_t Context::GetGPUCount() {
 #endif
 }
 
+inline void Context::GetGPUMemoryInformation(int dev, int *free_mem,
+ int *total_mem) {
+#if MXNET_USE_CUDA
+
+  size_t memF, memT;
+  cudaError_t e;
+
+  int curDevice;
+  e = cudaGetDevice(&curDevice);
+  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
+
+  e = cudaSetDevice(dev);
+  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
+
+  e = cudaMemGetInfo(&memF, &memT);
+  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
+
+  e = cudaSetDevice(curDevice);
+  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
+
+  *free_mem = static_cast<int>(memF);
+  *total_mem = static_cast<int>(memT);
+
+#else
+  LOG(FATAL)
+  << "This call is only supported for MXNet built with CUDA support.";
+#endif
+}
+
 inline Context Context::FromString(const std::string& str) {
   Context ret;
   try {
diff --git a/include/mxnet/c_api.h b/include/mxnet/c_api.h
index 75147cfd706..5a24cc0b944 100644
--- a/include/mxnet/c_api.h
+++ b/include/mxnet/c_api.h
@@ -390,6 +390,15 @@ MXNET_DLL int MXEngineSetBulkSize(int bulk_size, int* 
prev_bulk_size);
  */
 MXNET_DLL int MXGetGPUCount(int* out);
 
+/*!
+ * \brief get the free and total available memory on a GPU
+ * \param dev the GPU number to query
+ * \param free_mem pointer to the integer holding free GPU memory
+ * \param total_mem pointer to the integer holding total GPU memory
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXGetGPUMemoryInformation(int dev, int *free_mem, int 
*total_mem);
+
 /*!
  * \brief get the MXNet library version as an integer
  * \param pointer to the integer holding the version number
diff --git a/src/c_api/c_api.cc b/src/c_api/c_api.cc
index ed513c0d778..1ef3f0fca9f 100644
--- a/src/c_api/c_api.cc
+++ b/src/c_api/c_api.cc
@@ -122,6 +122,12 @@ int MXGetGPUCount(int* out) {
   API_END();
 }
 
+int MXGetGPUMemoryInformation(int dev, int *free_mem, int *total_mem) {
+  API_BEGIN();
+  Context::GetGPUMemoryInformation(dev, free_mem, total_mem);
+  API_END();
+}
+
 int MXGetVersion(int *out) {
   API_BEGIN();
  *out = static_cast<int>(MXNET_VERSION);


 




[incubator-mxnet] branch master updated: GPU Memory Query to C API (#12083)

2018-08-10 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 584c5c1  GPU Memory Query to C API (#12083)
584c5c1 is described below

commit 584c5c184be3073f2f2fad6dafe4689a625d676a
Author: Sebastian Bodenstein 
AuthorDate: Fri Aug 10 20:18:46 2018 +0200

GPU Memory Query to C API (#12083)

* add support for GPU memory query

* remove lint
---
 include/mxnet/base.h  | 37 +
 include/mxnet/c_api.h |  9 +
 src/c_api/c_api.cc|  6 ++
 3 files changed, 52 insertions(+)

diff --git a/include/mxnet/base.h b/include/mxnet/base.h
index a652fe5..75784a3 100644
--- a/include/mxnet/base.h
+++ b/include/mxnet/base.h
@@ -223,6 +223,14 @@ struct Context {
*/
   inline static int32_t GetGPUCount();
   /*!
+   * \brief get the free and total available memory on a GPU
+   * \param dev the GPU number to query
+   * \param free_mem pointer to the integer holding free GPU memory
+   * \param total_mem pointer to the integer holding total GPU memory
+   * \return No return value
+   */
+  inline static void GetGPUMemoryInformation(int dev, int *free, int *total);
+  /*!
* Create a pinned CPU context.
* \param dev_id the device id for corresponding GPU.
* \return Pinned CPU context. -1 for current GPU.
@@ -326,6 +334,35 @@ inline int32_t Context::GetGPUCount() {
 #endif
 }
 
+inline void Context::GetGPUMemoryInformation(int dev, int *free_mem,
+ int *total_mem) {
+#if MXNET_USE_CUDA
+
+  size_t memF, memT;
+  cudaError_t e;
+
+  int curDevice;
+  e = cudaGetDevice(&curDevice);
+  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
+
+  e = cudaSetDevice(dev);
+  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
+
+  e = cudaMemGetInfo(&memF, &memT);
+  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
+
+  e = cudaSetDevice(curDevice);
+  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
+
+  *free_mem = static_cast<int>(memF);
+  *total_mem = static_cast<int>(memT);
+
+#else
+  LOG(FATAL)
+  << "This call is only supported for MXNet built with CUDA support.";
+#endif
+}
+
 inline Context Context::FromString(const std::string& str) {
   Context ret;
   try {
diff --git a/include/mxnet/c_api.h b/include/mxnet/c_api.h
index 43f8227..0043996 100644
--- a/include/mxnet/c_api.h
+++ b/include/mxnet/c_api.h
@@ -438,6 +438,15 @@ MXNET_DLL int MXEngineSetBulkSize(int bulk_size, int* prev_bulk_size);
 MXNET_DLL int MXGetGPUCount(int* out);
 
 /*!
+ * \brief get the free and total available memory on a GPU
+ * \param dev the GPU number to query
+ * \param free_mem pointer to the integer holding free GPU memory
+ * \param total_mem pointer to the integer holding total GPU memory
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXGetGPUMemoryInformation(int dev, int *free_mem, int *total_mem);
+
+/*!
  * \brief get the MXNet library version as an integer
 * \param out pointer to the integer holding the version number
  * \return 0 when success, -1 when failure happens
diff --git a/src/c_api/c_api.cc b/src/c_api/c_api.cc
index ed513c0..1ef3f0f 100644
--- a/src/c_api/c_api.cc
+++ b/src/c_api/c_api.cc
@@ -122,6 +122,12 @@ int MXGetGPUCount(int* out) {
   API_END();
 }
 
+int MXGetGPUMemoryInformation(int dev, int *free_mem, int *total_mem) {
+  API_BEGIN();
+  Context::GetGPUMemoryInformation(dev, free_mem, total_mem);
+  API_END();
+}
+
 int MXGetVersion(int *out) {
   API_BEGIN();
  *out = static_cast<int>(MXNET_VERSION);
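The new `MXGetGPUMemoryInformation` entry point could be exercised from Python through `ctypes`. A hedged sketch — only the C symbol name and signature come from the diff above; the wrapper function, its name, and the library-loading details are illustrative:

```python
import ctypes

def get_gpu_memory_information(lib, dev=0):
    """Query free/total GPU memory (in the units returned by the C API)
    through MXGetGPUMemoryInformation. `lib` is a loaded libmxnet handle,
    e.g. ctypes.CDLL("libmxnet.so") -- the path is an assumption; any
    object exposing the symbol works."""
    free_mem = ctypes.c_int()
    total_mem = ctypes.c_int()
    # The C API fills the two out-parameters and returns 0 on success.
    ret = lib.MXGetGPUMemoryInformation(ctypes.c_int(dev),
                                        ctypes.pointer(free_mem),
                                        ctypes.pointer(total_mem))
    if ret != 0:
        raise RuntimeError("MXGetGPUMemoryInformation failed")
    return free_mem.value, total_mem.value
```

On a CUDA build this would be called as `get_gpu_memory_information(ctypes.CDLL("libmxnet.so"), dev=0)`; on a CPU-only build the underlying call aborts, per the `LOG(FATAL)` branch above.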



[GitHub] sandeep-krishnamurthy opened a new issue #12126: [Feature Request] Utility API for querying GPU memory

2018-08-10 Thread GitBox
sandeep-krishnamurthy opened a new issue #12126: [Feature Request] Utility API 
for querying GPU memory
URL: https://github.com/apache/incubator-mxnet/issues/12126
 
 
   Platform independent API (without using nvidia-smi) for querying GPU memory.
   C implementation is provided by @sbodenstein  in this PR - 
https://github.com/apache/incubator-mxnet/pull/12083#pullrequestreview-145347358
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on issue #12083: GPU Memory Query to C API

2018-08-10 Thread GitBox
sandeep-krishnamurthy commented on issue #12083: GPU Memory Query to C API
URL: https://github.com/apache/incubator-mxnet/pull/12083#issuecomment-412164099
 
 
  @sbodenstein - Thank you for the contribution. It would be great if you could 
follow up with a Python API for querying GPU memory.




[GitHub] anirudhacharya commented on a change in pull request #12117: [MXNET-782] Fix Custom Metric Creation in R tutorial

2018-08-10 Thread GitBox
anirudhacharya commented on a change in pull request #12117: [MXNET-782] Fix 
Custom Metric Creation in R tutorial
URL: https://github.com/apache/incubator-mxnet/pull/12117#discussion_r209342266
 
 

 ##
 File path: docs/tutorials/r/fiveMinutesNeuralNetwork.md
 ##
 @@ -1,27 +1,31 @@
 Develop a Neural Network with MXNet in Five Minutes
 =
 
-This tutorial is designed for new users of the `mxnet` package for R. It shows 
how to construct a neural network to do regression in 5 minutes. It shows how 
to perform classification and regression tasks, respectively. The data we use 
is in the `mlbench` package.
+This tutorial is designed for new users of the `mxnet` package for R. It shows 
how to construct a neural network to do regression in 5 minutes. It shows how 
to perform classification and regression tasks, respectively. The data we use 
is in the `mlbench` package. Instructions to install R and MXNet's R package in 
different environments can be found 
[here](http://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=R&processor=CPU).
 
 
 ## Classification
 
-
-
+ ```
+## Loading required package: mlbench
+ ```
  ```r
-require(mlbench)
+if (!require(mlbench)) {
+  install.packages('mlbench')
+}
  ```
 
  ```
-## Loading required package: mlbench
+## Loading required package: mxnet
  ```
 
  ```r
-require(mxnet)
+if (!require(mxnet)) {
+  install.packages('mlbench')
 
 Review comment:
   this was a typo, i will fix it




[GitHub] safrooze commented on issue #11493: Fix MXPredReshape in the c_predict_api

2018-08-10 Thread GitBox
safrooze commented on issue #11493: Fix MXPredReshape in the c_predict_api
URL: https://github.com/apache/incubator-mxnet/pull/11493#issuecomment-412160584
 
 
  @marcoabreu If the default for `assert_almost_equal()`, which is part of 
`mxnet`, is too flaky for mxnet tests, then I'd argue that the defaults should 
be changed, not every place where it's used.
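For context, the comparison being debated reduces to a combined relative/absolute tolerance check. A minimal sketch — the `rtol`/`atol` defaults below are illustrative, not MXNet's actual defaults:

```python
def almost_equal(a, b, rtol=1e-5, atol=1e-8):
    """Tolerance check in the style of assert_almost_equal:
    |a - b| must not exceed atol + rtol * |b|."""
    return abs(a - b) <= atol + rtol * abs(b)

def assert_almost_equal(a, b, rtol=1e-5, atol=1e-8):
    """Raise when the two values differ beyond the tolerances."""
    if not almost_equal(a, b, rtol, atol):
        raise AssertionError(
            "%r != %r within rtol=%g, atol=%g" % (a, b, rtol, atol))
```

The flakiness concern is exactly about these defaults: a tolerance that is too tight makes numerically-fine tests fail intermittently, which is why changing the defaults once is preferable to loosening every call site.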




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12117: [MXNET-782] Fix Custom Metric Creation in R tutorial

2018-08-10 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12117: [MXNET-782] 
Fix Custom Metric Creation in R tutorial
URL: https://github.com/apache/incubator-mxnet/pull/12117#discussion_r209338775
 
 

 ##
 File path: docs/tutorials/r/fiveMinutesNeuralNetwork.md
 ##
 @@ -1,27 +1,31 @@
 Develop a Neural Network with MXNet in Five Minutes
 =
 
-This tutorial is designed for new users of the `mxnet` package for R. It shows 
how to construct a neural network to do regression in 5 minutes. It shows how 
to perform classification and regression tasks, respectively. The data we use 
is in the `mlbench` package.
+This tutorial is designed for new users of the `mxnet` package for R. It shows 
how to construct a neural network to do regression in 5 minutes. It shows how 
to perform classification and regression tasks, respectively. The data we use 
is in the `mlbench` package. Instructions to install R and MXNet's R package in 
different environments can be found 
[here](http://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=R&processor=CPU).
 
 
 ## Classification
 
-
-
+ ```
+## Loading required package: mlbench
+ ```
  ```r
-require(mlbench)
+if (!require(mlbench)) {
+  install.packages('mlbench')
+}
  ```
 
  ```
-## Loading required package: mlbench
+## Loading required package: mxnet
  ```
 
  ```r
-require(mxnet)
+if (!require(mxnet)) {
+  install.packages('mlbench')
 
 Review comment:
   Why test on mxnet and install mlbench?




[GitHub] perdasilva edited a comment on issue #12090: [MXNET-791][WIP] Pick with negative indices

2018-08-10 Thread GitBox
perdasilva edited a comment on issue #12090: [MXNET-791][WIP] Pick with 
negative indices
URL: https://github.com/apache/incubator-mxnet/pull/12090#issuecomment-412007597
 
 
   @haojin2 No problem at all ^^
   The only thing I could find that is similar to pick in numpy is 
[numpy.choose](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.choose.html).
 But there's no axis input there. It also seems that 'raise' is the default 
mode.  Upon closer inspection, 'wrap' works the same here as with numpy. 
Interestingly though, np.array indexing is in the range [-len, len) ^^
   
   I'll also increase the coverage for the test.
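The indexing semantics noted here — valid indices in `[-len, len)`, as with Python and NumPy array indexing — can be sketched for a pick-like gather along axis 1. Pure Python and illustrative only; the function name and 'raise'-mode behavior are assumptions modeled on the discussion, not MXNet's implementation:

```python
def pick(data, indices):
    """Pick one element per row of `data` (a list of lists), allowing
    negative indices in [-n, n) exactly like Python list indexing."""
    out = []
    for row, idx in zip(data, indices):
        n = len(row)
        if not -n <= idx < n:   # 'raise' mode: out-of-range is an error
            raise IndexError("index %d out of range for length %d" % (idx, n))
        out.append(row[idx])    # negative idx wraps: row[-1] is the last element
    return out
```

Under 'wrap' mode the bounds check would instead be replaced by `idx % n`, which matches the `numpy.choose` behavior mentioned above.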




[GitHub] sandeep-krishnamurthy closed issue #10264: Run rnn backward fail

2018-08-10 Thread GitBox
sandeep-krishnamurthy closed issue #10264: Run rnn backward fail 
URL: https://github.com/apache/incubator-mxnet/issues/10264
 
 
   




[GitHub] sandeep-krishnamurthy closed issue #9715: IndexError in labels when size of training dataset is not multiple of batch size

2018-08-10 Thread GitBox
sandeep-krishnamurthy closed issue #9715: IndexError in labels when size of 
training dataset is not multiple of batch size
URL: https://github.com/apache/incubator-mxnet/issues/9715
 
 
   




[GitHub] Roshrini commented on issue #10264: Run rnn backward fail

2018-08-10 Thread GitBox
Roshrini commented on issue #10264: Run rnn backward fail 
URL: 
https://github.com/apache/incubator-mxnet/issues/10264#issuecomment-412155455
 
 
   This issue seems to have been resolved. Test cases are in place as mentioned 
above. I will be closing this issue for now. Please feel free to reopen if it 
was closed in error or if the issue still persists.
   
   @sandeep-krishnamurthy Can you please close this issue?




[GitHub] szha commented on issue #12085: Accelerate the performance of topk for CPU side

2018-08-10 Thread GitBox
szha commented on issue #12085: Accelerate the performance of topk for CPU side
URL: https://github.com/apache/incubator-mxnet/pull/12085#issuecomment-412154289
 
 
   @asmushetzel thanks for the review. @pengzhao-intel I will take a look 
shortly.




[GitHub] Roshrini commented on issue #9715: IndexError in labels when size of training dataset is not multiple of batch size

2018-08-10 Thread GitBox
Roshrini commented on issue #9715: IndexError in labels when size of training 
dataset is not multiple of batch size
URL: 
https://github.com/apache/incubator-mxnet/issues/9715#issuecomment-412153017
 
 
   @nazikus Wasn't able to reproduce this issue on a similar dataset by running 
your code snippet.
   I will be closing this issue for now. Please feel free to reopen if closed 
in error and if the issue still persists.
   
   @sandeep-krishnamurthy Can you please close this issue?




[GitHub] zheng-da commented on a change in pull request #12104: [DO NOT REVIEW] Subgraph API

2018-08-10 Thread GitBox
zheng-da commented on a change in pull request #12104: [DO NOT REVIEW] Subgraph 
API
URL: https://github.com/apache/incubator-mxnet/pull/12104#discussion_r209332962
 
 

 ##
 File path: src/executor/graph_executor.cc
 ##
 @@ -1699,6 +1701,146 @@ GraphExecutor::CachedSegOpr 
GraphExecutor::CreateCachedSegOpr(size_t topo_start,
 iter->c_str());
   return ret;
 }
+
+// Infer shapes, dtypes, stypes, contexts for the forward graph
+static nnvm::Graph InferForwardAttrs(nnvm::Graph g,
+ nnvm::ShapeVector arg_shapes,
+ nnvm::DTypeVector arg_dtypes,
+ StorageTypeVector arg_stypes,
+ const Context& default_ctx,
+ const std::map<std::string, Context>& ctx_map,
+ const std::vector<Context>& in_arg_ctxes,
+ const std::vector<Context>& aux_state_ctxes) {
+  const auto& indexed_graph = g.indexed_graph();
+  const auto num_forward_inputs = indexed_graph.input_nodes().size();
+  g = AssignContext(g, default_ctx, ctx_map, in_arg_ctxes, {},
+   aux_state_ctxes, {}, num_forward_inputs, g.outputs.size());
+  g = InferShape(std::move(g), std::move(arg_shapes), "__shape__");
+  if (g.GetAttr<size_t>("shape_num_unknown_nodes") != 0U) {
+    HandleInferShapeError(num_forward_inputs, indexed_graph,
+                          g.GetAttr<nnvm::ShapeVector>("shape"));
+  }
+  g = InferType(std::move(g), std::move(arg_dtypes), "__dtype__");
+  if (g.GetAttr<size_t>("dtype_num_unknown_nodes") != 0U) {
+    HandleInferTypeError(num_forward_inputs, indexed_graph,
+                         g.GetAttr<nnvm::DTypeVector>("dtype"));
+  }
+  g = InferStorageType(std::move(g), std::move(arg_stypes), "__storage_type__");
+  if (g.GetAttr<size_t>("storage_type_num_unknown_nodes") != 0U) {
+    HandleInferStorageTypeError(num_forward_inputs, indexed_graph,
+                                g.GetAttr<StorageTypeVector>("storage_type"));
+  }
+  return g;
+}
+
+// Given input attr arrays, partition the graph using the backend name equal 
to prop_name.
+// This is a common function for bind and simple_bind flows.
+static nnvm::Symbol PartitionGraph(const nnvm::Symbol& src,
+   const std::string& prop_name,
+   const nnvm::ShapeVector& arg_shapes,
+   const nnvm::DTypeVector& arg_dtypes,
+   const StorageTypeVector arg_stypes,
+   const Context& default_ctx,
+   const std::map<std::string, Context>& ctx_map,
+   const std::vector<Context>& in_arg_ctxes,
+   const std::vector<Context>& aux_state_ctxes) {
+  auto subgraph_prop = op::SubgraphPropertyRegistry::Get()->CreateSubgraphProperty(prop_name);
+  nnvm::Symbol ret = src.Copy();
+  nnvm::Graph g;
+  g.outputs = ret.outputs;
+  g = InferForwardAttrs(g, arg_shapes, arg_dtypes, arg_stypes, default_ctx,
+ctx_map, in_arg_ctxes, aux_state_ctxes);
+  subgraph_prop->SetAttr("graph", g);
 
 Review comment:
   Subgraph selector is created by each node. I feel it's more intuitive to 
customize `g` where the subgraph property is created.
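The shape/dtype/storage-type checks in `InferForwardAttrs` all follow one repeated infer-then-verify pattern. A minimal sketch with illustrative names — the pass structure mirrors the C++ code above, but none of these Python identifiers exist in MXNet:

```python
def infer_forward_attrs(graph, passes):
    """Run each inference pass in order. Each pass returns the updated
    graph plus a count of nodes whose attribute is still unknown; any
    unknown node aborts the whole partitioning, mirroring the
    HandleInfer*Error calls in the C++ code."""
    for name, run_pass in passes:
        graph, num_unknown = run_pass(graph)
        if num_unknown != 0:
            raise ValueError(
                "%s inference left %d unknown nodes" % (name, num_unknown))
    return graph
```

The design point is fail-fast: a subgraph backend cannot be handed a graph whose attributes are only partially inferred, so each pass is verified before the next one runs.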




[GitHub] andrewfayres commented on issue #11926: segfault in native code while trying to use CustomOp

2018-08-10 Thread GitBox
andrewfayres commented on issue #11926: segfault in native code while trying to 
use CustomOp
URL: 
https://github.com/apache/incubator-mxnet/issues/11926#issuecomment-412149583
 
 
   I'll take a look and depending on complexity I might open another ticket to 
track this.




[incubator-mxnet] branch master updated: reduce a copy for rowsparse parameter.reduce (#12039)

2018-08-10 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 6f7dee0  reduce a copy for rowsparse parameter.reduce (#12039)
6f7dee0 is described below

commit 6f7dee02cb670e514b5ed77c98b78b0a2caf8272
Author: Haibin Lin 
AuthorDate: Fri Aug 10 10:18:58 2018 -0700

reduce a copy for rowsparse parameter.reduce (#12039)
---
 python/mxnet/gluon/parameter.py |  2 +-
 python/mxnet/gluon/trainer.py   | 11 +++--
 tests/python/unittest/test_gluon_trainer.py | 35 +++--
 3 files changed, 28 insertions(+), 20 deletions(-)

diff --git a/python/mxnet/gluon/parameter.py b/python/mxnet/gluon/parameter.py
index 0c6aae9..1f6b86c 100644
--- a/python/mxnet/gluon/parameter.py
+++ b/python/mxnet/gluon/parameter.py
@@ -319,7 +319,7 @@ class Parameter(object):
 # fetch all rows for 'row_sparse' param
 all_row_ids = ndarray.arange(0, self.shape[0], dtype='int64', ctx=ctx)
 data = ndarray.zeros(self.shape, stype='row_sparse', ctx=ctx)
-self._trainer._row_sparse_pull(self, data, all_row_ids)
+self._trainer._row_sparse_pull(self, data, all_row_ids, full_idx=True)
 return data
 
 def initialize(self, init=None, ctx=None, default_init=initializer.Uniform(),
diff --git a/python/mxnet/gluon/trainer.py b/python/mxnet/gluon/trainer.py
index 98a6878..028e660 100644
--- a/python/mxnet/gluon/trainer.py
+++ b/python/mxnet/gluon/trainer.py
@@ -235,14 +235,21 @@ class Trainer(object):
 else:
 self._optimizer.set_learning_rate(lr)
 
-def _row_sparse_pull(self, parameter, out, row_id):
+def _row_sparse_pull(self, parameter, out, row_id, full_idx=False):
+"""Internal method to invoke pull operations on KVStore. If `full_idx` is set to True,
+`kv.pull` is preferred instead of `kv.row_sparse_pull`.
+"""
 # initialize kv and params if not already
 if not self._kv_initialized:
 self._init_kvstore()
 if self._params_to_init:
 self._init_params()
 idx = self._param2idx[parameter.name]
-self._kvstore.row_sparse_pull(idx, out=out, row_ids=row_id, priority=-idx)
+if full_idx and 'dist' not in self._kvstore.type:
+assert row_id.size == out.shape[0]
+self._kvstore.pull(idx, out=out, priority=-idx, ignore_sparse=False)
+else:
+self._kvstore.row_sparse_pull(idx, out=out, row_ids=row_id, priority=-idx)
 
 def step(self, batch_size, ignore_stale_grad=False):
 """Makes one step of parameter update. Should be called after
diff --git a/tests/python/unittest/test_gluon_trainer.py 
b/tests/python/unittest/test_gluon_trainer.py
index 2a34400..72c01ac 100644
--- a/tests/python/unittest/test_gluon_trainer.py
+++ b/tests/python/unittest/test_gluon_trainer.py
@@ -114,6 +114,24 @@ def test_trainer_save_load():
 assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.2
 
 @with_seed()
+def test_trainer_sparse_save_load():
+x = gluon.Parameter('x', shape=(10, 1), lr_mult=1.0, stype='row_sparse')
+x.initialize(ctx=[mx.cpu(0)], init='zeros')
+trainer = gluon.Trainer([x], 'sgd', {'learning_rate': 0.1})
+all_rows = mx.nd.arange(0, 10, ctx=mx.cpu(0))
+with mx.autograd.record():
+for w in x.list_row_sparse_data(all_rows):
+y = w * 1
+y.backward()
+trainer.step(1)
+assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.1
+trainer.save_states('test_trainer_sparse_save_load.states')
+trainer.load_states('test_trainer_sparse_save_load.states')
+x.lr_mult = 2.0
+# check if parameter dict is correctly associated with optimizer after 
load_state
+assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.2
+
+@with_seed()
 def test_trainer_multi_layer_init():
 class Net(gluon.Block):
 def __init__(self, **kwargs):
@@ -159,23 +177,6 @@ def test_trainer_multi_layer_init():
 check_init([mx.cpu(1)])
 
 @with_seed()
-def test_trainer_save_load():
-x = gluon.Parameter('x', shape=(10,), lr_mult=1.0)
-x.initialize(ctx=[mx.cpu(0), mx.cpu(1)], init='zeros')
-trainer = gluon.Trainer([x], 'sgd', {'learning_rate': 0.1})
-with mx.autograd.record():
-for w in x.list_data():
-y = w + 1
-y.backward()
-trainer.step(1)
-assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.1
-trainer.save_states('test_trainer_save_load.states')
-trainer.load_states('test_trainer_save_load.states')
-x.lr_mult = 2.0
-# check if parameter dict is correctly associated with optimizer after 
load_state
-assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.2
-
-@with_seed()
 def test_trainer_reset_kv():
 def 
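The `full_idx` dispatch added to `_row_sparse_pull` in this commit can be isolated into a tiny decision function. A sketch — `choose_pull_op` and its string return values are illustrative; only the condition and the row-coverage assertion mirror the diff:

```python
def choose_pull_op(kv_type, full_idx, num_row_ids, num_out_rows):
    """Sketch of the Trainer._row_sparse_pull dispatch: a plain kv.pull
    (which saves a copy) is used only when the caller requests every row
    (full_idx) and the kvstore is not distributed; otherwise the original
    kv.row_sparse_pull path is kept."""
    if full_idx and 'dist' not in kv_type:
        # pull fetches the whole parameter, so the ids must cover every row
        assert num_row_ids == num_out_rows
        return 'pull'
    return 'row_sparse_pull'
```

This is why the new `test_trainer_sparse_save_load` exercises a local kvstore: the fast path is only reachable when the store type contains no `'dist'`.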

[GitHub] eric-haibin-lin closed pull request #12039: Reduce a copy for rowsparse parameter during parameter.save

2018-08-10 Thread GitBox
eric-haibin-lin closed pull request #12039: Reduce a copy for rowsparse 
parameter during parameter.save
URL: https://github.com/apache/incubator-mxnet/pull/12039
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/gluon/parameter.py b/python/mxnet/gluon/parameter.py
index 0c6aae92135..1f6b86c978c 100644
--- a/python/mxnet/gluon/parameter.py
+++ b/python/mxnet/gluon/parameter.py
@@ -319,7 +319,7 @@ def _reduce(self):
 # fetch all rows for 'row_sparse' param
 all_row_ids = ndarray.arange(0, self.shape[0], dtype='int64', ctx=ctx)
 data = ndarray.zeros(self.shape, stype='row_sparse', ctx=ctx)
-self._trainer._row_sparse_pull(self, data, all_row_ids)
+self._trainer._row_sparse_pull(self, data, all_row_ids, full_idx=True)
 return data
 
 def initialize(self, init=None, ctx=None, default_init=initializer.Uniform(),
diff --git a/python/mxnet/gluon/trainer.py b/python/mxnet/gluon/trainer.py
index 98a6878b94b..028e6607510 100644
--- a/python/mxnet/gluon/trainer.py
+++ b/python/mxnet/gluon/trainer.py
@@ -235,14 +235,21 @@ def set_learning_rate(self, lr):
 else:
 self._optimizer.set_learning_rate(lr)
 
-def _row_sparse_pull(self, parameter, out, row_id):
+def _row_sparse_pull(self, parameter, out, row_id, full_idx=False):
+"""Internal method to invoke pull operations on KVStore. If `full_idx` is set to True,
+`kv.pull` is preferred instead of `kv.row_sparse_pull`.
+"""
 # initialize kv and params if not already
 if not self._kv_initialized:
 self._init_kvstore()
 if self._params_to_init:
 self._init_params()
 idx = self._param2idx[parameter.name]
-self._kvstore.row_sparse_pull(idx, out=out, row_ids=row_id, priority=-idx)
+if full_idx and 'dist' not in self._kvstore.type:
+assert row_id.size == out.shape[0]
+self._kvstore.pull(idx, out=out, priority=-idx, ignore_sparse=False)
+else:
+self._kvstore.row_sparse_pull(idx, out=out, row_ids=row_id, priority=-idx)
 
 def step(self, batch_size, ignore_stale_grad=False):
 """Makes one step of parameter update. Should be called after
diff --git a/tests/python/unittest/test_gluon_trainer.py 
b/tests/python/unittest/test_gluon_trainer.py
index 2a34400d60a..72c01acb265 100644
--- a/tests/python/unittest/test_gluon_trainer.py
+++ b/tests/python/unittest/test_gluon_trainer.py
@@ -113,6 +113,24 @@ def test_trainer_save_load():
 # check if parameter dict is correctly associated with optimizer after 
load_state
 assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.2
 
+@with_seed()
+def test_trainer_sparse_save_load():
+x = gluon.Parameter('x', shape=(10, 1), lr_mult=1.0, stype='row_sparse')
+x.initialize(ctx=[mx.cpu(0)], init='zeros')
+trainer = gluon.Trainer([x], 'sgd', {'learning_rate': 0.1})
+all_rows = mx.nd.arange(0, 10, ctx=mx.cpu(0))
+with mx.autograd.record():
+for w in x.list_row_sparse_data(all_rows):
+y = w * 1
+y.backward()
+trainer.step(1)
+assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.1
+trainer.save_states('test_trainer_sparse_save_load.states')
+trainer.load_states('test_trainer_sparse_save_load.states')
+x.lr_mult = 2.0
+# check if parameter dict is correctly associated with optimizer after 
load_state
+assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.2
+
 @with_seed()
 def test_trainer_multi_layer_init():
 class Net(gluon.Block):
@@ -158,23 +176,6 @@ def check_init(ctxes):
 check_init([mx.cpu(1), mx.cpu(2)])
 check_init([mx.cpu(1)])
 
-@with_seed()
-def test_trainer_save_load():
-x = gluon.Parameter('x', shape=(10,), lr_mult=1.0)
-x.initialize(ctx=[mx.cpu(0), mx.cpu(1)], init='zeros')
-trainer = gluon.Trainer([x], 'sgd', {'learning_rate': 0.1})
-with mx.autograd.record():
-for w in x.list_data():
-y = w + 1
-y.backward()
-trainer.step(1)
-assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.1
-trainer.save_states('test_trainer_save_load.states')
-trainer.load_states('test_trainer_save_load.states')
-x.lr_mult = 2.0
-# check if parameter dict is correctly associated with optimizer after 
load_state
-assert trainer._kvstore._updater.optimizer._get_lr(0) == 0.2
-
 @with_seed()
 def test_trainer_reset_kv():
 def check_trainer_reset_kv(kv):


 



[GitHub] pengzhao-intel edited a comment on issue #12085: Accelerate the performance of topk for CPU side

2018-08-10 Thread GitBox
pengzhao-intel edited a comment on issue #12085: Accelerate the performance of 
topk for CPU side
URL: https://github.com/apache/incubator-mxnet/pull/12085#issuecomment-412029845
 
 
   @asmushetzel any other comments?
   
   @marcoabreu @szha could you help take a review and merge the PR? we hope it 
can be involved into 1.3 and then sockeye can get lots of benefits.




[GitHub] marcoabreu commented on issue #11493: Fix MXPredReshape in the c_predict_api

2018-08-10 Thread GitBox
marcoabreu commented on issue #11493: Fix MXPredReshape in the c_predict_api
URL: https://github.com/apache/incubator-mxnet/pull/11493#issuecomment-412146573
 
 
   The default tolerance values of `assert_almost_equal` are too low and could 
result in flaky behavior. We should address that first.




[GitHub] hetong007 commented on issue #12117: [MXNET-782] Fix Custom Metric Creation in R tutorial

2018-08-10 Thread GitBox
hetong007 commented on issue #12117: [MXNET-782] Fix Custom Metric Creation in 
R tutorial
URL: https://github.com/apache/incubator-mxnet/pull/12117#issuecomment-412145214
 
 
   Code to test whether a package is installed, and to install it if not:
   
   ```r
   if (!require(mlbench)) {
   install.packages('mlbench')
   }
   ```




[GitHub] sbodenstein commented on issue #11984: Generalized broadcast_like operator

2018-08-10 Thread GitBox
sbodenstein commented on issue #11984: Generalized broadcast_like operator
URL: https://github.com/apache/incubator-mxnet/pull/11984#issuecomment-412145001
 
 
   Ah, apologies, I didn't see that. Is it ready to be merged?




[GitHub] safrooze commented on issue #12116: Excessive memory allocation without static_alloc

2018-08-10 Thread GitBox
safrooze commented on issue #12116: Excessive memory allocation without 
static_alloc
URL: 
https://github.com/apache/incubator-mxnet/issues/12116#issuecomment-412145103
 
 
   @piiswrong Can you take a look?




[GitHub] safrooze commented on issue #12116: Excessive memory allocation without static_alloc

2018-08-10 Thread GitBox
safrooze commented on issue #12116: Excessive memory allocation without 
static_alloc
URL: 
https://github.com/apache/incubator-mxnet/issues/12116#issuecomment-412139807
 
 
   @KellenSunderland I should have been more specific. The memory increase 
happens during inference: it grows with each forward call and eventually 
stabilizes. I haven't had a chance to create a minimum reproducible example yet.




[GitHub] safrooze commented on issue #11493: Fix MXPredReshape in the c_predict_api

2018-08-10 Thread GitBox
safrooze commented on issue #11493: Fix MXPredReshape in the c_predict_api
URL: https://github.com/apache/incubator-mxnet/pull/11493#issuecomment-412137792
 
 
   @sandeep-krishnamurthy This is an important fix that we need in our project. 
Given that the code is already using `assert_almost_equal()`, would you be OK 
with merging this as is? I want to make sure it is included in 1.3.0 release.




[GitHub] KellenSunderland commented on a change in pull request #11325: [MXNET-703] TensorRT runtime integration

2018-08-10 Thread GitBox
KellenSunderland commented on a change in pull request #11325: [MXNET-703] 
TensorRT runtime integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r209316731
 
 

 ##
 File path: python/mxnet/contrib/tensorrt.py
 ##
 @@ -0,0 +1,110 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+""" Module to enable the use of TensorRT optimized graphs."""
+
+import ctypes
+import logging
+import os
+
+from mxnet.symbol import Symbol
 
 Review comment:
   Addressed here: https://github.com/apache/incubator-mxnet/pull/12124




[GitHub] ifeherva commented on issue #11984: Generalized broadcast_like operator

2018-08-10 Thread GitBox
ifeherva commented on issue #11984: Generalized broadcast_like operator
URL: https://github.com/apache/incubator-mxnet/pull/11984#issuecomment-412128788
 
 
   Already did:
   
https://github.com/apache/incubator-mxnet/pull/11984/files#diff-69757562d07268150de8b369ff5b6b61R569
   




[incubator-mxnet] branch master updated: Fix shared memory with gluon dataloader, add option pin_memory (#11908)

2018-08-10 Thread zhreshold
This is an automated email from the ASF dual-hosted git repository.

zhreshold pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 5a9c3af  Fix shared memory with gluon dataloader, add option 
pin_memory (#11908)
5a9c3af is described below

commit 5a9c3af4b101b85047b306575575ea9022a8474e
Author: Joshua Z. Zhang 
AuthorDate: Fri Aug 10 09:01:44 2018 -0700

Fix shared memory with gluon dataloader, add option pin_memory (#11908)

* use threading for mp dataloader fetching, allow pin_memory option

* allow pin tuple of data into cpu_pinned

* fix as_in_context if not cpu_pinned

* fix cpu_pinned

* fix unittest for windows, update doc that windows mp is available

* fix pin_memory

* fix lint

* always use simplequeue for data queue

* remove main thread clearing for data_queue

* do not use outside folder as pythonpath but run nosetests inside

* use :MXNET_LIBRARY_PATH= to locate dll

* fix dll path

* correct dll path
---
 ci/windows/test_py2_cpu.ps1  |   1 +
 ci/windows/test_py2_gpu.ps1  |   1 +
 ci/windows/test_py3_cpu.ps1  |   1 +
 ci/windows/test_py3_gpu.ps1  |   1 +
 python/mxnet/gluon/data/dataloader.py|  61 ---
 tests/python/unittest/test_gluon_data.py | 168 +++
 6 files changed, 128 insertions(+), 105 deletions(-)

diff --git a/ci/windows/test_py2_cpu.ps1 b/ci/windows/test_py2_cpu.ps1
index 1623d29..aa38b81 100644
--- a/ci/windows/test_py2_cpu.ps1
+++ b/ci/windows/test_py2_cpu.ps1
@@ -16,6 +16,7 @@
 # under the License.
 
 7z x -y windows_package.7z
+$env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll
 $env:PYTHONPATH=join-path $pwd.Path windows_package\python
 $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 c:\Anaconda3\envs\py2\Scripts\pip install -r tests\requirements.txt
diff --git a/ci/windows/test_py2_gpu.ps1 b/ci/windows/test_py2_gpu.ps1
index 13cd536..5f8de5a 100644
--- a/ci/windows/test_py2_gpu.ps1
+++ b/ci/windows/test_py2_gpu.ps1
@@ -16,6 +16,7 @@
 # under the License.
 
 7z x -y windows_package.7z
+$env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll
 $env:PYTHONPATH=join-path $pwd.Path windows_package\python
 $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 c:\Anaconda3\envs\py2\Scripts\pip install -r tests\requirements.txt
diff --git a/ci/windows/test_py3_cpu.ps1 b/ci/windows/test_py3_cpu.ps1
index 98d4e41..0dd48de 100644
--- a/ci/windows/test_py3_cpu.ps1
+++ b/ci/windows/test_py3_cpu.ps1
@@ -16,6 +16,7 @@
 # under the License.
 
 7z x -y windows_package.7z
+$env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll
 $env:PYTHONPATH=join-path $pwd.Path windows_package\python
 $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 c:\Anaconda3\envs\py3\Scripts\pip install -r tests\requirements.txt
diff --git a/ci/windows/test_py3_gpu.ps1 b/ci/windows/test_py3_gpu.ps1
index b94b4f3..4a0feb1 100644
--- a/ci/windows/test_py3_gpu.ps1
+++ b/ci/windows/test_py3_gpu.ps1
@@ -16,6 +16,7 @@
 # under the License.
 
 7z x -y windows_package.7z
+$env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll
 $env:PYTHONPATH=join-path $pwd.Path windows_package\python
 $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 c:\Anaconda3\envs\py3\Scripts\pip install -r tests\requirements.txt
diff --git a/python/mxnet/gluon/data/dataloader.py 
b/python/mxnet/gluon/data/dataloader.py
index eb1eb41..13ab544 100644
--- a/python/mxnet/gluon/data/dataloader.py
+++ b/python/mxnet/gluon/data/dataloader.py
@@ -16,7 +16,7 @@
 # under the License.
 
 # coding: utf-8
-# pylint: disable=
+# pylint: disable=ungrouped-imports
 """Dataset generator."""
 __all__ = ['DataLoader']
 
@@ -26,6 +26,7 @@ import sys
 import multiprocessing
 import multiprocessing.queues
 from multiprocessing.reduction import ForkingPickler
+import threading
 import numpy as np
 
 try:
@@ -149,6 +150,14 @@ def default_mp_batchify_fn(data):
 ctx=context.Context('cpu_shared', 0))
 
 
+def _as_in_context(data, ctx):
+"""Move data into new context."""
+if isinstance(data, nd.NDArray):
+return data.as_in_context(ctx)
+elif isinstance(data, (list, tuple)):
+return [_as_in_context(d, ctx) for d in data]
+return data
+
 def worker_loop(dataset, key_queue, data_queue, batchify_fn):
 """Worker loop for multiprocessing DataLoader."""
 dataset._fork()
@@ -159,9 +168,21 @@ def worker_loop(dataset, key_queue, data_queue, 
batchify_fn):
 batch = batchify_fn([dataset[i] for i in samples])
 data_queue.put((idx, batch))
 
+def fetcher_loop(data_queue, data_buffer, pin_memory=False):
+"""Fetcher loop for fetching data from the queue and putting it in the reorder dict."""
+while True:
+idx, batch = data_queue.get()
+ 

[GitHub] zhreshold closed pull request #11908: Fix shared memory with gluon dataloader, add option pin_memory

2018-08-10 Thread GitBox
zhreshold closed pull request #11908: Fix shared memory with gluon dataloader, 
add option pin_memory
URL: https://github.com/apache/incubator-mxnet/pull/11908
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/ci/windows/test_py2_cpu.ps1 b/ci/windows/test_py2_cpu.ps1
index 1623d295610..aa38b81e392 100644
--- a/ci/windows/test_py2_cpu.ps1
+++ b/ci/windows/test_py2_cpu.ps1
@@ -16,6 +16,7 @@
 # under the License.
 
 7z x -y windows_package.7z
+$env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll
 $env:PYTHONPATH=join-path $pwd.Path windows_package\python
 $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 c:\Anaconda3\envs\py2\Scripts\pip install -r tests\requirements.txt
diff --git a/ci/windows/test_py2_gpu.ps1 b/ci/windows/test_py2_gpu.ps1
index 13cd5366e0d..5f8de5ac4f9 100644
--- a/ci/windows/test_py2_gpu.ps1
+++ b/ci/windows/test_py2_gpu.ps1
@@ -16,6 +16,7 @@
 # under the License.
 
 7z x -y windows_package.7z
+$env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll
 $env:PYTHONPATH=join-path $pwd.Path windows_package\python
 $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 c:\Anaconda3\envs\py2\Scripts\pip install -r tests\requirements.txt
diff --git a/ci/windows/test_py3_cpu.ps1 b/ci/windows/test_py3_cpu.ps1
index 98d4e410e8f..0dd48de26b3 100644
--- a/ci/windows/test_py3_cpu.ps1
+++ b/ci/windows/test_py3_cpu.ps1
@@ -16,6 +16,7 @@
 # under the License.
 
 7z x -y windows_package.7z
+$env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll
 $env:PYTHONPATH=join-path $pwd.Path windows_package\python
 $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 c:\Anaconda3\envs\py3\Scripts\pip install -r tests\requirements.txt
diff --git a/ci/windows/test_py3_gpu.ps1 b/ci/windows/test_py3_gpu.ps1
index b94b4f389be..4a0feb1ede8 100644
--- a/ci/windows/test_py3_gpu.ps1
+++ b/ci/windows/test_py3_gpu.ps1
@@ -16,6 +16,7 @@
 # under the License.
 
 7z x -y windows_package.7z
+$env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll
 $env:PYTHONPATH=join-path $pwd.Path windows_package\python
 $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 c:\Anaconda3\envs\py3\Scripts\pip install -r tests\requirements.txt
diff --git a/python/mxnet/gluon/data/dataloader.py 
b/python/mxnet/gluon/data/dataloader.py
index eb1eb419cd0..13ab544a03d 100644
--- a/python/mxnet/gluon/data/dataloader.py
+++ b/python/mxnet/gluon/data/dataloader.py
@@ -16,7 +16,7 @@
 # under the License.
 
 # coding: utf-8
-# pylint: disable=
+# pylint: disable=ungrouped-imports
 """Dataset generator."""
 __all__ = ['DataLoader']
 
@@ -26,6 +26,7 @@
 import multiprocessing
 import multiprocessing.queues
 from multiprocessing.reduction import ForkingPickler
+import threading
 import numpy as np
 
 try:
@@ -149,6 +150,14 @@ def default_mp_batchify_fn(data):
 ctx=context.Context('cpu_shared', 0))
 
 
+def _as_in_context(data, ctx):
+"""Move data into new context."""
+if isinstance(data, nd.NDArray):
+return data.as_in_context(ctx)
+elif isinstance(data, (list, tuple)):
+return [_as_in_context(d, ctx) for d in data]
+return data
+
 def worker_loop(dataset, key_queue, data_queue, batchify_fn):
 """Worker loop for multiprocessing DataLoader."""
 dataset._fork()
@@ -159,9 +168,21 @@ def worker_loop(dataset, key_queue, data_queue, 
batchify_fn):
 batch = batchify_fn([dataset[i] for i in samples])
 data_queue.put((idx, batch))
 
+def fetcher_loop(data_queue, data_buffer, pin_memory=False):
+"""Fetcher loop for fetching data from the queue and putting it in the reorder dict."""
+while True:
+idx, batch = data_queue.get()
+if idx is None:
+break
+if pin_memory:
+batch = _as_in_context(batch, context.cpu_pinned())
+else:
+batch = _as_in_context(batch, context.cpu())
+data_buffer[idx] = batch
+
 class _MultiWorkerIter(object):
 """Internal multi-worker iterator for DataLoader."""
-def __init__(self, num_workers, dataset, batchify_fn, batch_sampler):
+def __init__(self, num_workers, dataset, batchify_fn, batch_sampler, 
pin_memory=False):
 assert num_workers > 0, "_MultiWorkerIter is not for {} 
workers".format(num_workers)
 self._num_workers = num_workers
 self._dataset = dataset
@@ -184,6 +205,12 @@ def __init__(self, num_workers, dataset, batchify_fn, 
batch_sampler):
 worker.start()
 workers.append(worker)
 
+self._fetcher = threading.Thread(
+target=fetcher_loop,
+args=(self._data_queue, self._data_buffer, pin_memory))
+self._fetcher.daemon = True
+self._fetcher.start()
+
 # pre-fetch
 for _ in range(2 * 
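
The fetcher-thread pattern from the diff above can be sketched standalone (an illustrative analogue, not MXNet code): a daemon thread drains the data queue into a reorder dict until it sees the None sentinel.

```python
import threading
from queue import Queue

def fetcher_loop(data_queue, data_buffer):
    """Drain batches from the queue into the reorder dict."""
    while True:
        idx, batch = data_queue.get()
        if idx is None:          # sentinel: shut down the fetcher
            break
        data_buffer[idx] = batch

data_queue, data_buffer = Queue(), {}
fetcher = threading.Thread(target=fetcher_loop,
                           args=(data_queue, data_buffer))
fetcher.daemon = True
fetcher.start()
for i in range(3):
    data_queue.put((i, [i] * 2))  # (index, batch) pairs, possibly out of order
data_queue.put((None, None))      # stop the fetcher
fetcher.join()
assert data_buffer == {0: [0, 0], 1: [1, 1], 2: [2, 2]}
```

The reorder dict lets the main iterator yield batches in sampler order even when workers finish out of order.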

[GitHub] sbodenstein commented on issue #11984: Generalized broadcast_like operator

2018-08-10 Thread GitBox
sbodenstein commented on issue #11984: Generalized broadcast_like operator
URL: https://github.com/apache/incubator-mxnet/pull/11984#issuecomment-412126165
 
 
   Great! Can you just add a test for the empty tuple case?




[GitHub] tqchen commented on issue #12123: cython nnvm include path error

2018-08-10 Thread GitBox
tqchen commented on issue #12123: cython nnvm include path error
URL: 
https://github.com/apache/incubator-mxnet/issues/12123#issuecomment-412122776
 
 
   Thanks for the catch, can you send a PR to fix this?




[GitHub] tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209298986
 
 

 ##
 File path: python/mxnet/_ctypes/ndarray.py
 ##
 @@ -31,21 +31,24 @@
 
 class NDArrayBase(object):
 """Base data structure for ndarray"""
-__slots__ = ["handle", "writable"]
+__slots__ = ["handle", "writable", "dlpack"]
 # pylint: disable= no-member
 
-def __init__(self, handle, writable=True):
+def __init__(self, handle, writable=True, dlpack=None):
 
 Review comment:
  You need to be careful to use the shape from the same NDArray in your 
NDArrayDLManager




[GitHub] tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209298810
 
 

 ##
 File path: python/mxnet/_ctypes/ndarray.py
 ##
 @@ -31,21 +31,24 @@
 
 class NDArrayBase(object):
 """Base data structure for ndarray"""
-__slots__ = ["handle", "writable"]
+__slots__ = ["handle", "writable", "dlpack"]
 # pylint: disable= no-member
 
-def __init__(self, handle, writable=True):
+def __init__(self, handle, writable=True, dlpack=None):
 
 Review comment:
  In your case, when `a` gets deleted, `b` still holds an NDArrayDLManager, which is 
allocated by `new`, and that object still holds an NDArray (which holds a shared_ptr), 
so the original resource won't be released
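
A pure-Python analogue of this ownership scheme (illustrative only; the class name is mine, not MXNet's): the DLPack manager object keeps a strong reference to the producing array, just as NDArrayDLManager holds an NDArray whose shared_ptr keeps the buffer alive, so deleting the user's handle does not release the memory.

```python
class DLManager:
    """Analogue of NDArrayDLManager: retains the producing array."""
    def __init__(self, array):
        self.array = array       # strong reference keeps the producer alive

a = bytearray(b"\x01\x02\x03")   # stand-in for the NDArray's buffer
manager = DLManager(a)           # view handed to the consuming framework
del a                            # user's handle goes away
assert bytes(manager.array) == b"\x01\x02\x03"  # buffer still valid
```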




[GitHub] tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209298235
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -737,6 +741,57 @@ MXNET_DLL int MXNDArrayGetShape(NDArrayHandle handle,
  */
 MXNET_DLL int MXNDArrayGetData(NDArrayHandle handle,
void **out_pdata);
+/*!
+* \brief Create a reference view of an NDArray, represented
+*  as a DLManagedTensor, once all pending
+*  writes on the NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForRead(NDArrayHandle handle,
+   DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a reference view of an NDArray, represented
+*  as a DLManagedTensor, once all pending
+*  reads/writes on the NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForWrite(NDArrayHandle handle,
+DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create an NDArray backed by a DLPack tensor.
+*
+* This allows us to create an NDArray using memory
+* allocated by an external deep learning framework
+* that is DLPack compatible.
+*
+* The memory is retained until the NDArray goes out of scope.
+*
+* \param dlpack the pointer of the input DLManagedTensor
+* \param out_handle pointer holder to get pointer of NDArray
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayFromDLPack(DLManagedTensorHandle dlpack,
+  NDArrayHandle *out_handle);
+/*!
+ * \brief Delete a DLPack tensor
+ * \param dlpack the pointer of the input DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArrayCallDLPackDeleter(DLManagedTensorHandle dlpack);
+
+/*!
+ * \brief Delete a DLPack tensor stored in a PyCapsule
+ * \param dlpack_capsule the pointer of a PyCapsule storing a DLManagedTensor
+ */
+MXNET_DLL void MXNDArrayCallDLPackCapsuleDeleter(PyObjectHandle 
dlpack_capsule);
 
 Review comment:
  Specifically, the signature of the destructor needs to use c_void_p instead 
of py_object to avoid repetitive destruction
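
A minimal ctypes sketch of this point (illustrative; the helper names are mine): the PyCapsule destructor callback is typed with c_void_p, so ctypes passes the raw capsule pointer through without touching its refcount, and the destructor fires exactly once when the capsule is garbage-collected.

```python
import ctypes

# Destructor signature: void (*)(PyObject *) -- declared with c_void_p,
# not py_object, so ctypes does not manage the capsule's refcount.
PyCapsule_Destructor = ctypes.CFUNCTYPE(None, ctypes.c_void_p)

ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object
ctypes.pythonapi.PyCapsule_New.argtypes = (
    ctypes.c_void_p, ctypes.c_char_p, PyCapsule_Destructor)

calls = []

@PyCapsule_Destructor
def _deleter(capsule_ptr):
    # Receives the raw capsule pointer as an integer; runs once at dealloc.
    calls.append(capsule_ptr)

payload = ctypes.create_string_buffer(16)   # stand-in for a DLManagedTensor
capsule = ctypes.pythonapi.PyCapsule_New(
    ctypes.cast(payload, ctypes.c_void_p), b"dltensor", _deleter)
del capsule                                 # destructor fires exactly once
assert len(calls) == 1
```

Had the callback been typed py_object, ctypes would create a new reference to the dying capsule inside the destructor, which can re-trigger destruction.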




[GitHub] tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack Transformation API

2018-08-10 Thread GitBox
tqchen commented on a change in pull request #12047: [MXNET-779]Add DLPack 
Transformation API
URL: https://github.com/apache/incubator-mxnet/pull/12047#discussion_r209298039
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -737,6 +741,57 @@ MXNET_DLL int MXNDArrayGetShape(NDArrayHandle handle,
  */
 MXNET_DLL int MXNDArrayGetData(NDArrayHandle handle,
void **out_pdata);
+/*!
+* \brief Create a reference view of an NDArray, represented
+*  as a DLManagedTensor, once all pending
+*  writes on the NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForRead(NDArrayHandle handle,
+   DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create a reference view of an NDArray, represented
+*  as a DLManagedTensor, once all pending
+*  reads/writes on the NDArray are finished.
+* \param handle the handle to the ndarray
+* \param out_dlpack pointer holder to get pointer of DLManagedTensor
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayToDLPackForWrite(NDArrayHandle handle,
+DLManagedTensorHandle *out_dlpack);
+
+/*!
+* \brief Create an NDArray backed by a DLPack tensor.
+*
+* This allows us to create an NDArray using memory
+* allocated by an external deep learning framework
+* that is DLPack compatible.
+*
+* The memory is retained until the NDArray goes out of scope.
+*
+* \param dlpack the pointer of the input DLManagedTensor
+* \param out_handle pointer holder to get pointer of NDArray
+* \return 0 when success, -1 when failure happens
+*/
+MXNET_DLL int MXNDArrayFromDLPack(DLManagedTensorHandle dlpack,
+  NDArrayHandle *out_handle);
+/*!
+ * \brief Delete a DLPack tensor
+ * \param dlpack the pointer of the input DLManagedTensor
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArrayCallDLPackDeleter(DLManagedTensorHandle dlpack);
+
+/*!
+ * \brief Delete a DLPack tensor stored in a PyCapsule
+ * \param dlpack_capsule the pointer of a PyCapsule storing a DLManagedTensor
+ */
+MXNET_DLL void MXNDArrayCallDLPackCapsuleDeleter(PyObjectHandle 
dlpack_capsule);
 
 Review comment:
  There is a special trick that needs to be used to implement the destructor in 
Python. https://github.com/dmlc/tvm/pull/1573 implements the destructor in 
Python and it works



