[GitHub] [incubator-mxnet] gachiemchiep commented on issue #17613: Illegal instruction (core dumped) when running mxnet on Intel NUC

2020-02-16 Thread GitBox
gachiemchiep commented on issue #17613: Illegal instruction (core dumped) when 
running mxnet on Intel NUC
URL: 
https://github.com/apache/incubator-mxnet/issues/17613#issuecomment-586852912
 
 
   Hello @TaoLv  
   Thanks, man. The discussion did help me understand the problem.
   I will try to compile MXNet from source to check whether that solves my problem.




[GitHub] [incubator-mxnet] TaoLv commented on issue #17613: Illegal instruction (core dumped) when running mxnet on Intel NUC

2020-02-16 Thread GitBox
TaoLv commented on issue #17613: Illegal instruction (core dumped) when running 
mxnet on Intel NUC
URL: 
https://github.com/apache/incubator-mxnet/issues/17613#issuecomment-58685
 
 
   The binaries you used need AVX2 instructions. See the discussion in 
https://github.com/apache/incubator-mxnet/issues/11911.
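   For reference, a quick way to confirm whether a CPU exposes AVX2 on Linux 
(a generic Python sketch reading /proc/cpuinfo, the same flags list quoted in 
the issue below):
   ```python
   # Tokenize /proc/cpuinfo and look for the exact "avx2" flag. The Pentium
   # Silver J5005 flags quoted in this issue do not include it, which is
   # consistent with the prebuilt binaries crashing on import.
   with open("/proc/cpuinfo") as f:
       flags = f.read().split()
   print("AVX2 available:", "avx2" in flags)
   ```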




[GitHub] [incubator-mxnet] gachiemchiep opened a new issue #17613: Illegal instruction (core dumped) when running mxnet on Intel NUC

2020-02-16 Thread GitBox
gachiemchiep opened a new issue #17613: Illegal instruction (core dumped) when 
running mxnet on Intel NUC
URL: https://github.com/apache/incubator-mxnet/issues/17613
 
 
   ## Description
   Illegal instruction (core dumped) when running mxnet on Intel NUC
   
   ### Error Message
   Illegal instruction (core dumped)
   
   ## What have you tried to solve it?
   
   1. Tried with mxnet versions 1.5.1 and 1.5.0
   2. Also tried the mxnet package from the Anaconda channel
   
   ## Environment
   
   With mxnet
   
   ```bash
   (base) jil@jil-NUC1:~$ curl --retry 10 -s 
https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | 
python
   --Python Info--
   Version  : 3.6.9
   Compiler : GCC 7.3.0
   Build: ('default', 'Jul 30 2019 19:07:31')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 20.0.2
   Directory: /home/jil/opt/miniconda2/lib/python3.6/site-packages/pip
   --MXNet Info---
   Illegal instruction (core dumped)
   
   ```
   
   Without mxnet
   
   ```
   (base) jil@jil-NUC1:~$ curl --retry 10 -s 
https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | 
python
   --Python Info--
   Version  : 3.6.9
   Compiler : GCC 7.3.0
   Build: ('default', 'Jul 30 2019 19:07:31')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 20.0.2
   Directory: /home/jil/opt/miniconda2/lib/python3.6/site-packages/pip
   --MXNet Info---
   An error occured trying to import mxnet.
   This is very likely due to missing missing or incompatible library files.
   Traceback (most recent call last):
 File "", line 119, in check_mxnet
   AttributeError: module 'mxnet' has no attribute '__version__'
   
   --System Info--
   Platform : Linux-5.3.0-28-generic-x86_64-with-debian-buster-sid
   system   : Linux
   node : jil-NUC1
   release  : 5.3.0-28-generic
   version  : #30~18.04.1-Ubuntu SMP Fri Jan 17 06:14:09 UTC 2020
   --Hardware Info--
   machine  : x86_64
   processor: x86_64
   Architecture:x86_64
   CPU op-mode(s):  32-bit, 64-bit
   Byte Order:  Little Endian
   CPU(s):  4
   On-line CPU(s) list: 0-3
   Thread(s) per core:  1
   Core(s) per socket:  4
   Socket(s):   1
   NUMA node(s):1
   Vendor ID:   GenuineIntel
   CPU family:  6
   Model:   122
   Model name:  Intel(R) Pentium(R) Silver J5005 CPU @ 1.50GHz
   Stepping:1
   CPU MHz: 2665.239
   CPU max MHz: 2800.
   CPU min MHz: 800.
   BogoMIPS:2995.20
   Virtualization:  VT-x
   L1d cache:   24K
   L1i cache:   32K
   L2 cache:4096K
   NUMA node0 CPU(s):   0-3
   Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl 
xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 
monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe 
popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault 
cat_l2 pti cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi 
flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms mpx rdt_a rdseed 
smap clflushopt intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm ida arat 
pln pts umip rdpid md_clear arch_capabilities
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0492 
sec, LOAD: 1.1357 sec.
   Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0009 
sec, LOAD: 0.6649 sec.
   Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.0540 sec, LOAD: 
0.4060 sec.
   Timing for D2L: http://d2l.ai, DNS: 0.8167 sec, LOAD: 0.1636 sec.
   Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0497 sec, LOAD: 0.1626 sec.
   Timing for FashionMNIST: 
https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, 
DNS: 0.1074 sec, LOAD: 0.3472 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.1168 sec, LOAD: 
1.3224 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0408 sec, 
LOAD: 0.3128 sec.
   
   ```
   
   Please help.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-16 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c9e6749  Bump the publish timestamp.
c9e6749 is described below

commit c9e67495324ac1bf098a5208892ed426969f0f6d
Author: mxnet-ci 
AuthorDate: Mon Feb 17 06:43:17 2020 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..faa324d
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Feb 17 06:43:17 UTC 2020



[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17541: [numpy] add op random.rayleigh

2020-02-16 Thread GitBox
haojin2 commented on a change in pull request #17541: [numpy] add op 
random.rayleigh
URL: https://github.com/apache/incubator-mxnet/pull/17541#discussion_r380003075
 
 

 ##
 File path: src/operator/numpy/random/np_rayleigh_op.h
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_rayleigh_op.h
+ * \brief Operator for numpy sampling from rayleigh distribution.
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_RANDOM_NP_RAYLEIGH_OP_H_
+#define MXNET_OPERATOR_NUMPY_RANDOM_NP_RAYLEIGH_OP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "../../elemwise_op_common.h"
+#include "../../mshadow_op.h"
+#include "../../mxnet_op.h"
+#include "../../operator_common.h"
+#include "../../tensor/elemwise_binary_broadcast_op.h"
+#include "./dist_common.h"
+
+namespace mxnet {
+namespace op {
+
+struct NumpyRayleighParam : public dmlc::Parameter<NumpyRayleighParam> {
+  dmlc::optional<float> scale;
+  dmlc::optional<mxnet::Tuple<int>> size;
+  std::string ctx;
+  DMLC_DECLARE_PARAMETER(NumpyRayleighParam) {
+  DMLC_DECLARE_FIELD(scale)
+  .set_default(dmlc::optional<float>(1.0));
+  DMLC_DECLARE_FIELD(size)
+  .set_default(dmlc::optional<mxnet::Tuple<int>>())
+  .describe("Output shape. If the given shape is, "
+  "e.g., (m, n, k), then m * n * k samples are drawn. "
+  "Default is None, in which case a single value is returned.");
+  DMLC_DECLARE_FIELD(ctx).set_default("cpu").describe(
+"Context of output, in format [cpu|gpu|cpu_pinned](n)."
+" Only used for imperative calls.");
+  }
+};
+
+template <typename DType>
+struct scalar_rayleigh_kernel {
+  MSHADOW_XINLINE static void Map(index_t i, float scale, float *threshold,
+  DType *out) {
+threshold[i] = sqrt(-2 * log(threshold[i]));
+out[i] =  scale * threshold[i];
+  }
+};
+
+namespace mxnet_op {
+
+template <typename IType>
+struct check_legal_scale_kernel {
+  MSHADOW_XINLINE static void Map(index_t i, IType *scalar, float* flag) {
+if (scalar[i] < 0.0) {
+  flag[0] = -1.0;
+}
+  }
+};
+
 
 Review comment:
   1 blank line between functions in C++.
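   For intuition, the `scalar_rayleigh_kernel` above is inverse-transform 
sampling: if U is uniform on (0, 1], then scale * sqrt(-2 * ln(U)) follows a 
Rayleigh(scale) distribution. A minimal NumPy sketch of the same math 
(illustrative only, not the PR's code):
   ```python
   import numpy as np

   def rayleigh_inverse_transform(scale=1.0, size=None, rng=None):
       # Mirror the kernel: threshold = sqrt(-2 * log(u)); out = scale * threshold.
       rng = rng or np.random.default_rng()
       u = rng.uniform(np.finfo(float).tiny, 1.0, size)  # avoid log(0)
       return scale * np.sqrt(-2.0 * np.log(u))

   samples = rayleigh_inverse_transform(scale=2.0, size=100000)
   print(samples.mean(), 2.0 * np.sqrt(np.pi / 2))  # Rayleigh mean: scale * sqrt(pi/2)
   ```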




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17084: [numpy] add op median

2020-02-16 Thread GitBox
haojin2 commented on a change in pull request #17084: [numpy] add op median
URL: https://github.com/apache/incubator-mxnet/pull/17084#discussion_r380002724
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -7083,6 +7083,47 @@ def test_np_share_memory():
 assert not op(np.ones((5, 0), dtype=dt), np.ones((0, 3, 0), 
dtype=adt))
 
 
+def test_np_median():
+class TestMedian(HybridBlock):
+def __init__(self, axis=None, keepdims=False):
+super(TestMedian, self).__init__()
+self._axis = axis
+self._keepdims = keepdims
+
+def hybrid_forward(self, F, a):
+return F.np.median(a, axis=self._axis, keepdims=self._keepdims)
+
+flags = [True, False]
+dtypes = ['float16', 'float32', 'float64']
+qtypes = ['float32', 'float64']
+tensor_shapes = [
+((2, 3), None),
+((2, 3, 4, 5), 3),
+((2, 3, 4), (0, 2)),
+((2, 3, 4), 1)
+]
+
+for hybridize, keepdims, (a_shape, axis), dtype in \
+itertools.product(flags, flags, tensor_shapes, dtypes):
+atol = 3e-4 if dtype == 'float16' else 1e-4
+rtol = 3e-2 if dtype == 'float16' else 1e-2
+test_median = TestMedian(axis=axis, keepdims=keepdims)
+if hybridize:
+test_median.hybridize()
+a = np.random.uniform(-1.0, 1.0, size=a_shape)
+np_out = _np.median(a.asnumpy(), axis=axis, keepdims=keepdims)
+mx_out = test_median(a)
+
+assert mx_out.shape == np_out.shape
+assert_almost_equal(mx_out.asnumpy(), np_out, atol=atol, rtol=rtol)
+
+mx_out = np.median(a, axis=axis, keepdims=keepdims)
+np_out = _np.median(a.asnumpy(), axis=axis, keepdims=keepdims)
+
+assert_almost_equal(mx_out.asnumpy(), np_out, atol=atol, rtol=rtol)
+
 
 Review comment:
   2 blank lines between functions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17084: [numpy] add op median

2020-02-16 Thread GitBox
haojin2 commented on a change in pull request #17084: [numpy] add op median
URL: https://github.com/apache/incubator-mxnet/pull/17084#discussion_r380002411
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -6550,6 +6550,57 @@ def percentile(a, q, axis=None, out=None, 
overwrite_input=None, interpolation='l
keepdims=keepdims, q_scalar=None, out=out)
 
 
+@set_module('mxnet.ndarray.numpy')
+def median(a, axis=None, out=None, overwrite_input=None, keepdims=False):
+r"""
+Compute the median along the specified axis.
+Returns the median of the array elements.
+Parameters
+----------
+a : array_like
+Input array or object that can be converted to an array.
+axis : {int, sequence of int, None}, optional
+Axis or axes along which the medians are computed. The default
+is to compute the median along a flattened version of the array.
+A sequence of axes is supported since version 1.9.0.
+out : ndarray, optional
+Alternative output array in which to place the result. It must
+have the same shape and buffer length as the expected output,
+but the type (of the output) will be cast if necessary.
+keepdims : bool, optional
+If this is set to True, the axes which are reduced are left
+in the result as dimensions with size one. With this option,
+the result will broadcast correctly against the original `arr`.
+Returns
+-------
+median : ndarray
+A new array holding the result. If the input contains integers
+or floats smaller than ``float32``, then the output data-type is
+``np.float32``.  Otherwise, the data-type of the output is the
+same as that of the input. If `out` is specified, that array is
+returned instead.
+See Also
+--------
+mean, percentile
+Examples
+--------
+>>> a = np.array([[10, 7, 4], [3, 2, 1]])
+>>> a
+array([[10,  7,  4],
+       [ 3,  2,  1]])
+>>> np.median(a)
+3.5
+>>> np.median(a, axis=0)
+array([6.5, 4.5, 2.5])
+>>> np.median(a, axis=1)
+array([7.,  2.])
+"""
+from mxnet import np, npx
+npx.set_np()
+return quantile(a=a, q=np.array(0.5), axis=axis, out=out, 
overwrite_input=overwrite_input,
 
 Review comment:
   `...quantile(a=a, q=0.5, ...)`
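   To illustrate the suggestion with official NumPy semantics (a sanity check 
of the median-as-0.5-quantile identity, not the PR's code):
   ```python
   import numpy as np

   a = np.array([[10, 7, 4], [3, 2, 1]])
   # The median is by definition the 0.5 quantile, so a plain scalar q suffices.
   assert np.quantile(a, 0.5) == np.median(a) == 3.5
   assert np.allclose(np.quantile(a, 0.5, axis=0), np.median(a, axis=0))
   ```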




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17084: [numpy] add op median

2020-02-16 Thread GitBox
haojin2 commented on a change in pull request #17084: [numpy] add op median
URL: https://github.com/apache/incubator-mxnet/pull/17084#discussion_r380002267
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -6550,6 +6550,57 @@ def percentile(a, q, axis=None, out=None, 
overwrite_input=None, interpolation='l
keepdims=keepdims, q_scalar=None, out=out)
 
 
+@set_module('mxnet.ndarray.numpy')
+def median(a, axis=None, out=None, overwrite_input=None, keepdims=False):
+r"""
+Compute the median along the specified axis.
+Returns the median of the array elements.
+Parameters
+----------
+a : array_like
+Input array or object that can be converted to an array.
+axis : {int, sequence of int, None}, optional
+Axis or axes along which the medians are computed. The default
+is to compute the median along a flattened version of the array.
+A sequence of axes is supported since version 1.9.0.
+out : ndarray, optional
+Alternative output array in which to place the result. It must
+have the same shape and buffer length as the expected output,
+but the type (of the output) will be cast if necessary.
+keepdims : bool, optional
+If this is set to True, the axes which are reduced are left
+in the result as dimensions with size one. With this option,
+the result will broadcast correctly against the original `arr`.
+Returns
+-------
+median : ndarray
+A new array holding the result. If the input contains integers
+or floats smaller than ``float32``, then the output data-type is
+``np.float32``.  Otherwise, the data-type of the output is the
+same as that of the input. If `out` is specified, that array is
+returned instead.
+See Also
+--------
+mean, percentile
+Examples
+--------
+>>> a = np.array([[10, 7, 4], [3, 2, 1]])
+>>> a
+array([[10,  7,  4],
+       [ 3,  2,  1]])
+>>> np.median(a)
+3.5
+>>> np.median(a, axis=0)
+array([6.5, 4.5, 2.5])
+>>> np.median(a, axis=1)
+array([7.,  2.])
+"""
+from mxnet import np, npx
 
 Review comment:
   no need for the import here.




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #17609: [numpy] add fallback ops

2020-02-16 Thread GitBox
reminisce commented on a change in pull request #17609: [numpy] add fallback ops
URL: https://github.com/apache/incubator-mxnet/pull/17609#discussion_r38883
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -244,14 +274,10 @@ def __array_function__(self, func, types, args, kwargs): 
 # pylint: disable=bad-
 mx_np_func = _NUMPY_ARRAY_FUNCTION_DICT.get(func, None)
 if mx_np_func is None:
 # try to fallback to official NumPy op
-new_args = []
-cur_ctx = None
-for arg in args:
-if isinstance(arg, ndarray):
-cur_ctx = arg.ctx
-new_args.append(arg.asnumpy())
-else:
-new_args.append(arg)
+new_args, cur_ctx = _as_onp_array(args)
 
 Review comment:
   Please also remember to add a similar check at line 258 for the fallback 
through the ufunc protocol.
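   For context, the helper referenced here replaces the inlined loop shown in 
the diff; a sketch reconstructed from those removed lines (assuming `ndarray` 
is MXNet's numpy ndarray type in scope):
   ```python
   def _as_onp_array(args):
       # Convert any mxnet ndarray arguments to official NumPy arrays,
       # remembering the last seen context so the result can be copied
       # back to the right device after the fallback op runs.
       new_args = []
       cur_ctx = None
       for arg in args:
           if isinstance(arg, ndarray):
               cur_ctx = arg.ctx
               new_args.append(arg.asnumpy())
           else:
               new_args.append(arg)
       return new_args, cur_ctx
   ```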




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #17609: [numpy] add fallback ops

2020-02-16 Thread GitBox
reminisce commented on a change in pull request #17609: [numpy] add fallback ops
URL: https://github.com/apache/incubator-mxnet/pull/17609#discussion_r38606
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -244,14 +274,10 @@ def __array_function__(self, func, types, args, kwargs): 
 # pylint: disable=bad-
 mx_np_func = _NUMPY_ARRAY_FUNCTION_DICT.get(func, None)
 if mx_np_func is None:
 # try to fallback to official NumPy op
-new_args = []
-cur_ctx = None
-for arg in args:
-if isinstance(arg, ndarray):
-cur_ctx = arg.ctx
-new_args.append(arg.asnumpy())
-else:
-new_args.append(arg)
+new_args, cur_ctx = _as_onp_array(args)
 
 Review comment:
   Please add a check above this line to assert that autograd is not recording.
   ```python
   if autograd.is_recording():
       raise ValueError("Falling back to NumPy operator {} with autograd active "
                        "is not supported. Please consider moving the operator "
                        "to the outside of the autograd scope.".format(op_name))
   ```




[GitHub] [incubator-mxnet] QimingZheng commented on issue #17591: Support for Partitioned Variables (used in large sparse models)

2020-02-16 Thread GitBox
QimingZheng commented on issue #17591: Support for Partitioned Variables (used 
in large sparse models)
URL: 
https://github.com/apache/incubator-mxnet/issues/17591#issuecomment-586834343
 
 
   Hi @eric-haibin-lin, for both research purposes and production requirements.
   
   My target model size will be hundreds of gigabytes (it cannot be handled by 
one server).




[GitHub] [incubator-mxnet] reminisce commented on issue #17609: [numpy] add fallback ops

2020-02-16 Thread GitBox
reminisce commented on issue #17609: [numpy] add fallback ops
URL: https://github.com/apache/incubator-mxnet/pull/17609#issuecomment-586830481
 
 
   @eric-haibin-lin Description part updated.




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #17591: Support for Partitioned Variables (used in large sparse models)

2020-02-16 Thread GitBox
eric-haibin-lin commented on issue #17591: Support for Partitioned Variables 
(used in large sparse models)
URL: 
https://github.com/apache/incubator-mxnet/issues/17591#issuecomment-586829214
 
 
   @QimingZheng are you looking into this for research purposes, or for 
deploying models in real-world use cases? Currently TF has the best support for 
this kind of model.
   
   Is your target model size larger than what a single machine can hold?




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17510: MXNet FFI for Operator Imperative Invocation

2020-02-16 Thread GitBox
hzfan commented on a change in pull request #17510: MXNet FFI for Operator 
Imperative Invocation
URL: https://github.com/apache/incubator-mxnet/pull/17510#discussion_r379994604
 
 

 ##
 File path: include/mxnet/runtime/packed_func.h
 ##
 @@ -0,0 +1,1254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file runtime/packed_func.h
+ * \brief Adapted from incubator-tvm/include/tvm/runtime/packed_func.h
+ * Type-erased function used across MXNET API.
+ */
+#ifndef MXNET_RUNTIME_PACKED_FUNC_H_
+#define MXNET_RUNTIME_PACKED_FUNC_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace mxnet {
+// forward declarations
+// class Integer;
+// class Expr;
+
+namespace runtime {
+
+/*!
+ * \brief Runtime utility for getting custom type name from code
+ * \param type_code Custom type code
+ * \return Custom type name
+ */
+// MXNET_DLL std::string GetCustomTypeName(uint8_t type_code);
+
+/*!
+ * \brief Runtime utility for checking whether custom type is registered
+ * \param type_code Custom type code
+ * \return Bool representing whether type is registered
+ */
+// MXNET_DLL bool GetCustomTypeRegistered(uint8_t type_code);
+
+/*!
+ * \brief Runtime utility for parsing string of the form "custom[]"
+ * \param s String to parse
+ * \param scan pointer to parsing pointer, which is scanning across s
+ * \return type code of custom type parsed
+ */
+// MXNET_DLL uint8_t ParseCustomDatatype(const std::string& s, const char** 
scan);
+
+/*!
+ * \brief convert a string to TVM type.
+ * \param s The string to be converted.
+ * \return The corresponding tvm type.
+ */
+inline DLDataType String2DLDataType(std::string s);
+
+// forward declarations
+class MXNetArgs;
+class MXNetArgValue;
+class MXNetRetValue;
+class MXNetArgsSetter;
+
+/*!
+ * \brief Packed function is a type-erased function.
+ *  The arguments are passed by packed format.
+ *
+ *  This is a useful unified interface to call generated functions.
+ *  It is the unified function type of TVM.
+ *  It corresponds to TVMFunctionHandle in C runtime API.
+ */
+class PackedFunc {
+ public:
+  /*!
+   * \brief The internal std::function
+   * \param args The arguments to the function.
+   * \param rv The return value.
+   *
+   * \code
+   *   // Example code on how to implemented FType
+   *   void MyPackedFunc(MXNetArgs args, MXNetRetValue* rv) {
+   * // automatically convert arguments to desired type.
+   * int a0 = args[0];
+   * float a1 = args[1];
+   * ...
+   * // automatically assign values to rv
+   * std::string my_return_value = "x";
+   * *rv = my_return_value;
+   *   }
+   * \endcode
+   */
+  using FType = std::function<void (MXNetArgs args, MXNetRetValue* rv)>;
 
 Review comment:
   Thanks @wkcn :). Basically `PackedFunc` does not cross the DLL boundary; 
it remains on the MXNet side.
   
   When application A calls PackedFunc, it first passes the function name to 
MXNet. Then MXNet finds the corresponding PackedFunc, and returns the address 
of the PackedFunc back to A (A may cache the name-address map). Finally, A 
invokes that PackedFunc with the address.
   
   To conclude, A is not aware of std::function. It only gets an address, 
which is an ABI-compatible void*.
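   To make that name-to-handle flow concrete, a toy Python registry (purely 
illustrative; these names are invented for the sketch, and the real FFI 
resolves addresses through the C API):
   ```python
   _registry = {}      # stands in for MXNet's global PackedFunc table
   _handle_cache = {}  # application-side cache of resolved handles

   def get_global_func(name):
       # Resolve by name once and cache the handle, as application A does.
       if name not in _handle_cache:
           _handle_cache[name] = _registry[name]
       return _handle_cache[name]

   _registry["runtime.Echo"] = lambda *args: args  # register a packed-style function
   fn = get_global_func("runtime.Echo")            # name -> handle
   print(fn(1, 2.0, "x"))                          # invoke through the handle
   ```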




[GitHub] [incubator-mxnet] hzfan commented on issue #17510: MXNet FFI for Operator Imperative Invocation

2020-02-16 Thread GitBox
hzfan commented on issue #17510: MXNet FFI for Operator Imperative Invocation
URL: https://github.com/apache/incubator-mxnet/pull/17510#issuecomment-586828404
 
 
   @eric-haibin-lin Yes.




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17510: MXNet FFI for Operator Imperative Invocation

2020-02-16 Thread GitBox
hzfan commented on a change in pull request #17510: MXNet FFI for Operator 
Imperative Invocation
URL: https://github.com/apache/incubator-mxnet/pull/17510#discussion_r379994604
 
 

 ##
 File path: include/mxnet/runtime/packed_func.h
 ##
 @@ -0,0 +1,1254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file runtime/packed_func.h
+ * \brief Adapted from incubator-tvm/include/tvm/runtime/packed_func.h
+ * Type-erased function used across MXNET API.
+ */
+#ifndef MXNET_RUNTIME_PACKED_FUNC_H_
+#define MXNET_RUNTIME_PACKED_FUNC_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace mxnet {
+// forward declarations
+// class Integer;
+// class Expr;
+
+namespace runtime {
+
+/*!
+ * \brief Runtime utility for getting custom type name from code
+ * \param type_code Custom type code
+ * \return Custom type name
+ */
+// MXNET_DLL std::string GetCustomTypeName(uint8_t type_code);
+
+/*!
+ * \brief Runtime utility for checking whether custom type is registered
+ * \param type_code Custom type code
+ * \return Bool representing whether type is registered
+ */
+// MXNET_DLL bool GetCustomTypeRegistered(uint8_t type_code);
+
+/*!
+ * \brief Runtime utility for parsing string of the form "custom[]"
+ * \param s String to parse
+ * \param scan pointer to parsing pointer, which is scanning across s
+ * \return type code of custom type parsed
+ */
+// MXNET_DLL uint8_t ParseCustomDatatype(const std::string& s, const char** 
scan);
+
+/*!
+ * \brief convert a string to TVM type.
+ * \param s The string to be converted.
+ * \return The corresponding tvm type.
+ */
+inline DLDataType String2DLDataType(std::string s);
+
+// forward declarations
+class MXNetArgs;
+class MXNetArgValue;
+class MXNetRetValue;
+class MXNetArgsSetter;
+
+/*!
+ * \brief Packed function is a type-erased function.
+ *  The arguments are passed by packed format.
+ *
+ *  This is a useful unified interface to call generated functions.
+ *  It is the unified function type of TVM.
+ *  It corresponds to TVMFunctionHandle in C runtime API.
+ */
+class PackedFunc {
+ public:
+  /*!
+   * \brief The internal std::function
+   * \param args The arguments to the function.
+   * \param rv The return value.
+   *
+   * \code
+   *   // Example code on how to implemented FType
+   *   void MyPackedFunc(MXNetArgs args, MXNetRetValue* rv) {
+   * // automatically convert arguments to desired type.
+   * int a0 = args[0];
+   * float a1 = args[1];
+   * ...
+   * // automatically assign values to rv
+   * std::string my_return_value = "x";
+   * *rv = my_return_value;
+   *   }
+   * \endcode
+   */
+  using FType = std::function<void (MXNetArgs args, MXNetRetValue* rv)>;
 
 Review comment:
   Thanks @wkcn :). Basically `PackedFunc` does not cross the DLL boundary; 
it remains on the MXNet side.
   
   When application A calls PackedFunc, it first passes the function name to 
MXNet. Then MXNet finds the corresponding PackedFunc, and returns the address 
of the PackedFunc back to A (A may cache the name-address map). Finally, A 
invokes that PackedFunc with the address.
   
   To conclude, A is not aware of std::function. It only gets an address or 
some other unique identifier of the PackedFunc, and the address is just a 
void*, which is ABI-compatible.




[GitHub] [incubator-mxnet] eric-haibin-lin opened a new issue #17612: Gluon block thread safety bug

2020-02-16 Thread GitBox
eric-haibin-lin opened a new issue #17612: Gluon block thread safety bug
URL: https://github.com/apache/incubator-mxnet/issues/17612
 
 
   ```
   import mxnet as mx
   import threading
   import queue
   import gluonnlp as nlp
   
   q = queue.Queue()
   count = 2
   net = mx.gluon.nn.Dense((10))
   net.initialize(mx.init.Uniform(), ctx=[mx.gpu(0), mx.gpu(1)])
   mx.nd.waitall()
   
   def fn(net, ctx):
   with mx.autograd.record():
   x = mx.nd.random.uniform(low=0, high=1000, shape=(10, 128), ctx=ctx)
   q.put(net(x))
   mx.nd.waitall()
   return
   
   threads = []
   for i in range(count):
   t = threading.Thread(target=fn, args=(net, mx.gpu()))
   threads.append(t)
   for i in range(count):
   threads[i].start()
   for i in range(count):
   threads[i].join()
   
   ys = []
   for i in range(count):
   ys.append(q.get())
   print(ys)
   ```
   
   Error: 
   ```
   Exception in thread Thread-2:
   Traceback (most recent call last):
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 1154, in 
forward
   params = {k: v.data(ctx) for k, v in self._reg_params.items()}
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 1154, in 

   params = {k: v.data(ctx) for k, v in self._reg_params.items()}
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/parameter.py", line 565, 
in data
   return self._check_and_get(self._data, ctx)
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/parameter.py", line 236, 
in _check_and_get
   "num_features, etc., for network layers."%(self.name))
   mxnet.gluon.parameter.DeferredInitializationError: Parameter 'dense0_weight' 
has not been initialized yet because initialization was deferred. Actual 
initialization happens during the first forward pass. Please pass one batch of 
data through the network before accessing Parameters. You can also avoid 
deferred initialization by specifying in_units, num_features, etc., for network 
layers.
   
   During handling of the above exception, another exception occurred:
   
   Traceback (most recent call last):
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 976, in 
_deferred_infer_shape
   self.infer_shape(*args)
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 1074, in 
infer_shape
   self._infer_attrs('infer_shape', 'shape', *args)
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 1058, in 
_infer_attrs
   inputs, out = self._get_graph(*args)
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 928, in 
_get_graph
   out = self.hybrid_forward(symbol, *grouped_inputs, **params)  # pylint: 
disable=no-value-for-parameter
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 94, in 
__exit__
   self._name_scope.__exit__(ptype, value, trace)
   AttributeError: 'NoneType' object has no attribute '__exit__'
   
   During handling of the above exception, another exception occurred:
   
   Traceback (most recent call last):
 File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
   self.run()
 File "/usr/lib/python3.5/threading.py", line 862, in run
   self._target(*self._args, **self._kwargs)
 File "test.py", line 29, in fn
   q.put(net(x))
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 696, in 
__call__
   out = self.forward(*args)
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 1156, in 
forward
   self._deferred_infer_shape(x, *args)
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/block.py", line 980, in 
_deferred_infer_shape
   raise ValueError(error_msg)
   ValueError: Deferred initialization failed because shape cannot be inferred. 
'NoneType' object has no attribute '__exit__'
   ```
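   A possible way to sidestep the deferred-initialization race in this repro, 
based on the error message's own suggestion (an assumption, not a fix for the 
underlying thread-safety bug):
   ```python
   import mxnet as mx

   # Specifying in_units avoids deferred initialization entirely, and a
   # warm-up forward pass materializes the parameters before threads start.
   net = mx.gluon.nn.Dense(10, in_units=128)
   net.initialize(mx.init.Uniform(), ctx=[mx.gpu(0), mx.gpu(1)])
   net(mx.nd.zeros((10, 128), ctx=mx.gpu(0)))  # warm-up
   mx.nd.waitall()
   ```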




[GitHub] [incubator-mxnet] Tommliu opened a new pull request #17611: [Numpy] Modify Quantile/Percentile Testcases

2020-02-16 Thread GitBox
Tommliu opened a new pull request #17611: [Numpy] Modify Quantile/Percentile 
Testcases
URL: https://github.com/apache/incubator-mxnet/pull/17611
 
 
   ## Description ##
   Add a scalar test for HybridBlock.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[incubator-mxnet] branch master updated (1133bfa -> c4c639d)

2020-02-16 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 1133bfa  doc fix for argmax & argmin (#17604)
 add c4c639d  support np.dsplit, fix some error msgs and corner cases for 
hsplit and vsplit, add interoperability tests for h/v/dsplit (#17478)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  | 86 +
 python/mxnet/numpy/multiarray.py   | 89 --
 python/mxnet/numpy_dispatch_protocol.py|  3 +
 python/mxnet/symbol/numpy/_symbol.py   | 46 ++-
 src/operator/numpy/np_matrix_op.cc | 32 
 src/operator/numpy/np_matrix_op.cu |  3 +
 .../python/unittest/test_numpy_interoperability.py | 26 +++
 tests/python/unittest/test_numpy_op.py | 49 
 8 files changed, 312 insertions(+), 22 deletions(-)




[incubator-mxnet] branch master updated (006c4f9 -> 1133bfa)

2020-02-16 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 006c4f9  [CI] Follow redirects when downloading 
apache-maven-3.3.9-bin.tar.gz (#17608)
 add 1133bfa  doc fix for argmax & argmin (#17604)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py| 2 --
 python/mxnet/numpy/multiarray.py | 2 --
 python/mxnet/symbol/numpy/_symbol.py | 2 --
 3 files changed, 6 deletions(-)




[GitHub] [incubator-mxnet] haojin2 merged pull request #17604: Doc fix for np.argmax and np.argmin

2020-02-16 Thread GitBox
haojin2 merged pull request #17604: Doc fix for np.argmax and  np.argmin
URL: https://github.com/apache/incubator-mxnet/pull/17604
 
 
   




[GitHub] [incubator-mxnet] haojin2 merged pull request #17478: Add np.dsplit op

2020-02-16 Thread GitBox
haojin2 merged pull request #17478: Add np.dsplit op
URL: https://github.com/apache/incubator-mxnet/pull/17478
 
 
   




[GitHub] [incubator-mxnet] sergeykolychev edited a comment on issue #17610: MXNET-1447 [Perl] Runtime features and large tensor support.

2020-02-16 Thread GitBox
sergeykolychev edited a comment on issue #17610: MXNET-1447 [Perl] Runtime 
features and large tensor support.
URL: https://github.com/apache/incubator-mxnet/pull/17610#issuecomment-586810603
 
 
   @tlby 
   Robert, could you please review? This is the first of several PRs I plan to 
open within the next few months.
   It adds support for large tensors (int64) and for displaying runtime 
features (compile flags).




[GitHub] [incubator-mxnet] sergeykolychev commented on issue #17610: MXNET-1447 [Perl] Runtime features and large tensor support.

2020-02-16 Thread GitBox
sergeykolychev commented on issue #17610: MXNET-1447 [Perl] Runtime features 
and large tensor support.
URL: https://github.com/apache/incubator-mxnet/pull/17610#issuecomment-586810603
 
 
   @tlby 
   Robert, could you please review. This is first from several PRs I plan to do 
with next few months.
   It adds support for large tensors (int64) and for displaying runtime 
features (compile flags).




[GitHub] [incubator-mxnet] sergeykolychev opened a new pull request #17610: MXNET-1447 [Perl] Runtime features and large tensor support.

2020-02-16 Thread GitBox
sergeykolychev opened a new pull request #17610: MXNET-1447 [Perl] Runtime 
features and large tensor support.
URL: https://github.com/apache/incubator-mxnet/pull/17610
 
 
   ## Description ##
   
   [Perl] MXNET-1447 Runtime features and large tensor support.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   1) Added runtime features
   2) Added support for large tensors
   
   ## Comments ##
   
   




[GitHub] [incubator-mxnet] eric-haibin-lin edited a comment on issue #17495: Initial inspections for singleton thread safety in MXNet

2020-02-16 Thread GitBox
eric-haibin-lin edited a comment on issue #17495: Initial inspections for 
singleton thread safety in MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/17495#issuecomment-585461965
 
 
   There are a couple of thread-local variables at the Python level, too:
   - [AttrScope](python/mxnet/symbol/contrib.py) for symbols. Used in control 
flow ops. 
   - python/mxnet/context.py:_default_ctx = threading.local()
   - python/mxnet/gluon/block.py:_current = threading.local()
   - python/mxnet/name.py:_current = threading.local()
   - [_NumpyArrayScope](python/mxnet/util.py) used to declare numpy array 
creation
   
   
   This suggests that if the frontend Python thread is switched, some of these 
contexts are lost.
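   A minimal demonstration of the failure mode (generic Python, not 
MXNet-specific):
   ```python
   import threading

   _current = threading.local()  # like python/mxnet/context.py:_default_ctx
   _current.ctx = "gpu(0)"       # set on the main thread

   def worker():
       # A new thread gets fresh thread-local storage, so the context set
       # on the main thread is not visible here.
       print(getattr(_current, "ctx", None))  # prints: None

   t = threading.Thread(target=worker)
   t.start(); t.join()
   ```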




[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-16 Thread GitBox
ciyongch commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379962028
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -63,24 +63,24 @@ struct TestArrayShapes {
 };
 
 // Init arrays with the default layout.
-inline static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
+inline static void InitDefaultArray(NDArray *arr, bool is_rand = false, int 
max = 50) {
   const TBlob &blob = arr->data();
   mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
   int size = blob.Size();
 
   for (int i = 0; i < size; i++)
 if (is_rand) {
-  data[i] = (std::rand() % 100) - 50;
+  data[i] = (std::rand() % (max * 2)) - max;
 
 Review comment:
   If we use `memcmp` to check the results, then the only choice is integers. 
Would it be reasonable to check the results with `AssertEqual` under a small 
enough threshold like 1e-6, so that we can keep floating-point numbers with a 
better distribution?
   Or we could just increase `max` to fill in more distinct values other than 
only -1 and 0.
   What do you think?
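   For illustration, a small Python example of why a bit-exact check (the 
`memcmp` approach) and a tolerance-based check (the `AssertEqual` approach) 
disagree on floating-point results:
   ```python
   import numpy as np

   a = np.float64(0.1) + np.float64(0.2)  # 0.30000000000000004
   b = np.float64(0.3)
   print(a.tobytes() == b.tobytes())      # False: raw bytes differ (memcmp-style)
   print(np.isclose(a, b, atol=1e-6))     # True: passes a 1e-6 threshold
   ```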





[GitHub] [incubator-mxnet] TaoLv commented on issue #16899: Enable MKL-DNN in pip packages

2020-02-16 Thread GitBox
TaoLv commented on issue #16899: Enable MKL-DNN in pip packages
URL: https://github.com/apache/incubator-mxnet/pull/16899#issuecomment-586789208
 
 
   @szha What can I for this situation? It seems we need change the value of 
`MXNET_VARIANTS` in the build environment.




[GitHub] [incubator-mxnet] TaoLv edited a comment on issue #16899: Enable MKL-DNN in pip packages

2020-02-16 Thread GitBox
TaoLv edited a comment on issue #16899: Enable MKL-DNN in pip packages
URL: https://github.com/apache/incubator-mxnet/pull/16899#issuecomment-586789208
 
 
   @szha What can I do for this situation? It seems we need to change the value 
of `MXNET_VARIANTS` in the build environment.




[GitHub] [incubator-mxnet] QimingZheng commented on issue #17591: Support for Partitioned Variables (used in large sparse models)

2020-02-16 Thread GitBox
QimingZheng commented on issue #17591: Support for Partitioned Variables (used 
in large sparse models)
URL: 
https://github.com/apache/incubator-mxnet/issues/17591#issuecomment-586788161
 
 
   @eric-haibin-lin Could you help to take a look?




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-16 Thread GitBox
TaoLv commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379959402
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -63,24 +63,24 @@ struct TestArrayShapes {
 };
 
 // Init arrays with the default layout.
-inline static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
+inline static void InitDefaultArray(NDArray *arr, bool is_rand = false, int 
max = 50) {
   const TBlob &blob = arr->data();
   mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
   int size = blob.Size();
 
   for (int i = 0; i < size; i++)
 if (is_rand) {
-  data[i] = (std::rand() % 100) - 50;
+  data[i] = (std::rand() % (max * 2)) - max;
 
 Review comment:
   @ciyongch Failed to do so. There are cases doing bit-exact correctness 
checks. It seems we cannot pass those tests with these floating-point numbers.
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-17318/17/pipeline/
   
https://github.com/apache/incubator-mxnet/blob/master/tests/cpp/include/test_mkldnn.h#L605




[GitHub] [incubator-mxnet] QimingZheng commented on issue #17591: Support for Partitioned Variables (used in large sparse models)

2020-02-16 Thread GitBox
QimingZheng commented on issue #17591: Support for Partitioned Variables (used 
in large sparse models)
URL: 
https://github.com/apache/incubator-mxnet/issues/17591#issuecomment-586787492
 
 
   @mxnet-label-bot add [Distributed]




[GitHub] [incubator-mxnet] QimingZheng commented on issue #17591: Support for Partitioned Variables (used in large sparse models)

2020-02-16 Thread GitBox
QimingZheng commented on issue #17591: Support for Partitioned Variables (used 
in large sparse models)
URL: 
https://github.com/apache/incubator-mxnet/issues/17591#issuecomment-586787093
 
 
   @mxnet-label-bot add [Sparse]




[incubator-mxnet] branch master updated (bc1fc55 -> 006c4f9)

2020-02-16 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from bc1fc55  Fix Non-ASCII character in docstring (#17600)
 add 006c4f9  [CI] Follow redirects when downloading 
apache-maven-3.3.9-bin.tar.gz (#17608)

No new revisions were added by this update.

Summary of changes:
 ci/docker/install/centos7_scala.sh  | 4 ++--
 ci/docker/install/ubuntu_publish.sh | 4 ++--
 ci/docker/install/ubuntu_scala.sh   | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)



[GitHub] [incubator-mxnet] leezu commented on issue #17017: USE_BLAS=apple broken on OSX 10.15

2020-02-16 Thread GitBox
leezu commented on issue #17017: USE_BLAS=apple broken on OSX 10.15
URL: 
https://github.com/apache/incubator-mxnet/issues/17017#issuecomment-586785907
 
 
   This has been fixed for the CMake build. Please follow 
https://mxnet.apache.org/get_started/osx_setup to compile with CMake.
   
   Closing this issue as the Makefile build is deprecated and will be removed.





[GitHub] [incubator-mxnet] leezu edited a comment on issue #17017: USE_BLAS=apple broken on OSX 10.15

2020-02-16 Thread GitBox
leezu edited a comment on issue #17017: USE_BLAS=apple broken on OSX 10.15
URL: 
https://github.com/apache/incubator-mxnet/issues/17017#issuecomment-586785907
 
 
   This has been fixed for the CMake build. Please follow 
https://mxnet.apache.org/get_started/osx_setup to compile with CMake.
   
   Closing this issue as the Makefile build is deprecated and will be removed. 
Please reopen if you nevertheless run into this issue with CMake, or open new 
issues for other problems with the CMake OS X build.




[GitHub] [incubator-mxnet] leezu closed issue #17017: USE_BLAS=apple broken on OSX 10.15

2020-02-16 Thread GitBox
leezu closed issue #17017: USE_BLAS=apple broken on OSX 10.15
URL: https://github.com/apache/incubator-mxnet/issues/17017
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu merged pull request #17608: Follow redirects when downloading apache-maven-3.3.9-bin.tar.gz

2020-02-16 Thread GitBox
leezu merged pull request #17608: Follow redirects when downloading 
apache-maven-3.3.9-bin.tar.gz
URL: https://github.com/apache/incubator-mxnet/pull/17608
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on issue #17585: [WIP] Dynamic subgraph property doc

2020-02-16 Thread GitBox
samskalicky commented on issue #17585: [WIP] Dynamic subgraph property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#issuecomment-586783993
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-16 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 99452e0  Bump the publish timestamp.
99452e0 is described below

commit 99452e039b71d1446196358c294e479f2ae58633
Author: mxnet-ci 
AuthorDate: Mon Feb 17 00:42:40 2020 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..5734ec0
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Feb 17 00:42:40 UTC 2020



[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #17510: MXNet FFI for Operator Imperative Invocation

2020-02-16 Thread GitBox
wkcn commented on a change in pull request #17510: MXNet FFI for Operator 
Imperative Invocation
URL: https://github.com/apache/incubator-mxnet/pull/17510#discussion_r379946805
 
 

 ##
 File path: include/mxnet/runtime/packed_func.h
 ##
 @@ -0,0 +1,1254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file runtime/packed_func.h
+ * \brief Adapted from incubator-tvm/include/tvm/runtime/packed_func.h
+ * Type-erased function used across MXNET API.
+ */
+#ifndef MXNET_RUNTIME_PACKED_FUNC_H_
+#define MXNET_RUNTIME_PACKED_FUNC_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace mxnet {
+// forward declarations
+// class Integer;
+// class Expr;
+
+namespace runtime {
+
+/*!
+ * \brief Runtime utility for getting custom type name from code
+ * \param type_code Custom type code
+ * \return Custom type name
+ */
+// MXNET_DLL std::string GetCustomTypeName(uint8_t type_code);
+
+/*!
+ * \brief Runtime utility for checking whether custom type is registered
+ * \param type_code Custom type code
+ * \return Bool representing whether type is registered
+ */
+// MXNET_DLL bool GetCustomTypeRegistered(uint8_t type_code);
+
+/*!
+ * \brief Runtime utility for parsing string of the form "custom[]"
+ * \param s String to parse
+ * \param scan pointer to parsing pointer, which is scanning across s
+ * \return type code of custom type parsed
+ */
+// MXNET_DLL uint8_t ParseCustomDatatype(const std::string& s, const char** 
scan);
+
+/*!
+ * \brief convert a string to TVM type.
+ * \param s The string to be converted.
+ * \return The corresponding tvm type.
+ */
+inline DLDataType String2DLDataType(std::string s);
+
+// forward declarations
+class MXNetArgs;
+class MXNetArgValue;
+class MXNetRetValue;
+class MXNetArgsSetter;
+
+/*!
+ * \brief Packed function is a type-erased function.
+ *  The arguments are passed by packed format.
+ *
+ *  This is a useful unified interface to call generated functions,
+ *  It is the unified function type of TVM.
+ *  It corresponds to TVMFunctionHandle in C runtime API.
+ */
+class PackedFunc {
+ public:
+  /*!
+   * \brief The internal std::function
+   * \param args The arguments to the function.
+   * \param rv The return value.
+   *
+   * \code
+   *   // Example code on how to implement FType
+   *   void MyPackedFunc(MXNetArgs args, MXNetRetValue* rv) {
+   * // automatically convert arguments to desired type.
+   * int a0 = args[0];
+   * float a1 = args[1];
+   * ...
+   * // automatically assign values to rv
+   * std::string my_return_value = "x";
+   * *rv = my_return_value;
+   *   }
+   * \endcode
+   */
+  using FType = std::function<void (MXNetArgs args, MXNetRetValue* rv)>;
 
 Review comment:
   I have a question about the ABI compatibility of std::function.
   
   std::function is an STL container whose implementation differs between 
compiler versions (e.g. gcc4 and gcc5) and between compilers (gcc and clang).
   
   For example, suppose MXNet is built with gcc4, which uses a reference & to 
store the function arguments, while another application is built with gcc5, 
which uses an rvalue reference && to store them. The application could then 
not call the MXNet API through PackedFunc, since the std::function 
implementations of gcc4 and gcc5 differ.
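
   A minimal sketch of how a packed-function C ABI sidesteps this concern 
(names here are illustrative, not the actual MXNet symbols): only POD values 
and an extern "C" entry point cross the library boundary, so each binary's 
std::function layout stays private to itself.
   
   ```c++
   #include <cstdint>
   #include <cstdio>
   
   // POD argument slot shared across the boundary; its layout is fixed by the
   // C ABI, not by any particular C++ standard library implementation.
   union FFIValue {
     int64_t v_int64;
     double v_float64;
     void* v_handle;
   };
   
   enum FFITypeCode { kInt64 = 0, kFloat64 = 1 };
   
   // The only exported signature is plain C. Internally, each binary is free
   // to wrap this in std::function without leaking implementation details.
   extern "C" int FFIFuncCall(const FFIValue* values, const int* type_codes,
                              int num_args, FFIValue* ret) {
     double sum = 0.0;
     for (int i = 0; i < num_args; ++i) {
       if (type_codes[i] == kInt64) {
         sum += static_cast<double>(values[i].v_int64);
       } else if (type_codes[i] == kFloat64) {
         sum += values[i].v_float64;
       }
     }
     ret->v_float64 = sum;
     return 0;  // 0 means success, mirroring the usual C API convention
   }
   
   int main() {
     FFIValue args[2];
     args[0].v_int64 = 2;
     args[1].v_float64 = 0.5;
     const int codes[2] = {kInt64, kFloat64};
     FFIValue ret;
     FFIFuncCall(args, codes, 2, &ret);
     std::printf("%g\n", ret.v_float64);  // prints 2.5
     return 0;
   }
   ```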


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16735: Use single-bit for mask in dropout operator

2020-02-16 Thread GitBox
eric-haibin-lin commented on issue #16735: Use single-bit for mask in dropout 
operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586772706
 
 
   @PatricZhao @TaoLv what do you suggest as the resolution? If CPU performance 
is a concern, shall we add an environment variable to control the behavior? Do 
you agree that in the long term we want to push for a dropout API in MKL-DNN 
with a 1-bit mask?
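   
   For reference, a sketch of what such a switch could look like (the variable 
name `MXNET_DROPOUT_BIT_MASK` is hypothetical, not an existing MXNet flag):
   
   ```c++
   #include <cstdlib>
   #include <cstring>
   
   // Hypothetical opt-in switch: keep the current byte-per-element mask by
   // default and enable the 1-bit mask only when explicitly requested.
   inline bool UseBitMaskDropout() {
     const char* v = std::getenv("MXNET_DROPOUT_BIT_MASK");  // hypothetical name
     return v != nullptr && std::strcmp(v, "1") == 0;
   }
   
   int main() {
     return UseBitMaskDropout() ? 0 : 1;  // exit code just for demonstration
   }
   ```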


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #17600: Fix Non-ASCII character in docstring

2020-02-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #17600: Fix Non-ASCII 
character in docstring
URL: https://github.com/apache/incubator-mxnet/pull/17600#discussion_r379849548
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -1332,15 +1332,15 @@ integrationtest_ubuntu_cpu_dist_kvstore() {
     export MXNET_USE_OPERATOR_TUNING=0
     export DMLC_LOG_STACK_TRACE_DEPTH=10
     cd tests/nightly/
-    python3 ../../tools/launch.py -n 7 --launcher local python dist_sync_kvstore.py --type=gluon_step_cpu
-    python3 ../../tools/launch.py -n 7 --launcher local python dist_sync_kvstore.py --type=gluon_sparse_step_cpu
-    python3 ../../tools/launch.py -n 7 --launcher local python dist_sync_kvstore.py --type=invalid_cpu
-    python3 ../../tools/launch.py -n 7 --launcher local python dist_sync_kvstore.py --type=gluon_type_cpu
-    python3 ../../tools/launch.py -n 7 --launcher local python dist_sync_kvstore.py
-    python3 ../../tools/launch.py -n 7 --launcher local python dist_sync_kvstore.py --no-multiprecision
-    python3 ../../tools/launch.py -n 7 --launcher local python dist_sync_kvstore.py --type=compressed_cpu
-    python3 ../../tools/launch.py -n 7 --launcher local python dist_sync_kvstore.py --type=compressed_cpu --no-multiprecision
-    python3 ../../tools/launch.py -n 3 --launcher local python test_server_profiling.py
+    python3 ../../tools/launch.py -n 7 --launcher local python3 dist_sync_kvstore.py --type=gluon_step_cpu
 
 Review comment:
   This is one of the causes of #17562.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (0f29cca -> bc1fc55)

2020-02-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 0f29cca  Fix OS X staticbuild, update docs and add tests (#17602)
 add bc1fc55  Fix Non-ASCII character in docstring (#17600)

No new revisions were added by this update.

Summary of changes:
 ci/docker/runtime_functions.sh   | 26 +-
 python/mxnet/ndarray/numpy/_op.py|  2 +-
 python/mxnet/numpy/multiarray.py |  2 +-
 python/mxnet/symbol/numpy/_symbol.py |  2 +-
 4 files changed, 16 insertions(+), 16 deletions(-)



[GitHub] [incubator-mxnet] szha merged pull request #17600: Fix Non-ASCII character in docstring

2020-02-16 Thread GitBox
szha merged pull request #17600: Fix Non-ASCII character in docstring
URL: https://github.com/apache/incubator-mxnet/pull/17600
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on issue #14373: Passing parameters to HybridBlocks and not using them

2020-02-16 Thread GitBox
szha commented on issue #14373: Passing parameters to HybridBlocks and not 
using them
URL: 
https://github.com/apache/incubator-mxnet/issues/14373#issuecomment-586743718
 
 
   @whamza15 this will be taken into account in MXNet 2.0 roadmap item 4.3 
(Gluon block enhancement), which @leezu is driving.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on issue #16899: Enable MKL-DNN in pip packages

2020-02-16 Thread GitBox
szha commented on issue #16899: Enable MKL-DNN in pip packages
URL: https://github.com/apache/incubator-mxnet/pull/16899#issuecomment-586740388
 
 
   Looks like the update has taken effect, though the Jenkins jobs are still 
running for the *mkl variants: 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/restricted-mxnet-cd%2Fmxnet-cd-release-job/detail/mxnet-cd-release-job/724/pipeline


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-16 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 9b803dc  Bump the publish timestamp.
9b803dc is described below

commit 9b803dcc378af60b7a0c62e5edcd3bb3668ceabf
Author: mxnet-ci 
AuthorDate: Sun Feb 16 18:47:55 2020 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..55b1ec0
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Feb 16 18:47:55 UTC 2020



[GitHub] [incubator-mxnet] apeforest commented on issue #16735: Use single-bit for mask in dropout operator

2020-02-16 Thread GitBox
apeforest commented on issue #16735: Use single-bit for mask in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586737774
 
 
   > > Why does it increase memory load?
   > 
   > If there are N elements, per the Bernoulli distribution generation in VSL, 
we still need to allocate memory and write `N*4` bytes to it. To generate the 
bit mask, we need to load the `N*4` bytes back and write `N/8` bytes of bits.
   
   The bit mask is not extra memory: `N*sizeof(DType)` was already used in 
the master branch: 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/dropout.cc#L124
   
   So for the MKL dropout case, the master branch uses `N*4` + `N*sizeof(DType)` 
bytes of memory vs. `N*4` + `N/8` bytes in this PR. This memory reduction is 
verified through the MXNet profiler results reported in the PR description 
section.
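   
   To make the arithmetic concrete, a small back-of-the-envelope check 
(assuming float32, i.e. sizeof(DType) == 4):
   
   ```c++
   #include <cstdio>
   
   int main() {
     const long long N = 1000000;  // example element count
     // master: VSL Bernoulli buffer (N*4) + per-element mask (N*sizeof(DType))
     const long long master_bytes = N * 4 + N * 4;
     // this PR: VSL Bernoulli buffer (N*4) + one bit per element (N/8)
     const long long pr_bytes = N * 4 + N / 8;
     std::printf("master: %lld bytes, this PR: %lld bytes\n",
                 master_bytes, pr_bytes);  // 8000000 vs 4125000
     return 0;
   }
   ```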


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Fix OS X staticbuild, update docs and add tests (#17602)

2020-02-16 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 0f29cca  Fix OS X staticbuild, update docs and add tests (#17602)
0f29cca is described below

commit 0f29ccaf5c34a01bffe5ad3036d4cf8bb00c551e
Author: Leonard Lausen 
AuthorDate: Sun Feb 16 18:26:02 2020 +0000

Fix OS X staticbuild, update docs and add tests (#17602)

* Fix OS X staticbuild and add tests

* Update OS X build from source documentation
---
 .github/workflows/os_x_staticbuild.yml |  18 ++
 cmake/Modules/FindAccelerate.cmake |   1 +
 config/{config.cmake => darwin.cmake}  |  43 ++--
 .../distribution/darwin_cpu.cmake  |  34 ++--
 config/{config.cmake => linux.cmake}   |   8 +-
 config/{config.cmake => linux_gpu.cmake}   |   4 +-
 .../static_site/src/pages/get_started/osx_setup.md | 222 ++---
 .../src/pages/get_started/ubuntu_setup.md  |  20 +-
 tools/dependencies/libtiff.sh  |   2 +-
 tools/staticbuild/build_lib_cmake.sh   |  10 +-
 10 files changed, 183 insertions(+), 179 deletions(-)

diff --git a/.github/workflows/os_x_staticbuild.yml 
b/.github/workflows/os_x_staticbuild.yml
new file mode 100644
index 0000000..eabe88f
--- /dev/null
+++ b/.github/workflows/os_x_staticbuild.yml
@@ -0,0 +1,18 @@
+name: continuous build
+
+on: [push, pull_request]
+
+jobs:
+  macosx-x86_64:
+runs-on: macos-latest
+steps:
+  - name: Checkout repository
+uses: actions/checkout@v2
+  - name: Install Dependencies
+run: |
+  brew install nasm automake ninja libtool
+  - name: Build project
+run: |
+  git --version
+  clang --version
+  CMAKE_STATICBUILD=1 ./tools/staticbuild/build.sh cpu
diff --git a/cmake/Modules/FindAccelerate.cmake 
b/cmake/Modules/FindAccelerate.cmake
index 8bdc665..e9bafb0 100644
--- a/cmake/Modules/FindAccelerate.cmake
+++ b/cmake/Modules/FindAccelerate.cmake
@@ -24,6 +24,7 @@
 
 file(TO_CMAKE_PATH "$ENV{Accelerate_HOME}" Accelerate_HOME)
 set(Accelerate_INCLUDE_SEARCH_PATHS
+  
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/Current
   
/System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Versions/Current
   ${Accelerate_HOME}
 )
diff --git a/config/config.cmake b/config/darwin.cmake
similarity index 94%
copy from config/config.cmake
copy to config/darwin.cmake
index 101e43f..11c9aa4 100644
--- a/config/config.cmake
+++ b/config/darwin.cmake
@@ -22,7 +22,7 @@
 #  Assume you are on the root directory of mxnet. First copy this file so that
 #  any local changes will be ignored by git
 #
-#  $ cp config/config.cmake config.cmake
+#  $ cp config/darwin.cmake config.cmake
 #
 #  Next modify the according entries, and then compile by
 #
@@ -36,30 +36,14 @@
 
#---
 
 #-
-# GPU support
-#-
-set(USE_CUDA ON CACHE BOOL "Build with CUDA support")
-set(USE_CUDNN ON CACHE BOOL "Build with cudnn support, if found")
-
-# Target NVIDIA GPU achitecture.
-# Valid options are "Auto" for autodetection, "All" for all available
-# architectures or a list of architectures by compute capability number, such 
as
-# "7.0" or "7.0;7.5" as well as name, such as "Volta" or "Volta;Turing".
-# The value specified here is passed to cmake's CUDA_SELECT_NVCC_ARCH_FLAGS to
-# obtain the compilation flags for nvcc.
-#
-# When compiling on a machine without GPU, autodetection will fail and you
-# should instead specify the target architecture manually to avoid excessive
-# compilation times.
-set(MXNET_CUDA_ARCH "Auto" CACHE STRING "Target NVIDIA GPU achitecture")
-
-#-
 # Common libraries
 #-
+set(BLAS "apple" CACHE STRING "BLAS Vendor")
+
 set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
 set(OPENCV_ROOT "" CACHE BOOL "OpenCV install path. Supports autodetection.")
 
-set(USE_OPENMP ON CACHE BOOL "Build with Openmp support")
+set(USE_OPENMP OFF CACHE BOOL "Build with Openmp support")
 
 set(USE_MKL_IF_AVAILABLE ON CACHE BOOL "Use Intel MKL if found")
 set(USE_MKLDNN ON CACHE BOOL "Build with MKL-DNN support")
@@ -110,6 +94,25 @@ set(USE_JEMALLOC OFF CACHE BOOL "Build with Jemalloc 
support")
 SET(EXTRA_OPERATORS "" CACHE PATH "EXTRA OPERATORS PATH")
 
 
+#-
+# GPU support
+#-
+set(USE_CUDA OFF CACHE BOOL "Build with CUDA support")
+set(USE_CUDNN OFF CACHE BOOL 

[GitHub] [incubator-mxnet] apeforest merged pull request #17602: Fix OS X staticbuild, update docs and add tests

2020-02-16 Thread GitBox
apeforest merged pull request #17602: Fix OS X staticbuild, update docs and add 
tests
URL: https://github.com/apache/incubator-mxnet/pull/17602
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17602: Fix OS X staticbuild, update docs and add tests

2020-02-16 Thread GitBox
leezu commented on a change in pull request #17602: Fix OS X staticbuild, 
update docs and add tests
URL: https://github.com/apache/incubator-mxnet/pull/17602#discussion_r379920750
 
 

 ##
 File path: docs/static_site/src/pages/get_started/osx_setup.md
 ##
 @@ -24,176 +24,151 @@ permalink: /get_started/osx_setup
 
 # Installing MXNet from source on OS X (Mac)
 
-**NOTE:** For pre-built MXNet with Python, please refer to the [new install 
guide]({{'/get_started'|relative_url}}).
+The following installation instructions are for building MXNet from source. For
+instructions to build MXNet from source on other platforms, see the general
+[Build From Source guide](build_from_source).
 
-Installing MXNet is a two-step process:
+Instead of building from source, you can install a binary version of MXNet. For
+that, please follow the information at [Get Started](get_started).
 
-1. Build the shared library from the MXNet C++ source code.
-2. Install the supported language-specific packages for MXNet.
+Building MXNet from source is a two-step process:
 
-**Note:** To change the compilation options for your build, edit the 
```make/config.mk``` file and submit a build request with the ```make``` 
command.
+1. Build the shared library from the MXNet C++ source code.
+2. (optional) Install the supported language-specific packages for MXNet.
 
-## Prepare Environment for GPU Installation
+If you plan to build with GPU, you need to set up the environment for CUDA and
+cuDNN. Please follow the [NVIDIA CUDA Installation Guide for Mac OS
+X](https://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html) 
and
+[cuDNN Installation
+Guide](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-mac).
+Note that CUDA stopped supporting macOS in 2019 and future versions of CUDA may
+not support macOS.
 
-This section is optional. Skip to next section if you don't plan to use GPUs. 
If you plan to build with GPU, you need to set up the environment for CUDA and 
cuDNN.
+## Contents
 
-First, download and install [CUDA 8 
toolkit](https://developer.nvidia.com/cuda-toolkit).
+* [Build the MXNet shared library from source](#build-mxnet-from-source)
+* [Install Language Packages](#installing-language-packages-for-mxnet)
+* [R](#install-the-mxnet-package-for-r)
+* [Julia](#install-the-mxnet-package-for-julia)
+* [Scala](#install-the-mxnet-package-for-scala)
+* [Java](#install-the-mxnet-package-for-java)
+* [Perl](#install-the-mxnet-package-for-perl)
+  * [Contributions](#contributions)
+  * [Next Steps](#next-steps)
 
-Once you have the CUDA Toolkit installed you will need to set up the required 
environment variables by adding the following to your ~/.bash_profile file:
+
 
-```bash
-export CUDA_HOME=/usr/local/cuda
-export DYLD_LIBRARY_PATH="$CUDA_HOME/lib:$DYLD_LIBRARY_PATH"
-export PATH="$CUDA_HOME/bin:$PATH"
-```
 
-Reload ~/.bash_profile file and install dependencies:
-```bash
-. ~/.bash_profile
-brew install coreutils
-brew tap caskroom/cask
-```
+## Build the MXNet shared library from source
 
-Then download [cuDNN 5](https://developer.nvidia.com/cudnn).
+On OS X, you need the following dependencies:
 
-Unzip the file and change to the cudnn root directory. Move the header files 
and libraries to your local CUDA Toolkit folder:
+**Step 1:** Install prerequisite packages.
 
 ```bash
-$ sudo mv include/cudnn.h /Developer/NVIDIA/CUDA-8.0/include/
-$ sudo mv lib/libcudnn* /Developer/NVIDIA/CUDA-8.0/lib
-$ sudo ln -s /Developer/NVIDIA/CUDA-8.0/lib/libcudnn* /usr/local/cuda/lib/
-```
+# Install OS X Developer Tools
+xcode-select --install
+
+# Install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
 
-Now we can start to build MXNet.
+# Install dependencies
+brew install cmake ninja ccache opencv
+```
 
-## Build the Shared Library
+`opencv` is an optional dependency. You can delete it from above `brew install`
+line and build MXNet without OpenCV support by setting `USE_OPENCV` to `OFF` in
+the configuration file described below.
 
-### Install MXNet dependencies
-Install the dependencies, required for MXNet, with the following commands:
-- [Homebrew](http://brew.sh/)
-- OpenBLAS and homebrew/core (for linear algebraic operations)
-- OpenCV (for computer vision operations)
 
-```bash
-   # Paste this command in Mac terminal to install Homebrew
-   /usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+**Step 2:** Download MXNet sources and configure
 
-   # Insert the Homebrew directory at the top of your PATH environment 
variable
-   export PATH=/usr/local/bin:/usr/local/sbin:$PATH
-```
+Clone the repository:
 
 ```bash
-   brew update
-   brew install pkg-config
-   brew install graphviz
-   brew install openblas
-   brew tap homebrew/core
-   brew install opencv
-
-   # If 

[GitHub] [incubator-mxnet] szandara commented on issue #14488: big different between usage of GPU's memory on c++ and python

2020-02-16 Thread GitBox
szandara commented on issue #14488: big different between usage of GPU's memory 
on c++ and python
URL: 
https://github.com/apache/incubator-mxnet/issues/14488#issuecomment-586732779
 
 
   Can you explain why the C API solves the problem? 
   
   I have the same issue and I am unable to run my network. If I use Python, 
the memory usage is 2.7GB, while using the C++ interface causes a cudaMalloc 
error (I have a 12GB Nvidia GPU!).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Additional fix for vector access. (#17230)

2020-02-16 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 7743fb0  Additional fix for vector access.  (#17230)
7743fb0 is described below

commit 7743fb0c516032c39cbf15a7715a9d36a5cd8e18
Author: aws-taylor <57725958+aws-tay...@users.noreply.github.com>
AuthorDate: Sun Feb 16 07:57:46 2020 -0800

Additional fix for vector access.  (#17230)

* Additional fix for vector access. See 
https://github.com/apache/incubator-mxnet/commit/9634786f96388004f68c223d72e120ad425c2f12
 for the original.

* CI

* ci

* ci

* retrigger CI

* ci

Co-authored-by: JackieWu 
---
 3rdparty/mshadow/mshadow/dot_engine-inl.h | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/3rdparty/mshadow/mshadow/dot_engine-inl.h 
b/3rdparty/mshadow/mshadow/dot_engine-inl.h
index 1a02eb9..d9abf29 100644
--- a/3rdparty/mshadow/mshadow/dot_engine-inl.h
+++ b/3rdparty/mshadow/mshadow/dot_engine-inl.h
@@ -421,12 +421,9 @@ struct BLASEngine {
   CBLAS_TRANSPOSE p_transa[GROUP_SIZE] = {cblas_a_trans};
   CBLAS_TRANSPOSE p_transb[GROUP_SIZE] = {cblas_b_trans};
 
-  std::vector pp_A;
-  std::vector pp_B;
-  std::vector pp_C;
-  pp_A.reserve(batch_count);
-  pp_B.reserve(batch_count);
-  pp_C.reserve(batch_count);
+  std::vector pp_A(batch_count, nullptr);
+  std::vector pp_B(batch_count, nullptr);
+  std::vector pp_C(batch_count, nullptr);
 
   auto m_k = m * k;
   auto k_n = k * n;
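
The reason the fix matters: std::vector::reserve() only allocates capacity 
and leaves size() at zero, so indexing the vector afterwards is undefined 
behavior, while constructing with (batch_count, nullptr) gives the vector 
real elements. A minimal illustration (using float* as a stand-in for the 
templated pointer type in dot_engine-inl.h):

```c++
#include <cassert>
#include <vector>

int main() {
  const int batch_count = 8;

  std::vector<const float*> wrong;
  wrong.reserve(batch_count);   // capacity grows, but size() stays 0
  assert(wrong.size() == 0);    // so wrong[0] = ... would be out of bounds

  std::vector<const float*> right(batch_count, nullptr);
  assert(static_cast<int>(right.size()) == batch_count);  // right[0..7] valid
  return 0;
}
```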



[incubator-mxnet] branch master updated: update symbol to json (#16948)

2020-02-16 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9c10ed4  update symbol to json (#16948)
9c10ed4 is described below

commit 9c10ed4c33f88fb27a1eca112bbd6bc5a6e3b1f9
Author: chinakook 
AuthorDate: Sun Feb 16 23:57:02 2020 +0800

update symbol to json (#16948)

* update symbol to json

add remove_amp_cast argument to keep same with symbol.save

* retrigger CI

Co-authored-by: JackieWu 
---
 python/mxnet/symbol/symbol.py | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py
index 6d9bf04..a4599c8 100644
--- a/python/mxnet/symbol/symbol.py
+++ b/python/mxnet/symbol/symbol.py
@@ -1364,7 +1364,7 @@ class Symbol(SymbolBase):
         else:
             check_call(_LIB.MXSymbolSaveToFile(self.handle, c_str(fname)))
 
-    def tojson(self):
+    def tojson(self, remove_amp_cast=True):
         """Saves symbol to a JSON string.
 
         See Also
@@ -1372,7 +1372,12 @@ class Symbol(SymbolBase):
         symbol.load_json : Used to load symbol from JSON string.
         """
         json_str = ctypes.c_char_p()
-        check_call(_LIB.MXSymbolSaveToJSON(self.handle, ctypes.byref(json_str)))
+        if remove_amp_cast:
+            handle = SymbolHandle()
+            check_call(_LIB.MXSymbolRemoveAmpCast(self.handle, ctypes.byref(handle)))
+            check_call(_LIB.MXSymbolSaveToJSON(handle, ctypes.byref(json_str)))
+        else:
+            check_call(_LIB.MXSymbolSaveToJSON(self.handle, ctypes.byref(json_str)))
         return py_str(json_str.value)
 
     @staticmethod



[GitHub] [incubator-mxnet] wkcn merged pull request #17230: Additional fix for vector access.

2020-02-16 Thread GitBox
wkcn merged pull request #17230: Additional fix for vector access. 
URL: https://github.com/apache/incubator-mxnet/pull/17230
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn commented on issue #17230: Additional fix for vector access.

2020-02-16 Thread GitBox
wkcn commented on issue #17230: Additional fix for vector access. 
URL: https://github.com/apache/incubator-mxnet/pull/17230#issuecomment-586722380
 
 
   The PR has been merged. Thank you!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn merged pull request #16948: update symbol to json

2020-02-16 Thread GitBox
wkcn merged pull request #16948: update symbol to json
URL: https://github.com/apache/incubator-mxnet/pull/16948
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn commented on issue #16948: update symbol to json

2020-02-16 Thread GitBox
wkcn commented on issue #16948: update symbol to json
URL: https://github.com/apache/incubator-mxnet/pull/16948#issuecomment-586722283
 
 
   Merged. Thank you for the contribution!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-16 Thread GitBox
TaoLv commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379905050
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -63,24 +63,24 @@ struct TestArrayShapes {
 };
 
 // Init arrays with the default layout.
-inline static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
+inline static void InitDefaultArray(NDArray *arr, bool is_rand = false, int max = 50) {
   const TBlob &blob = arr->data();
   mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
   int size = blob.Size();
 
   for (int i = 0; i < size; i++)
     if (is_rand) {
-      data[i] = (std::rand() % 100) - 50;
+      data[i] = (std::rand() % (max * 2)) - max;
 
 Review comment:
   Okay, I will change it to generate float numbers in [-max, max) rather than 
integer numbers. Previously I thought sparsity (say, 50% zeros) was also a way 
to avoid floating-point accumulation error.
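   
   A sketch of the intended change (an illustrative helper, not the actual 
test_mkldnn.h code):
   
   ```c++
   #include <cstdio>
   #include <random>
   
   // Fill a test buffer with uniform floats in [-max_val, max_val) instead of
   // coarse integers, so the values are dense rather than heavily zero.
   void InitRandomArray(float* data, int size, float max_val) {
     std::mt19937 gen(42);  // fixed seed keeps test runs reproducible
     std::uniform_real_distribution<float> dist(-max_val, max_val);
     for (int i = 0; i < size; ++i) {
       data[i] = dist(gen);
     }
   }
   
   int main() {
     float buf[8];
     InitRandomArray(buf, 8, 50.0f);
     for (float v : buf) std::printf("%.3f ", v);
     std::printf("\n");
     return 0;
   }
   ```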


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Yiyan66 opened a new pull request #17609: [numpy] add fallback ops

2020-02-16 Thread GitBox
Yiyan66 opened a new pull request #17609: [numpy] add fallback ops
URL: https://github.com/apache/incubator-mxnet/pull/17609
 
 
   ## Description ##
   add fallback ops
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-16 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 053e42a  Bump the publish timestamp.
053e42a is described below

commit 053e42aa6d34310149924a41d2a19112f764f1a2
Author: mxnet-ci 
AuthorDate: Sun Feb 16 12:42:12 2020 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..01cfe50
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Feb 16 12:42:12 UTC 2020



[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-16 Thread GitBox
ciyongch commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379900144
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -63,24 +63,24 @@ struct TestArrayShapes {
 };
 
 // Init arrays with the default layout.
-inline static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
+inline static void InitDefaultArray(NDArray *arr, bool is_rand = false, int max = 50) {
   const TBlob &blob = arr->data();
   mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
   int size = blob.Size();
 
   for (int i = 0; i < size; i++)
     if (is_rand) {
-      data[i] = (std::rand() % 100) - 50;
+      data[i] = (std::rand() % (max * 2)) - max;
 
 Review comment:
   I have no idea why the range was set to [-50, 50) previously, and I can't 
figure out any specific reason to use this range for the tests (an upper-bound 
test, perhaps?). It would be great if you have any background on it.
   But in any case, tensors with only two values (-1 and 0, with 50% zeros) 
might not be a good candidate for the tests.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services