[incubator-mxnet] branch master updated (24f0a10 -> 4f8bc3a)

2019-09-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 24f0a10  [MXNET-978] Higher Order Gradient Support `clip`, `dropout`. (#15746)
 add 4f8bc3a  Fix unary operator ceil/floor/trunc when data type is integer (#14251)

No new revisions were added by this update.

Summary of changes:
 src/operator/math_functions-inl.h | 15 ++-
 tests/python/unittest/test_ndarray.py | 19 +++
 2 files changed, 29 insertions(+), 5 deletions(-)



[GitHub] [incubator-mxnet] eric-haibin-lin closed issue #13220: Many operators don't work for integer type

2019-09-06 Thread GitBox
eric-haibin-lin closed issue #13220: Many operators don't work for integer type 
URL: https://github.com/apache/incubator-mxnet/issues/13220
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin merged pull request #14251: Fix unary operator ceil/floor/trunc when data type is integer

2019-09-06 Thread GitBox
eric-haibin-lin merged pull request #14251: Fix unary operator ceil/floor/trunc 
when data type is integer
URL: https://github.com/apache/incubator-mxnet/pull/14251
 
 
   




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #14251: Fix unary operator ceil/floor/trunc when data type is integer

2019-09-06 Thread GitBox
eric-haibin-lin commented on issue #14251: Fix unary operator ceil/floor/trunc 
when data type is integer
URL: https://github.com/apache/incubator-mxnet/pull/14251#issuecomment-529075325
 
 
   Thanks for the fix.
   




[GitHub] [incubator-mxnet] sxjscience commented on issue #16102: Usability degradation

2019-09-06 Thread GitBox
sxjscience commented on issue #16102: Usability degradation
URL: https://github.com/apache/incubator-mxnet/issues/16102#issuecomment-529074657
 
 
   @anirudh2290 Have we tested the case for reshaping to an invalid shape?




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-09-06 Thread GitBox
samskalicky commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r321955499
 
 

 ##
 File path: example/lib_ops/subgraph_lib.cc
 ##
 @@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file subgraph_lib.cc
+ * \brief subgraph operator implementation
+ * library file
+ */
+
+#include <iostream>
+#include "lib_api.h"
+
+MXReturnValue parseAttrs(std::map<std::string, std::string> attrs,
+                         int* num_in, int* num_out) {
+  *num_in = 2;
+  *num_out = 1;
+  return MX_SUCCESS;
+}
+
+MXReturnValue inferType(std::map<std::string, std::string> attrs,
+                        std::vector<MXDType> &intypes,
+                        std::vector<MXDType> &outtypes) {
+  outtypes[0] = intypes[0];
+  return MX_SUCCESS;
+}
+
+MXReturnValue inferShape(std::map<std::string, std::string> attrs,
+                         std::vector<std::vector<unsigned int>> &inshapes,
+                         std::vector<std::vector<unsigned int>> &outshapes) {
+  unsigned n = inshapes[0][0];
+  unsigned k = inshapes[0][1];
+  unsigned kk = inshapes[1][0];
+  unsigned m = inshapes[1][1];
+
+  std::cout << "inshapes[0][0]=" << n << "  inshapes[0][1]=" << k << std::endl;
+  std::cout << "inshapes[1][0]=" << kk << "  inshapes[1][1]=" << m << std::endl;
+
+  if (k != kk)
+    return MX_FAIL;
+
+  outshapes[0].push_back(n);
+  outshapes[0].push_back(m);
+  return MX_SUCCESS;
+}
+
+MXReturnValue mutateInputs(std::map<std::string, std::string> attrs,
+                           std::vector<int> &input_indices) {
+  input_indices.push_back(1);
+  std::cout << "the 1st input is marked as mutate input by library author" << std::endl;
+  return MX_SUCCESS;
+}
+
+class MyStatefulOp : public CustomStatefulOp {
+ public:
+  MyStatefulOp(std::string sym, int count) : subgraph_sym(sym), count(count) {}
+
+  void Forward() {
+    count++;
+  }
+
+  int State() {
+    return count;
+  }
+
+  ~MyStatefulOp() {}
+
+ private:
+  std::string subgraph_sym;
+  int count;
+};
+
+MXReturnValue createOpState(std::map<std::string, std::string> attrs,
+                            CustomStatefulOp** op_inst) {
+  *op_inst = new MyStatefulOp("json", 0);
 
 Review comment:
   Let's add a check for the "subgraph_sym_json" key in the attrs map, and error out if it's missing.
   Let's get the "json" string from the attrs map using the "subgraph_sym_json" key rather than setting the placeholder "json" string.
   
   We'll assume the subgraph property sets this attr, later.
   
   ```
   CHECK(attrs.find("subgraph_sym_json") != attrs.end())
     << "Error! Expected key 'subgraph_sym_json' in Node Attributes";
   *op_inst = new MyStatefulOp(attrs["subgraph_sym_json"], 0);
   ```




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-09-06 Thread GitBox
samskalicky commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r321955177
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -18,33 +18,627 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
+ * Copyright (c) 2019 by Contributors
  * \file lib_api.h
  * \brief APIs to interact with libraries
+ * This API specifies function prototypes to
+ * register custom ops for library authors
  */
+
 #ifndef MXNET_LIB_API_H_
 #define MXNET_LIB_API_H_
 
+#include <map>
+#include <string>
+#include <vector>
+#include <iostream>
+
+#define MX_LIBRARY_VERSION 1
+
 /*!
- * \brief Following are the APIs implemented in the external library
+ * \brief External Tensor data types
+ */
+enum MXDType {
+  kFloat32 = 0,
+  kFloat64 = 1,
+  kFloat16 = 2,
+  kUint8 = 3,
+  kInt32 = 4,
+  kInt8  = 5,
+  kInt64 = 6,
+};
+
+enum MXReturnValue {
+  MX_FAIL = 0,
+  MX_SUCCESS = 1,
+};
+
+/*!
+ * \brief External Tensor data structure
+ */
+struct MXTensor {
+  MXTensor() : data(nullptr) {}
+
+  MXTensor(void *data, const std::vector<int64_t> &shape, MXDType dtype)
+  : data{data}, shape{shape}, dtype{dtype} {}
+
+  /*!
+   * \brief helper function to cast data pointer
+   */
+  template<typename data_type>
+  data_type* getData() {
+    return reinterpret_cast<data_type*>(data);
+  }
+
+  void *data;  // not owned
+  std::vector<int64_t> shape;
+  MXDType dtype;
+};
+
+/*!
+ * \brief resource malloc function to allocate memory inside fcompute function
+ */
+typedef void* (*xpu_malloc_t)(void*, int);
+
+/*!
+ * \brief Class to provide resource APIs to FCompute
+ */
+class OpResource {
+ public:
+  OpResource(xpu_malloc_t xm, void* _xm) : xpu_malloc(xm), _xpu_malloc(_xm) {}
+
+  /*!
+   * \brief allocate memory controlled by MXNet
+   */
+  void* alloc(int size) {
+return xpu_malloc(_xpu_malloc, size);
+  }
+ private:
+  xpu_malloc_t xpu_malloc;
+  void* _xpu_malloc;
+};
+
+/*!
+ * \brief StatefulOp wrapper class to pass to backend OpState
+ */
+class CustomStatefulOpWrapper {
+ public:
+  CustomStatefulOpWrapper(void* inst) : instance(inst) {}
+  void* get_instance() { return instance; }
+ private:
+  void* instance;
+};
+
+/*!
+ * \brief A prototype interface class for library authors creating stateful ops
+ */
+class CustomStatefulOp {
+ public:
+  virtual void Forward() = 0;
 
 Review comment:
   Shouldn't we make the constructor virtual and = 0 too? Or is there somewhere we need to be able to construct instances of the super class?




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-09-06 Thread GitBox
samskalicky commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r321955086
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -18,33 +18,627 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
+ * Copyright (c) 2019 by Contributors
  * \file lib_api.h
  * \brief APIs to interact with libraries
+ * This API specifies function prototypes to
+ * register custom ops for library authors
  */
+
 #ifndef MXNET_LIB_API_H_
 #define MXNET_LIB_API_H_
 
+#include <map>
+#include <string>
+#include <vector>
+#include <iostream>
+
+#define MX_LIBRARY_VERSION 1
+
 /*!
- * \brief Following are the APIs implemented in the external library
+ * \brief External Tensor data types
+ */
+enum MXDType {
+  kFloat32 = 0,
+  kFloat64 = 1,
+  kFloat16 = 2,
+  kUint8 = 3,
+  kInt32 = 4,
+  kInt8  = 5,
+  kInt64 = 6,
+};
+
+enum MXReturnValue {
+  MX_FAIL = 0,
+  MX_SUCCESS = 1,
+};
+
+/*!
+ * \brief External Tensor data structure
+ */
+struct MXTensor {
+  MXTensor() : data(nullptr) {}
+
+  MXTensor(void *data, const std::vector<int64_t> &shape, MXDType dtype)
+  : data{data}, shape{shape}, dtype{dtype} {}
+
+  /*!
+   * \brief helper function to cast data pointer
+   */
+  template<typename data_type>
+  data_type* getData() {
+    return reinterpret_cast<data_type*>(data);
+  }
+
+  void *data;  // not owned
+  std::vector<int64_t> shape;
+  MXDType dtype;
+};
+
+/*!
+ * \brief resource malloc function to allocate memory inside fcompute function
+ */
+typedef void* (*xpu_malloc_t)(void*, int);
+
+/*!
+ * \brief Class to provide resource APIs to FCompute
+ */
+class OpResource {
+ public:
+  OpResource(xpu_malloc_t xm, void* _xm) : xpu_malloc(xm), _xpu_malloc(_xm) {}
+
+  /*!
+   * \brief allocate memory controlled by MXNet
+   */
+  void* alloc(int size) {
+return xpu_malloc(_xpu_malloc, size);
+  }
+ private:
+  xpu_malloc_t xpu_malloc;
+  void* _xpu_malloc;
+};
+
+/*!
+ * \brief StatefulOp wrapper class to pass to backend OpState
+ */
+class CustomStatefulOpWrapper {
+ public:
+  CustomStatefulOpWrapper(void* inst) : instance(inst) {}
+  void* get_instance() { return instance; }
+ private:
+  void* instance;
+};
+
+/*!
+ * \brief A prototype interface class for library authors creating stateful ops
+ */
+class CustomStatefulOp {
+ public:
+  virtual void Forward() = 0;
+  virtual ~CustomStatefulOp() = 0;
+};
+
+/*!
+ * Custom Operator function templates
+ */
+typedef MXReturnValue (*fcomp_t)(std::map<std::string, std::string>,
+                                 std::vector<MXTensor>, std::vector<MXTensor>,
+                                 OpResource res);
+typedef MXReturnValue (*parseAttrs_t)(std::map<std::string, std::string>,
+                                      int*, int*);
+typedef MXReturnValue (*inferType_t)(std::map<std::string, std::string>,
+                                     std::vector<MXDType>&, std::vector<MXDType>&);
+typedef MXReturnValue (*inferShape_t)(std::map<std::string, std::string>,
+                                      std::vector<std::vector<unsigned int>>&,
+                                      std::vector<std::vector<unsigned int>>&);
+typedef MXReturnValue (*mutateInputs_t)(std::map<std::string, std::string>,
+                                        std::vector<int>&);
+typedef MXReturnValue (*createOpState_t)(std::map<std::string, std::string>,
+                                         CustomStatefulOp**);
+typedef MXReturnValue (*fstateful_t)(CustomStatefulOp*, std::vector<MXTensor>,
+                                     std::vector<MXTensor>);
+
+/*!
+ * \brief Class to hold custom operator registration
+ */
+class CustomOp {
+ public:
+  explicit CustomOp(const char* op_name) : name(op_name), fcompute(nullptr),
+    fgradient(nullptr), parse_attrs(nullptr), infer_type(nullptr), infer_shape(nullptr),
+    mutate_inputs(nullptr), create_op_state(nullptr), fstateful(nullptr) {}
+  ~CustomOp() {}
+  CustomOp& setForward(fcomp_t fcomp) {
+fcompute = fcomp;
+return *this;
+  }
+  CustomOp& setGradient(fcomp_t fcomp) {
+fgradient = fcomp;
+return *this;
+  }
+  CustomOp& setParseAttrs(parseAttrs_t func) {
+parse_attrs = func;
+return *this;
+  }
+  CustomOp& setInferType(inferType_t func) {
+infer_type = func;
+return *this;
+  }
+  CustomOp& setInferShape(inferShape_t func) {
+infer_shape = func;
+return *this;
+  }
+  CustomOp& setMutateInputs(mutateInputs_t func) {
+mutate_inputs = func;
+return *this;
+  }
+  CustomOp& setCreateOpState(createOpState_t func) {
+create_op_state = func;
+return *this;
+  }
+  CustomOp& setForwardStateful(fstateful_t func) {
+fstateful = func;
+return *this;
+  }
 
 Review comment:
   We need to add some error checking in lib_api.h to prevent users from registering the wrong combo of functions. Currently we have two scenarios:
   
   Basic op registration requires:
   - Parse Attrs
   - Infer Type
   - Infer Shape
   - Forward
   
   __optional__
   - Gradient --> Let's rename to Backward (to match Forward)
   - Mutate Inputs
   
   Stateful op registration requires:
   - Parse Attrs
   - Infer Type
   - Infer Shape
   - Create Op State
   - Forward Stateful
   
   __optional__
   - Gradient --> Let's rename to Backward (to match Forward)

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-09-06 Thread GitBox
samskalicky commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r321954968
 
 

 ##
 File path: example/lib_ops/subgraph_lib.cc
 ##
 @@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file subgraph_lib.cc
+ * \brief subgraph operator implementation
+ * library file
+ */
+
+#include <iostream>
+#include "lib_api.h"
+
+MXReturnValue parseAttrs(std::map<std::string, std::string> attrs,
+                         int* num_in, int* num_out) {
+  *num_in = 2;
+  *num_out = 1;
+  return MX_SUCCESS;
+}
+
+MXReturnValue inferType(std::map<std::string, std::string> attrs,
+                        std::vector<MXDType> &intypes,
+                        std::vector<MXDType> &outtypes) {
+  outtypes[0] = intypes[0];
+  return MX_SUCCESS;
+}
+
+MXReturnValue inferShape(std::map<std::string, std::string> attrs,
+                         std::vector<std::vector<unsigned int>> &inshapes,
+                         std::vector<std::vector<unsigned int>> &outshapes) {
+  unsigned n = inshapes[0][0];
+  unsigned k = inshapes[0][1];
+  unsigned kk = inshapes[1][0];
+  unsigned m = inshapes[1][1];
+
+  std::cout << "inshapes[0][0]=" << n << "  inshapes[0][1]=" << k << std::endl;
+  std::cout << "inshapes[1][0]=" << kk << "  inshapes[1][1]=" << m << std::endl;
+
+  if (k != kk)
+    return MX_FAIL;
+
+  outshapes[0].push_back(n);
+  outshapes[0].push_back(m);
+  return MX_SUCCESS;
+}
+
+MXReturnValue mutateInputs(std::map<std::string, std::string> attrs,
+                           std::vector<int> &input_indices) {
+  input_indices.push_back(1);
+  std::cout << "the 1st input is marked as mutate input by library author" << std::endl;
+  return MX_SUCCESS;
+}
+
+class MyStatefulOp : public CustomStatefulOp {
+ public:
+  MyStatefulOp(std::string sym, int count) : subgraph_sym(sym), count(count) {}
+
+  void Forward() {
+    count++;
+  }
+
+  int State() {
+    return count;
+  }
+
+  ~MyStatefulOp() {}
+
+ private:
+  std::string subgraph_sym;
+  int count;
+};
+
+MXReturnValue createOpState(std::map<std::string, std::string> attrs,
+                            CustomStatefulOp** op_inst) {
+  *op_inst = new MyStatefulOp("json", 0);
+  std::cout << "create op state successful" << std::endl;
+  return MX_SUCCESS;
+}
+
+MXReturnValue forwardStateful(CustomStatefulOp* op_inst,
+                              std::vector<MXTensor> inputs,
+                              std::vector<MXTensor> outputs) {
+  MyStatefulOp* my_op_inst = static_cast<MyStatefulOp*>(op_inst);
+  if (my_op_inst == nullptr) {
+    std::cout << "stateful op loading failed" << std::endl;
+    return MX_FAIL;
+  }
+
+  my_op_inst->Forward();
 
 Review comment:
   We need to pass the inputs/outputs to the forward function.




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-09-06 Thread GitBox
samskalicky commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r321954930
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -18,33 +18,627 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
+ * Copyright (c) 2019 by Contributors
  * \file lib_api.h
  * \brief APIs to interact with libraries
+ * This API specifies function prototypes to
+ * register custom ops for library authors
  */
+
 #ifndef MXNET_LIB_API_H_
 #define MXNET_LIB_API_H_
 
+#include <map>
+#include <string>
+#include <vector>
+#include <iostream>
+
+#define MX_LIBRARY_VERSION 1
+
 /*!
- * \brief Following are the APIs implemented in the external library
+ * \brief External Tensor data types
+ */
+enum MXDType {
+  kFloat32 = 0,
+  kFloat64 = 1,
+  kFloat16 = 2,
+  kUint8 = 3,
+  kInt32 = 4,
+  kInt8  = 5,
+  kInt64 = 6,
+};
+
+enum MXReturnValue {
+  MX_FAIL = 0,
+  MX_SUCCESS = 1,
+};
+
+/*!
+ * \brief External Tensor data structure
+ */
+struct MXTensor {
+  MXTensor() : data(nullptr) {}
+
+  MXTensor(void *data, const std::vector<int64_t> &shape, MXDType dtype)
+  : data{data}, shape{shape}, dtype{dtype} {}
+
+  /*!
+   * \brief helper function to cast data pointer
+   */
+  template<typename data_type>
+  data_type* getData() {
+    return reinterpret_cast<data_type*>(data);
+  }
+
+  void *data;  // not owned
+  std::vector<int64_t> shape;
+  MXDType dtype;
+};
+
+/*!
+ * \brief resource malloc function to allocate memory inside fcompute function
+ */
+typedef void* (*xpu_malloc_t)(void*, int);
+
+/*!
+ * \brief Class to provide resource APIs to FCompute
+ */
+class OpResource {
+ public:
+  OpResource(xpu_malloc_t xm, void* _xm) : xpu_malloc(xm), _xpu_malloc(_xm) {}
+
+  /*!
+   * \brief allocate memory controlled by MXNet
+   */
+  void* alloc(int size) {
+return xpu_malloc(_xpu_malloc, size);
+  }
+ private:
+  xpu_malloc_t xpu_malloc;
+  void* _xpu_malloc;
+};
+
+/*!
+ * \brief StatefulOp wrapper class to pass to backend OpState
+ */
+class CustomStatefulOpWrapper {
+ public:
+  CustomStatefulOpWrapper(void* inst) : instance(inst) {}
+  void* get_instance() { return instance; }
+ private:
+  void* instance;
+};
+
+/*!
+ * \brief A prototype interface class for library authors creating stateful ops
+ */
+class CustomStatefulOp {
+ public:
+  virtual void Forward() = 0;
+  virtual ~CustomStatefulOp() = 0;
+};
+
+/*!
+ * Custom Operator function templates
+ */
+typedef MXReturnValue (*fcomp_t)(std::map<std::string, std::string>,
+                                 std::vector<MXTensor>, std::vector<MXTensor>,
+                                 OpResource res);
+typedef MXReturnValue (*parseAttrs_t)(std::map<std::string, std::string>,
+                                      int*, int*);
+typedef MXReturnValue (*inferType_t)(std::map<std::string, std::string>,
+                                     std::vector<MXDType>&, std::vector<MXDType>&);
+typedef MXReturnValue (*inferShape_t)(std::map<std::string, std::string>,
+                                      std::vector<std::vector<unsigned int>>&,
+                                      std::vector<std::vector<unsigned int>>&);
+typedef MXReturnValue (*mutateInputs_t)(std::map<std::string, std::string>,
+                                        std::vector<int>&);
+typedef MXReturnValue (*createOpState_t)(std::map<std::string, std::string>,
+                                         CustomStatefulOp**);
+typedef MXReturnValue (*fstateful_t)(CustomStatefulOp*, std::vector<MXTensor>,
+                                     std::vector<MXTensor>);
+
+/*!
+ * \brief Class to hold custom operator registration
+ */
+class CustomOp {
+ public:
+  explicit CustomOp(const char* op_name) : name(op_name), fcompute(nullptr),
+    fgradient(nullptr), parse_attrs(nullptr), infer_type(nullptr), infer_shape(nullptr),
+    mutate_inputs(nullptr), create_op_state(nullptr), fstateful(nullptr) {}
+  ~CustomOp() {}
+  CustomOp& setForward(fcomp_t fcomp) {
+fcompute = fcomp;
+return *this;
+  }
+  CustomOp& setGradient(fcomp_t fcomp) {
+fgradient = fcomp;
+return *this;
+  }
+  CustomOp& setParseAttrs(parseAttrs_t func) {
+parse_attrs = func;
+return *this;
+  }
+  CustomOp& setInferType(inferType_t func) {
+infer_type = func;
+return *this;
+  }
+  CustomOp& setInferShape(inferShape_t func) {
+infer_shape = func;
+return *this;
+  }
+  CustomOp& setMutateInputs(mutateInputs_t func) {
+mutate_inputs = func;
+return *this;
+  }
+  CustomOp& setCreateOpState(createOpState_t func) {
+create_op_state = func;
+return *this;
+  }
+  CustomOp& setForwardStateful(fstateful_t func) {
+fstateful = func;
+return *this;
+  }
+  /*! \brief operator name */
+  const char* name;
+  /*! \brief operator functions */
+  fcomp_t fcompute;
+  fcomp_t fgradient;
+  parseAttrs_t parse_attrs;
+  inferType_t infer_type;
+  inferShape_t infer_shape;
+  mutateInputs_t mutate_inputs;
+  createOpState_t create_op_state;
+  fstateful_t fstateful;
+};
+
+/*!
+ * \brief Registry class to registers things (ops, properties)
+ *   Singleton class
+ */
+template <class T>
+class Registry {
+ public:
+  /*!
+   * \brief get singleton pointer to class
+   * \returns pointer to class
+   */
+  static Registry* get() {
+

[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16060: [bug] mxnet.ndarray.sparse.norm fallback regression in 1.5.0 and master

2019-09-06 Thread GitBox
eric-haibin-lin commented on issue #16060: [bug] mxnet.ndarray.sparse.norm 
fallback regression in 1.5.0 and master
URL: https://github.com/apache/incubator-mxnet/issues/16060#issuecomment-529069438
 
 
   It looks like the regression happens around April 16th
   ```
   ➜  mxnet git:(take) ✗ pip install mxnet==1.5.0b20190417
   Requirement already satisfied: mxnet==1.5.0b20190417 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (1.5.0b20190417)
   Requirement already satisfied: numpy<1.15.0,>=1.8.2 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
mxnet==1.5.0b20190417) (1.14.6)
   Requirement already satisfied: requests>=2.20.0 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
mxnet==1.5.0b20190417) (2.22.0)
   Requirement already satisfied: graphviz<0.9.0,>=0.8.1 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
mxnet==1.5.0b20190417) (0.8.4)
   Requirement already satisfied: idna<2.9,>=2.5 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
requests>=2.20.0->mxnet==1.5.0b20190417) (2.8)
   Requirement already satisfied: certifi>=2017.4.17 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
requests>=2.20.0->mxnet==1.5.0b20190417) (2019.6.16)
   Requirement already satisfied: chardet<3.1.0,>=3.0.2 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
requests>=2.20.0->mxnet==1.5.0b20190417) (3.0.4)
   Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
requests>=2.20.0->mxnet==1.5.0b20190417) (1.24.2)
   ➜  mxnet git:(take) ✗ python test.py
   [20:54:47] src/operator/contrib/../tensor/../../common/utils.h:450:
   Storage type fallback detected:
   operator = norm
   input storage types = [row_sparse, ]
   output storage types = [default, ]
   params = {}
   context.dev_mask = cpu
   The operator with default storage type will be dispatched for execution. 
You're seeing this warning message because the operator above is unable to 
process the given ndarrays with specified storage types, context and parameter. 
Temporary dense ndarrays are generated in order to execute the operator. This 
does not affect the correctness of the programme. You can set environment 
variable MXNET_STORAGE_FALLBACK_LOG_VERBOSE to 0 to suppress this warning.
   
   [2.]
   
   ➜  mxnet git:(take) ✗ pip install mxnet==1.5.0b20190416
   Collecting mxnet==1.5.0b20190416
 Using cached 
https://files.pythonhosted.org/packages/48/41/99ca13c3173c3631a024ace26e36baedf7d0810c0ac465f22cc2f0af2796/mxnet-1.5.0b20190416-cp37-cp37m-macosx_10_11_x86_64.whl
   Requirement already satisfied: numpy<1.15.0,>=1.8.2 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
mxnet==1.5.0b20190416) (1.14.6)
   Requirement already satisfied: graphviz<0.9.0,>=0.8.1 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
mxnet==1.5.0b20190416) (0.8.4)
   Requirement already satisfied: requests>=2.20.0 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
mxnet==1.5.0b20190416) (2.22.0)
   Requirement already satisfied: certifi>=2017.4.17 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
requests>=2.20.0->mxnet==1.5.0b20190416) (2019.6.16)
   Requirement already satisfied: idna<2.9,>=2.5 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
requests>=2.20.0->mxnet==1.5.0b20190416) (2.8)
   Requirement already satisfied: chardet<3.1.0,>=3.0.2 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
requests>=2.20.0->mxnet==1.5.0b20190416) (3.0.4)
   Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in 
/Users/haibilin/miniconda3/lib/python3.7/site-packages (from 
requests>=2.20.0->mxnet==1.5.0b20190416) (1.24.2)
   Installing collected packages: mxnet
 Found existing installation: mxnet 1.5.0b20190417
   Uninstalling mxnet-1.5.0b20190417:
 Successfully uninstalled mxnet-1.5.0b20190417
   Successfully installed mxnet-1.5.0b20190416
   ➜  mxnet git:(take) ✗ python test.py
   
   [2.]
   
   ```




[GitHub] [incubator-mxnet] yifeim commented on issue #16060: [bug] mxnet.ndarray.sparse.norm fallback regression in 1.5.0 and master

2019-09-06 Thread GitBox
yifeim commented on issue #16060: [bug] mxnet.ndarray.sparse.norm fallback 
regression in 1.5.0 and master
URL: https://github.com/apache/incubator-mxnet/issues/16060#issuecomment-529067609
 
 
   @eric-haibin-lin 




[GitHub] [incubator-mxnet] reminisce commented on issue #16097: [numpy] array ufunc and array function protocols

2019-09-06 Thread GitBox
reminisce commented on issue #16097: [numpy] array ufunc and array function 
protocols
URL: https://github.com/apache/incubator-mxnet/pull/16097#issuecomment-529063949
 
 
   @szha This PR is ready for merge if there are no more comments. Thanks.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-06 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 948a3bc  Bump the publish timestamp.
948a3bc is described below

commit 948a3bc794da06e7d604596f1996c5ea802805cb
Author: mxnet-ci 
AuthorDate: Sat Sep 7 01:29:05 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..2d4411c
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Sep  7 01:29:05 UTC 2019



[GitHub] [incubator-mxnet] anirudh2290 commented on issue #15148: Very Large CPU RAM Memory Consumption (>1GB)

2019-09-06 Thread GitBox
anirudh2290 commented on issue #15148: Very Large CPU RAM Memory Consumption 
(>1GB)
URL: https://github.com/apache/incubator-mxnet/issues/15148#issuecomment-529053611
 
 
   We (@karan6181 and I) looked at a bunch of things to understand where the overhead is coming from. We looked at the resource request and attach-op-resources passes, at the object pool in the threaded engine, and at builds with OpenMP and MKL-DNN turned off, but the overhead is still there. We also looked at the overhead caused by the different arrays in the module API. The overhead is not coming from any of these areas.
   
   We also confirmed that the increase in memory consumption happens at the bind stage and is probably coming from somewhere in the graph executor. The next step is to check this in 1.2.1 to see if the increase happened in 1.3.1.




[GitHub] [incubator-mxnet] gigasquid commented on issue #16106: src/executor/graph_executor.cc:1847: Check failed: arg_names.size() == in_args_map.size() (2 vs. 1)

2019-09-06 Thread GitBox
gigasquid commented on issue #16106: src/executor/graph_executor.cc:1847: Check 
failed: arg_names.size() == in_args_map.size() (2 vs. 1)
URL: https://github.com/apache/incubator-mxnet/issues/16106#issuecomment-529051927
 
 
   @adc17 looks reasonable to me. Thanks for helping on this  




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321942658
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -145,12 +155,16 @@ We have installed MXNet core library. Next, we will 
install MXNet interface pack
 To install the MXNet Python binding navigate to the root of the MXNet folder 
then run the following:
 
 ```bash
-$ cd python
-$ pip install -e .
+virtualenv -p`which python3` mxnet_py3
+source mxnet_py3/bin/activate
+pip install -e python
 ```
-
+First we create a 
[virtualenv](https://docs.python-guide.org/dev/virtualenvs/#lower-level-virtualenv)
 to isolate this installation from our global environment.
 
 Review comment:
   This should not be required for installing mxnet.




[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
marcoabreu commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321942562
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -97,17 +101,23 @@ Install the dependencies, required for MXNet, with the 
following commands:
 ### Build MXNet Shared Library
 After you have installed the dependencies, pull the MXNet source code from Git 
and build MXNet to produce an MXNet library called ```libmxnet.so```. You can 
clone the repository as described in the following code block, or you may try 
the [download links](download.md) for your desired MXNet version.
 
-The file called ```osx.mk``` has the configuration required for building MXNet 
on OS X. First copy ```make/osx.mk``` into ```config.mk```, which is used by 
the ```make``` command:
-
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
-cd ~/mxnet
-cp make/osx.mk ./config.mk
-echo "USE_BLAS = openblas" >> ./config.mk
-echo "ADD_CFLAGS += -I/usr/local/opt/openblas/include" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/opt/openblas/lib" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/lib/graphviz/" >> ./config.mk
-make -j$(sysctl -n hw.ncpu)
+git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
+cd ~/mxnet && pushd .
+mkdir -p build && cd build
+cmake \
+-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+-DCMAKE_C_COMPILER_LAUNCHER=ccache \
+-DUSE_MKL_IF_AVAILABLE=OFF \
+-DUSE_MKLDNN=ON \
+-DUSE_CUDA=OFF \
+-DUSE_OPENMP=OFF \
+-DUSE_OPENCV=ON \
+-DUSE_SIGNAL_HANDLER=ON \
+-DCMAKE_BUILD_TYPE=Debug \
 
 Review comment:
   Why are we making a debug build? Wouldn't a release build be a bit faster 
and improve the first impression? Or maybe release with symbols.




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941620
 
 

 ##
 File path: cmake/cmake_options_osx.yml
 ##
 @@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
 
 Review comment:
   Instead of having this file, can we add a CMakeList_osx.txt instead? I think 
adding another layer of configuration on top of CMakeList.txt adds unnecessary 
maintenance effort.




[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
marcoabreu commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321942448
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -97,17 +101,23 @@ Install the dependencies, required for MXNet, with the 
following commands:
 ### Build MXNet Shared Library
 After you have installed the dependencies, pull the MXNet source code from Git 
and build MXNet to produce an MXNet library called ```libmxnet.so```. You can 
clone the repository as described in the following code block, or you may try 
the [download links](download.md) for your desired MXNet version.
 
-The file called ```osx.mk``` has the configuration required for building MXNet 
on OS X. First copy ```make/osx.mk``` into ```config.mk```, which is used by 
the ```make``` command:
-
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
-cd ~/mxnet
-cp make/osx.mk ./config.mk
-echo "USE_BLAS = openblas" >> ./config.mk
-echo "ADD_CFLAGS += -I/usr/local/opt/openblas/include" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/opt/openblas/lib" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/lib/graphviz/" >> ./config.mk
-make -j$(sysctl -n hw.ncpu)
+git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
+cd ~/mxnet && pushd .
+mkdir -p build && cd build
+cmake \
+-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+-DCMAKE_C_COMPILER_LAUNCHER=ccache \
+-DUSE_MKL_IF_AVAILABLE=OFF \
+-DUSE_MKLDNN=ON \
 
 Review comment:
   Usually users shouldn't be touching CMakeLists.txt.




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941620
 
 

 ##
 File path: cmake/cmake_options_osx.yml
 ##
 @@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
 
 Review comment:
   Instead of having this file, can we add a CMakeList_osx.txt instead? I think 
adding another layer on top of CMakeList.txt adds unnecessary maintenance effort.




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941819
 
 

 ##
 File path: cmake/cmake_options.yml
 ##
 @@ -46,7 +46,8 @@ USE_SIGNAL_HANDLER: "ON" # Print stack traces on segfaults.
 USE_TENSORRT: "OFF" # Enable inference optimization with TensorRT.
 USE_ASAN: "OFF" # Enable Clang/GCC ASAN sanitizers.
 ENABLE_TESTCOVERAGE: "OFF" # Enable compilation with test coverage metric 
output
-CMAKE_BUILD_TYPE: "Debug"
+USE_INT64_TENSOR_SIZE: "OFF" # Use int64_t to represent the total number of 
elements in a tensor
+CMAKE_BUILD_TYPE: "Debug" # Debug | Release | RelWithDebInfo | MinSizeRel
 
 Review comment:
   The danger of this default option is that users are not aware they are 
building a Debug build instead of a Release build. Why not just let users do:
   
   ```
   cmake -DCMAKE_BUILD_TYPE=Debug
   ```
   




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941620
 
 

 ##
 File path: cmake/cmake_options_osx.yml
 ##
 @@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
 
 Review comment:
   Instead of having this file, can we add a CMakeList_osx.txt instead? I think 
adding another layer on top of CMakeList.txt is unnecessary and difficult to 
maintain.




[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
marcoabreu commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321942344
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -89,25 +92,31 @@ Install the dependencies, required for MXNet, with the 
following commands:
# Get pip
easy_install pip
# For visualization of network graphs
-   pip install graphviz
+   pip3 install graphviz
# Jupyter notebook
-   pip install jupyter
+   pip3 install jupyter
 ```
 
 ### Build MXNet Shared Library
 After you have installed the dependencies, pull the MXNet source code from Git 
and build MXNet to produce an MXNet library called ```libmxnet.so```. You can 
clone the repository as described in the following code block, or you may try 
the download links for your desired MXNet version.
 
-The file called ```osx.mk``` has the configuration required for building MXNet 
on OS X. First copy ```make/osx.mk``` into ```config.mk```, which is used by 
the ```make``` command:
-
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
-cd ~/mxnet
-cp make/osx.mk ./config.mk
-echo "USE_BLAS = openblas" >> ./config.mk
-echo "ADD_CFLAGS += -I/usr/local/opt/openblas/include" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/opt/openblas/lib" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/lib/graphviz/" >> ./config.mk
-make -j$(sysctl -n hw.ncpu)
+git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
+cd ~/mxnet && pushd .
+mkdir build && cd build
+cmake \
+-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+-DCMAKE_C_COMPILER_LAUNCHER=ccache \
+-DUSE_MKL_IF_AVAILABLE=OFF \
 
 Review comment:
   @pengzhao-intel are these the optimal compile flags? Especially with regard 
to omp and mkl?
   
   Also, why are we so verbose here? Are our defaults not good enough?




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321942026
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -97,17 +101,23 @@ Install the dependencies, required for MXNet, with the 
following commands:
 ### Build MXNet Shared Library
 After you have installed the dependencies, pull the MXNet source code from Git 
and build MXNet to produce an MXNet library called ```libmxnet.so```. You can 
clone the repository as described in the following code block, or you may try 
the [download links](download.md) for your desired MXNet version.
 
-The file called ```osx.mk``` has the configuration required for building MXNet 
on OS X. First copy ```make/osx.mk``` into ```config.mk```, which is used by 
the ```make``` command:
-
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
-cd ~/mxnet
-cp make/osx.mk ./config.mk
-echo "USE_BLAS = openblas" >> ./config.mk
-echo "ADD_CFLAGS += -I/usr/local/opt/openblas/include" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/opt/openblas/lib" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/lib/graphviz/" >> ./config.mk
-make -j$(sysctl -n hw.ncpu)
+git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
+cd ~/mxnet && pushd .
+mkdir -p build && cd build
+cmake \
+-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+-DCMAKE_C_COMPILER_LAUNCHER=ccache \
+-DUSE_MKL_IF_AVAILABLE=OFF \
+-DUSE_MKLDNN=ON \
 
 Review comment:
   I feel all these flags are unnecessary. We already have a CMakeList.txt; 
isn't that the place for users to switch these flags on and off?




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941914
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -97,17 +101,23 @@ Install the dependencies, required for MXNet, with the 
following commands:
 ### Build MXNet Shared Library
 After you have installed the dependencies, pull the MXNet source code from Git 
and build MXNet to produce an MXNet library called ```libmxnet.so```. You can 
clone the repository as described in the following code block, or you may try 
the [download links](download.md) for your desired MXNet version.
 
-The file called ```osx.mk``` has the configuration required for building MXNet 
on OS X. First copy ```make/osx.mk``` into ```config.mk```, which is used by 
the ```make``` command:
-
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
-cd ~/mxnet
-cp make/osx.mk ./config.mk
-echo "USE_BLAS = openblas" >> ./config.mk
-echo "ADD_CFLAGS += -I/usr/local/opt/openblas/include" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/opt/openblas/lib" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/lib/graphviz/" >> ./config.mk
-make -j$(sysctl -n hw.ncpu)
+git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
+cd ~/mxnet && pushd .
+mkdir -p build && cd build
+cmake \
+-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+-DCMAKE_C_COMPILER_LAUNCHER=ccache \
+-DUSE_MKL_IF_AVAILABLE=OFF \
+-DUSE_MKLDNN=ON \
+-DUSE_CUDA=OFF \
+-DUSE_OPENMP=OFF \
+-DUSE_OPENCV=ON \
+-DUSE_SIGNAL_HANDLER=ON \
+-DCMAKE_BUILD_TYPE=Debug \
+-GNinja ..
+ninja
+popd
 
 Review comment:
   Why do we need `pushd .` and `popd`?




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941819
 
 

 ##
 File path: cmake/cmake_options.yml
 ##
 @@ -46,7 +46,8 @@ USE_SIGNAL_HANDLER: "ON" # Print stack traces on segfaults.
 USE_TENSORRT: "OFF" # Enable inference optimization with TensorRT.
 USE_ASAN: "OFF" # Enable Clang/GCC ASAN sanitizers.
 ENABLE_TESTCOVERAGE: "OFF" # Enable compilation with test coverage metric 
output
-CMAKE_BUILD_TYPE: "Debug"
+USE_INT64_TENSOR_SIZE: "OFF" # Use int64_t to represent the total number of 
elements in a tensor
+CMAKE_BUILD_TYPE: "Debug" # Debug | Release | RelWithDebInfo | MinSizeRel
 
 Review comment:
   The danger of this default option is that users are not aware they are 
building a Debug build instead of a Release build.




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941620
 
 

 ##
 File path: cmake/cmake_options_osx.yml
 ##
 @@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
 
 Review comment:
   Instead of having this file, can we add a CMakeList_osx.txt instead? I think 
adding another layer on top of CMakeList.txt is unnecessary.




[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
marcoabreu commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941326
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -145,12 +155,16 @@ We have installed MXNet core library. Next, we will 
install MXNet interface pack
 To install the MXNet Python binding navigate to the root of the MXNet folder 
then run the following:
 
 ```bash
-$ cd python
-$ pip install -e .
+virtualenv -p`which python3` mxnet_py3
 
 Review comment:
   While I generally agree that virtualenv is a good practice, it adds another 
dependency.
   
   These guides are for the very basic user to get something running. I think 
this PR overcomplicates a lot of things. Adding the cmake compilation? Fine! 
Everything else? Please leave it as it is.




[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
marcoabreu commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321941068
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -100,14 +104,21 @@ After you have installed the dependencies, pull the 
MXNet source code from Git a
 The file called ```osx.mk``` has the configuration required for building MXNet 
on OS X. First copy ```make/osx.mk``` into ```config.mk```, which is used by 
the ```make``` command:
 
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
-cd ~/mxnet
-cp make/osx.mk ./config.mk
-echo "USE_BLAS = openblas" >> ./config.mk
-echo "ADD_CFLAGS += -I/usr/local/opt/openblas/include" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/opt/openblas/lib" >> ./config.mk
-echo "ADD_LDFLAGS += -L/usr/local/lib/graphviz/" >> ./config.mk
-make -j$(sysctl -n hw.ncpu)
+git clone --recursive https://github.com/apache/incubator-mxnet ~/mxnet
+cd ~/mxnet
+mkdir build && cd build
+cmake \
+-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
 
 Review comment:
   These should be bare minimum steps. Requiring ccache is an advanced thing 
and I'd prefer to not have it in the basic instructions.




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #14535: [DOC] Updated install 
instructions for mac
URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r321939887
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -77,6 +77,10 @@ Install the dependencies, required for MXNet, with the 
following commands:
 
 ```bash
brew update
+brew install python3
 
 Review comment:
   Should this be python instead of python3? Also, it seems the community has 
not reached an agreement on retiring python2 support yet.




[GitHub] [incubator-mxnet] karan6181 commented on issue #15148: Very Large CPU RAM Memory Consumption (>1GB)

2019-09-06 Thread GitBox
karan6181 commented on issue #15148: Very Large CPU RAM Memory Consumption 
(>1GB)
URL: 
https://github.com/apache/incubator-mxnet/issues/15148#issuecomment-529044425
 
 
   - I see roughly 2 GB of CPU memory usage when running the user script and also 
with the minimal reproducible script below using Gluon.
   
   - ```python
 import mxnet as mx
 import gluoncv
 from time import sleep
 
  net = gluoncv.model_zoo.get_model('cifar_resnet20_v1', pretrained=True, ctx=mx.gpu())
 net.hybridize()
 
  # print(net.summary(mx.nd.ones((1, 3, 28, 28), ctx=mx.gpu())))
 sleep(20)
 ```
   
   - I tried different MXNet versions, but the CPU memory usage keeps increasing. 
Not much difference in memory consumption with or without MKL.
   
   - ```bash
 # P3.16xLarge
 # DLAMI V24
 
 MXNet-cu92: 1.3.1: 2.15 GB
 MXNet-cu92: 1.4.1: 2.3 GB
 MXNet-cu92: 1.5.0: 2.55 GB
 MXNet-cu92: 1.6.0(1.6.0b20190906): 2.9 GB 
 ```
   
   - If I run the same script above in CPU context, the memory usage is 
approx. 100 MB.




[GitHub] [incubator-mxnet] apeforest merged pull request #15746: [MXNET-978] Higher Order Gradient Support `clip`, `dropout`.

2019-09-06 Thread GitBox
apeforest merged pull request #15746: [MXNET-978] Higher Order Gradient Support 
`clip`, `dropout`.
URL: https://github.com/apache/incubator-mxnet/pull/15746
 
 
   




[incubator-mxnet] branch master updated (c928392 -> 24f0a10)

2019-09-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c928392  [MXNET-978] Higher Order Gradient Support `sqrt`, `cbrt`. 
(#15474)
 add 24f0a10  [MXNET-978] Higher Order Gradient Support `clip`, `dropout`. 
(#15746)

No new revisions were added by this update.

Summary of changes:
 src/operator/nn/dropout.cc  |  4 +++-
 src/operator/tensor/matrix_op.cc|  3 ++-
 tests/python/unittest/test_higher_order_grad.py | 30 +
 3 files changed, 35 insertions(+), 2 deletions(-)



[incubator-mxnet] branch master updated (6de6848 -> c928392)

2019-09-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 6de6848  Not to search for coverage files when none exist (#16107)
 add c928392  [MXNET-978] Higher Order Gradient Support `sqrt`, `cbrt`. 
(#15474)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/elemwise_unary_op_pow.cc| 71 -
 tests/python/unittest/test_higher_order_grad.py | 40 ++
 2 files changed, 109 insertions(+), 2 deletions(-)



[GitHub] [incubator-mxnet] apeforest merged pull request #15474: [MXNET-978] Higher Order Gradient Support `sqrt`, `cbrt`.

2019-09-06 Thread GitBox
apeforest merged pull request #15474: [MXNET-978] Higher Order Gradient Support 
`sqrt`, `cbrt`.
URL: https://github.com/apache/incubator-mxnet/pull/15474
 
 
   




[GitHub] [incubator-mxnet] adc17 commented on issue #16106: src/executor/graph_executor.cc:1847: Check failed: arg_names.size() == in_args_map.size() (2 vs. 1)

2019-09-06 Thread GitBox
adc17 commented on issue #16106: src/executor/graph_executor.cc:1847: Check 
failed: arg_names.size() == in_args_map.size() (2 vs. 1)
URL: 
https://github.com/apache/incubator-mxnet/issues/16106#issuecomment-529039677
 
 
   @ZhennanQin The Clojure test cases that failed are `test-maximum` and 
`test-minimum`: 
https://github.com/apache/incubator-mxnet/blob/6de684825130c28dfa75f2a707aeaed64a4340e5/contrib/clojure-package/test/org/apache/clojure_mxnet/operator_test.clj#L409-L441
   
   Changing the duplicate `"data"` strings to `"data1"` and `"data2"` resolves 
the failures.
   
   @gigasquid I think the correct fix is to update the tests + add an error 
message for this scenario? I will submit a PR if you agree.
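For anyone hitting this check, the underlying cause can be sketched without MXNet: the executor keys input arrays by argument name, so two inputs that both use the name "data" collapse into a single map entry and the size comparison fails. A toy Python reconstruction (the function and names here are illustrative, not the MXNet API):

```python
def bind_args(arg_names, in_args):
    # Mimic graph_executor.cc: build a name -> array map from the supplied inputs.
    in_args_map = dict(zip(arg_names, in_args))  # duplicate names collapse here
    if len(arg_names) != len(in_args_map):
        raise ValueError("Check failed: arg_names.size() == in_args_map.size() "
                         "(%d vs. %d)" % (len(arg_names), len(in_args_map)))
    return in_args_map

bind_args(["data1", "data2"], [1, 2])    # distinct names: fine
try:
    bind_args(["data", "data"], [1, 2])  # duplicate names: the reported failure
except ValueError as err:
    message = str(err)
```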




[GitHub] [incubator-mxnet] zhreshold commented on issue #16114: improve dataloader signals and messages

2019-09-06 Thread GitBox
zhreshold commented on issue #16114: improve dataloader signals and messages
URL: https://github.com/apache/incubator-mxnet/pull/16114#issuecomment-529036509
 
 
   @szha, @eric-haibin-lin @sxjscience for review




[GitHub] [incubator-mxnet] zhreshold opened a new pull request #16114: improve dataloader signals and messages

2019-09-06 Thread GitBox
zhreshold opened a new pull request #16114: improve dataloader signals and 
messages
URL: https://github.com/apache/incubator-mxnet/pull/16114
 
 
   ## Description ##
   Improve the dataloader user experience.
   With this PR, dataloaders:
   
   - Respond more quickly to termination with Ctrl + C
   - Show helpful messages if exceptions are raised in worker processes 
(dataloaders used to produce unreadable messages)
   - Are less likely to hang forever, since a timeout is added to the fetch logic
   
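The fetch-timeout idea can be illustrated with the standard library alone (a concept sketch, not the actual Gluon implementation; Gluon's loader uses worker processes, while a thread keeps this sketch portable): the parent waits for a worker's batch with a deadline, so a dead or stuck worker surfaces as a timeout error instead of a silent hang.

```python
import queue
import threading

def worker(idx, out_q):
    # Stand-in for the real per-worker batch-loading work.
    out_q.put([idx * 2, idx * 2 + 1])

out_q = queue.Queue()
threading.Thread(target=worker, args=(3, out_q), daemon=True).start()
try:
    # Bounded wait: raises queue.Empty instead of blocking forever
    # if the worker dies before producing the batch.
    batch = out_q.get(timeout=10)
except queue.Empty:
    batch = None  # surface the failure to the caller instead of hanging
```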
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16113: [Flaky] test_mkldnn.test_activation

2019-09-06 Thread GitBox
mxnet-label-bot commented on issue #16113: [Flaky] test_mkldnn.test_activation
URL: 
https://github.com/apache/incubator-mxnet/issues/16113#issuecomment-529035181
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Test, Flaky




[GitHub] [incubator-mxnet] reminisce opened a new issue #16113: [Flaky] test_mkldnn.test_activation

2019-09-06 Thread GitBox
reminisce opened a new issue #16113: [Flaky] test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/issues/16113
 
 
   ```
    ======================================================================
    
    FAIL: test_mkldnn.test_activation
    
    ----------------------------------------------------------------------
   
   Traceback (most recent call last):
   
 File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in 
runTest
   
   self.test(*self.arg)
   
 File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in 
test_new
   
   orig_test(*args, **kwargs)
   
 File "/work/mxnet/tests/python/mkl/test_mkldnn.py", line 350, in 
test_activation
   
   check_activation_training(stype)
   
 File "/work/mxnet/tests/python/mkl/test_mkldnn.py", line 346, in 
check_activation_training
   
   check_numeric_gradient(test, in_location, numeric_eps=1e-5, rtol=0.16, 
atol=1e-4)
   
 File "/work/mxnet/python/mxnet/test_utils.py", line 1015, in 
check_numeric_gradient
   
   ("NUMERICAL_%s"%name, "BACKWARD_%s"%name))
   
 File "/work/mxnet/python/mxnet/test_utils.py", line 533, in 
assert_almost_equal
   
   raise AssertionError(msg)
   
   AssertionError: 
   
   Items are not equal:
   
   Error 1.609589 exceeds tolerance rtol=0.16, atol=0.000100.  Location of 
maximum error:(0, 1, 0, 0), a=0.523432, b=0.705208
   
NUMERICAL_data: array(0.1385808 , 0.],
   
[0.923872  , 0.58710575]],
   
   ...
   
BACKWARD_data: array(0.13833651, 0.],
   
[0.9269223 , 0.58610183]],
   
   ...
   ```
   @PatricZhao Any idea? Thanks.
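For readers unfamiliar with the failing check: `check_numeric_gradient` compares a central-difference estimate of the gradient against the operator's backward pass, within the given `rtol`/`atol`. A minimal sketch of that comparison in plain NumPy (a hypothetical helper, not MXNet's actual implementation):

```python
import numpy as np

def numeric_grad(f, x, eps=1e-5):
    """Central-difference estimate of df/dx, element by element."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        fp = f(x)                      # f evaluated at x + eps in one coordinate
        x[idx] = orig - eps
        fm = f(x)                      # f evaluated at x - eps in one coordinate
        x[idx] = orig                  # restore the input
        grad[idx] = (fp - fm) / (2 * eps)
        it.iternext()
    return grad

# Example: f(x) = sum(x**2); the analytic gradient is 2*x.
x = np.array([[0.5, -1.0], [2.0, 0.25]])
num = numeric_grad(lambda a: (a ** 2).sum(), x)
ana = 2 * x
assert np.allclose(num, ana, rtol=0.16, atol=1e-4)
```

The flaky failure above is this kind of assertion tripping: the two estimates disagree beyond the tolerance at one location.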




[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #15808: Add option to choose between OMP implementations

2019-09-06 Thread GitBox
marcoabreu commented on a change in pull request #15808: Add option to choose 
between OMP implementations
URL: https://github.com/apache/incubator-mxnet/pull/15808#discussion_r321929537
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -432,14 +432,13 @@ endif()
 
 # ---[ OpenMP
 if(USE_OPENMP)
-  find_package(OpenMP REQUIRED)
   # This should build on Windows, but there's some problem and I don't have a 
Windows box, so
   # could a Windows user please fix?
-  if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/openmp/CMakeLists.txt
- AND SYSTEM_ARCHITECTURE STREQUAL "x86_64"
- AND NOT MSVC
- AND NOT CMAKE_CROSSCOMPILING)
-
+  if(USE_OPENMP STREQUAL "BUNDLED" AND EXISTS 
${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/openmp/CMakeLists.txt
 
 Review comment:
   The first check and the others should be separate to throw appropriate error 
messages




[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #15808: Add option to choose between OMP implementations

2019-09-06 Thread GitBox
marcoabreu commented on a change in pull request #15808: Add option to choose 
between OMP implementations
URL: https://github.com/apache/incubator-mxnet/pull/15808#discussion_r321929582
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -452,14 +451,18 @@ if(USE_OPENMP)
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
 add_definitions(-DMXNET_USE_OPENMP=1)
-  else()
+  elseif(USE_OPENMP STREQUAL "PLATFORM" OR USE_OPENMP STREQUAL "ON")
 
 Review comment:
   What's the difference between ON and PLATFORM?




[GitHub] [incubator-mxnet] access2rohit opened a new pull request #16112: [DO NOT MERGE]Revert custom profiler

2019-09-06 Thread GitBox
access2rohit opened a new pull request #16112: [DO NOT MERGE]Revert custom 
profiler
URL: https://github.com/apache/incubator-mxnet/pull/16112
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] apeforest commented on issue #15808: Add option to choose between OMP implementations

2019-09-06 Thread GitBox
apeforest commented on issue #15808: Add option to choose between OMP 
implementations
URL: https://github.com/apache/incubator-mxnet/pull/15808#issuecomment-529030769
 
 
   > They are for CI, a change in dmlc is needed first:
   > 
   > [dmlc/dmlc-core#558](https://github.com/dmlc/dmlc-core/pull/558)
   
   Why is it needed for this PR?




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15808: Add option to choose between OMP implementations

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #15808: Add option to choose 
between OMP implementations
URL: https://github.com/apache/incubator-mxnet/pull/15808#discussion_r321926506
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -452,14 +451,18 @@ if(USE_OPENMP)
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
 add_definitions(-DMXNET_USE_OPENMP=1)
-  else()
+  elseif(USE_OPENMP STREQUAL "PLATFORM" OR USE_OPENMP STREQUAL "ON")
+find_package(OpenMP REQUIRED)
+message("Using platform provided OpenMP")
 if(OPENMP_FOUND)
   set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
   set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} 
${OpenMP_EXE_LINKER_FLAGS}")
   set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} 
${OpenMP_EXE_LINKER_FLAGS}")
   add_definitions(-DMXNET_USE_OPENMP=1)
 endif()
+  else()
+message(FATAL_ERROR "USE_OPENMP takes values [PLATFORM, BUNDLED, OFF]")
 
 Review comment:
   "USE_OPENMP takes values [PLATFORM, BUNDLED, OFF, ON]")?




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15808: Add option to choose between OMP implementations

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #15808: Add option to choose 
between OMP implementations
URL: https://github.com/apache/incubator-mxnet/pull/15808#discussion_r321926437
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -23,7 +23,7 @@ mxnet_option(USE_CUDA "Build with CUDA support"  
 ON)
 mxnet_option(USE_OLDCMAKECUDA "Build with old cmake cuda" OFF)
 mxnet_option(USE_NCCL "Use NVidia NCCL with CUDA" OFF)
 mxnet_option(USE_OPENCV   "Build with OpenCV support" ON)
-mxnet_option(USE_OPENMP   "Build with Openmp support" ON)
+mxnet_option(USE_OPENMP   "Build with Openmp support" ON) # OFF | ON | 
PLATFORM | BUNDLED
 
 Review comment:
   What's the difference between ON and the rest? Can you add some comments 
here?




[GitHub] [incubator-mxnet] sxjscience commented on issue #16111: Incompatible data type for numpy ops?

2019-09-06 Thread GitBox
sxjscience commented on issue #16111: Incompatible data type for numpy ops?
URL: 
https://github.com/apache/incubator-mxnet/issues/16111#issuecomment-529019794
 
 
   Link this with https://github.com/apache/incubator-mxnet/issues/16048




[GitHub] [incubator-mxnet] sxjscience commented on issue #16111: Incompatible data type for numpy ops?

2019-09-06 Thread GitBox
sxjscience commented on issue #16111: Incompatible data type for numpy ops?
URL: 
https://github.com/apache/incubator-mxnet/issues/16111#issuecomment-529017466
 
 
   @reminisce I think it's reasonable.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-06 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a108875  Bump the publish timestamp.
a108875 is described below

commit a10887577808f46452bbc0fdea3a3cefdf51f31b
Author: mxnet-ci 
AuthorDate: Fri Sep 6 21:09:57 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..35ddab6
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Sep  6 21:09:57 UTC 2019



[GitHub] [incubator-mxnet] apeforest commented on issue #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-09-06 Thread GitBox
apeforest commented on issue #16023: Revert "Refactor LibraryInitializer so 
it's thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023#issuecomment-529011468
 
 
   https://github.com/apache/incubator-mxnet/pull/16040 already reverted the 
unintentional change in the original PR. So I am closing this one.




[GitHub] [incubator-mxnet] apeforest closed pull request #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-09-06 Thread GitBox
apeforest closed pull request #16023: Revert "Refactor LibraryInitializer so 
it's thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023
 
 
   




[GitHub] [incubator-mxnet] reminisce commented on issue #16111: Incompatible data type for numpy ops?

2019-09-06 Thread GitBox
reminisce commented on issue #16111: Incompatible data type for numpy ops?
URL: 
https://github.com/apache/incubator-mxnet/issues/16111#issuecomment-529010437
 
 
   Yeah, we are already aware of this and have been working on it. It requires 
us to rewrite the kernels of those ops, which will become much less tedious 
once we use the TVM op module developed by @yzhliu . One catch here is that we 
may not be able to provide a backward compute function for int input types, as 
the backward pass shares the same infer-type function as the forward. So for 
those ops taking integer inputs, we would give an error message that gradient 
calculation is not supported and ask users to use floats as inputs. What do you 
guys think?




[GitHub] [incubator-mxnet] cjolivier01 removed a comment on issue #15167: Pointwise fusion for GPU

2019-09-06 Thread GitBox
cjolivier01 removed a comment on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-529008979
 
 
   I am removing my block on this PR and letting this "auto-abuse" in, since I 
am in a similar industry to NVidia and don't wish to make it look like I am 
blocking a performance improvement for NVidia GPUs.  I hope that the 
auto-abuse will be fixed before merging, however.




[GitHub] [incubator-mxnet] cjolivier01 commented on issue #15167: Pointwise fusion for GPU

2019-09-06 Thread GitBox
cjolivier01 commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-529008979
 
 
   I am removing my block on this PR and letting this "auto-abuse" in, since I 
am in a similar industry to NVidia and don't wish to make it look like I am 
blocking a performance improvement for NVidia GPUs.  I hope that the 
auto-abuse will be fixed before merging, however.




[GitHub] [incubator-mxnet] larroy commented on issue #15808: Add option to choose between OMP implementations

2019-09-06 Thread GitBox
larroy commented on issue #15808: Add option to choose between OMP 
implementations
URL: https://github.com/apache/incubator-mxnet/pull/15808#issuecomment-529008630
 
 
   @lebeg please approve then




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16111: Incompatible data type for numpy ops?

2019-09-06 Thread GitBox
eric-haibin-lin commented on issue #16111: Incompatible data type for numpy ops?
URL: 
https://github.com/apache/incubator-mxnet/issues/16111#issuecomment-529007400
 
 
   @reminisce @sxjscience 




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16111: Incompatible data type for numpy ops?

2019-09-06 Thread GitBox
mxnet-label-bot commented on issue #16111: Incompatible data type for numpy ops?
URL: 
https://github.com/apache/incubator-mxnet/issues/16111#issuecomment-529006579
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Feature




[GitHub] [incubator-mxnet] eric-haibin-lin opened a new issue #16111: Incompatible data type for numpy ops?

2019-09-06 Thread GitBox
eric-haibin-lin opened a new issue #16111: Incompatible data type for numpy ops?
URL: https://github.com/apache/incubator-mxnet/issues/16111
 
 
   ```
   >>> mx.np.log(mx.np.array([1], dtype='int'))
   array([0], dtype=int64)
   
   >>> np.log(np.array([10]))
   array([2.30258509])
   ```
   
   Many math operators in numpy (such as `log`) do not return integer results. 
However, `mx.np` does. Is this expected? 
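For comparison, official NumPy promotes integer inputs to float64 for ufuncs like `log`, which is easy to verify directly (plain NumPy, no MXNet required):

```python
import numpy as np

# np.log applied to an integer array promotes the result to float64
# before computing, instead of truncating back to the input dtype.
x = np.array([1, 10], dtype='int64')
y = np.log(x)

print(y.dtype)   # float64, not int64
print(y[0])      # 0.0 == log(1)
```

This is the behaviour `mx.np` would need to match for NumPy compatibility.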




[GitHub] [incubator-mxnet] apeforest commented on issue #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-09-06 Thread GitBox
apeforest commented on issue #15811: [MXNET-891] Support tuple of scales in 
upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#issuecomment-529003895
 
 
   Ping @huangzhiyuan for review.




[incubator-mxnet] branch master updated (259f6bb -> 6de6848)

2019-09-06 Thread yuxihu
This is an automated email from the ASF dual-hosted git repository.

yuxihu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 259f6bb  Revert accidental change to CMakelists (#16040)
 add 6de6848  Not to search for coverage files when none exist (#16107)

No new revisions were added by this update.

Summary of changes:
 tests/nightly/JenkinsfileForBinaries | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[GitHub] [incubator-mxnet] yuxihu merged pull request #16107: Skip coverage files find for nightly tests

2019-09-06 Thread GitBox
yuxihu merged pull request #16107: Skip coverage files find for nightly tests
URL: https://github.com/apache/incubator-mxnet/pull/16107
 
 
   




[GitHub] [incubator-mxnet] apeforest merged pull request #16040: Revert accidental change to CMakelists

2019-09-06 Thread GitBox
apeforest merged pull request #16040: Revert accidental change to CMakelists
URL: https://github.com/apache/incubator-mxnet/pull/16040
 
 
   




[incubator-mxnet] branch master updated (255dff0 -> 259f6bb)

2019-09-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 255dff0  [MXNET-978] Higher Order Gradient Support `arctan`, 
`arctanh`, `radians`. (#15531)
 add 259f6bb  Revert accidental change to CMakelists (#16040)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)



[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-06 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c60e023  Bump the publish timestamp.
c60e023 is described below

commit c60e0234929ca90d90149810564592b81ec34cbf
Author: mxnet-ci 
AuthorDate: Fri Sep 6 19:30:01 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..df98cef
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Sep  6 19:30:01 UTC 2019



[incubator-mxnet] branch master updated (d85a2d0 -> 255dff0)

2019-09-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d85a2d0  [DOC] Fix doc for nn.Embedding, nn.Dense and nd.Embedding 
(#15869)
 add 255dff0  [MXNET-978] Higher Order Gradient Support `arctan`, 
`arctanh`, `radians`. (#15531)

No new revisions were added by this update.

Summary of changes:
 src/nnvm/node_op_util.h | 76 +
 src/operator/tensor/elemwise_unary_op_trig.cc   | 63 +++-
 tests/python/unittest/test_higher_order_grad.py | 46 +++
 3 files changed, 182 insertions(+), 3 deletions(-)
 create mode 100644 src/nnvm/node_op_util.h



[GitHub] [incubator-mxnet] apeforest merged pull request #15531: [MXNET-978] Higher Order Gradient Support `arctan`, `arctanh`, `radians`.

2019-09-06 Thread GitBox
apeforest merged pull request #15531: [MXNET-978] Higher Order Gradient Support 
`arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531
 
 
   




[GitHub] [incubator-mxnet] apeforest edited a comment on issue #16104: Faster Transpose 2D

2019-09-06 Thread GitBox
apeforest edited a comment on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528957235
 
 
   > Here numpy transpose calls the same TransposeImpl function that I have 
already handled.
   
   Yes, please check why your code broke their unit test.




[GitHub] [incubator-mxnet] apeforest commented on issue #16104: Faster Transpose 2D

2019-09-06 Thread GitBox
apeforest commented on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528957235
 
 
   > Here numpy transpose calls the same TransposeImpl function that I have 
already handled.
   
   Yes, please check why your code breaks their unit test.




[GitHub] [incubator-mxnet] reminisce commented on issue #16038: Delete printing messages in unit tests

2019-09-06 Thread GitBox
reminisce commented on issue #16038: Delete printing messages in unit tests
URL: https://github.com/apache/incubator-mxnet/pull/16038#issuecomment-528956965
 
 
   Cleaned up in another PR.




[GitHub] [incubator-mxnet] reminisce closed pull request #16038: Delete printing messages in unit tests

2019-09-06 Thread GitBox
reminisce closed pull request #16038: Delete printing messages in unit tests
URL: https://github.com/apache/incubator-mxnet/pull/16038
 
 
   




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #16104: Faster Transpose 2D

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321850068
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -257,6 +257,29 @@ struct TransposeParam : public 
dmlc::Parameter {
   }
 };
 
+
+template
+MSHADOW_XINLINE void Transpose2D(DType *in, DType *out, index_t shape_0, 
index_t shape_1) {
+// ensure cache line hits and prevent cache miss for any configuration
+index_t blocksize = 32;
+index_t n = shape_0;
+index_t p = shape_1;
+
+for (index_t i = 0; i < n; i += blocksize) {
+  #pragma omp parallel for
 
 Review comment:
   Why OMP parallel for the inner loop only?




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #16104: Faster Transpose 2D

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321849931
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -257,6 +257,29 @@ struct TransposeParam : public 
dmlc::Parameter {
   }
 };
 
+
+template
+MSHADOW_XINLINE void Transpose2D(DType *in, DType *out, index_t shape_0, 
index_t shape_1) {
+// ensure cache line hits and prevent cache miss for any configuration
+index_t blocksize = 32;
+index_t n = shape_0;
+index_t p = shape_1;
+
+for (index_t i = 0; i < n; i += blocksize) {
 
 Review comment:
   How is this blocksize decided? Is it platform dependent?
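For intuition on the blocksize question: loop tiling transposes one small tile at a time so that both the contiguous reads and the strided writes stay within the cache. A minimal pure-Python sketch of the same access pattern (illustrative only; the blocksize of 32 in the PR would normally be tuned per platform and element size):

```python
def transpose2d_blocked(src, rows, cols, blocksize=32):
    """Cache-blocked transpose of a rows x cols matrix stored as a flat list."""
    dst = [0] * (rows * cols)
    for i0 in range(0, rows, blocksize):
        for j0 in range(0, cols, blocksize):
            # Transpose one blocksize x blocksize tile; min() handles edges.
            for i in range(i0, min(i0 + blocksize, rows)):
                for j in range(j0, min(j0 + blocksize, cols)):
                    dst[j * rows + i] = src[i * cols + j]
    return dst

# Small check against a naive transpose.
rows, cols = 5, 7
src = list(range(rows * cols))
naive = [src[i * cols + j] for j in range(cols) for i in range(rows)]
assert transpose2d_blocked(src, rows, cols, blocksize=3) == naive
```

The real kernel would parallelize over tiles (e.g. the outer tile loop) rather than the innermost element loop; this sketch only shows the tiling order.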




[GitHub] [incubator-mxnet] HahTK edited a comment on issue #16108: Subgraph API creates duplicate inputs and outputs

2019-09-06 Thread GitBox
HahTK edited a comment on issue #16108: Subgraph API creates duplicate inputs 
and outputs
URL: 
https://github.com/apache/incubator-mxnet/issues/16108#issuecomment-528951277
 
 
   Two things to add:
   
   1. A temporary fix for the output duplication has been tested (code below).
   This fix fits into the current function signatures but has to redo similar 
computation in two spots.
   2. A better fix would be to introduce a map for output_entries and 
input_entries to allow 1-many and many-1 mappings between the subgraph and the 
main graph, conceptually similar to the fix in (1).
   
   Code : 
   src/operator/subgraph/partition_graph.cc:CreateSubgraphNode()
   ```
 nnvm::Symbol sym;
 nnvm::NodeEntryEqual node_equal;
 size_t idx = 0;
 sym.outputs.resize(output_entries.size());
   
 //only add unique output_entries to sym.outputs
 //relies on output_entries being pre-sorted
 for (size_t i = 0; i < output_entries.size(); ++i) {
   if (0 == i) {
 sym.outputs[idx] = *output_entries[i];
   } else {
 if (!node_equal(*output_entries[i-1], *output_entries[i])) {
   idx++;
   sym.outputs[idx] = *output_entries[i];
 } //else skip over dupe entry
   }
 }
 sym.outputs.resize(idx+1);
   ```
   
   In src/operator/subgraph/subgraph_property.h : ConnectSubgraphOutputs()
   ```
   nnvm::NodeEntryEqual node_equal;
   nnvm::NodeEntry prevNodeEntry;
   uint32_t idx = 0;
   
   //increment NodeEntry index only if the output_entry is unique
   for (size_t i = 0; i < output_entries->size(); ++i) {
 if (0 != i ) {
   if (!node_equal(prevNodeEntry, *output_entries->at(i))) {
 idx++;
   }
 }
 prevNodeEntry = *output_entries->at(i); //need a copy
 *output_entries->at(i) = nnvm::NodeEntry{subgraph_node, idx, 0};
   }
   ```
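The two snippets above implement the same pattern: walk a pre-sorted list and advance the output index only when an entry differs from its predecessor. A compact, hypothetical Python sketch of that dedup-with-index-map idea (not the actual C++ fix):

```python
def dedup_sorted(entries):
    """Map each entry of a pre-sorted list to the index of its first occurrence."""
    unique = []     # one slot per distinct entry, in order
    index_of = []   # index_of[i] = output slot assigned to entries[i]
    for i, e in enumerate(entries):
        if i == 0 or e != entries[i - 1]:
            unique.append(e)          # new unique output, advance the index
        index_of.append(len(unique) - 1)
    return unique, index_of

# Duplicated subgraph outputs collapse to one slot each,
# while every original entry still knows which slot it maps to.
unique, index_of = dedup_sorted(['a', 'a', 'b', 'c', 'c', 'c'])
assert unique == ['a', 'b', 'c']
assert index_of == [0, 0, 1, 2, 2, 2]
```

In the proposed fix, `unique` plays the role of the resized `sym.outputs` and `index_of` the per-entry `idx` used when rewiring `output_entries` to the subgraph node.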




[GitHub] [incubator-mxnet] HahTK edited a comment on issue #16108: Subgraph API creates duplicate inputs and outputs

2019-09-06 Thread GitBox
HahTK edited a comment on issue #16108: Subgraph API creates duplicate inputs 
and outputs
URL: 
https://github.com/apache/incubator-mxnet/issues/16108#issuecomment-528951277
 
 
   Two things to add : 
   
   1. A temporary fix has been tested for fixing the output duplication. (code 
below)
   This fix fits things into the current function signatures and had to redo 
similar computation in 2 spots
   2. A better fix would be to introduce a map for output_entries and 
input_entries to allow 1-many and many-1 mappings between the subgraph and the 
maingraph. Conceptually, similar with the fix in (1)
   
   Code : 
   src/operator/subgraph/partition_graph.cc:CreateSubgraphNode()
   ```
 nnvm::Symbol sym;
 nnvm::NodeEntryEqual node_equal;
 size_t idx = 0;
 sym.outputs.resize(output_entries.size());
   
 //only add unique output_entries to sym.outputs
 //relies on output_entries being pre-sorted
 for (size_t i = 0; i < output_entries.size(); ++i) {
   if (0 == i) {
 sym.outputs[idx] = *output_entries[i];
   } else {
 if (!node_equal(*output_entries[i-1], *output_entries[i])) {
   idx++;
   sym.outputs[idx] = *output_entries[i];
 } //else skip over dupe entry
   }
 }
 sym.outputs.resize(idx+1);
   ```
   
   In src/operator/subgraph/subgraph_property.h : ConnectSubgraphOutputs()
   ```
   nnvm::NodeEntryEqual node_equal;
   nnvm::NodeEntry prevNodeEntry;
   uint32_t idx = 0;
   
   //increment NodeEntry index only if the output_entry is unique
   for (size_t i = 0; i < output_entries->size(); ++i) {
 if (0 != i ) {
   if (!node_equal(prevNodeEntry, *output_entries->at(i))) {
 idx++;
   }
 }
 prevNodeEntry = *output_entries->at(i); //need a copy
 *output_entries->at(i) = nnvm::NodeEntry{subgraph_node, idx, 0};
   }
   ```
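   For readers following along, the de-duplication scheme in the two C++ 
snippets above can be modeled in a few lines of Python: given a pre-sorted 
entry list (duplicates adjacent), keep only the unique entries and remap 
every original position to the index of its unique representative. The 
helper name is hypothetical; this is a sketch of the idea, not MXNet code.

```python
# Sketch of the dedup-and-remap scheme above (hypothetical helper name).
# `unique` plays the role of sym.outputs in CreateSubgraphNode(); `remap`
# plays the role of the idx rewriting in ConnectSubgraphOutputs().
def dedup_and_remap(entries):
    unique = []
    remap = []
    for i, e in enumerate(entries):
        if i == 0 or e != entries[i - 1]:
            unique.append(e)           # first occurrence: new output slot
        remap.append(len(unique) - 1)  # every entry points at its slot
    return unique, remap

unique, remap = dedup_and_remap(["a", "a", "b", "c", "c", "c"])
# unique == ["a", "b", "c"], remap == [0, 0, 1, 2, 2, 2]
```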




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #16104: Faster Transpose 2D

2019-09-06 Thread GitBox
apeforest commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321845284
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
  @@ -257,6 +257,29 @@ struct TransposeParam : public dmlc::Parameter<TransposeParam> {
   }
 };
 
+
+template<typename DType>
+MSHADOW_XINLINE void Transpose2D(DType *in, DType *out, index_t shape_0, index_t shape_1) {
+  // ensure cache line hits and prevent cache misses for any configuration
+  index_t blocksize = 32;
+  index_t n = shape_0;
+  index_t p = shape_1;
+
+  for (index_t i = 0; i < n; i += blocksize) {
+    #pragma omp parallel for
+    for (index_t j = 0; j < p; j += blocksize) {
+      // transpose the block
+      #pragma unroll 4
 
 Review comment:
   Do we really have to unroll this manually? I thought the compiler could 
figure out the optimal unrolling factor.
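   The blocked loop structure quoted in the diff can be modeled in a few 
lines of Python to check that tile-by-tile traversal produces the same 
result as a naive transpose. This is an illustrative sketch only; the real 
kernel is the C++ in the PR, with OpenMP and `#pragma unroll`.

```python
# Blocked 2-D transpose of a row-major flat list: walk the matrix in
# blocksize x blocksize tiles so reads and writes both stay within a
# small cache-resident window.
def transpose2d_blocked(src, rows, cols, blocksize=32):
    out = [0] * (rows * cols)
    for i0 in range(0, rows, blocksize):
        for j0 in range(0, cols, blocksize):
            # transpose one tile
            for i in range(i0, min(i0 + blocksize, rows)):
                for j in range(j0, min(j0 + blocksize, cols)):
                    out[j * rows + i] = src[i * cols + j]
    return out

src = list(range(6))  # 2 x 3 matrix, row-major
assert transpose2d_blocked(src, 2, 3) == [0, 3, 1, 4, 2, 5]
```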




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16104: Faster Transpose 2D

2019-09-06 Thread GitBox
access2rohit commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321840156
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
  @@ -257,6 +257,29 @@ struct TransposeParam : public dmlc::Parameter<TransposeParam> {
   }
 };
 
+
+template<typename DType>
 
 Review comment:
   Can you add comments explaining the input parameters?




[incubator-mxnet] branch master updated (e98dbe7 -> d85a2d0)

2019-09-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from e98dbe7  Speed up group executor (#16069)
 add d85a2d0  [DOC] Fix doc for nn.Embedding, nn.Dense and nd.Embedding 
(#15869)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/nn/basic_layers.py | 17 ++---
 src/operator/tensor/indexing_op.cc|  5 +++--
 2 files changed, 13 insertions(+), 9 deletions(-)



[GitHub] [incubator-mxnet] eric-haibin-lin merged pull request #15869: [DOC] Fix doc for nn.Embedding, nn.Dense and nd.Embedding

2019-09-06 Thread GitBox
eric-haibin-lin merged pull request #15869: [DOC] Fix doc for nn.Embedding, 
nn.Dense and nd.Embedding
URL: https://github.com/apache/incubator-mxnet/pull/15869
 
 
   




[GitHub] [incubator-mxnet] hzfan commented on issue #16100: Infra for tvm op runtime dispatch

2019-09-06 Thread GitBox
hzfan commented on issue #16100: Infra for tvm op runtime dispatch
URL: https://github.com/apache/incubator-mxnet/pull/16100#issuecomment-528916168
 
 
   > Just curious: to my knowledge, a tvm op kernel is pre-compiled and then 
linked together with MXNet. How can it be configured according to the 
runtime input shapes?
   
   Yes, kernels are pre-compiled. At compile time, several different schedules 
(kernels) for a single op are defined and compiled. Then at runtime, the most 
suitable kernel is chosen based on the runtime input shape.
   
   In other words, although each kernel is pre-compiled, there are multiple 
available kernels for a single op, so the most efficient one can be chosen 
once the input shape is known.
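   The dispatch idea described above can be sketched as a lookup over a 
small table of ahead-of-time compiled kernels, keyed by a property of the 
runtime input. All names and the selection rule here are illustrative 
assumptions, not MXNet or TVM API.

```python
# Hypothetical shape-based dispatch over pre-built kernels: both entries
# compute the same result but would, in practice, be tuned differently.
KERNELS = {
    "small": lambda x: [v * 2 for v in x],  # schedule tuned for short arrays
    "large": lambda x: [v * 2 for v in x],  # schedule tuned for long arrays
}

def dispatch_double(x, threshold=1024):
    # pick the most suitable pre-compiled kernel from the runtime shape
    kernel = KERNELS["small"] if len(x) < threshold else KERNELS["large"]
    return kernel(x)

assert dispatch_double([1, 2, 3]) == [2, 4, 6]
```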




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16104: Faster Transpose 2D

2019-09-06 Thread GitBox
ChaiBapchya commented on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528900889
 
 
   Yes. @wuxun-zhang  
   This week, we found out that benchmarks shouldn't be run with DEBUG=ON. 
   It turns out the results can differ once debug is disabled, so I'll rerun 
the benchmarks for all configurations in Release mode (DEBUG=OFF) and update 
the tracking issue. As for this table, you're right: MKL doesn't see a 
performance difference for 2D input.




[incubator-mxnet] branch master updated: Speed up group executor (#16069)

2019-09-06 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e98dbe7  Speed up group executor (#16069)
e98dbe7 is described below

commit e98dbe776201424cff93f87e67ff3bddb87d45e5
Author: Doron Singer <48903991+doronsin...@users.noreply.github.com>
AuthorDate: Fri Sep 6 17:53:04 2019 +0300

Speed up group executor (#16069)

* Speed up group executor

Current implementation is O(n^2), this implementation is O(n)

* Speed up group executor

Current implementation is O(n^2), this implementation is O(n)

* CI
---
 python/mxnet/module/executor_group.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/python/mxnet/module/executor_group.py 
b/python/mxnet/module/executor_group.py
index 637acce..d47665d 100755
--- a/python/mxnet/module/executor_group.py
+++ b/python/mxnet/module/executor_group.py
@@ -273,9 +273,9 @@ class DataParallelExecutorGroup(object):
 self.data_layouts = None
 self.label_layouts = None
 self.output_names = self.symbol.list_outputs()
-self.output_layouts = 
[DataDesc.get_batch_axis(self.symbol[name].attr('__layout__'))
-   for name in self.output_names]
-self.num_outputs = len(self.symbol.list_outputs())
+self.num_outputs = len(self.output_names)
+self.output_layouts = 
[DataDesc.get_batch_axis(self.symbol[index].attr('__layout__'))
+   for index in range(self.num_outputs)]
 
 self.bind_exec(data_shapes, label_shapes, shared_group)
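The complexity claim in the commit message can be seen from the diff above: 
indexing a symbol's outputs by name requires a linear scan of the output 
list, so doing it once per output is O(n^2) overall, while indexing by 
position is O(1) each, O(n) total. The sketch below is a simplified 
stand-in, not the real mxnet Symbol class.

```python
# Minimal model of the change: by_name scans the list (as symbol[name]
# does), by_index is constant time (as symbol[index] is).
class Sym:
    def __init__(self, names):
        self.names = names
    def by_name(self, name):   # O(n) linear scan per lookup
        return self.names.index(name)
    def by_index(self, i):     # O(1) per lookup
        return i

sym = Sym(["out%d" % i for i in range(4)])
layouts_old = [sym.by_name(n) for n in sym.names]               # n scans: O(n^2)
layouts_new = [sym.by_index(i) for i in range(len(sym.names))]  # O(n)
assert layouts_old == layouts_new == [0, 1, 2, 3]
```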
 



[GitHub] [incubator-mxnet] marcoabreu merged pull request #16069: Speed up group executor

2019-09-06 Thread GitBox
marcoabreu merged pull request #16069: Speed up group executor
URL: https://github.com/apache/incubator-mxnet/pull/16069
 
 
   




[incubator-mxnet] 01/01: Merge remote-tracking branch 'origin/master' into mkldnn-v1.0

2019-09-06 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 5c5a619afaf69d607fbd254a585584b87f5b1dd8
Merge: 03b734b 7f57e8e
Author: Tao Lv 
AuthorDate: Sat Sep 7 07:40:24 2019 +0800

Merge remote-tracking branch 'origin/master' into mkldnn-v1.0

Conflicts:
ci/jenkins/Jenkins_steps.groovy

 3rdparty/mshadow/mshadow/cuda/tensor_gpu-inl.cuh   |   26 +-
 3rdparty/mshadow/mshadow/extension/slice.h |4 +-
 3rdparty/mshadow/mshadow/tensor.h  |4 +-
 3rdparty/mshadow/mshadow/tensor_cpu-inl.h  |   11 +-
 3rdparty/mshadow/mshadow/tensor_gpu-inl.h  |4 +-
 3rdparty/ps-lite   |2 +-
 CMakeLists.txt |4 +-
 CONTRIBUTORS.md|1 +
 KEYS   |   58 +
 Makefile   |   39 +-
 README.md  |1 +
 benchmark/opperf/utils/benchmark_utils.py  |2 +-
 benchmark/opperf/utils/common_utils.py |   17 +-
 benchmark/opperf/utils/profiler_utils.py   |   29 +-
 cd/Jenkinsfile_cd_pipeline |   62 +
 cd/Jenkinsfile_release_job |   99 +
 cd/Jenkinsfile_utils.groovy|  101 +
 cd/README.md   |  181 ++
 cd/mxnet_lib/mxnet_lib_pipeline.groovy |  168 ++
 cd/mxnet_lib/static/Jenkins_pipeline.groovy|   59 +
 cd/utils/artifact_repository.md|  105 +
 cd/utils/artifact_repository.py|  619 +
 cd/utils/requirements.txt  |2 +
 cd/utils/test_artifact_repository.py   |  530 +
 ci/Jenkinsfile_utils.groovy|7 +-
 ci/build_windows.py|  165 +-
 ci/docker/Dockerfile.build.ubuntu_gpu_cu101|1 +
 ci/docker/install/ubuntu_python.sh |2 +-
 ci/docker/runtime_functions.sh |  107 +-
 ci/jenkins/Jenkins_steps.groovy|  139 +-
 ci/jenkins/Jenkinsfile_clang   |4 +-
 .../{Jenkinsfile_clang => Jenkinsfile_tools}   |   14 +-
 cmake/cmake_options.yml|1 -
 contrib/clojure-package/README.md  |8 +-
 .../examples/profiler/test/core_test.clj   |3 +-
 .../profiler/test/profile-matmul-20iter.json.ref   |  271 ---
 contrib/clojure-package/integration-tests.sh   |2 +-
 contrib/clojure-package/project.clj|4 +-
 docs/api/python/contrib/onnx.md|2 +-
 docs/conf.py   |2 +-
 docs/cpp_docs/Doxyfile | 2370 +++
 .../integration-tests.sh => docs/cpp_docs/Makefile |   20 +-
 docs/install/build_from_source.md  |2 +-
 docs/python_docs/README.md |   24 +
 docs/python_docs/_static/apache_incubator_logo.png |  Bin 0 -> 16552 bytes
 docs/python_docs/_static/google_analytics.js   |   26 +
 docs/python_docs/_static/minima-social-icons.svg   |   33 +
 docs/python_docs/_static/mxnet-icon.png|  Bin 0 -> 2741 bytes
 docs/python_docs/_static/mxnet.css |  199 ++
 docs/python_docs/_static/mxnet_logo.png|  Bin 0 -> 22390 bytes
 .../python_docs/environment.yml|   36 +-
 docs/python_docs/python/.gitignore |   20 +
 docs/python_docs/python/Makefile   |   57 +
 docs/python_docs/python/Makefile_sphinx|  216 ++
 docs/python_docs/python/README.md  |  130 ++
 docs/python_docs/python/api/advanced/index.rst |   74 +
 .../python/api/advanced/mxnet.engine.rst   |   34 +
 .../python/api/advanced/mxnet.executor.rst |   34 +
 .../python/api/advanced/mxnet.executor_manager.rst |   38 +
 .../python/api/advanced/mxnet.kvstore_server.rst   |   36 +
 docs/python_docs/python/api/advanced/mxnet.rtc.rst |   36 +
 .../python/api/advanced/mxnet.test_utils.rst   |   91 +
 .../python_docs/python/api/advanced/mxnet.util.rst |   31 +
 .../python_docs/python/api/gluon-related/index.rst |  111 +
 .../python/api/gluon-related/mxnet.autograd.rst|   38 +
 .../python/api/gluon-related/mxnet.context.rst |   33 +
 .../python/api/gluon-related/mxnet.image.rst   |   99 +
 .../python/api/gluon-related/mxnet.initializer.rst |   58 +
 .../python/api/gluon-related/mxnet.io.rst  |   52 +
 .../api/gluon-related/mxnet.kvstore.KVStore.rst|   61 +
 .../api/gluon-related/mxnet.kvstore.create.rst |   23 +
 .../python/api/gluon-related/mxnet.kvstore.rst |   27 +
 .../api/gluon-related/mxnet.lr_scheduler.rst   |   31 +
 

[incubator-mxnet] branch mkldnn-v1.0 updated (03b734b -> 5c5a619)

2019-09-06 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a change to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 03b734b  Merge remote-tracking branch 'origin' into mkldnn-v1.0
 add 72c180c  Correct ONNX documentation (#15914)
 add acc074f  Add AMP Conversion support for BucketingModule (#15528)
 add ab60214  Add Median,p50,p99 to python profiler (#15953)
 add 79ed678  [Numpy] random.randint() implemented (#15956)
 add 0e71fbd  Added tests to verify Large Vector Support for initial set of 
ops  (#15943)
 add 8df9469  Refines NDArray indexing and adds numpy ndarray indexing 
[READY FOR REVIEW] (#15942)
 add b2c0cbc  Windows cmake flags cleanup (#16013)
 add 9b906a5  Improve diagnose.py to display environment variables (#15715)
 add 649429d  Disable flaky test in test_amp_conversion (#16031)
 add 3f7b6ee  Improve quantization flow (#15961)
 add 2d86c70  Port ops from np branch (#16018)
 add 196d1f4  [MXNET-1399] multiclass-mcc metric enhancements (#14874)
 add b7cca01  [MXNET-895] ONNX import/export: TopK (#13627)
 add 61f3dbc  numpy-compatible cumsum upstream (#15924)
 add 5d0d335  Update README.md (#16035)
 add 36455b2  Add RROIAlign (#16017)
 add 35d943e  Updates git_init Jenkins utility function to support checking 
out a particular commit id
 add 1597498  Adds artifact repository scripts
 add 87207c5  Adds CD pipeline framework
 add b73b8d4  Adds static libmxnet release pipeline
 add 0ed97f1  Updates CD pipeline
 add 5fe1516  Adds documentation
 add 1196c15  Updates kvstore functions to use pushd and popd
 add 23a7a58  Throws exceptions instead o magic numbers
 add e539370  Updates artifact repository cli to use --libtype instead of 
--static or --dynamic
 add fff8c82  Clarifies ci_utils and cd_utils origin remark
 add 0570892  Adds clarifying note on why ubuntu 14.04 is being used for 
compilation
 add 2b12c59  Removes MXNET_SHA
 add a8c0fe8  Removes set_release_job_name
 add 5cb26fd  Adds license headers
 add 98cdf30  Updates artifact repository to expect licenses
 add 3027296  Moves ci/cd to cd directory
 add f6d0fc2  Takes downstream job name from environment
 add 8241c52  Updates order of parameters
 add 749492f  Updates job type parameter to dropdown
 add 759e76e  Adds libmxnet feature extraction code comments
 add 36ac85a  Removes ccache setup from static build
 add 65928b1  NumPy-compatible infrastructure on Gluon (#16024)
 add 9173dad  [MKLDNN] fix uint8 batch norm memory misuse (#16034)
 add 47f8ceb  Disable test coverage of C++ codebase on CI  (#15981)
 add 54d27cb  [OP] Support range as advanced index for ndarrays (#16047)
 add d80510a  Added more tests for Large Indices (#15960)
 add aab4ded  Numpy compatible max min (#16046)
 add 6997691  [Dev] update ps-lite dependency (#15936)
 add 36bab1c  Fix flaky clojure profile test (#16058)
 add d5670ff  fix test_pick test time  is too long (#16066)
 add 5699939  [fix] Support nullop in `transpose` (#15865)
 add 5def003  NumPy-compatible Mean, Std and Var (#16014)
 add 1abf05b  adding "total" (total time) to profiler aggregate stats 
sorting criteria (#16055)
 add a8ba6d9  fix flaky test (#16074)
 add 692f3c4  Graph Partition API (#15886)
 add 767e3f1  Add fluent methods mean, std, var for ndarray (#16077)
 add f195098  fix some test files test time is too long (#16067)
 add 4c72d27  new raise mode for nd.take and fix backward for wrap mode 
(#15887)
 add 5b301c6  Typedef cleanup (#15899)
 add 9b9326f  add KEY for Tao Lv (#16081)
 add 65e37ca  numpy multinomial op (#15878)
 add 07b4470  MKL-DNN RNN checks NDArray version (#16071)
 add c742ef1  add numpy operator remainder (#16080)
 add 4333a7b  Update readme and project.clj comment (#16084)
 add 6122dfc  Add Large tensor vector test cases (#15941)
 add bc90e20  typo in docs (#16094)
 add c6a92d9  remove 'foo' and other print msg from test (#16088)
 add b7071c4  Enable tvm_op for ci (#15889)
 add d0fa8c0  Test large vector mean operator and fix a few bugs (#16079)
 add d60be31  Fix gradient tensor mutate in 
`{adam/ftrl/rmprop/rmspropalex}_update`. (#15768)
 add 7f57e8e  [WIP] New Website: New Docs [1/3] (#15884)
 new 5c5a619  Merge remote-tracking branch 'origin/master' into mkldnn-v1.0

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 3rdparty/mshadow/mshadow/cuda/tensor_gpu-inl.cuh   |   26 +-
 3rdparty/mshadow/mshadow/extension/slice.h |4 +-
 3rdparty/mshadow/mshadow/tensor.h  |4 +-
 

svn commit: r35639 - in /dev/incubator/mxnet/1.5.1.rc0: ./ apache-mxnet-src-1.5.1.rc0-incubating.tar.gz apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.asc apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.sh

2019-09-06 Thread taolv
Author: taolv
Date: Fri Sep  6 13:55:31 2019
New Revision: 35639

Log:
Add mxnet-1.5.1.rc0

Added:
dev/incubator/mxnet/1.5.1.rc0/
dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz  
 (with props)

dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.asc  
 (with props)

dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.sha512

Added: 
dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz
==
Binary file - no diff available.

Propchange: 
dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz
--
svn:mime-type = application/x-gzip

Added: 
dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.asc
==
Binary file - no diff available.

Propchange: 
dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.asc
--
svn:mime-type = application/pgp-signature

Added: 
dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.sha512
==
--- 
dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.sha512
 (added)
+++ 
dev/incubator/mxnet/1.5.1.rc0/apache-mxnet-src-1.5.1.rc0-incubating.tar.gz.sha512
 Fri Sep  6 13:55:31 2019
@@ -0,0 +1 @@
+c0eab8e8728112d1c7b964934c9a665c4ca0336cf6e58f0c9c268ca0812173c78b0a94e6e173a23a06e0b53d15995f81b51b238eb167150455be22244b6c1a42
  apache-mxnet-src-1.5.1.rc0-incubating.tar.gz




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-06 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d6ccb29  Bump the publish timestamp.
d6ccb29 is described below

commit d6ccb29a2dc770801d3e55c7386206cf760026ce
Author: mxnet-ci 
AuthorDate: Fri Sep 6 13:33:14 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..e48d7bf
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Sep  6 13:33:14 UTC 2019



[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15984: [DO NOT REVIEW] [DO NOT MERGE] General reduce compute for tvm ops and TVM version of sum

2019-09-06 Thread GitBox
haojin2 commented on a change in pull request #15984: [DO NOT REVIEW] [DO NOT 
MERGE] General reduce compute for tvm ops and TVM version of sum
URL: https://github.com/apache/incubator-mxnet/pull/15984#discussion_r321724004
 
 

 ##
 File path: contrib/tvmop/basic/reduce.py
 ##
 @@ -0,0 +1,96 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+import tvm
+from .. import defop, AllTypes, RealTypes
+from .. import assign_by_req, reduce_axes
+# AllTypes = ["float32", "float64", "float16", "uint8", "int8", "int32", 
"int64"]
+# RealTypes = ["float32", "float64", "float16"]
+# AccTypes = {'float16': 'float32', 'float32': 'float64', 'float64': 'float64'}
+
+
+# def assign_by_req(a, req, otype):
+# b = tvm.placeholder(a.shape, name='assign_by_req_b', dtype=otype)
+# if (req == "kAddTo"):
+# c = tvm.compute(a.shape, lambda *idx: a[idx].astype(otype) + b[idx])
+# else:
+# c = tvm.compute(a.shape, lambda *idx: a[idx].astype(otype))
+# return b, c
+# 
+# def reduce_axes(X, axes, reducer, atype=None):
+# def get_index(idx, ridx):
+# j = 0
+# k = 0
+# ret = []
+# for val in axes:
+# ret.append(idx[j] if val == 0 else ridx[k])
+# j += (val == 0)
+# k += (val != 0)
+# return tuple(ret)
+# 
+# ishape = X.shape
+# odim = (len(ishape) + 1 - axes[0]) // 2
+# oshape = [tvm.var('odim.%d' % i, 'int32') for i in range(odim)]
+# if atype is None:
+# atype = X.dtype
+# ridx = [tvm.reduce_axis((0, ishape[i]), name='r%d' % i) for (i, val) in 
enumerate(axes) if val == 1]
+# ret = tvm.compute(oshape, lambda *idx: reducer(X[get_index(idx, 
ridx)].astype(atype), axis=ridx), name='ret')
+# return ret
+
+def compute_reduce(dtype, otype, reducer, initial, ndim, reduce1st, req):
 
 Review comment:
   Okay




[GitHub] [incubator-mxnet] kshitij12345 commented on issue #15531: [MXNET-978] Higher Order Gradient Support `arctan`, `arctanh`, `radians`.

2019-09-06 Thread GitBox
kshitij12345 commented on issue #15531: [MXNET-978] Higher Order Gradient 
Support `arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531#issuecomment-528842012
 
 
   @apeforest @larroy Gentle ping. Could you please review it again?




[GitHub] [incubator-mxnet] QueensGambit closed issue #15640: Performance regression for MXNet 1.5.0

2019-09-06 Thread GitBox
QueensGambit closed issue #15640: Performance regression for MXNet 1.5.0
URL: https://github.com/apache/incubator-mxnet/issues/15640
 
 
   




[GitHub] [incubator-mxnet] QueensGambit commented on issue #15640: Performance regression for MXNet 1.5.0

2019-09-06 Thread GitBox
QueensGambit commented on issue #15640: Performance regression for MXNet 1.5.0
URL: 
https://github.com/apache/incubator-mxnet/issues/15640#issuecomment-528804630
 
 
   Great news!
   I'm happy to update to MXNet-1.6.0 or MXNet-1.6.0-Dev for the next 
_CrazyAra_ release.
   Thank you for the awesome work.




[GitHub] [incubator-mxnet] ElaineBao commented on issue #15640: Performance regression for MXNet 1.5.0

2019-09-06 Thread GitBox
ElaineBao commented on issue #15640: Performance regression for MXNet 1.5.0
URL: 
https://github.com/apache/incubator-mxnet/issues/15640#issuecomment-528768295
 
 
   Hi, @QueensGambit, this issue has been fixed in MXNet 1.6. As version 1.6 has not been released yet, you can install the nightly build: `pip install mxnet-mkl==1.6.0b20190903`.
   
   I've tested above scripts on this version:
   
   ```
   op: relu
   shape:all, total_count:38500, total_time:314.6143236053, 
avg_time:0.008171800613246741
   shape:mb1ic128, total_count:5500, total_time:60.11501309865, 
avg_time:0.01093000238158
   shape:mb1ic256ih8iw8, total_count:4400, total_time:32.9472733746, 
avg_time:0.0074880166750001045
   shape:mb1ic128ih8iw8, total_count:2200, total_time:13.95386323001, 
avg_time:0.0063426651045454556
   ...
   
   op: sigmoid
   total_count:5500, total_time:62.9402109975, avg_time:0.011443674725454682
   shape:mb1ic256, total_count:5500, total_time:62.9402109975, 
avg_time:0.011443674725454682
   ```
   
   As you can see, relu with shape `mb1ic128` and sigmoid with shape `mb1ic256` show faster performance than in 1.5.0 and achieve the same result as 1.4.1.
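Per-op averages like the ones quoted above come from a warm-up-then-time loop. A minimal sketch of that pattern follows; the `relu` stand-in and input shape here are hypothetical, not the MKL-DNN kernels actually being benchmarked:

```python
import time

def bench(fn, arg, repeat=1000):
    # Warm up once so one-time initialization cost is excluded,
    # then time repeated calls and return the average per-call
    # latency in milliseconds.
    fn(arg)
    start = time.perf_counter()
    for _ in range(repeat):
        fn(arg)
    return (time.perf_counter() - start) * 1000.0 / repeat

# Toy stand-in for the real operator under test:
relu = lambda xs: [x if x > 0 else 0.0 for x in xs]
avg_ms = bench(relu, [0.5, -1.0, 2.0] * 128)
```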




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-06 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 673b07a  Bump the publish timestamp.
673b07a is described below

commit 673b07ac8fb488f09430aaacd2c26ce0868701a4
Author: mxnet-ci 
AuthorDate: Fri Sep 6 07:40:22 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..1ee39b0
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Sep  6 07:40:22 UTC 2019



[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16110: ndarray treated uint8 as signed value

2019-09-06 Thread GitBox
mxnet-label-bot commented on issue #16110: ndarray treated uint8 as signed value
URL: 
https://github.com/apache/incubator-mxnet/issues/16110#issuecomment-528728517
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Bug




[GitHub] [incubator-mxnet] dwSun opened a new issue #16110: ndarray treated uint8 as signed value

2019-09-06 Thread GitBox
dwSun opened a new issue #16110: ndarray treated uint8 as signed value
URL: https://github.com/apache/incubator-mxnet/issues/16110
 
 
   ## Description
   ndarray treats uint8 as a signed value, causing nd.mean and nd.sum to return incorrect values.
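   The kind of confusion described can be illustrated with NumPy (used here only as a sketch; the actual bug is in MXNet's ndarray reductions): values above 127 flip negative when the same bytes are read through a signed 8-bit type, so a mean computed along a signed path comes out wrong.

   ```python
   import numpy as np

   a = np.array([200, 200], dtype=np.uint8)
   # Correct unsigned interpretation: the mean is 200.
   assert a.mean() == 200.0
   # Reinterpreting the same bytes as int8 flips 200 -> 200 - 256 = -56,
   # so any reduction done through the signed view is wrong.
   signed = a.view(np.int8)
   assert signed.mean() == -56.0
   ```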
   
   ## Environment info (Required)
   
   ```
   --Python Info--
   Version  : 3.7.4+
   Compiler : GCC 9.2.1 20190827
   Build: ('default', 'Sep  4 2019 08:03:05')
   Arch : ('64bit', 'ELF')
   Pip Info---
   Version  : 19.2.3
   Directory: /home/david/.local/lib/python3.7/site-packages/pip
   --MXNet Info---
   Version  : 1.5.0
   Directory: /home/david/.local/lib/python3.7/site-packages/mxnet
   Commit Hash   : 75a9e187d00a8b7ebc71412a02ed0e3ae489d91f
   Library  : 
['/home/david/.local/lib/python3.7/site-packages/mxnet/libmxnet.so']
   Build features:
   ✖ CUDA
   ✖ CUDNN
   ✖ NCCL
   ✖ CUDA_RTC
   ✖ TENSORRT
   ✔ CPU_SSE
   ✔ CPU_SSE2
   ✔ CPU_SSE3
   ✔ CPU_SSE4_1
   ✔ CPU_SSE4_2
   ✖ CPU_SSE4A
   ✔ CPU_AVX
   ✖ CPU_AVX2
   ✖ OPENMP
   ✖ SSE
   ✔ F16C
   ✖ JEMALLOC
   ✖ BLAS_OPEN
   ✖ BLAS_ATLAS
   ✖ BLAS_MKL
   ✖ BLAS_APPLE
   ✔ LAPACK
   ✔ MKLDNN
   ✔ OPENCV
   ✖ CAFFE
   ✖ PROFILER
   ✔ DIST_KVSTORE
   ✖ CXX14
   ✖ INT64_TENSOR_SIZE
   ✔ SIGNAL_HANDLER
   ✖ DEBUG
   --System Info--
   Platform : Linux-5.2.0-2-amd64-x86_64-with-debian-bullseye-sid
   system   : Linux
   node : Zarus
   release  : 5.2.0-2-amd64
   version  : #1 SMP Debian 5.2.9-2 (2019-08-21)
   --Hardware Info--
   machine  : x86_64
   processor: 
   Architecture:x86_64
   CPU op-mode(s):  32-bit, 64-bit
   Byte Order:  Little Endian
   Address sizes:   39 bits physical, 48 bits virtual
   CPU(s):  4
   On-line CPU(s) list: 0-3
   Thread(s) per core:  2
   Core(s) per socket:  2
   Socket(s):   1
   NUMA node(s):1
   Vendor ID:   GenuineIntel
   CPU family:  6
   Model:   78
   Model name:  Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz
   Stepping:3
   CPU MHz: 2700.094
   CPU max MHz: 3100.
   CPU min MHz: 400.
   BogoMIPS:5184.00
   Virtualization:  VT-x
   L1d cache:   64 KiB
   L1i cache:   64 KiB
   L2 cache:512 KiB
   L3 cache:4 MiB
   NUMA node0 CPU(s):   0-3
   Vulnerability L1tf:  Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
   Vulnerability Mds:   Mitigation; Clear CPU buffers; SMT vulnerable
   Vulnerability Meltdown:  Mitigation; PTI
   Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
   Vulnerability Spectre v1:Mitigation; usercopy/swapgs barriers and __user pointer sanitization
   Vulnerability Spectre v2:Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
   Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
   

[GitHub] [incubator-mxnet] xidulu opened a new pull request #16109: [Numpy] Numpy behavior normal distribution

2019-09-06 Thread GitBox
xidulu opened a new pull request #16109: [Numpy] Numpy behavior normal 
distribution
URL: https://github.com/apache/incubator-mxnet/pull/16109
 
 
   ## Description ##
   As title
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   



