[incubator-mxnet] branch master updated (3f7b6ee -> 2d86c70)

2019-08-28 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3f7b6ee  Improve quantization flow (#15961)
 add 2d86c70  Port ops from np branch (#16018)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  | 187 +++-
 python/mxnet/ndarray/numpy/random.py   |  52 -
 python/mxnet/numpy/multiarray.py   | 187 +++-
 python/mxnet/numpy/random.py   |  38 +++-
 python/mxnet/numpy_extension/__init__.py   |   1 +
 python/mxnet/numpy_extension/random.py |  74 +++
 python/mxnet/symbol/numpy/_symbol.py   | 131 ++-
 python/mxnet/symbol/numpy/random.py|  52 -
 src/operator/numpy/np_broadcast_reduce_op_index.cc |  61 ++
 .../np_broadcast_reduce_op_index.cu}   |  19 +-
 src/operator/numpy/np_elemwise_broadcast_op.cc |  74 ---
 src/operator/numpy/np_elemwise_broadcast_op.cu |  12 --
 src/operator/tensor/elemwise_binary_broadcast_op.h |   4 +
 .../elemwise_binary_broadcast_op_extended.cc   |   2 +
 .../tensor/elemwise_binary_scalar_op_extended.cc   |   6 +-
 tests/python/unittest/test_numpy_op.py | 239 +
 16 files changed, 1022 insertions(+), 117 deletions(-)
 create mode 100644 python/mxnet/numpy_extension/random.py
 create mode 100644 src/operator/numpy/np_broadcast_reduce_op_index.cc
 copy src/operator/{contrib/fft.cu => numpy/np_broadcast_reduce_op_index.cu} (74%)



[GitHub] [incubator-mxnet] haojin2 merged pull request #16018: Port ops from np branch

2019-08-28 Thread GitBox
haojin2 merged pull request #16018: Port ops from np branch
URL: https://github.com/apache/incubator-mxnet/pull/16018
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on issue #16027: [v1.5.x] FP16 Support for C Predict API (#15245)

2019-08-28 Thread GitBox
TaoLv commented on issue #16027: [v1.5.x] FP16 Support for C Predict API 
(#15245)
URL: https://github.com/apache/incubator-mxnet/pull/16027#issuecomment-526044213
 
 
   @samskalicky Could you please take a look at the CI failures?




[GitHub] [incubator-mxnet] TaoLv commented on issue #15803: [v1.5.x] Fix _copy_to on MKLDNN backend (#15637)

2019-08-28 Thread GitBox
TaoLv commented on issue #15803: [v1.5.x] Fix _copy_to on MKLDNN backend 
(#15637)
URL: https://github.com/apache/incubator-mxnet/pull/15803#issuecomment-526044021
 
 
   @shufan @ZhennanQin Could you please rebase to see if it can pass CI? I notice 
other PRs passed CI properly last night and were merged into the branch.




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16033: Building MXNet C++ from Source with VS 2017 Error LNK1248

2019-08-28 Thread GitBox
mxnet-label-bot commented on issue #16033: Building MXNet C++ from Source with 
VS 2017 Error LNK1248
URL: 
https://github.com/apache/incubator-mxnet/issues/16033#issuecomment-526042110
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Installation, C++, Build




[GitHub] [incubator-mxnet] DerHaddy opened a new issue #16033: Building MXNet C++ from Source with VS 2017 Error LNK1248

2019-08-28 Thread GitBox
DerHaddy opened a new issue #16033: Building MXNet C++ from Source with VS 2017 
Error LNK1248
URL: https://github.com/apache/incubator-mxnet/issues/16033
 
 
   Hey all,
   
   I am currently trying to build the MXNet C++ API. I followed this guide 
(https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html) 
for VS 2017. After 4 hours of build time I got this error:
   
   ```
"C:\testv1mxnet\incubator-mxnet\build1\mxnet.sln" (default target) (1) ->
"C:\testv1mxnet\incubator-mxnet\build1\mxnet.vcxproj.metaproj" (default target) (8) ->
"C:\testv1mxnet\incubator-mxnet\build1\mxnet.vcxproj" (default target) (15) ->
(Link target) ->
LINK : fatal error LNK1248: image size (8C9B8000) exceeds maximum allowable size (8000) [C:\testv1mxnet\incubator-mxnet\build1\mxnet.vcxproj]
   ```
   
   I just read that there is a maximum image size limit carried over from 32-bit 
that still applies even on 64-bit builds, but I'm not sure.
   Can someone help me with this issue?
   
   Thanks




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-08-28 Thread GitBox
wkcn commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r316055342
 
 

 ##
 File path: example/lib_ops/mylib.cc
 ##
 @@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file mylib.cc
+ * \brief Sample library file
+ */
+
+#include <iostream>
+#include "lib_api.h"
+
+/*
+ * main matrix multiplication routine
+ */
+void gemm(float* A, float* B, float* C, unsigned n, unsigned k, unsigned m) {
+  unsigned i, j, kk;
+  for (i = 0; i < n; i++)
+    for (j = 0; j < m; j++) {
+      C[i*m+j] = 0;
+      for (kk = 0; kk < k; kk++)
+        C[i*m+j] += A[i*k+kk] * B[kk*m+j];
+    }
+}
+
+int myFCompute(std::map<std::string, std::string> attrs,
+               std::vector<MXTensor> inputs, std::vector<MXTensor> outputs) {
+  // validate inputs
+  float* input1 = inputs[0].getData<float>();
+  float* input2 = inputs[1].getData<float>();
+  float* output = outputs[0].getData<float>();
+  //set tensor shapes
+  unsigned n = inputs[0].shape[0];
+  unsigned k = inputs[0].shape[1];
+  unsigned m = inputs[1].shape[1];
+
+  gemm(input1, input2, output, n, k, m);
+  
+  return 1; //no error
+}
+
+int parseAttrs(std::map<std::string, std::string> attrs,
+               int* num_in, int* num_out) {
+  /*
+  if(attrs.find("myParam") == attrs.end()) {
+std::cout << "Missing param 'myParam'" << std::endl;
+return 0;
+  }
+  */
+  *num_in = 2;
+  *num_out = 1;
+
+  return 1; //no error
+}
+
+int inferType(std::map<std::string, std::string> attrs, std::vector<MXDType> &intypes,
+              std::vector<MXDType> &outtypes) {
+  outtypes[0] = intypes[0];
+  
+  return 1; //no error
+}
+
+int inferShape(std::map<std::string, std::string> attrs, std::vector<std::vector<unsigned int>> &inshapes,
+               std::vector<std::vector<unsigned int>> &outshapes) {
+  //validate inputs
+  if(inshapes.size() != 2) {
+std::cout << "Expected 2 inputs to inferShape" << std::endl;
+return 0;
+  }
+
+  if(inshapes[0].size() != 2) {
+std::cout << "Expected 2D for first input to inferShape" << std::endl;
+return 0;
+  }
+
+  if(inshapes[1].size() != 2) {
+std::cout << "Expected 2D for second input to inferShape" << std::endl;
+return 0;
+  }
+  
+  unsigned n = inshapes[0][0];
+  unsigned k = inshapes[0][1];
+  unsigned kk = inshapes[1][0];
+  unsigned m = inshapes[1][1];
+
+  std::cout << "inshapes[0][0]=" << n << "  inshapes[0][1]=" << k << std::endl;
+  std::cout << "inshapes[1][0]=" << kk << "  inshapes[1][1]=" << m << 
std::endl;
+  
+  if(k != kk) return 0;
+  
+  outshapes[0].push_back(n);
+  outshapes[0].push_back(m);
+
+  return 1; //no error
+}
+
+REGISTER_OP(sam)
+.setFCompute(myFCompute)
+.setParseAttrs(parseAttrs)
+.setInferType(inferType)
+.setInferShape(inferShape);
+
+int initialize(int version) {
 
 Review comment:
   I prefer that the argument is a structure.
   e.g.
   ```c++
   struct MXNetIdentity {
 int magic_number; // we will update the magic_number if we update the 
structure
 int version; // `version` is the second element of the structure, and the 
offset is 32 bits.
 ...
   };
   ```
   
   ```c++
   int initialize(struct MXNetIdentity* id) {
 if (id->magic_number == 123) {
   int version = id->version;
   ...
 } else {
   ...
 }
   }
   ```
   
   In this way, we can check the version first, then check the other attributes.




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-08-28 Thread GitBox
wkcn commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r316058905
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -25,26 +25,329 @@
 #ifndef MXNET_LIB_API_H_
 #define MXNET_LIB_API_H_
 
+#include <map>
+#include <string>
+#include <vector>
+#include <iostream>
+
+/*!
+ * \brief External Tensor data types
+ */
+enum MXDType {
+  kFloat32 = 0,
+  kFloat64 = 1,
+  kFloat16 = 2,
+  kUint8 = 3,
+  kInt32 = 4,
+  kInt8  = 5,
+  kInt64 = 6,
+};
+
+/*!
+ * \brief External Tensor data structure
+ */
+struct MXTensor {
+  MXTensor() { data = nullptr; }
+  MXTensor(void *data, const std::vector<int64_t> &shape, MXDType dtype)
+  : data{data}, shape{shape}, dtype{dtype} {}
+
+  /*!
+   * \brief helper function to cast data pointer
+   */
+  template<typename data_type>
+  data_type* getData() {
+    return reinterpret_cast<data_type*>(data);
+  }
+
+  void *data;  // not owned
+  std::vector<int64_t> shape;
+  MXDType dtype;
+};
+
+/*!
+ * Custom Operator function templates
+ */
+typedef int (*fcomp_t)(std::map<std::string, std::string>,
+                       std::vector<MXTensor>, std::vector<MXTensor>);
+typedef int (*parseAttrs_t)(std::map<std::string, std::string>,
+                            int*, int*);
+typedef int (*inferType_t)(std::map<std::string, std::string>,
+                           std::vector<MXDType>&, std::vector<MXDType>&);
+typedef int (*inferShape_t)(std::map<std::string, std::string>,
+                            std::vector<std::vector<unsigned int>>&,
+                            std::vector<std::vector<unsigned int>>&);
+
+/*!
+ * \brief Class to hold custom operator registration
+ */
+class CustomOp {
+ public:
+  explicit CustomOp(const char* op_name) : name(op_name), fcompute(nullptr),
+parse_attrs(nullptr), infer_type(nullptr), infer_shape(nullptr) {}
+  ~CustomOp() {}
+  CustomOp& setFCompute(fcomp_t fcomp) {
+fcompute = fcomp;
+return *this;
+  }
+  CustomOp& setParseAttrs(parseAttrs_t func) {
+parse_attrs = func;
+return *this;
+  }
+  CustomOp& setInferType(inferType_t func) {
+infer_type = func;
+return *this;
+  }
+  CustomOp& setInferShape(inferShape_t func) {
+infer_shape = func;
+return *this;
+  }
+  /*! \brief operator name */
+  const char* name;
+  /*! \brief operator functions */
+  fcomp_t fcompute;
+  parseAttrs_t parse_attrs;
+  inferType_t infer_type;
+  inferShape_t infer_shape;
+};
+
+/*!
+ * \brief Registry class to registers things (ops, properties)
+ *   Singleton class
+ */
+template <class T>
+class Registry {
+ public:
+  /*!
+   * \brief get singleton pointer to class
+   * \returns pointer to class
+   */
+  static Registry* get() {
+static Registry inst;
+return &inst;
+  }
+  /*!
+   * \brief add a new entry
+   * \returns new object associated with registered name
+   */
+  T& add(const char* name) {
+T *entry = new T(name);
+entries.push_back(entry);
+return *entry;
+  }
+  int size() {
+return entries.size();
+  }
+  T& get(int idx) {
+return *(entries[idx]);
+  }
+
+ private:
+  /*! \brief constructor */
+  Registry() {}
+  /*! \brief destructor */
+  ~Registry() {}
+  /*! \brief map of entries in registry */
+  std::vector<T*> entries;
+};
+
+/*
+ * Macros to help with string concat
+ * Annoyingly, the concat_ and concat macros are necessary to
+ * be able to use __COUNTER__ in an identifier name 
+ */
+#define _STR_CONCAT_(__a, __b) __a ## __b
+#define _STR_CONCAT(__a, __b) _STR_CONCAT_(__a, __b)
+
+/*!
+ * \brief convert a token to a string
+ */
+#define STRINGIFY(x) #x
+#define TOSTRING(x) STRINGIFY(x)
+
+/*!
+ * \brief declare a variable with custom name
+ */
+#define _REGISTER_NAME_(Name) MXNet ## _CustomOp ## _
+#define _REGISTER_DEF_(Name) CustomOp _REGISTER_NAME_(Name)
+
+/*!
+ * \brief assign a var to a value
+ */
+#define REGISTER_OP(Name) _STR_CONCAT(_REGISTER_DEF_(Name), __COUNTER__) = \
+Registry<CustomOp>::get()->add(TOSTRING(Name))
+
+
 /*!
  * \brief Following are the APIs implemented in the external library
  * Each API has a #define string that is used to lookup the function in the 
library
  * Followed by the function declaration
  */
+
+
+#define MXLIB_OPREGSIZE_STR "_opRegSize"
+typedef int (*opRegSize_t)(void);
+
+#define MXLIB_OPREGGET_STR "_opRegGet"
+typedef int (*opRegGet_t)(int, const char**, fcomp_t*,
+  parseAttrs_t*, inferType_t*,
+  inferShape_t*);
+
+#define MXLIB_OPCALLFREE_STR "_opCallFree"
+typedef int (*opCallFree_t)(void*);
+
+#define MXLIB_OPCALLPARSEATTRS_STR "_opCallParseAttrs"
+typedef int (*opCallParseAttrs_t)(parseAttrs_t, const char* const*, const 
char* const*, int,
+  int*, int*);
+
+#define MXLIB_OPCALLINFERSHAPE_STR "_opCallInferShape"
+typedef int (*opCallInferShape_t)(inferShape_t, const char* const*, const 
char* const*, int,
+  unsigned int**, int*, int,
+  unsigned int***, int**, int);
+
+#define MXLIB_OPCALLFCOMP_STR "_opCallFCompute"
+typedef int (*opCallFComp_t)(fcomp_t, const char* const*, const char* const*, 
int,
+ const 

[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-08-28 Thread GitBox
wkcn commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r316052812
 
 

 ##
 File path: example/lib_ops/mylib.cc
 ##
 @@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file mylib.cc
+ * \brief Sample library file
+ */
+
+#include <iostream>
+#include "lib_api.h"
+
+/*
+ * main matrix multiplication routine
+ */
+void gemm(float* A, float* B, float* C, unsigned n, unsigned k, unsigned m) {
+  unsigned i, j, kk;
+  for (i = 0; i < n; i++)
+    for (j = 0; j < m; j++) {
+      C[i*m+j] = 0;
+      for (kk = 0; kk < k; kk++)
+        C[i*m+j] += A[i*k+kk] * B[kk*m+j];
+    }
+}
+
+int myFCompute(std::map<std::string, std::string> attrs,
+               std::vector<MXTensor> inputs, std::vector<MXTensor> outputs) {
+  // validate inputs
+  float* input1 = inputs[0].getData<float>();
+  float* input2 = inputs[1].getData<float>();
+  float* output = outputs[0].getData<float>();
+  //set tensor shapes
+  unsigned n = inputs[0].shape[0];
+  unsigned k = inputs[0].shape[1];
+  unsigned m = inputs[1].shape[1];
+
+  gemm(input1, input2, output, n, k, m);
+  
+  return 1; //no error
 
 Review comment:
   Thanks @samskalicky! It may be better to use an enumeration type as the return value.
   e.g.
   ```c++
   enum CustomOpState {
 SUCCESS,
 FAIL
   };
   ```




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16017: Add RROIAlign

2019-08-28 Thread GitBox
pengzhao-intel commented on issue #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#issuecomment-526039495
 
 
   @ciyongch could you take a look as well?




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15973: Numpy . implement numpy op exp2 with tvm

2019-08-28 Thread GitBox
gyshi commented on a change in pull request #15973: Numpy . implement numpy  op 
exp2 with tvm
URL: https://github.com/apache/incubator-mxnet/pull/15973#discussion_r318889459
 
 

 ##
 File path: contrib/tvmop/basic/ufunc.py
 ##
 @@ -98,3 +99,71 @@ def backward_vadd_gpu(dtype, ndim, reduce1st, req):
 s[t].bind(bx, block_x)
 s[t].bind(tx, thread_x)
 return s, [X, in_grad_a, in_grad]
+
+def compute_exp2(dtype, ndim):
 
 Review comment:
   These days I am optimizing the TVM op exp2, so I have not had time to update this 
PR; I will update it today.
   In my tests, when the input shape is very large, the TVM exp2 op is faster than 
the MXNet exp2.




[GitHub] [incubator-mxnet] ElaineBao commented on a change in pull request #16017: Add RROIAlign

2019-08-28 Thread GitBox
ElaineBao commented on a change in pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#discussion_r318891412
 
 

 ##
 File path: src/operator/contrib/rroi_align.cc
 ##
 @@ -0,0 +1,316 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file rroi_align.cc
+ * \brief rroi align operator
+ * \author Yixin Bao
+ * Forward pass adapted from Caffe2
+ * link: 
https://github.com/pytorch/pytorch/blob/master/caffe2/operators/roi_align_rotated_op.cc
+ */
+#include "./rroi_align-inl.h"
+#include <algorithm>
+#include "math.h"
+
+using std::max;
+using std::min;
+using std::floor;
+using std::ceil;
+
+namespace mxnet {
+namespace op {
+
+template <typename DType>
+struct position_for_bilinear_interpolate {
+  // 4 positions and corresponding weights for
+  // computing bilinear interpolation
+  int pos1, pos2, pos3, pos4;
+  DType w1, w2, w3, w4;
+};
+
+template <typename DType>
+void pre_calc_for_bilinear_interpolate(
+const int height, const int width, const int pooled_height, const int 
pooled_width,
+const int iy_upper, const int ix_upper, DType roi_start_h, DType 
roi_start_w,
+DType bin_size_h, DType bin_size_w, int roi_bin_grid_h, int roi_bin_grid_w,
+DType roi_center_h, DType roi_center_w, DType theta,
+std::vector<position_for_bilinear_interpolate<DType>> *pre_calc) {
+  int pre_calc_index = 0;
+  DType cosTheta = cos(theta);
+  DType sinTheta = sin(theta);
+  for (int ph = 0; ph < pooled_height; ph++) {
+for (int pw = 0; pw < pooled_width; pw++) {
+  // calc bin grid position (xx,yy)
+  for (int iy = 0; iy < iy_upper; iy++) {
+const DType yy = roi_start_h + ph * bin_size_h +
+    static_cast<DType>(iy + .5f) * bin_size_h /
+    static_cast<DType>(roi_bin_grid_h);  // e.g., 0.5, 1.5
+for (int ix = 0; ix < ix_upper; ix++) {
+  const DType xx = roi_start_w + pw * bin_size_w +
+      static_cast<DType>(ix + .5f) * bin_size_w /
+      static_cast<DType>(roi_bin_grid_w);
+
+  // Rotate by theta around the center and translate
+  DType x = xx * cosTheta + yy * sinTheta + roi_center_w;
+  DType y = yy * cosTheta - xx * sinTheta + roi_center_h;
+
+  // deal with: inverse elements are out of feature map boundary
+  if (y < -1.0 || y > height || x < -1.0 || x > width) {
+// empty
+position_for_bilinear_interpolate<DType> pc;
+pc.pos1 = 0;
+pc.pos2 = 0;
+pc.pos3 = 0;
+pc.pos4 = 0;
+pc.w1 = 0;
+pc.w2 = 0;
+pc.w3 = 0;
+pc.w4 = 0;
+pre_calc->at(pre_calc_index) = pc;
+pre_calc_index += 1;
+continue;
+  }
+  if (y <= 0) {
+y = 0;
+  }
+  if (x <= 0) {
+x = 0;
+  }
+
+  // calc 4 points for interpolation
+  int y_low = static_cast<int>(y);
+  int x_low = static_cast<int>(x);
+  int y_high;
+  int x_high;
+  if (y_low >= height - 1) {
+y_high = y_low = height - 1;
+y = (DType)y_low;
+  } else {
+y_high = y_low + 1;
+  }
+  if (x_low >= width - 1) {
+x_high = x_low = width - 1;
+x = (DType)x_low;
+  } else {
+x_high = x_low + 1;
+  }
+  DType ly = y - y_low;
+  DType lx = x - x_low;
+  DType hy = 1. - ly, hx = 1. - lx;
+  DType w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;
+
+  // Save weights and indices
+  position_for_bilinear_interpolate<DType> pc;
 
 Review comment:
   Thank you for your advice, @wkcn.






[GitHub] [incubator-mxnet] zixuanweeei commented on issue #15741: MKL-DNN LBR-GRU Inference Integration (FP32 LBR-GRU)

2019-08-28 Thread GitBox
zixuanweeei commented on issue #15741: MKL-DNN LBR-GRU Inference Integration 
(FP32 LBR-GRU)
URL: https://github.com/apache/incubator-mxnet/pull/15741#issuecomment-526026698
 
 
   @pengzhao-intel Sure. There is a lot of refactoring work on both the MKL-DNN RNN 
and the naive RNN. At present, the MKL-DNN related stuff is under review. Perhaps we 
can just drop this PR and start a new one from the current commit on master.




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15826: Numpy add numpy op moveaxis

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #15826: Numpy add numpy op 
moveaxis
URL: https://github.com/apache/incubator-mxnet/pull/15826#discussion_r318877886
 
 

 ##
 File path: src/operator/numpy/np_matrix_op.cc
 ##
 @@ -345,5 +346,95 @@ Examples::
 .add_argument("data", "NDArray-or-Symbol[]", "List of arrays to stack")
 .add_arguments(StackParam::__FIELDS__());
 
+bool NumpyMoveaxisShape(const nnvm::NodeAttrs& attrs,
+mxnet::ShapeVector *in_attrs,
+mxnet::ShapeVector *out_attrs) {
+  const NumpyMoveaxisParam& param = nnvm::get<NumpyMoveaxisParam>(attrs.parsed);
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  mxnet::TShape& shp = (*in_attrs)[0];
+  CHECK_LE(shp.ndim(), 6) << "Transpose support at most 6 dimensions";
+  CHECK_EQ(param.source.ndim(), param.destination.ndim())
+<< "source and destination not equal.";
+  mxnet::TShape ret(shp.ndim(), -1);
+  mxnet::TShape axes(shp.ndim(), -1);
+  std::vector state_axes(shp.ndim(), false);
+  mxnet::TShape real_src(param.source.ndim(), -1);
+  mxnet::TShape real_des(param.destination.ndim(), -1);
+  for (int i = 0; i < param.source.ndim(); ++i) {
+if (param.source[i] >= 0) {
+  CHECK_LT(static_cast<int>(param.source[i]), shp.ndim());
+  real_src[i] = param.source[i];
+} else {
+  CHECK_LT(param.source[i] + shp.ndim(), shp.ndim());
+  real_src[i] = param.source[i] + shp.ndim();
+}
+if (param.destination[i] >= 0) {
+  CHECK_LT(static_cast<int>(param.destination[i]), shp.ndim());
+  real_des[i] = param.destination[i];
+} else {
+  CHECK_LT(param.destination[i] + shp.ndim(), shp.ndim());
+  real_des[i] = param.destination[i] + shp.ndim();
+}
+  }
+  if (shp.ndim() > 1) {
+for (int i = 0; i < param.source.ndim() - 1; ++i) {
+  for (int j = i + 1; j < param.source.ndim(); ++j) {
+CHECK_NE(real_src[i], real_src[j])
+  << "repeated axis in `source` argument";
+CHECK_NE(real_des[i], real_des[j])
+  << "repeated axis in `destination` argument";
+  }
+}
+  }
+  for (int i = 0; i < param.source.ndim(); ++i) {
+axes[real_des[i]] = real_src[i];
+state_axes[real_src[i]] = true;
+  }
+  for (int i = 0; i < axes.ndim(); ++i) {
+if (axes[i] < 0) {
+  for (int j = 0; j < axes.ndim(); ++j) {
+if (state_axes[j] == false) {
+  axes[i] = j;
+  state_axes[j] = true;
+  break;
+}
+  }
+}
+  }
+  for (int i = 0; i < shp.ndim(); ++i) {
+    CHECK(axes[i] < static_cast<int>(shp.ndim()));
+ret[i] = shp[axes[i]];
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, ret);
+  return shape_is_known(ret);
+}
+
+NNVM_REGISTER_OP(_np_moveaxis)
+.describe(R"code(Move axes of an array to new positions.
+Other axes remain in their original order.
+)code" ADD_FILELINE)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser<NumpyMoveaxisParam>)
+.set_attr<mxnet::FInferShape>("FInferShape", NumpyMoveaxisShape)
+.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<1, 1>)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+    const NumpyMoveaxisParam& param = nnvm::get<NumpyMoveaxisParam>(n->attrs.parsed);
+    std::ostringstream os1;
 
 Review comment:
   indentation.




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15902: Numpy add numpy op roll

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #15902: Numpy add numpy op roll
URL: https://github.com/apache/incubator-mxnet/pull/15902#discussion_r318875866
 
 

 ##
 File path: src/operator/numpy/np_matrix_op.cc
 ##
 @@ -345,5 +346,73 @@ Examples::
 .add_argument("data", "NDArray-or-Symbol[]", "List of arrays to stack")
 .add_arguments(StackParam::__FIELDS__());
 
+inline bool NumpyRollShape(const nnvm::NodeAttrs& attrs,
+   mxnet::ShapeVector *in_attrs,
+   mxnet::ShapeVector *out_attrs) {
+  using namespace mshadow;
+  const NumpyRollParam& param = nnvm::get<NumpyRollParam>(attrs.parsed);
+
+  if (!param.shift.has_value()) {
+LOG(FATAL) << "roll missing 1 required positional argument: 'shift'.";
+  }
+  if (param.shift.value().ndim() > 1 &&
+  param.axis.has_value() &&
+  param.axis.value().ndim() != param.shift.value().ndim()) {
+LOG(FATAL) << "shift and `axis` must be a tuple of the same size.";
+  }
+  if (!param.axis.has_value() && param.shift.has_value() && 
param.shift.value().ndim() > 1) {
+LOG(FATAL) << "shift must be an int.";
+  }
+  if (param.axis.has_value()) {
+mxnet::TShape axes(param.axis.value());
+const index_t ndim = (*in_attrs)[0].ndim();
+for (index_t i = 0; i < axes.ndim(); i++) {
+  if (axes[i] < 0) {
+axes[i] += ndim;
+  }
+}
+std::sort(axes.begin(), axes.end());
+for (index_t i = 1; i < axes.ndim(); i++) {
+  CHECK_LT(axes[i - 1], axes[i])
+<< "axes have duplicates " << axes;
+}
+CHECK_LT(axes[axes.ndim() - 1], ndim)
+  << "axis " << axes[axes.ndim() - 1]
+  << " Exceeds input dimensions " << (*in_attrs)[0];
+CHECK_GE(axes[0], 0)
+  << "Reduction axis " << param.axis.value()
+  << " Exceeds input dimensions " << (*in_attrs)[0];
+  }
+  return ElemwiseShape<1, 1>(attrs, in_attrs, out_attrs);
+}
+
+NNVM_REGISTER_OP(_np_roll)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser<NumpyRollParam>)
+.set_attr<nnvm::FListInputNames>("FListInputNames",
+  [](const NodeAttrs& attrs) {
+    return std::vector<std::string>{"data"};
+})
+.set_attr<mxnet::FInferShape>("FInferShape", NumpyRollShape)
+.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<1, 1>)
+.set_attr<mxnet::FCompute>("FCompute", NumpyRollCompute<cpu>)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 
 Review comment:
   Indentation.




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15902: Numpy add numpy op roll

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #15902: Numpy add numpy op roll
URL: https://github.com/apache/incubator-mxnet/pull/15902#discussion_r318875757
 
 

 ##
 File path: src/operator/numpy/np_matrix_op.cc
 ##
 @@ -345,5 +346,73 @@ Examples::
 .add_argument("data", "NDArray-or-Symbol[]", "List of arrays to stack")
 .add_arguments(StackParam::__FIELDS__());
 
+inline bool NumpyRollShape(const nnvm::NodeAttrs& attrs,
+   mxnet::ShapeVector *in_attrs,
+   mxnet::ShapeVector *out_attrs) {
+  using namespace mshadow;
+  const NumpyRollParam& param = nnvm::get<NumpyRollParam>(attrs.parsed);
+
+  if (!param.shift.has_value()) {
+LOG(FATAL) << "roll missing 1 required positional argument: 'shift'.";
+  }
+  if (param.shift.value().ndim() > 1 &&
+  param.axis.has_value() &&
+  param.axis.value().ndim() != param.shift.value().ndim()) {
+LOG(FATAL) << "shift and `axis` must be a tuple of the same size.";
+  }
+  if (!param.axis.has_value() && param.shift.has_value() && param.shift.value().ndim() > 1) {
+LOG(FATAL) << "shift must be an int.";
+  }
+  if (param.axis.has_value()) {
+mxnet::TShape axes(param.axis.value());
+const index_t ndim = (*in_attrs)[0].ndim();
+for (index_t i = 0; i < axes.ndim(); i++) {
+  if (axes[i] < 0) {
+axes[i] += ndim;
+  }
+}
+std::sort(axes.begin(), axes.end());
+for (index_t i = 1; i < axes.ndim(); i++) {
+  CHECK_LT(axes[i - 1], axes[i])
+<< "axes have duplicates " << axes;
+}
+CHECK_LT(axes[axes.ndim() - 1], ndim)
+  << "axis " << axes[axes.ndim() - 1]
+  << " Exceeds input dimensions " << (*in_attrs)[0];
+CHECK_GE(axes[0], 0)
+  << "Reduction axis " << param.axis.value()
+  << " Exceeds input dimensions " << (*in_attrs)[0];
+  }
+  return ElemwiseShape<1, 1>(attrs, in_attrs, out_attrs);
+}
+
+NNVM_REGISTER_OP(_np_roll)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser<NumpyRollParam>)
+.set_attr<nnvm::FListInputNames>("FListInputNames",
+  [](const NodeAttrs& attrs) {
+ return std::vector<std::string>{"data"};
+})
+.set_attr<mxnet::FInferShape>("FInferShape", NumpyRollShape)
+.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<1, 1>)
+.set_attr<FCompute>("FCompute", NumpyRollCompute<cpu>)
+.set_attr<nnvm::FGradient>("FGradient",
+   [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+  const NumpyRollParam& param = nnvm::get<NumpyRollParam>(n->attrs.parsed);
+  std::ostringstream os1;
+  os1 << param.shift;
+  std::ostringstream os2;
+  os2 << param.axis;
+  return MakeNonlossGradNode("_np_roll", n, ograds, {},
+ {{"shift", os1.str()}, {"axis", os2.str()}});
+})
+.set_attr<FResourceRequest>("FResourceRequest",
+[](const NodeAttrs& n) {
+   return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
 
 Review comment:
   Indentation.




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15741: MKL-DNN LBR-GRU Inference Integration (FP32 LBR-GRU)

2019-08-28 Thread GitBox
pengzhao-intel commented on issue #15741: MKL-DNN LBR-GRU Inference Integration 
(FP32 LBR-GRU)
URL: https://github.com/apache/incubator-mxnet/pull/15741#issuecomment-526009302
 
 
   If it still needs a lot of effort to pass CI, we can drop it and wait for our 
1.0 upgrade.
   @zixuanweeei you can make the decision :) 




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15902: Numpy add numpy op roll

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #15902: Numpy add numpy op roll
URL: https://github.com/apache/incubator-mxnet/pull/15902#discussion_r318875806
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1126,6 +1126,54 @@ def test_np_randint():
 verify_generator(generator=generator_mx_same_seed, 
buckets=buckets, probs=probs, nrepeat=100)
 
 
+@with_seed()
+@use_np
+def test_np_roll():
+class TestRoll(HybridBlock):
+def __init__(self, shift=None, axis=None):
+super(TestRoll, self).__init__()
+self._shift = shift
+self._axis = axis
+
+def hybrid_forward(self, F, x):
+return F.np.roll(x, shift=self._shift, axis=self._axis)
+
+dtypes = ['int32', 'int64', 'float16', 'float32', 'float64']
+configs = [
+((), (3,), None),
+((1,), (-3,), None),
+((20,), (-3,), None),
+((3,), (2,), 0),
+((2, 3, 4), (12,), (1,)),
+((2, 3, 4), (10, -10), (0, 1)),
+((2, 3, 4, 5), (0, 1), (-1, 2)),
+((2, 3, 0, 1), (0, 1), (-1, 2)),
+((2, 3, 4, 5), 10, (0, 2)),
+]
+for dtype in dtypes:
+for config in configs:
+for hybridize in [False, True]:
+shape, shift, axis = config[0], config[1], config[2]
+x = rand_ndarray(shape=shape, dtype=dtype).as_np_ndarray()
+net = TestRoll(shift=shift, axis=axis)
+np_out = _np.roll(x.asnumpy(), shift=shift, axis=axis)
+if hybridize:
+net.hybridize()
+x.attach_grad()
+with mx.autograd.record():
+mx_out = net(x)
+assert mx_out.shape == np_out.shape
+mx_out.backward()
+assert same(mx_out.asnumpy(), np_out)
+assert same(x.grad.shape, x.shape)
+assert same(x.grad.asnumpy(), _np.ones(shape))
+
+# test imperativen
 
 Review comment:
   Get rid of this line.
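As background for this thread, the roll semantics the test above exercises can be reproduced with stock NumPy (a minimal sketch; `np` here is plain `numpy`, not the `mxnet.numpy` module under review):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)   # [[0 1 2], [3 4 5]]

# With axis=None the array is flattened, rolled, then reshaped back.
flat_roll = np.roll(x, shift=1)                     # [[5 0 1], [2 3 4]]

# With tuple shift/axis, each axis is rolled by its matching shift:
# rows rolled down by 1, then columns rolled left by 1.
axis_roll = np.roll(x, shift=(1, -1), axis=(0, 1))  # [[4 5 3], [1 2 0]]
```

Since roll only permutes elements, its gradient is a roll by the negated shifts, which is why the test expects `x.grad` to be all ones.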




[incubator-mxnet] branch master updated (649429d -> 3f7b6ee)

2019-08-28 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 649429d  Disable flaky test in test_amp_conversion (#16031)
 add 3f7b6ee  Improve quantization flow (#15961)

No new revisions were added by this update.

Summary of changes:
 example/quantization/README.md |  23 +-
 example/quantization/imagenet_gen_qsym.py  |  17 +-
 example/quantization/imagenet_gen_qsym_mkldnn.py   |   3 +-
 example/ssd/quantization.py|  11 +-
 include/mxnet/c_api.h  |   7 +-
 include/mxnet/op_attr_types.h  |  30 ++
 python/mxnet/contrib/quantization.py   | 285 ---
 src/c_api/c_api_symbolic.cc|  17 +-
 src/common/utils.h |  10 +
 src/executor/graph_executor.cc |  15 +-
 .../mkldnn/mkldnn_flatten-inl.h}   |  30 +-
 src/operator/nn/mkldnn/mkldnn_flatten.cc   |  10 +-
 .../np_init_op.cu => quantization/calibrate-inl.h} |  35 +--
 src/operator/quantization/calibrate.cc | 215 ++
 .../quantization/mkldnn/mkldnn_quantize_v2-inl.h   |   6 +-
 .../mkldnn/mkldnn_quantized_elemwise_add.cc|   4 +-
 .../mkldnn/mkldnn_quantized_flatten.cc |  61 
 src/operator/quantization/quantization_utils.h |   4 +-
 src/operator/quantization/quantize_graph_pass.cc   | 315 +
 src/operator/quantization/quantize_v2.cc   |   3 +
 src/operator/quantization/quantized_batch_norm.cc  |   3 +
 src/operator/quantization/quantized_conv.cc|   3 +
 .../quantization/quantized_fully_connected.cc  |   3 +
 src/operator/quantization/requantize.cc|   3 +
 src/operator/subgraph/mkldnn/mkldnn_conv.cc|   3 +
 src/operator/subgraph/mkldnn/mkldnn_fc.cc  |   3 +
 tests/python/mkl/test_subgraph.py  |   9 +-
 tests/python/quantization/test_quantization.py |  53 +++-
 tests/python/unittest/test_operator.py |   5 +-
 29 files changed, 776 insertions(+), 410 deletions(-)
 copy src/operator/{quantization/mkldnn/mkldnn_quantized_ops-inl.h => 
nn/mkldnn/mkldnn_flatten-inl.h} (54%)
 copy src/operator/{numpy/np_init_op.cu => quantization/calibrate-inl.h} (60%)
 create mode 100644 src/operator/quantization/calibrate.cc
 create mode 100644 src/operator/quantization/mkldnn/mkldnn_quantized_flatten.cc



[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15987: Numpy add numpy op rot90

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #15987: Numpy add numpy op rot90
URL: https://github.com/apache/incubator-mxnet/pull/15987#discussion_r318874589
 
 

 ##
 File path: src/operator/numpy/np_matrix_op-inl.h
 ##
 @@ -60,6 +60,246 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
   }
 }
 
+struct NumpyRot90Param : public dmlc::Parameter<NumpyRot90Param> {
+  int k;
+  dmlc::optional<mxnet::TShape> axes;
+  DMLC_DECLARE_PARAMETER(NumpyRot90Param) {
+DMLC_DECLARE_FIELD(k)
+.set_default(1)
+.describe("Number of times the array is rotated by 90 degrees.");
+DMLC_DECLARE_FIELD(axes)
+.set_default(dmlc::optional<mxnet::TShape>())
+.describe(" The array is rotated in the plane defined by the axes. Axes must be different.");
+  }
+};
+
+struct rot90reverse {
+  MSHADOW_XINLINE static index_t ReverseIndex(index_t idx,
+  index_t nreversedim,
+  const index_t * stride_,
+  const index_t * trailing_) {
+index_t outputIndex = idx;
+for (index_t i = 0; i < nreversedim; ++i) {
+  const index_t low = outputIndex % trailing_[i];
+  index_t high = outputIndex / trailing_[i];
+  const index_t x = high % stride_[i];
+  high /= stride_[i];
+  outputIndex = (high * stride_[i] + stride_[i] - 1 - x) * trailing_[i] + 
low;
+}
+return outputIndex;
+  }
+  template<typename DType>
+  MSHADOW_XINLINE  static void Map(index_t index, index_t nreversedim, const DType *src, DType *dst,
+   const index_t * stride_,
+   const index_t * trailing_) {
+index_t new_idx = ReverseIndex(index, nreversedim, stride_, trailing_);
+dst[new_idx] = src[index];
+  }
+};
+
+template<typename xpu>
+void NumpyRot90ComputeFlipIml(const OpContext& ctx,
+  const std::vector<TBlob>& inputs,
+  const std::vector<OpReqType>& req,
+  const std::vector<TBlob>& outputs,
+  const index_t axis0, const index_t axis1) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+
+  const mxnet::TShape& ishape = inputs[0].shape_;
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+
+  std::vector<index_t> stride_(2);
+  std::vector<index_t> trailing_(2);
+  index_t reverse_index = 0;
+  std::vector<index_t> temp{axis0, axis1};
+  for (int axis : temp) {
+stride_[reverse_index] = ishape[axis];
+trailing_[reverse_index] = 1;
+for (int i2 = axis + 1; i2 < ishape.ndim(); ++i2) {
+  trailing_[reverse_index] *= ishape[i2];
+}
+reverse_index++;
+  }
+
+  index_t workspace_size = 2 * sizeof(index_t);
+  Tensor<xpu, 1, char> workspace =
+  ctx.requested[0].get_space_typed<xpu, 1, char>(Shape1(2 * workspace_size), s);
+  Tensor<cpu, 1, index_t> stride_cpu_tensor(stride_.data(), Shape1(stride_.size()));
+  Tensor<xpu, 1, index_t> stride_xpu_tensor(
+  reinterpret_cast<index_t*>(workspace.dptr_), Shape1(stride_.size()));
+  Tensor<cpu, 1, index_t> trailing_cpu_tensor(trailing_.data(), Shape1(trailing_.size()));
+  Tensor<xpu, 1, index_t> trailing_xpu_tensor(
+  reinterpret_cast<index_t*>(workspace.dptr_ + workspace_size), Shape1(trailing_.size()));
+
+  mshadow::Copy(stride_xpu_tensor, stride_cpu_tensor, s);
+  mshadow::Copy(trailing_xpu_tensor, trailing_cpu_tensor, s);
+  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+  Kernel<rot90reverse, xpu>::Launch(s, inputs[0].Size(), reverse_index,
+  inputs[0].dptr<DType>(), outputs[0].dptr<DType>(),
+  stride_xpu_tensor.dptr_, trailing_xpu_tensor.dptr_);
+  });
+}
+
+struct rot90Transreverse {
+  MSHADOW_XINLINE static index_t ReverseIndex(index_t idx,
+  const index_t stride_,
+  const index_t trailing_) {
+index_t outputIndex = idx;
+const index_t low = outputIndex % trailing_;
+index_t high = outputIndex / trailing_;
+const index_t x = high % stride_;
+high /= stride_;
+outputIndex = (high * stride_ + stride_ - 1 - x) * trailing_ + low;
+
+return outputIndex;
+  }
+  template<typename DType>
+  MSHADOW_XINLINE  static void Map(index_t index, const DType *src, DType *dst,
+   const index_t  stride_,
+   const index_t  trailing_) {
+index_t new_idx = ReverseIndex(index, stride_, trailing_);
+dst[new_idx] = src[index];
+  }
+};
+
+template<typename xpu>
+void NumpyRot90ComputeFlipTransposeIml(const OpContext& ctx,
+   const std::vector<TBlob>& inputs,
+   const std::vector<OpReqType>& req,
+   const std::vector<TBlob>& outputs,
+   const mxnet::TShape axes_list,
+   const index_t axis) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+
+  const mxnet::TShape& ishape = inputs[0].shape_;
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+
+  index_t stride_;
+  index_t trailing_;
+
+  stride_ = ishape[axis];
+  trailing_ = 1;

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15987: Numpy add numpy op rot90

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #15987: Numpy add numpy op rot90
URL: https://github.com/apache/incubator-mxnet/pull/15987#discussion_r318874396
 
 

 ##
 File path: src/operator/numpy/np_matrix_op-inl.h
 ##
 @@ -60,6 +60,246 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
   }
 }
 
+struct NumpyRot90Param : public dmlc::Parameter<NumpyRot90Param> {
+  int k;
+  dmlc::optional<mxnet::TShape> axes;
+  DMLC_DECLARE_PARAMETER(NumpyRot90Param) {
+DMLC_DECLARE_FIELD(k)
+.set_default(1)
+.describe("Number of times the array is rotated by 90 degrees.");
+DMLC_DECLARE_FIELD(axes)
+.set_default(dmlc::optional<mxnet::TShape>())
+.describe(" The array is rotated in the plane defined by the axes. Axes must be different.");
+  }
+};
+
+struct rot90reverse {
+  MSHADOW_XINLINE static index_t ReverseIndex(index_t idx,
+  index_t nreversedim,
+  const index_t * stride_,
+  const index_t * trailing_) {
+index_t outputIndex = idx;
+for (index_t i = 0; i < nreversedim; ++i) {
+  const index_t low = outputIndex % trailing_[i];
+  index_t high = outputIndex / trailing_[i];
+  const index_t x = high % stride_[i];
+  high /= stride_[i];
+  outputIndex = (high * stride_[i] + stride_[i] - 1 - x) * trailing_[i] + 
low;
+}
+return outputIndex;
+  }
+  template<typename DType>
+  MSHADOW_XINLINE  static void Map(index_t index, index_t nreversedim, const DType *src, DType *dst,
 
 Review comment:
   should have only 1 space after `MSHADOW_XINLINE`
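As an aside for readers of this thread, the flat-index mirroring done by `rot90reverse::ReverseIndex` in the hunk above can be sanity-checked against NumPy flipping. A hedged Python sketch (variable names are illustrative, not from the PR):

```python
import numpy as np

def reverse_index(idx, strides, trailings):
    # Mirror a flat index along each reversed axis, mimicking the
    # rot90reverse::ReverseIndex arithmetic quoted above.
    out = idx
    for stride, trailing in zip(strides, trailings):
        low = out % trailing
        high = out // trailing
        x = high % stride
        high //= stride
        out = (high * stride + (stride - 1 - x)) * trailing + low
    return out

a = np.arange(12).reshape(3, 4)
# For each flipped axis: stride = shape[axis], trailing = product of later dims.
strides, trailings = [3, 4], [4, 1]
flipped = np.empty_like(a.ravel())
for i in range(a.size):
    flipped[reverse_index(i, strides, trailings)] = a.ravel()[i]
assert (flipped.reshape(3, 4) == a[::-1, ::-1]).all()
```

Scattering each source element to its mirrored index reproduces a flip of both axes, which is the building block rot90 combines with a transpose.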




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15987: Numpy add numpy op rot90

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #15987: Numpy add numpy op rot90
URL: https://github.com/apache/incubator-mxnet/pull/15987#discussion_r318874434
 
 

 ##
 File path: src/operator/numpy/np_matrix_op-inl.h
 ##
 @@ -60,6 +60,246 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
   }
 }
 
+struct NumpyRot90Param : public dmlc::Parameter<NumpyRot90Param> {
+  int k;
+  dmlc::optional<mxnet::TShape> axes;
+  DMLC_DECLARE_PARAMETER(NumpyRot90Param) {
+DMLC_DECLARE_FIELD(k)
+.set_default(1)
+.describe("Number of times the array is rotated by 90 degrees.");
+DMLC_DECLARE_FIELD(axes)
+.set_default(dmlc::optional<mxnet::TShape>())
+.describe(" The array is rotated in the plane defined by the axes. Axes must be different.");
+  }
+};
+
+struct rot90reverse {
+  MSHADOW_XINLINE static index_t ReverseIndex(index_t idx,
+  index_t nreversedim,
+  const index_t * stride_,
+  const index_t * trailing_) {
+index_t outputIndex = idx;
+for (index_t i = 0; i < nreversedim; ++i) {
+  const index_t low = outputIndex % trailing_[i];
+  index_t high = outputIndex / trailing_[i];
+  const index_t x = high % stride_[i];
+  high /= stride_[i];
+  outputIndex = (high * stride_[i] + stride_[i] - 1 - x) * trailing_[i] + 
low;
+}
+return outputIndex;
+  }
+  template<typename DType>
+  MSHADOW_XINLINE  static void Map(index_t index, index_t nreversedim, const DType *src, DType *dst,
+   const index_t * stride_,
+   const index_t * trailing_) {
+index_t new_idx = ReverseIndex(index, nreversedim, stride_, trailing_);
+dst[new_idx] = src[index];
+  }
+};
+
+template<typename xpu>
+void NumpyRot90ComputeFlipIml(const OpContext& ctx,
+  const std::vector<TBlob>& inputs,
+  const std::vector<OpReqType>& req,
+  const std::vector<TBlob>& outputs,
+  const index_t axis0, const index_t axis1) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+
+  const mxnet::TShape& ishape = inputs[0].shape_;
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+
+  std::vector<index_t> stride_(2);
+  std::vector<index_t> trailing_(2);
+  index_t reverse_index = 0;
+  std::vector<index_t> temp{axis0, axis1};
+  for (int axis : temp) {
+stride_[reverse_index] = ishape[axis];
+trailing_[reverse_index] = 1;
+for (int i2 = axis + 1; i2 < ishape.ndim(); ++i2) {
+  trailing_[reverse_index] *= ishape[i2];
+}
+reverse_index++;
+  }
+
+  index_t workspace_size = 2 * sizeof(index_t);
+  Tensor<xpu, 1, char> workspace =
+  ctx.requested[0].get_space_typed<xpu, 1, char>(Shape1(2 * workspace_size), s);
+  Tensor<cpu, 1, index_t> stride_cpu_tensor(stride_.data(), Shape1(stride_.size()));
+  Tensor<xpu, 1, index_t> stride_xpu_tensor(
+  reinterpret_cast<index_t*>(workspace.dptr_), Shape1(stride_.size()));
+  Tensor<cpu, 1, index_t> trailing_cpu_tensor(trailing_.data(), Shape1(trailing_.size()));
+  Tensor<xpu, 1, index_t> trailing_xpu_tensor(
+  reinterpret_cast<index_t*>(workspace.dptr_ + workspace_size), Shape1(trailing_.size()));
+
+  mshadow::Copy(stride_xpu_tensor, stride_cpu_tensor, s);
+  mshadow::Copy(trailing_xpu_tensor, trailing_cpu_tensor, s);
+  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+  Kernel<rot90reverse, xpu>::Launch(s, inputs[0].Size(), reverse_index,
 
 Review comment:
   Pay attention to the indentation.




[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #15961: Improve quantization flow

2019-08-28 Thread GitBox
pengzhao-intel merged pull request #15961: Improve quantization flow
URL: https://github.com/apache/incubator-mxnet/pull/15961
 
 
   




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16025: Numpy add numpy op left_shift and right_shift

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #16025: Numpy add numpy op 
left_shift and right_shift
URL: https://github.com/apache/incubator-mxnet/pull/16025#discussion_r318873768
 
 

 ##
 File path: src/operator/operator_tune.cc
 ##
 @@ -84,6 +84,13 @@ struct static_init_var {
   __macro$(__VA_ARGS__, int32_t); \
   __macro$(__VA_ARGS__, int64_t);
 
+#define MSHADOW_MACRO_INT_FOREACH_TYPE(__macro$, ...) \
+  __macro$(__VA_ARGS__, uint8_t); \
+  __macro$(__VA_ARGS__, int8_t); \
+  __macro$(__VA_ARGS__, int32_t); \
+  __macro$(__VA_ARGS__, int64_t);
+
+
 
 Review comment:
   1 less blank line.




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16025: Numpy add numpy op left_shift and right_shift

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #16025: Numpy add numpy op 
left_shift and right_shift
URL: https://github.com/apache/incubator-mxnet/pull/16025#discussion_r318873830
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1126,6 +1126,83 @@ def test_np_randint():
 verify_generator(generator=generator_mx_same_seed, 
buckets=buckets, probs=probs, nrepeat=100)
 
 
+@with_seed()
+@use_np
+def test_np_left_shift():
+class TestLeftShift(HybridBlock):
+def __init__(self):
+super(TestLeftShift, self).__init__()
+
+def hybrid_forward(self, F, a, b):
+return F.np.left_shift(a, b)
+
+shapes = [
+((), ()),
+((), (2,)),
+((3,), ()),
+((3, 4), (3, 4)),
+((3, 4, 5, 6), (6,)),
+((2, 0, 3), (3,)),
+((2, 1, 3), (2, 3))
+]
+dtypes = ['uint8', 'int8', 'int32', 'int64']
+for shape in shapes:
+a, b = shape[0], shape[1]
+for dtype in dtypes:
+for hybridize in [True, False]:
+net = TestLeftShift()
+x1 = mx.nd.random.uniform(-10.0, 10.0, 
a).astype(dtype).as_np_ndarray()
+x2 = mx.nd.random.uniform(0, 10.0, 
b).astype(dtype).as_np_ndarray()
+if hybridize:
+net.hybridize()
+
+mx_out = net(x1, x2)
+np_out = _np.left_shift(x1.asnumpy(), x2.asnumpy())
+assert same(mx_out.asnumpy(), np_out)
+
+mx_out = np.left_shift(x1, x2)
+np_out = _np.left_shift(x1.asnumpy(), x2.asnumpy())
+assert same(mx_out.asnumpy(), np_out)
+
+
+@with_seed()
+@use_np
+def test_np_right_shift():
 
 Review comment:
   Merge the 2 tests into one.
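For context, both tests check the same elementwise bit-shift semantics, which stock NumPy defines as follows (a minimal sketch with plain `numpy`):

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int32)
left = np.left_shift(a, 2)    # shift bits left: equivalent to a * 2**2
right = np.right_shift(a, 1)  # shift bits right: equivalent to a // 2
```

Because the two operators differ only in the underlying kernel, a single parameterized test looping over both functions covers them without duplicating the shape/dtype scaffolding.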




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16025: Numpy add numpy op left_shift and right_shift

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #16025: Numpy add numpy op 
left_shift and right_shift
URL: https://github.com/apache/incubator-mxnet/pull/16025#discussion_r318873675
 
 

 ##
 File path: src/operator/numpy/np_elemwise_binary_broadcast_op.h
 ##
 @@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_elemwise_binary_broadcast_op.h
+ * \brief Function definition of elementwise binary broadcast related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_ELEMWISE_BINARY_BROADCAST_OP_H_
+#define MXNET_OPERATOR_NUMPY_NP_ELEMWISE_BINARY_BROADCAST_OP_H_
+
+#include <vector>
+#include "../tensor/elemwise_binary_broadcast_op.h"
+#include "../tensor/elemwise_binary_scalar_op.h"
+
+namespace mxnet {
+namespace op {
+
+/*! \brief Minimum of three */
+static MSHADOW_XINLINE size_t minthree(const size_t a, const size_t b, const 
size_t c) {
+  return a < b ? (a < c ? a : c) : (b < c ? b : c);
+}
+
+template<typename xpu, typename OP>
+static void BitCompute(const nnvm::NodeAttrs &attrs,
+   const OpContext &ctx,
+   const std::vector<TBlob> &inputs,
+   const std::vector<OpReqType> &req,
+   const std::vector<TBlob> &outputs) {
+  using namespace mxnet_op;
+  if (req[0] != kNullOp) {
+Stream<xpu> *s = ctx.get_stream<xpu>();
+CHECK_EQ(inputs.size(), 2U);
+CHECK_EQ(outputs.size(), 1U);
+MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+  MXNET_INT_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
++ DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
+if (size != 0) {
+  Kernel<mxnet_op::op_with_req<OP, Req>, xpu>::Launch(s, size,
+  outputs[0].dptr<DType>(),
+  inputs[0].dptr<DType>(), inputs[1].dptr<DType>());
+}
+  });
+});
+  }
+}
+
+template<typename xpu, typename OP>
+void BitBinaryBroadcastCompute(const nnvm::NodeAttrs& attrs,
+   const OpContext& ctx,
+   const std::vector<TBlob>& inputs,
+   const std::vector<OpReqType>& req,
+   const std::vector<TBlob>& outputs) {
+  if (outputs[0].shape_.Size() == 0U) return;
+  mxnet::TShape new_lshape, new_rshape, new_oshape;
+  int ndim = BinaryBroadcastShapeCompact(inputs[0].shape_, inputs[1].shape_, 
outputs[0].shape_,
+ &new_lshape, &new_rshape, 
&new_oshape);
+  if (!ndim) {
+BitCompute<xpu, OP>(attrs, ctx, inputs, req, outputs);
+  } else {
+if (req[0] != kNullOp) {
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  MXNET_INT_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+BROADCAST_NDIM_SWITCH(ndim, NDim, {
+mshadow::Shape<NDim> oshape = new_oshape.get<NDim>();
 
 Review comment:
   2-space indentation




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16025: Numpy add numpy op left_shift and right_shift

2019-08-28 Thread GitBox
haojin2 commented on a change in pull request #16025: Numpy add numpy op 
left_shift and right_shift
URL: https://github.com/apache/incubator-mxnet/pull/16025#discussion_r318873637
 
 

 ##
 File path: src/operator/numpy/np_elemwise_binary_broadcast_op.h
 ##
 @@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_elemwise_binary_broadcast_op.h
+ * \brief Function definition of elementwise binary broadcast related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_ELEMWISE_BINARY_BROADCAST_OP_H_
+#define MXNET_OPERATOR_NUMPY_NP_ELEMWISE_BINARY_BROADCAST_OP_H_
+
+#include <vector>
+#include "../tensor/elemwise_binary_broadcast_op.h"
+#include "../tensor/elemwise_binary_scalar_op.h"
+
+namespace mxnet {
+namespace op {
+
+/*! \brief Minimum of three */
+static MSHADOW_XINLINE size_t minthree(const size_t a, const size_t b, const 
size_t c) {
+  return a < b ? (a < c ? a : c) : (b < c ? b : c);
+}
+
+template<typename xpu, typename OP>
+static void BitCompute(const nnvm::NodeAttrs &attrs,
+   const OpContext &ctx,
+   const std::vector<TBlob> &inputs,
+   const std::vector<OpReqType> &req,
+   const std::vector<TBlob> &outputs) {
+  using namespace mxnet_op;
+  if (req[0] != kNullOp) {
+Stream<xpu> *s = ctx.get_stream<xpu>();
+CHECK_EQ(inputs.size(), 2U);
+CHECK_EQ(outputs.size(), 1U);
+MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+  MXNET_INT_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
++ DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
 
 Review comment:
   indentation here.




[GitHub] [incubator-mxnet] yangwenhuan commented on issue #16020: Is there any communication between parameter servers?

2019-08-28 Thread GitBox
yangwenhuan commented on issue #16020: Is there any communication between 
parameter servers?
URL: 
https://github.com/apache/incubator-mxnet/issues/16020#issuecomment-526001104
 
 
   > No, but there's all-to-all communication between workers and servers
   
   Could you show me the relevant source code?




[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
ZhennanQin commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318868501
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* 
out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+ const char* backend_name,
+ SymbolHandle* ret_sym_handle,
+ const mx_uint len,
+ NDArrayHandle* in_args_handle,
+ const mx_uint num_options,
+ const char** keys,
+ const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast<nnvm::Symbol*>(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   Finally you got my point. Thanks for your patience. Either solution is OK 
for me; you can make the decision.




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318866580
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* 
out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+ const char* backend_name,
+ SymbolHandle* ret_sym_handle,
+ const mx_uint len,
+ NDArrayHandle* in_args_handle,
+ const mx_uint num_options,
+ const char** keys,
+ const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast<nnvm::Symbol*>(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   Good stuff, totally agree! We should accept the context explicitly and not 
infer it from args. 
   
   We want EIA to be a backend like MKLDNN, not a context. 
   
   We could do as you suggest and query the supported context from the backend 
subgraph property. But it will still error out correctly if you partition with 
CPU and then bind with GPU when you finally call forward, since the subgraph op 
inserted at partition time won't support a GPU FCompute. 
   
   Since we set g.attrs["context"] and then pass the graph to the PrePartition 
function on the subgraph property, the subgraph property can error out if the 
context is not supported. 




[GitHub] [incubator-mxnet] xinyu-intel commented on issue #15796: Model Quantization with CUDNN

2019-08-28 Thread GitBox
xinyu-intel commented on issue #15796: Model Quantization with CUDNN
URL: 
https://github.com/apache/incubator-mxnet/issues/15796#issuecomment-525998897
 
 
   @jackchinor try to set ctx to cpu




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318866580
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* 
out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+ const char* backend_name,
+ SymbolHandle* ret_sym_handle,
+ const mx_uint len,
+ NDArrayHandle* in_args_handle,
+ const mx_uint num_options,
+ const char** keys,
+ const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   Good stuff, totally agree! We should accept context and not use it from 
args. 
   
   We want EIA to be a backend like MKLDNN, not a context. 
   
   We could do as you suggest and query the supported context from the backend 
subgraph property. But it will still error out correctly if you partition with 
CPU and then bind with GPU when you finally call forward, since the subgraph op 
you inserted at partition won't support GPU FCompute. 




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15796: Model Quantization with CUDNN

2019-08-28 Thread GitBox
pengzhao-intel commented on issue #15796: Model Quantization with CUDNN
URL: 
https://github.com/apache/incubator-mxnet/issues/15796#issuecomment-525998486
 
 
   @jackchinor could you paste your cmd and output? @xinyu-intel 




[GitHub] [incubator-mxnet] jackchinor commented on issue #15796: Model Quantization with CUDNN

2019-08-28 Thread GitBox
jackchinor commented on issue #15796: Model Quantization with CUDNN
URL: 
https://github.com/apache/incubator-mxnet/issues/15796#issuecomment-525997352
 
 
   @pengzhao-intel  when I run the quantization example with "1. Model Quantization with 
Intel® MKL-DNN", there is an error: "quantize_model_mkldnn only support Intel 
cpu platform with MKL-DNN Backend". I didn't change the example code, and I installed 
mxnet-mkl through `pip install mxnet-mkl --pre`. Could you tell me how to fix it? 




[incubator-mxnet] branch master updated (9b906a5 -> 649429d)

2019-08-28 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 9b906a5  Improve diagnose.py to display environment variables (#15715)
 add 649429d  Disable flaky test in test_amp_conversion (#16031)

No new revisions were added by this update.

Summary of changes:
 tests/python/gpu/test_contrib_amp.py | 3 +++
 1 file changed, 3 insertions(+)



[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16029: [v1.5.x] Benchmark doc fix (#15769)

2019-08-28 Thread GitBox
pengzhao-intel merged pull request #16029: [v1.5.x] Benchmark doc fix (#15769)
URL: https://github.com/apache/incubator-mxnet/pull/16029
 
 
   




[incubator-mxnet] branch v1.5.x updated: Benchmark doc fix (#15769) (#16029)

2019-08-28 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a commit to branch v1.5.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.5.x by this push:
 new 006486a  Benchmark doc fix (#15769) (#16029)
006486a is described below

commit 006486af3c912b67b73ecee26a2fc73762e6e9ee
Author: Chaitanya Prakash Bapat 
AuthorDate: Wed Aug 28 19:38:15 2019 -0700

Benchmark doc fix (#15769) (#16029)

* Update pre-req for opperf

* Update README.md

* correct command to import binary broadcast

* no such op called nd.sub, it is nd.subtract

* Trigger notification

* Trigger notification
---
 benchmark/opperf/README.md | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/benchmark/opperf/README.md b/benchmark/opperf/README.md
index 99c75be..c73592d 100644
--- a/benchmark/opperf/README.md
+++ b/benchmark/opperf/README.md
@@ -46,9 +46,10 @@ Hence, in this utility, we will build the functionality to 
allow users and devel
 
 ## Prerequisites
 
-This utility uses MXNet profiler under the hood to fetch compute and memory 
metrics. Hence, you need to build MXNet with `USE_PROFILER=1` flag.
+Provided you have MXNet installed (any version >= 1.5.1), all you need to use 
opperf utility is to add path to your cloned MXNet repository to the PYTHONPATH.
 
-Make sure to build the flavor of MXNet, for example - with/without MKL, with 
CUDA 9 or 10.1 etc., on which you would like to measure operator performance. 
Finally, you need to add path to your cloned MXNet repository to the PYTHONPATH.
+Note: 
+To install MXNet, refer [Installing MXNet 
page](https://mxnet.incubator.apache.org/versions/master/install/index.html)
 
 ```
 export PYTHONPATH=$PYTHONPATH:/path/to/incubator-mxnet/
@@ -76,7 +77,7 @@ For example, you want to run benchmarks for all NDArray 
Broadcast Binary Operato
 
 ```
 #!/usr/bin/python
-from benchmark.opperf.tensor_operations.binary_broadcast_operators import 
run_mx_binary_broadcast_operators_benchmarks
+from benchmark.opperf.nd_operations.binary_operators import 
run_mx_binary_broadcast_operators_benchmarks
 
 # Run all Binary Broadcast operations benchmarks with default input values
 print(run_mx_binary_broadcast_operators_benchmarks())
@@ -137,7 +138,7 @@ from mxnet import nd
 
 from benchmark.opperf.utils.benchmark_utils import run_performance_test
 
-add_res = run_performance_test([nd.add, nd.sub], run_backward=True, 
dtype='float32', ctx=mx.cpu(),
+add_res = run_performance_test([nd.add, nd.subtract], run_backward=True, 
dtype='float32', ctx=mx.cpu(),
inputs=[{"lhs": (1024, 1024),
 "rhs": (1024, 1024)}],
warmup=10, runs=25)



[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16031: Disable flaky test in test_amp_conversion

2019-08-28 Thread GitBox
pengzhao-intel merged pull request #16031: Disable flaky test in 
test_amp_conversion
URL: https://github.com/apache/incubator-mxnet/pull/16031
 
 
   




[GitHub] [incubator-mxnet] kkurni commented on issue #15998: Build MXNET with GPU but fail to import mxnet on CPU machine

2019-08-28 Thread GitBox
kkurni commented on issue #15998: Build MXNET with GPU but fail to import mxnet 
on CPU machine
URL: 
https://github.com/apache/incubator-mxnet/issues/15998#issuecomment-525993478
 
 
   Yes, I linked it with the CUDA path.
   
   I am using an NVIDIA Docker image. At runtime on the CPU machine, the CUDA 
driver is there inside /usr/local/cuda.
   
   But it is still complaining about the CUDA lib.




[GitHub] [incubator-mxnet] keetsky commented on issue #12018: onnx converter error

2019-08-28 Thread GitBox
keetsky commented on issue #12018: onnx converter error
URL: 
https://github.com/apache/incubator-mxnet/issues/12018#issuecomment-525993323
 
 
   If I set `mx.sym.BatchNorm(data=fc1, fix_gamma=True, eps=2e-5, 
momentum=bn_mom, name='bn3')`
   and save to an ONNX model, then when I load the ONNX model I find that MXNet 
by default sets fix_gamma to False in _op_translations.py (`# in test mode 
"fix_gamma" should be unset.
   new_attrs['fix_gamma'] = not attrs.get('is_test', 1)`), so my prediction 
results are different. How can I fix it?




[GitHub] [incubator-mxnet] ZhennanQin opened a new pull request #16032: [Don't merge] Update dmlc-core

2019-08-28 Thread GitBox
ZhennanQin opened a new pull request #16032: [Don't merge] Update dmlc-core
URL: https://github.com/apache/incubator-mxnet/pull/16032
 
 
   Update dmlc-core to a private repo to demonstrate the effect of the 
dmlc-core change on MXNet.
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] keetsky commented on issue #12018: onnx converter error

2019-08-28 Thread GitBox
keetsky commented on issue #12018: onnx converter error
URL: 
https://github.com/apache/incubator-mxnet/issues/12018#issuecomment-525988479
 
 
   Has this been solved? I met the same issue.




[GitHub] [incubator-mxnet] wkcn commented on issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-08-28 Thread GitBox
wkcn commented on issue #14253: [RFC] Introducing NumPy-compatible coding 
experience into MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-525985724
 
 
   Hi @reminisce, I tried to pass a numpy-compatible array into a legacy 
operator, and it raised this error.
   
   ```python
   >>> import mxnet.numpy as np
   >>> import mxnet as mx
   >>> import mxnet.numpy as np
   >>> a = np.array([1,2])
   >>> b = np.array([3,4])
   >>> mx.nd.broadcast_add(a,b)
   Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 56, in broadcast_add
 File "/home/wkcn/proj/mxnet/python/mxnet/ndarray/register.py", line 99, in 
_verify_all_legacy_ndarrays
   .format(op_name, func_name))
   TypeError: Operator `broadcast_add` registered in backend is known as 
`broadcast_add` in Python. This is a legacy operator which can only accept 
legacy ndarrays, while received an MXNet numpy ndarray. Please call 
`as_nd_ndarray()` upon the numpy ndarray to convert it to a legacy ndarray, and 
then feed the converted array to this operator.
   ```
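   The gate the traceback points at can be emulated in plain Python (the class 
and helper names below are stand-ins for illustration, not MXNet's actual 
implementation; the real check lives in `python/mxnet/ndarray/register.py`):

```python
class NDArray:
    """Stand-in for the legacy mx.nd.NDArray."""

class np_ndarray(NDArray):
    """Stand-in for the mxnet.numpy ndarray."""
    def as_nd_ndarray(self):
        return NDArray()

def verify_all_legacy_ndarrays(op_name, *arrays):
    # Mirror of the idea behind _verify_all_legacy_ndarrays: a legacy
    # operator only accepts legacy ndarrays, never numpy-style ones.
    for a in arrays:
        if isinstance(a, np_ndarray):
            raise TypeError("Operator `%s` can only accept legacy ndarrays; "
                            "call `as_nd_ndarray()` first" % op_name)

a, b = np_ndarray(), np_ndarray()
# Converting first, as the error message suggests, passes the check:
verify_all_legacy_ndarrays("broadcast_add", a.as_nd_ndarray(), b.as_nd_ndarray())
print("ok")  # ok
```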
   
   I hope that the legacy operator is the subset of 




[GitHub] [incubator-mxnet] wkcn closed issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-08-28 Thread GitBox
wkcn closed issue #14253: [RFC] Introducing NumPy-compatible coding experience 
into MXNet
URL: https://github.com/apache/incubator-mxnet/issues/14253
 
 
   




[GitHub] [incubator-mxnet] wkcn removed a comment on issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-08-28 Thread GitBox
wkcn removed a comment on issue #14253: [RFC] Introducing NumPy-compatible 
coding experience into MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-525985724
 
 
   Hi @reminisce, I tried to pass a numpy-compatible array into a legacy 
operator, and it raised this error.
   
   ```python
   >>> import mxnet.numpy as np
   >>> import mxnet as mx
   >>> import mxnet.numpy as np
   >>> a = np.array([1,2])
   >>> b = np.array([3,4])
   >>> mx.nd.broadcast_add(a,b)
   Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 56, in broadcast_add
 File "/home/wkcn/proj/mxnet/python/mxnet/ndarray/register.py", line 99, in 
_verify_all_legacy_ndarrays
   .format(op_name, func_name))
   TypeError: Operator `broadcast_add` registered in backend is known as 
`broadcast_add` in Python. This is a legacy operator which can only accept 
legacy ndarrays, while received an MXNet numpy ndarray. Please call 
`as_nd_ndarray()` upon the numpy ndarray to convert it to a legacy ndarray, and 
then feed the converted array to this operator.
   ```
   
   I hope that the legacy operator is the subset of 




[GitHub] [incubator-mxnet] reminisce opened a new issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-08-28 Thread GitBox
reminisce opened a new issue #14253: [RFC] Introducing NumPy-compatible coding 
experience into MXNet
URL: https://github.com/apache/incubator-mxnet/issues/14253
 
 
   ## Motivation
   Today deep learning scientists spend majority of their time on data 
processing, debugging tensor algorithms, and tuning model parameters, instead 
of architecting models from scratch by themselves as a result from the abundant 
pre-trained models existing in many deep learning model zoos. This has 
highlighted the usability of tensor APIs as a key factor for a framework to be 
widely adopted.
   
   MXNet was firstly designed with the focus on memory efficiency, computation 
throughput and scalability. The usability problems begin to show up nowadays 
when more and more models demonstrate dynamic natures, e.g. unknown-shape 
tensors before runtime, control flow depending on a runtime result, etc. Here 
we highlight the most frequent complaints about usability from users.
   - Scalar tensors (aka zero-dim tensors) are not supported. For example, 
given `a = [0, 1, 2]`, `a[1]` will generate an `NDArray` of shape `(1,)`, 
instead of `()` as in NumPy.
   - Zero-size tensor is not supported. For example, a tensor of shape `(0, 16, 
256)` cannot be passed to an operator, because our system currently treats 0, 
the first dimension size, as unknown, rather than a concrete number.
   - Many operators' signatures and functionality are not NumPy compatible, 
e.g. `nd.dot` vs. `np.dot`, `nd.concatenate` vs. `np.concatenate`, etc.
   - Many NumPy operators are missing. See the [reference 
link](https://github.com/apache/incubator-mxnet/issues?q=is%3Aissue+numpy+label%3ANumpy)
 to GitHub issues.
   - Operators whose outputs' shapes can only be determined at runtime are not 
supported, e.g. `data[data < 0]` cannot run.
   - Diverged programming experience due to the separation of imperative and 
symbolic operators registered under `mxnet.ndarray` and `mxnet.symbol`.
   - Control flow operators are hard to use. Users have to understand the 
complicated signatures of control flow operators, instead of writing native 
Python code using `for`, `while`, `if/else`, etc.
   For example, we have learned (the hard way) that it does not make much 
sense to ask users to write code like the following to perform a cumulative sum.
   ```python
   def sum(state, i):
       s = state + data[i]
       return s, [s, i + 1]
   
   def sum_cond(state, i):
       return i < 4
   
   out, state = F.contrib.while_loop(sum_cond, sum, [F.zeros((1)), F.zeros((1))],
                                     max_iterations=5)
   ```
   Instead, users should be able to just write native Python code as the 
following and if required, let the framework serialize it into a computation 
graph for optimization and deployment.
   ```python
   data = np.arange(5)
   out = 0
   i = 0
   while i < 5:
       out = out + data[i]
       i = i + 1
   ```
   
   It is not hard to see that all of the above pain points stem from the lack 
of a NumPy-compatible coding experience in MXNet. Better support for control 
flow operators and a consolidated style for writing imperative and symbolic 
code require fundamental changes to the codebase, such as a new graph IR and 
executor, which is extremely non-trivial and should be executed with a 
long-term plan. In the meantime, we can improve usability by fixing the issue 
of zero-dim/size tensors and implementing NumPy operators in MXNet. The 
following sections discuss how to achieve these short-term goals.
   
   ## Support of zero-dim and zero-size tensors
   ### What's the problem?
   Zero-dim and zero-size tensors are valid tensors in NumPy. The former, whose 
shapes are `()`, represent scalars in `numpy.ndarray` format. The latter, which 
have one or multiple zero dimension sizes in shapes, can be useful as a 
placeholder for many `ndarray` operations, such as concatenating a zero-size 
`ndarray` with another `ndarray`. MXNet does not support them due to the 
reserved semantics of empty shape `()` and shapes with zero dimension sizes 
indicating unknown shape information. Such information need to be filled out 
during the shape inference stage in order to move forward to tensor 
computations later.
   
   ### How to resolve the problem?
   We can first change the current semantics to comply with NumPy definition.
   1. Change the definition of unknown shapes from `ndim = 0` to `ndim = -1` in 
`TShape` class.
   2. Change the definition of unknown dimension sizes from `dim_size = 0` to 
`dim_size = -1` in `TShape` class.
   
   After this, we need to scan the codebase and update every place where 
`shape.ndim() == 0` and `shape.Size() == 0` are used to perform unknown-shape 
checks.
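   A minimal sketch of the convention shift in steps 1 and 2 (hypothetical 
helper names, no MXNet required) shows why the legacy rule conflicts with valid 
NumPy shapes:

```python
def legacy_shape_known(shape):
    """Old convention: ndim == 0 or any zero dim size means 'unknown'."""
    return len(shape) != 0 and all(d != 0 for d in shape)

def numpy_shape_known(shape):
    """Proposed convention: -1 marks unknown; () and zero-size dims are valid."""
    return shape is not None and all(d != -1 for d in shape)

# Valid NumPy shapes that the legacy rule misclassifies as unknown:
print(legacy_shape_known(()))            # False: scalar shape rejected
print(legacy_shape_known((0, 16, 256)))  # False: zero-size tensor rejected
print(numpy_shape_known(()))             # True: a valid scalar
print(numpy_shape_known((0, 16, 256)))   # True: a valid zero-size tensor
print(numpy_shape_known((-1, 16)))       # False: first dim genuinely unknown
```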
   
   Please note that although MXNet's shape is a type inheriting from 
`nnvm::Tuple`, which is 

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-08-28 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ef553d4  Bump the publish timestamp.
ef553d4 is described below

commit ef553d47ac9bee0403a2f018630945894ac54f65
Author: mxnet-ci 
AuthorDate: Thu Aug 29 01:36:16 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..9a3780a
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Aug 29 01:36:16 UTC 2019



[GitHub] [incubator-mxnet] gyshi commented on issue #16025: Numpy add numpy op left_shift and right_shift

2019-08-28 Thread GitBox
gyshi commented on issue #16025: Numpy add numpy op left_shift and right_shift
URL: https://github.com/apache/incubator-mxnet/pull/16025#issuecomment-525984724
 
 
   @haojin2  




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16017: Add RROIAlign

2019-08-28 Thread GitBox
wkcn commented on a change in pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#discussion_r318850914
 
 

 ##
 File path: src/operator/contrib/rroi_align-inl.h
 ##
 @@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file rroi_align-inl.h
+ * \brief rroi align operator and symbol
+ * \author Yixin Bao
+*/
+#ifndef MXNET_OPERATOR_CONTRIB_RROI_ALIGN_INL_H_
+#define MXNET_OPERATOR_CONTRIB_RROI_ALIGN_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "../mshadow_op.h"
+#include "../operator_common.h"
+
+namespace mxnet {
+namespace op {
+
+// Declare enumeration of input order to make code more intuitive.
+// These enums are only visible within this header
+namespace rroialign {
+enum RROIAlignOpInputs{kData, kBox};
+enum RROIAlignOpOutputs {kOut};
+}  // rroialign
+
+struct RROIAlignParam : public dmlc::Parameter<RROIAlignParam> {
+  mxnet::TShape pooled_size;
+  float spatial_scale;
+  int sampling_ratio;
+  DMLC_DECLARE_PARAMETER(RROIAlignParam) {
+DMLC_DECLARE_FIELD(pooled_size)
+.set_expect_ndim(2).enforce_nonzero()
+.describe("RROI align output shape (h,w) ");
+DMLC_DECLARE_FIELD(spatial_scale).set_range(0.0, 1.0)
+.describe("Ratio of input feature map height (or w) to raw image height 
(or w). "
 
 Review comment:
   It will be better to clarify what is 'or w', and I think it is 'or width'.
   By the way, the copyright year can be changed to 2019 : )




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16017: Add RROIAlign

2019-08-28 Thread GitBox
wkcn commented on a change in pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#discussion_r318853296
 
 

 ##
 File path: src/operator/contrib/rroi_align.cc
 ##
 @@ -0,0 +1,316 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file rroi_align.cc
+ * \brief rroi align operator
+ * \author Yixin Bao
+ * Forward pass adapted from Caffe2
+ * link: 
https://github.com/pytorch/pytorch/blob/master/caffe2/operators/roi_align_rotated_op.cc
+ */
+#include "./rroi_align-inl.h"
+#include 
+#include "math.h"
+
+using std::max;
+using std::min;
+using std::floor;
+using std::ceil;
+
+namespace mxnet {
+namespace op {
+
+template <typename DType>
+struct position_for_bilinear_interpolate {
+  // 4 positions and corresponding weights for
+  // computing bilinear interpolation
+  int pos1, pos2, pos3, pos4;
+  DType w1, w2, w3, w4;
+};
+
+template <typename DType>
+void pre_calc_for_bilinear_interpolate(
+const int height, const int width, const int pooled_height, const int 
pooled_width,
+const int iy_upper, const int ix_upper, DType roi_start_h, DType 
roi_start_w,
+DType bin_size_h, DType bin_size_w, int roi_bin_grid_h, int roi_bin_grid_w,
+DType roi_center_h, DType roi_center_w, DType theta,
+std::vector<position_for_bilinear_interpolate<DType>> *pre_calc) {
+  int pre_calc_index = 0;
+  DType cosTheta = cos(theta);
+  DType sinTheta = sin(theta);
+  for (int ph = 0; ph < pooled_height; ph++) {
+for (int pw = 0; pw < pooled_width; pw++) {
+  // calc bin grid position (xx,yy)
+  for (int iy = 0; iy < iy_upper; iy++) {
+const DType yy = roi_start_h + ph * bin_size_h +
+static_cast<DType>(iy + .5f) * bin_size_h /
+static_cast<DType>(roi_bin_grid_h);  // e.g., 0.5, 1.5
+for (int ix = 0; ix < ix_upper; ix++) {
+  const DType xx = roi_start_w + pw * bin_size_w +
+  static_cast<DType>(ix + .5f) * bin_size_w /
+  static_cast<DType>(roi_bin_grid_w);
+
+  // Rotate by theta around the center and translate
+  DType x = xx * cosTheta + yy * sinTheta + roi_center_w;
+  DType y = yy * cosTheta - xx * sinTheta + roi_center_h;
+
+  // deal with: inverse elements are out of feature map boundary
+  if (y < -1.0 || y > height || x < -1.0 || x > width) {
+// empty
+position_for_bilinear_interpolate<DType> pc;
+pc.pos1 = 0;
+pc.pos2 = 0;
+pc.pos3 = 0;
+pc.pos4 = 0;
+pc.w1 = 0;
+pc.w2 = 0;
+pc.w3 = 0;
+pc.w4 = 0;
+pre_calc->at(pre_calc_index) = pc;
+pre_calc_index += 1;
+continue;
+  }
+  if (y <= 0) {
+y = 0;
+  }
+  if (x <= 0) {
+x = 0;
+  }
+
+  // calc 4 points for interpolation
+  int y_low = static_cast<int>(y);
+  int x_low = static_cast<int>(x);
+  int y_high;
+  int x_high;
+  if (y_low >= height - 1) {
+y_high = y_low = height - 1;
+y = (DType)y_low;
+  } else {
+y_high = y_low + 1;
+  }
+  if (x_low >= width - 1) {
+x_high = x_low = width - 1;
+x = (DType)x_low;
+  } else {
+x_high = x_low + 1;
+  }
+  DType ly = y - y_low;
+  DType lx = x - x_low;
+  DType hy = 1. - ly, hx = 1. - lx;
+  DType w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;
+
+  // Save weights and indices
+  position_for_bilinear_interpolate<DType> pc;
 
 Review comment:
   `position_for_bilinear_interpolate<DType> &pc = (*pre_calc)[pre_calc_index];`
   There will be no extra copy, and access by index is faster than .at()
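   For reference, the corner-index and weight computation in the snippet under 
review can be sketched in pure Python (a simplified illustration: boundary 
clamping is reduced relative to the C++ above, and the function name is 
invented for this sketch):

```python
def bilinear_weights(x, y, height, width):
    """Compute the four corner positions (in a flattened height*width map)
    and the bilinear weights for a sample point (x, y), mirroring the
    pre-calculation step of RROIAlign."""
    x_low, y_low = int(x), int(y)
    x_high = min(x_low + 1, width - 1)
    y_high = min(y_low + 1, height - 1)
    lx, ly = x - x_low, y - y_low
    hx, hy = 1.0 - lx, 1.0 - ly
    pos = [y_low * width + x_low, y_low * width + x_high,
           y_high * width + x_low, y_high * width + x_high]
    w = [hy * hx, hy * lx, ly * hx, ly * lx]
    return pos, w

pos, w = bilinear_weights(2.25, 3.5, height=8, width=8)
print(pos)      # [26, 27, 34, 35]
print(sum(w))   # 1.0 -- the four weights always sum to one
```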




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16017: Add RROIAlign

2019-08-28 Thread GitBox
wkcn commented on a change in pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#discussion_r318850262
 
 

 ##
 File path: src/operator/contrib/rroi_align.cc
 ##
 @@ -0,0 +1,316 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file rroi_align.cc
+ * \brief rroi align operator
+ * \author Yixin Bao
+ * Forward pass adapted from Caffe2
+ * link: 
https://github.com/pytorch/pytorch/blob/master/caffe2/operators/roi_align_rotated_op.cc
+ */
+#include "./rroi_align-inl.h"
+#include 
+#include "math.h"
+
+using std::max;
+using std::min;
+using std::floor;
+using std::ceil;
+
+namespace mxnet {
+namespace op {
+
+template <typename DType>
+struct position_for_bilinear_interpolate {
+  // 4 positions and corresponding weights for
+  // computing bilinear interpolation
+  int pos1, pos2, pos3, pos4;
+  DType w1, w2, w3, w4;
+};
+
+template <typename DType>
+void pre_calc_for_bilinear_interpolate(
+const int height, const int width, const int pooled_height, const int 
pooled_width,
+const int iy_upper, const int ix_upper, DType roi_start_h, DType 
roi_start_w,
+DType bin_size_h, DType bin_size_w, int roi_bin_grid_h, int roi_bin_grid_w,
+DType roi_center_h, DType roi_center_w, DType theta,
+std::vector<position_for_bilinear_interpolate<DType>> *pre_calc) {
+  int pre_calc_index = 0;
+  DType cosTheta = cos(theta);
+  DType sinTheta = sin(theta);
+  for (int ph = 0; ph < pooled_height; ph++) {
+for (int pw = 0; pw < pooled_width; pw++) {
+  // calc bin grid position (xx,yy)
+  for (int iy = 0; iy < iy_upper; iy++) {
+const DType yy = roi_start_h + ph * bin_size_h +
+static_cast<DType>(iy + .5f) * bin_size_h /
+static_cast<DType>(roi_bin_grid_h);  // e.g., 0.5, 1.5
+for (int ix = 0; ix < ix_upper; ix++) {
+  const DType xx = roi_start_w + pw * bin_size_w +
+  static_cast<DType>(ix + .5f) * bin_size_w /
+  static_cast<DType>(roi_bin_grid_w);
+
+  // Rotate by theta around the center and translate
+  DType x = xx * cosTheta + yy * sinTheta + roi_center_w;
+  DType y = yy * cosTheta - xx * sinTheta + roi_center_h;
+
+  // deal with: inverse elements are out of feature map boundary
+  if (y < -1.0 || y > height || x < -1.0 || x > width) {
+// empty
+position_for_bilinear_interpolate<DType> pc;
+pc.pos1 = 0;
+pc.pos2 = 0;
+pc.pos3 = 0;
+pc.pos4 = 0;
+pc.w1 = 0;
+pc.w2 = 0;
+pc.w3 = 0;
+pc.w4 = 0;
+pre_calc->at(pre_calc_index) = pc;
+pre_calc_index += 1;
+continue;
+  }
+  if (y <= 0) {
+y = 0;
+  }
+  if (x <= 0) {
+x = 0;
+  }
+
+  // calc 4 points for interpolation
+  int y_low = static_cast<int>(y);
+  int x_low = static_cast<int>(x);
+  int y_high;
+  int x_high;
+  if (y_low >= height - 1) {
+y_high = y_low = height - 1;
+y = (DType)y_low;
+  } else {
+y_high = y_low + 1;
+  }
+  if (x_low >= width - 1) {
+x_high = x_low = width - 1;
+x = (DType)x_low;
+  } else {
+x_high = x_low + 1;
+  }
+  DType ly = y - y_low;
+  DType lx = x - x_low;
+  DType hy = 1. - ly, hx = 1. - lx;
+  DType w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;
+
+  // Save weights and indices
+  position_for_bilinear_interpolate<DType> pc;
+  pc.pos1 = y_low * width + x_low;
+  pc.pos2 = y_low * width + x_high;
+  pc.pos3 = y_high * width + x_low;
+  pc.pos4 = y_high * width + x_high;
+  pc.w1 = w1;
+  pc.w2 = w2;
+  pc.w3 = w3;
+  pc.w4 = w4;
+  pre_calc->at(pre_calc_index) = pc;
+
+  pre_calc_index += 1;
+}
+  }
+}
+  }
+}
+
+template <typename DType>
+inline void RROIAlignForward(const OpContext &ctx, const RROIAlignParam &param,
+ const std::vector<TBlob> &in_data, const std::vector<OpReqType> &req,
+ con

[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16017: Add RROIAlign

2019-08-28 Thread GitBox
wkcn commented on a change in pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#discussion_r318853237
 
 

 ##
 File path: src/operator/contrib/rroi_align.cc
 ##
 @@ -0,0 +1,316 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file rroi_align.cc
+ * \brief rroi align operator
+ * \author Yixin Bao
+ * Forward pass adapted from Caffe2
+ * link: 
https://github.com/pytorch/pytorch/blob/master/caffe2/operators/roi_align_rotated_op.cc
+ */
+#include "./rroi_align-inl.h"
+#include 
+#include "math.h"
+
+using std::max;
+using std::min;
+using std::floor;
+using std::ceil;
+
+namespace mxnet {
+namespace op {
+
+template <typename DType>
+struct position_for_bilinear_interpolate {
+  // 4 positions and corresponding weights for
+  // computing bilinear interpolation
+  int pos1, pos2, pos3, pos4;
+  DType w1, w2, w3, w4;
+};
+
+template <typename DType>
+void pre_calc_for_bilinear_interpolate(
+    const int height, const int width, const int pooled_height, const int pooled_width,
+    const int iy_upper, const int ix_upper, DType roi_start_h, DType roi_start_w,
+    DType bin_size_h, DType bin_size_w, int roi_bin_grid_h, int roi_bin_grid_w,
+    DType roi_center_h, DType roi_center_w, DType theta,
+    std::vector<position_for_bilinear_interpolate<DType>> *pre_calc) {
+  int pre_calc_index = 0;
+  DType cosTheta = cos(theta);
+  DType sinTheta = sin(theta);
+  for (int ph = 0; ph < pooled_height; ph++) {
+for (int pw = 0; pw < pooled_width; pw++) {
+  // calc bin grid position (xx,yy)
+  for (int iy = 0; iy < iy_upper; iy++) {
+const DType yy = roi_start_h + ph * bin_size_h +
+static_cast<DType>(iy + .5f) * bin_size_h /
+static_cast<DType>(roi_bin_grid_h);  // e.g., 0.5, 1.5
+for (int ix = 0; ix < ix_upper; ix++) {
+  const DType xx = roi_start_w + pw * bin_size_w +
+  static_cast<DType>(ix + .5f) * bin_size_w /
+  static_cast<DType>(roi_bin_grid_w);
+
+  // Rotate by theta around the center and translate
+  DType x = xx * cosTheta + yy * sinTheta + roi_center_w;
+  DType y = yy * cosTheta - xx * sinTheta + roi_center_h;
+
+  // deal with: inverse elements are out of feature map boundary
+  if (y < -1.0 || y > height || x < -1.0 || x > width) {
+// empty
+position_for_bilinear_interpolate<DType> pc;
 
 Review comment:
   `position_for_bilinear_interpolate<DType> &pc = (*pre_calc)[pre_calc_index];`
   There will be no extra copy, and access by index is faster than `.at()`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
ZhennanQin commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318845381
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* 
out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+ const char* backend_name,
+ SymbolHandle* ret_sym_handle,
+ const mx_uint len,
+ NDArrayHandle* in_args_handle,
+ const mx_uint num_options,
+ const char** keys,
+ const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast<nnvm::Symbol*>(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   > Instead we'll pull the context from where the args reside.
   
   args are normally loaded by
   ```
   sym, args, aux = mx.model.load_checkpoint('resnet-50', 0)
   ```
   No contexts are specified. That's why we need to pass the expected contexts 
during bind. The contexts from args don't mean anything.
   
   >We expect that users could call this more than once for the same model. For 
example, users could first optimize for EIA (that would group supported nodes 
into subgraphs) and then optimize for MKLDNN for the remaining nodes that were 
not partitioned into subgraphs.
   
   That's the comment you provided earlier. So I thought you were working to support multi-context execution. If not, do you mean we only support running `optimize_for` multiple times with the same context? Even for that, we need to pass the correct contexts to `optimize_for`, not the ones gathered from args.
   
   Now again, the correct contexts are important for `InferStorageTypes` in the MKLDNN backend, so we need to pass the correct context (same as bind) to `optimize_for`. If you don't want to introduce `ctx` in the `optimize_for` API, then I suggest querying the expected context from the backend property and using it for `InferStorageTypes`. Or you can figure out another solution; I just want to ensure that subgraph partitioning gets the same `InferStorageTypes` results as bind.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch beta-site created (now a41bb1c)

2019-08-28 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a change to branch beta-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git.


  at a41bb1c  Nightly build

This branch includes the following new commits:

 new 9399101  Nightly build
 new 042f077  Nightly build
 new d0df68a  Bump the publish timestamp.
 new 0a689b8  Nightly build
 new 8763f3e  Bump the publish timestamp.
 new 68e215d  Nightly build
 new 8f2367f  Bump the publish timestamp.
 new f3a5db0  Nightly build
 new 48db7f8  Bump the publish timestamp.
 new d10997b  Nightly build
 new a975849  Bump the publish timestamp.
 new 640df3d  Nightly build
 new 33f7334  Nightly build
 new f18c8f3  Bump the publish timestamp.
 new 9750706  Nightly build
 new 3232c84  Bump the publish timestamp.
 new c9e3e68  Nightly build
 new 0ffee7e  Bump the publish timestamp.
 new 6cee844  Nightly build
 new e0dc4c0  Bump the publish timestamp.
 new 78d6f2a  Nightly build
 new 3711dd4  Bump the publish timestamp.
 new 40e405b  Nightly build
 new a86a8ca  Bump the publish timestamp.
 new 832527e  Nightly build
 new e0ce022  Bump the publish timestamp.
 new af8e89c  Nightly build
 new b49410e  Bump the publish timestamp.
 new c30ac37  Nightly build
 new a88895d  Bump the publish timestamp.
 new 1de773b  Nightly build
 new c24272a  Bump the publish timestamp.
 new 5289ded  Nightly build
 new bbc0359  Bump the publish timestamp.
 new dedbdb5  Nightly build
 new 2b983b1  Bump the publish timestamp.
 new 9ec240c  Nightly build
 new d6d4e40  Bump the publish timestamp.
 new 265727a  Nightly build
 new b6504d7  Bump the publish timestamp.
 new 409d11d  Nightly build
 new d5f9413  Bump the publish timestamp.
 new 086340b  Nightly build
 new 91eb702  Bump the publish timestamp.
 new bad227f  Nightly build
 new 368b9c4  Bump the publish timestamp.
 new 7f98f17  Nightly build
 new 7ce5f01  Bump the publish timestamp.
 new 466f5ad  Nightly build
 new 19b27d8  Bump the publish timestamp.
 new d9105f9  Nightly build
 new afbe32e  Bump the publish timestamp.
 new 89c5e25  Nightly build
 new 98a3ec8  Bump the publish timestamp.
 new 25a16cf  Nightly build
 new ff02aa6  Bump the publish timestamp.
 new 966a15b  Nightly build
 new 6c738fe  Bump the publish timestamp.
 new 763799f  Nightly build
 new 003c4ef  Bump the publish timestamp.
 new 7a30464  Nightly build
 new fdbabfd  Bump the publish timestamp.
 new d3d1775  Nightly build
 new 5c43006  Bump the publish timestamp.
 new eea36db  Nightly build
 new 29c8818  Bump the publish timestamp.
 new 107da96  Nightly build
 new 1fffe9e  Bump the publish timestamp.
 new eb03af0  Nightly build
 new db11fa9  Bump the publish timestamp.
 new 06d379e  Nightly build
 new 1b59243  Bump the publish timestamp.
 new 6ba692c  Nightly build
 new 96faddb  Bump the publish timestamp.
 new 657dcb3  Nightly build
 new 086a0b1  Bump the publish timestamp.
 new a320c67  Nightly build
 new 80d84a7  Bump the publish timestamp.
 new ac411b2  Nightly build
 new 95e4678  Bump the publish timestamp.
 new 1a46f06  Nightly build
 new bf79110  Bump the publish timestamp.
 new 73b3aaf  Nightly build
 new 6b228c5  Bump the publish timestamp.
 new 9987af1  Nightly build
 new 2fe80ca  Bump the publish timestamp.
 new 4bfa1d5  Nightly build
 new 0bee588  Bump the publish timestamp.
 new 51f12cd  Nightly build
 new 24d3fa7  Bump the publish timestamp.
 new ed11ec2  Nightly build
 new 3cf886b  Nightly build
 new a41bb1c  Nightly build

The 93 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-mxnet-site] branch beta-site updated: add staging trigger file

2019-08-28 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch beta-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/beta-site by this push:
 new 449d9d2  add staging trigger file
449d9d2 is described below

commit 449d9d2cc9cbc2a179196a215a79d4a675893a98
Author: Aaron Markham 
AuthorDate: Wed Aug 28 17:03:22 2019 -0700

add staging trigger file
---
 .asf.yaml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/.asf.yaml b/.asf.yaml
new file mode 100644
index 000..968aa01
--- /dev/null
+++ b/.asf.yaml
@@ -0,0 +1,3 @@
+staging:
+  profile: beta
+



[GitHub] [incubator-mxnet] apeforest commented on issue #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-08-28 Thread GitBox
apeforest commented on issue #16023: Revert "Refactor LibraryInitializer so 
it's thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023#issuecomment-525937670
 
 
  Sure! @larroy, could you please create a PR? I can merge it for you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on issue #16018: Port ops from np branch

2019-08-28 Thread GitBox
reminisce commented on issue #16018: Port ops from np branch
URL: https://github.com/apache/incubator-mxnet/pull/16018#issuecomment-525936882
 
 
  @eric-haibin-lin I made a mistake by exposing `seed` directly in `npx`. Now it's moved to `npx.random`. The reason for putting it under `npx` instead of `np` is that our `seed` has a different signature (with the extra parameter `ctx`) than the official NumPy seed function. In addition, we only accept integers, while NumPy can also take `None` as the seed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (b2c0cbc -> 9b906a5)

2019-08-28 Thread roywei
This is an automated email from the ASF dual-hosted git repository.

roywei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b2c0cbc  Windows cmake flags cleanup (#16013)
 add 9b906a5  Improve diagnose.py to display environment variables (#15715)

No new revisions were added by this update.

Summary of changes:
 tools/diagnose.py | 53 -
 1 file changed, 36 insertions(+), 17 deletions(-)



[GitHub] [incubator-mxnet] roywei merged pull request #15715: Improve diagnose.py to display environment variables

2019-08-28 Thread GitBox
roywei merged pull request #15715: Improve diagnose.py to display environment 
variables
URL: https://github.com/apache/incubator-mxnet/pull/15715
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on a change in pull request #14836: Refactor AGInfo and Imperative

2019-08-28 Thread GitBox
larroy commented on a change in pull request #14836: Refactor AGInfo and 
Imperative
URL: https://github.com/apache/incubator-mxnet/pull/14836#discussion_r318802544
 
 

 ##
 File path: src/imperative/imperative.cc
 ##
 @@ -316,181 +308,220 @@ std::vector<NDArray*> Imperative::Backward(
   info.outputs.back() = static_cast<real_t>(1.0);
 }
   }
+  return ograd_entries;
+}
 
-  // Get gradient graph
-  Symbol sym;
-  sym.outputs = graph.outputs;
-  std::vector<nnvm::NodeEntry> xs;
-  std::vector<NDArray*> x_grads;
-  std::vector<OpReqType> x_reqs;
-  if (variables.size()) {
-xs.reserve(variables.size());
-x_grads.reserve(variables.size());
-x_reqs.reserve(variables.size());
+struct Imperative::GradientVariableNodes {
+  std::vector<nnvm::NodeEntry> variable_nodes;
+  std::vector<NDArray*> gradients;
+  std::vector<OpReqType> op_req_types;
+};
+
+Imperative::GradientVariableNodes Imperative::CreateGradientVariableNodes(
+const std::vector<NDArray*> &variables,
+const std::vector<nnvm::NodeEntry> &outputs) {
+  GradientVariableNodes var_nodes;
+  if (!variables.empty()) {
+var_nodes.variable_nodes.reserve(variables.size());
+var_nodes.gradients.reserve(variables.size());
+var_nodes.op_req_types.reserve(variables.size());
 for (size_t i = 0; i < variables.size(); ++i) {
   CHECK(!AGInfo::IsNone(*variables[i]) &&
-AGInfo::IsVariable(variables[i]->entry_.node))
+AGInfo::IsVariable(variables[i]->autograd_.node))
   << "Cannot differentiate with respect to the " << i+1 << "-th 
variable"
-  << " because it does not require gradient.";
-  xs.emplace_back(variables[i]->entry_);
-  x_grads.push_back(new NDArray());
-  x_reqs.push_back(kWriteTo);
+  << " because it does not require gradient. Did you forget 
attach_grad()?";
+  var_nodes.variable_nodes.emplace_back(variables[i]->autograd_);
+  var_nodes.gradients.push_back(new NDArray());
+  var_nodes.op_req_types.push_back(kWriteTo);
 }
   } else {
-std::vector<nnvm::NodePtr> args = sym.ListInputs(Symbol::kReadOnlyArgs);
-xs.reserve(args.size());
-x_grads.reserve(args.size());
-x_reqs.reserve(args.size());
-for (const auto& i : args) {
-  AGInfo& info = AGInfo::Get(i);
-  if (info.grad_req == kNullOp) continue;
-  xs.emplace_back(NodeEntry{i, 0, 0});
-  x_grads.push_back(&info.out_grads[0]);
-  x_reqs.push_back(info.grad_req);
-  info.fresh_out_grad = true;
+nnvm::Symbol s;
+s.outputs = outputs;
+std::vector<nnvm::NodePtr> input_ro_nodes = s.ListInputs(Symbol::kReadOnlyArgs);
+var_nodes.variable_nodes.reserve(input_ro_nodes.size());
+var_nodes.gradients.reserve(input_ro_nodes.size());
+var_nodes.op_req_types.reserve(input_ro_nodes.size());
+for (const auto& node : input_ro_nodes) {
+  AGInfo& info = AGInfo::Get(node);
+  if (info.grad_req != kNullOp) {
+var_nodes.variable_nodes.emplace_back(node);
+var_nodes.gradients.push_back(&info.out_grads[0]);
+var_nodes.op_req_types.push_back(info.grad_req);
+info.fresh_out_grad = true;
+  }
 }
-CHECK_GT(xs.size(), 0)
+CHECK_GT(var_nodes.variable_nodes.size(), 0)
 << "There are no inputs in computation graph that require gradients.";
   }
+  return var_nodes;
+}
 
-  Graph g_graph = pass::MXGradient(
-  graph, graph.outputs, xs, ograd_entries,
+std::vector<NDArray*> Imperative::Backward(
+const std::vector<NDArray*>& outputs,
+const std::vector<NDArray*>& ograds,
+const std::vector<NDArray*>& variables,
+bool is_train, bool retain_graph,
+bool create_graph) {
+  using namespace nnvm;
+  using namespace imperative;
+  static const std::vector<const Op*> zero_ops{Op::Get("zeros_like"), Op::Get("_zeros")};
+  static const Op* copy_op = Op::Get("_copy");
+
+  Graph graph = CreateGraph(outputs);
+
+  // Prepare head gradient nodes
+  std::vector<nnvm::NodeEntry> ograd_entries = CreateHeadGradientNodes(outputs, ograds);
+
+  // Get variable nodes
+  GradientVariableNodes gvars = CreateGradientVariableNodes(variables, 
graph.outputs);
+
+  // Run backward on the graph
+  Graph gradient_graph = pass::MXGradient(
+  graph, graph.outputs, gvars.variable_nodes, ograd_entries,
   exec::AggregateGradient, nullptr, nullptr,
   zero_ops, "_copy");
-  CHECK_EQ(g_graph.outputs.size(), xs.size());
-  for (const auto& e : g_graph.outputs) {
-if (e.node->op() == nullptr) {
+
+  CHECK_EQ(gradient_graph.outputs.size(), gvars.variable_nodes.size());
+  std::vector<nnvm::NodeEntry> forward_outputs = graph.outputs;
+  const size_t num_forward_outputs = graph.outputs.size();
+
+  // TODO(larroy): move inside pass::MXGradient
+  for (const auto& backward_node : gradient_graph.outputs) {
+if (backward_node.node->is_variable()) {
   auto node = Node::Create();
   node->attrs.op = copy_op;
-  node->inputs.push_back(e);
+  node->inputs.push_back(backward_node);
   graph.outputs.emplace_back(std::move(node));
 } else {
-  graph.outputs.push_back(e);
+  graph.outputs.push_back(backward_node);
 }
   }
-  const auto& idx = graph.indexed_graph

[GitHub] [incubator-mxnet] larroy commented on issue #15715: Improve diagnose.py to display environment variables

2019-08-28 Thread GitBox
larroy commented on issue #15715: Improve diagnose.py to display environment 
variables
URL: https://github.com/apache/incubator-mxnet/pull/15715#issuecomment-525930415
 
 
   @mxnet-label-bot add [pr-awaiting-merge]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #15715: Improve diagnose.py to display environment variables

2019-08-28 Thread GitBox
larroy commented on issue #15715: Improve diagnose.py to display environment 
variables
URL: https://github.com/apache/incubator-mxnet/pull/15715#issuecomment-525930359
 
 
   @mxnet-label-bot remove [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318703729
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* 
out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+ const char* backend_name,
+ SymbolHandle* ret_sym_handle,
+ const mx_uint len,
+ NDArrayHandle* in_args_handle,
+ const mx_uint num_options,
+ const char** keys,
+ const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast<nnvm::Symbol*>(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   First, let's clarify that we already help users avoid this problem by not 
explicitly accepting a ctx argument to optimize_for. Instead we'll pull the 
context from where the args reside. 
   
   
https://github.com/apache/incubator-mxnet/blob/9ccf6c60f38b1589595d4d5f6d31d00b846150dd/src/c_api/c_api_symbolic.cc#L1219-L1232
   
   We're not trying to support multi-context execution yet. 
   
   We checked and there's no way to "mark" or set an attribute on the 
partitioned symbol that we return. So for now we cannot enforce the requirement 
that the same context be used for subsequent calls to optimize_for or calls to 
bind. 
   
   But we expect to have optimization passes that could be context-agnostic. 
And some that are context-specific. So the way MXNet currently enforces this is 
by having context-specific FCompute functions for operators. This just means 
that context-specific optimizations should insert operators that only have an 
FCompute function registered for that context. 
   
   So multiple calls to optimize_for with different contexts would succeed. But 
at runtime executing the subgraph operator on a different context would fail. 
That's the current and only way to enforce this. So the requirement will be that 
the subgraph property inserts subgraph operators that only support that context 
by only defining FCompute for the desired context. 
   
   This is already implemented: 
   
   
https://github.com/apache/incubator-mxnet/blob/b2c0cbc8ea4defa0a8b53fb628a9679accca281a/src/operator/subgraph/mkldnn/mkldnn_conv.cc#L780
   
   or
   
   
https://github.com/apache/incubator-mxnet/blob/b2c0cbc8ea4defa0a8b53fb628a9679accca281a/src/operator/subgraph/tensorrt/tensorrt.cu#L64
   
   So if we ran the following example:
   
   ```
   sym = sym.optimize_for('MKLDNN', ctx= mx.cpu(), args) 
   sym = sym.optimize_for('TensorRT', ctx= mx.gpu(), args) 
   ```
   
   the calls to optimize_for would succeed. Presumably there will be both 
mkldnn and tensorrt subgraph ops inserted. But at runtime the user will see an 
error about one or the other not being supported on that context. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (8df9469 -> b2c0cbc)

2019-08-28 Thread yuxihu
This is an automated email from the ASF dual-hosted git repository.

yuxihu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 8df9469  Refines NDArray indexing and adds numpy ndarray indexing 
[READY FOR REVIEW] (#15942)
 add b2c0cbc  Windows cmake flags cleanup (#16013)

No new revisions were added by this update.

Summary of changes:
 ci/build_windows.py | 165 
 cmake/cmake_options.yml |   1 -
 2 files changed, 83 insertions(+), 83 deletions(-)



[GitHub] [incubator-mxnet] yuxihu merged pull request #16013: Windows cmake flags cleanup

2019-08-28 Thread GitBox
yuxihu merged pull request #16013: Windows cmake flags cleanup
URL: https://github.com/apache/incubator-mxnet/pull/16013
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-08-28 Thread GitBox
larroy commented on issue #16023: Revert "Refactor LibraryInitializer so it's 
thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023#issuecomment-525929544
 
 
   @apeforest another option is not to revert the fix and just restore the 
CMakefile.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318703729
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* 
out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+ const char* backend_name,
+ SymbolHandle* ret_sym_handle,
+ const mx_uint len,
+ NDArrayHandle* in_args_handle,
+ const mx_uint num_options,
+ const char** keys,
+ const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+NDArray **in_args_ptr = reinterpret_cast(in_args_handle);
+Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   First, lets clarify that we already help users avoid this problem by not 
explicitly accepting a ctx argument to optimize_for. Instead we'll pull the 
context from where the args reside. 
   
   
https://github.com/apache/incubator-mxnet/blob/9ccf6c60f38b1589595d4d5f6d31d00b846150dd/src/c_api/c_api_symbolic.cc#L1219-L1232
   
   We're not trying to support multi-context execution yet. 
   
   We checked and theres no way to "mark" or set an attribute on the 
partitioned symbol that we return. So for now we cannot enforce the requirement 
that the same context be used for subsequent calls to optimize_for or calls to 
bind. 
   
   But we expect to have optimization passes that could be context-agnostic. 
And some that are context-specific. So the way MXNet currently enforces this is 
by having context-specific FCompute functions for operators. This just means 
that context-specific optimizations should insert operators that only have an 
FCompute function registered for that context. 
   
   So multiple calls to optimize_for with different contexts would succeed, but 
at runtime executing the subgraph operator on a different context would fail. 
That's the current and only way to enforce this. So the requirement will be that 
the subgraph property inserts subgraph operators that only support that context, 
by defining FCompute only for the desired context. 
   
   This is already implemented: 
   
   
https://github.com/apache/incubator-mxnet/blob/master/src/operator/subgraph/mkldnn/mkldnn_conv.cc#L780
   
   or
   
   
https://github.com/apache/incubator-mxnet/blob/master/src/operator/subgraph/tensorrt/tensorrt.cu#L64
   
   So if we ran the following example:
   
   ```
   sym = sym.optimize_for('MKLDNN', args, ctx=mx.cpu()) 
   sym = sym.optimize_for('TensorRT', args, ctx=mx.gpu()) 
   ```
   
   the calls to optimize_for would succeed. Presumably there will be both 
mkldnn and tensorrt subgraph ops inserted. But at runtime the user will see an 
error about one or the other not being supported on that context. 
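   The enforcement mechanism described above can be modeled with a short sketch. This is an illustrative toy registry, not MXNet's internal NNVM API; the op name and context strings are only examples:

   ```python
   # Illustrative model, not MXNet internals: each subgraph operator registers
   # a compute function per context, and dispatch fails at runtime when the
   # requested context has no registered function.
   _registry = {}

   def register_fcompute(op_name, ctx, fn):
       """Register a compute function for one (operator, context) pair."""
       _registry[(op_name, ctx)] = fn

   def dispatch(op_name, ctx):
       """Run the registered function; fail if the context is unsupported."""
       fn = _registry.get((op_name, ctx))
       if fn is None:
           raise RuntimeError("%s is not supported on context %s" % (op_name, ctx))
       return fn()

   # A hypothetical MKLDNN subgraph op that only defines FCompute for 'cpu'.
   register_fcompute("_sg_mkldnn_conv", "cpu", lambda: "ran on cpu")

   print(dispatch("_sg_mkldnn_conv", "cpu"))   # succeeds
   try:
       dispatch("_sg_mkldnn_conv", "gpu")      # raises: no GPU FCompute registered
   except RuntimeError as err:
       print(err)
   ```

   As in the comment, both optimize_for calls would "succeed" at partition time; the failure only surfaces when the unsupported (op, context) pair is dispatched.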


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #16013: Windows cmake flags cleanup

2019-08-28 Thread GitBox
larroy commented on issue #16013: Windows cmake flags cleanup
URL: https://github.com/apache/incubator-mxnet/pull/16013#issuecomment-525929174
 
 
   USE_PROFILER was deprecated by @ChaiBapchya. ON/OFF is preferred, as it was 
requested to have a multi-value flag for the OpenMP flag in a different PR.




[GitHub] [incubator-mxnet] yuxihu commented on issue #16013: Windows cmake flags cleanup

2019-08-28 Thread GitBox
yuxihu commented on issue #16013: Windows cmake flags cleanup
URL: https://github.com/apache/incubator-mxnet/pull/16013#issuecomment-525928813
 
 
   The changes look straightforward. Two questions:
   1. Is ON/OFF the preferred/recommended way of setting cmake flags? 0/1 also 
worked, right?
   2. Why remove USE_PROFILER? Is it ON on other platforms? I think we have 
profiler-related tests. Would they be affected?




[incubator-mxnet] branch v1.5.x updated (bf78959 -> 49c6ee2)

2019-08-28 Thread roywei
This is an automated email from the ASF dual-hosted git repository.

roywei pushed a change to branch v1.5.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from bf78959  added check for empty params file and unknown param (not 
arg/aux) (#15917)
 add 49c6ee2  remove Julia cat image for license issue (#15964) (#16026)

No new revisions were added by this update.

Summary of changes:
 .../Prediction with Pre-trained Model.ipynb|   4 ++--
 .../imagenet/ijulia-pretrained-predict/cat.png | Bin 123126 -> 0 bytes
 2 files changed, 2 insertions(+), 2 deletions(-)
 delete mode 100644 julia/examples/imagenet/ijulia-pretrained-predict/cat.png



[GitHub] [incubator-mxnet] roywei merged pull request #16026: [v1.5.x] remove Julia cat image for license issue (#15964)

2019-08-28 Thread GitBox
roywei merged pull request #16026: [v1.5.x] remove Julia cat image for license 
issue (#15964)
URL: https://github.com/apache/incubator-mxnet/pull/16026
 
 
   




[GitHub] [incubator-mxnet] larroy commented on issue #15998: Build MXNET with GPU but fail to import mxnet on CPU machine

2019-08-28 Thread GitBox
larroy commented on issue #15998: Build MXNET with GPU but fail to import mxnet 
on CPU machine
URL: 
https://github.com/apache/incubator-mxnet/issues/15998#issuecomment-525924577
 
 
   I think you need to figure out the paths for runtime libraries, if you link 
with cuda you need cuda or a cuda stub on runtime.




[GitHub] [incubator-mxnet] larroy commented on issue #15998: Build MXNET with GPU but fail to import mxnet on CPU machine

2019-08-28 Thread GitBox
larroy commented on issue #15998: Build MXNET with GPU but fail to import mxnet 
on CPU machine
URL: 
https://github.com/apache/incubator-mxnet/issues/15998#issuecomment-525924358
 
 
   @mxnet-label-bot add [Question]




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-08-28 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d2d55c1  Bump the publish timestamp.
d2d55c1 is described below

commit d2d55c18ca4381a3ac307c7bc1f8475fca4bdf2d
Author: mxnet-ci 
AuthorDate: Wed Aug 28 21:02:52 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..9a45687
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Aug 28 21:02:52 UTC 2019



[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-08-28 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 402fac1  Bump the publish timestamp.
402fac1 is described below

commit 402fac1c8f19484c5cea0897963b3fb91add1a24
Author: mxnet-ci 
AuthorDate: Wed Aug 28 19:29:34 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..e730bdd
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Aug 28 19:29:34 UTC 2019



[GitHub] [incubator-mxnet] leleamol commented on issue #14203: [Bug]Cannot compile mxnet on windows

2019-08-28 Thread GitBox
leleamol commented on issue #14203: [Bug]Cannot compile mxnet on windows
URL: 
https://github.com/apache/incubator-mxnet/issues/14203#issuecomment-525876884
 
 
   @mxnet-label-bot add [Pending Requester Info]




[GitHub] [incubator-mxnet] anirudh2290 opened a new pull request #16031: Disable flaky test in test_amp_conversion

2019-08-28 Thread GitBox
anirudh2290 opened a new pull request #16031: Disable flaky test in 
test_amp_conversion
URL: https://github.com/apache/incubator-mxnet/pull/16031
 
 
   ## Description ##
   Disable Flaky test, Please see : 
https://github.com/apache/incubator-mxnet/issues/16030
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16030: Flaky test : test_amp_conversion

2019-08-28 Thread GitBox
mxnet-label-bot commented on issue #16030: Flaky test : test_amp_conversion
URL: 
https://github.com/apache/incubator-mxnet/issues/16030#issuecomment-525872074
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Test, Flaky




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16020: Is there any communication between parameter servers?

2019-08-28 Thread GitBox
eric-haibin-lin commented on issue #16020: Is there any communication between 
parameter servers?
URL: 
https://github.com/apache/incubator-mxnet/issues/16020#issuecomment-525872064
 
 
   No, but there's all-to-all communication between workers and servers




[GitHub] [incubator-mxnet] anirudh2290 opened a new issue #16030: Flaky test : test_amp_conversion

2019-08-28 Thread GitBox
anirudh2290 opened a new issue #16030: Flaky test : test_amp_conversion
URL: https://github.com/apache/incubator-mxnet/issues/16030
 
 
   ### Description
   Flaky test, test_amp_conversion: 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-15943/26/pipeline/315
   
   This seems to be only happening when cast_optional_param=True. Will add a PR 
commenting out this part of the test for now and will investigate further.




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #15757: [Discussion] Unified performance tests and dashboard

2019-08-28 Thread GitBox
ChaiBapchya edited a comment on issue #15757: [Discussion] Unified performance 
tests and dashboard
URL: 
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-525860730
 
 
   Yes. (This error is probably caused because an incorrect file is being used. 
It was previously used for testing on my branch, but with the latest master the 
`opperf.py` file is good to use.)
   
   A few pointers:
   1. Don't use the separate file for testing large tensors, 
`opperf_large_tensor.py`. Its functionality has been merged into the original 
`opperf.py` file.
   2. All the operators that have been benchmarked so far in the opperf utility 
(in the master branch) can be profiled with native/python. 
   3. Inclusion of python time module via flag
   4. Adding more operators to improve coverage
   
   For the current master branch, all you have to do now for the opperf utility 
is run 
   `python opperf.py` with your desired flags, e.g. `--ctx=cpu -p python`.
   It will run all the supported ops without error.
   
   Let me know if that helps.




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #15757: [Discussion] Unified performance tests and dashboard

2019-08-28 Thread GitBox
ChaiBapchya edited a comment on issue #15757: [Discussion] Unified performance 
tests and dashboard
URL: 
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-525860730
 
 
   Yes. (This error is probably caused because an incorrect file is being used; 
it was used for testing previously on my branch, but now master's opperf file is 
good to use.)
   
   1. Don't use separate file for testing large tensor 
`opperf_large_tensor.py`. Functionality has been merged into the original 
opperf.py file.
   2. All the operators that have been benchmarked so far in the opperf utility 
(in the master branch) can be profiled with native/python. 
   3. Inclusion of python time module via flag
   4. Adding more operators to improve coverage
   
   For current master branch,
   All you've to do now for the opperf utility is run 
   `python opperf.py` with your desired flags `--ctx=cpu -p python`
   It will run all the ops supported without error.
   
   Let me know if that helps.




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #15757: [Discussion] Unified performance tests and dashboard

2019-08-28 Thread GitBox
ChaiBapchya edited a comment on issue #15757: [Discussion] Unified performance 
tests and dashboard
URL: 
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-525860730
 
 
   Yes. (This error is probably caused because an incorrect file is being used; 
it was used for testing previously on my branch, but now master's opperf file is 
good to use.)
   
   1. Don't use separate file for testing large tensor 
`opperf_large_tensor.py`. Functionality has been merged into the original 
opperf.py file.
   2. All the operators that have been benchmarked so far in the opperf utility 
(in the master branch) can be profiled with native/python. 
   3. Inclusion of python time module via flag
   4. Adding more operators to improve coverage
   
   For current master branch,
   All you've to do now for the opperf utility is run 
   python opperf.py with your desired flags `--ctx=cpu -p python`
   It will run all the ops supported without error.
   
   Let me know if that helps.




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #15757: [Discussion] Unified performance tests and dashboard

2019-08-28 Thread GitBox
ChaiBapchya commented on issue #15757: [Discussion] Unified performance tests 
and dashboard
URL: 
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-525860730
 
 
   Yes. (This error is probably caused by the incorrect file being used.)
   
   1. Don't use separate file for testing large tensor 
`opperf_large_tensor.py`. Functionality has been merged into the original 
opperf.py file.
   2. All the operators that have been benchmarked so far in the opperf utility 
(in the master branch) can be profiled with native/python. 
   3. Inclusion of python time module via flag
   4. Adding more operators to improve coverage
   
   For current master branch,
   All you've to do now for the opperf utility is run 
   python opperf.py with your desired flags `--ctx=cpu -p python`
   It will run all the ops supported without error.
   
   Let me know if that helps.




[incubator-mxnet] branch v1.5.x updated (33f4de1 -> bf78959)

2019-08-28 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a change to branch v1.5.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 33f4de1  Revert "Fix a memory misalignment in topk operator" (#15999)
 add bf78959  added check for empty params file and unknown param (not 
arg/aux) (#15917)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/model.py | 19 +--
 1 file changed, 13 insertions(+), 6 deletions(-)



[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16028: added check for empty params file and unknown param (not arg/aux) (#1…

2019-08-28 Thread GitBox
anirudh2290 commented on issue #16028: added check for empty params file and 
unknown param (not arg/aux) (#1…
URL: https://github.com/apache/incubator-mxnet/pull/16028#issuecomment-525853819
 
 
   Rebased and merged.




[GitHub] [incubator-mxnet] anirudh2290 merged pull request #16028: added check for empty params file and unknown param (not arg/aux) (#1…

2019-08-28 Thread GitBox
anirudh2290 merged pull request #16028: added check for empty params file and 
unknown param (not arg/aux) (#1…
URL: https://github.com/apache/incubator-mxnet/pull/16028
 
 
   




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318703729
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+                         const char* backend_name,
+                         SymbolHandle* ret_sym_handle,
+                         const mx_uint len,
+                         NDArrayHandle* in_args_handle,
+                         const mx_uint num_options,
+                         const char** keys,
+                         const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast<nnvm::Symbol*>(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+    NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+    Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   First, let's clarify that we already help users avoid this problem by not 
explicitly accepting a ctx argument to optimize_for. Instead we'll pull the 
context from where the args reside. 
   
   
https://github.com/apache/incubator-mxnet/blob/9ccf6c60f38b1589595d4d5f6d31d00b846150dd/src/c_api/c_api_symbolic.cc#L1219-L1232
   
   We're not trying to support multi-context execution yet. So I think we 
should add some check/guard to prevent this situation for now. We can remove it 
later when we actively support multi-context and auto-insertion of copy 
operators from one context to another. 
   
   We already set a context on the graph here:
   
https://github.com/apache/incubator-mxnet/blob/9ccf6c60f38b1589595d4d5f6d31d00b846150dd/src/c_api/c_api_symbolic.cc#L1235-L1236
   
   We can check whether it already exists and validate that the same context 
is used for each partitioning and bind. 
   
   That way we can make sure users have a good experience by enforcing what we 
currently support. 




[GitHub] [incubator-mxnet] leleamol edited a comment on issue #15997: MxNet triggered Segmentation Fault when using together with Ray or PyTorch

2019-08-28 Thread GitBox
leleamol edited a comment on issue #15997: MxNet triggered Segmentation Fault 
when using together with Ray or PyTorch
URL: 
https://github.com/apache/incubator-mxnet/issues/15997#issuecomment-525479325
 
 
   @mxnet-label-bot add [Bug, Memory, Pending Requester Info]




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318703729
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+                         const char* backend_name,
+                         SymbolHandle* ret_sym_handle,
+                         const mx_uint len,
+                         NDArrayHandle* in_args_handle,
+                         const mx_uint num_options,
+                         const char** keys,
+                         const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast<nnvm::Symbol*>(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+    NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+    Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   First, let's clarify that we already help users avoid this problem by not 
explicitly accepting a ctx argument to optimize_for. Instead we'll pull the 
context from where the args reside. 
   
   
https://github.com/apache/incubator-mxnet/blob/9ccf6c60f38b1589595d4d5f6d31d00b846150dd/src/c_api/c_api_symbolic.cc#L1219-L1232
   
   We're not trying to support multi-context execution yet. So I think we 
should add some check/guard to prevent this situation for now. We can remove it 
later when we actively support multi-context and auto-insertion of copy 
operators from one context to another. 
   
   Let's set an attribute on the graph like "PartitionedContext" and assign it 
the first time we do a partition. Then we'll check whether it already exists and 
validate that the same context is used for each partitioning and bind. 
   
   That way we can make sure users have a good experience by enforcing what we 
currently support. 




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #15886: Graph Partition API

2019-08-28 Thread GitBox
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318703729
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+                         const char* backend_name,
+                         SymbolHandle* ret_sym_handle,
+                         const mx_uint len,
+                         NDArrayHandle* in_args_handle,
+                         const mx_uint num_options,
+                         const char** keys,
+                         const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast<nnvm::Symbol*>(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+    NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+    Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   First, let's clarify that we already help users avoid this problem by not 
explicitly accepting a ctx argument to optimize_for. Instead we'll pull the 
context from where the args reside. 
   
   
https://github.com/apache/incubator-mxnet/pull/15886/files#diff-48b64958e69c6674970a3ddbd1bd91e5R1219-R1232
   
   We're not trying to support multi-context execution yet. So I think we 
should add some check/guard to prevent this situation for now. We can remove it 
later when we actively support multi-context and auto-insertion of copy 
operators from one context to another. 
   
   Let's set an attribute on the graph like "PartitionedContext" and assign it 
the first time we do a partition. Then we'll check whether it already exists and 
validate that the same context is used for each partitioning and bind. 
   
   That way we can make sure users have a good experience by enforcing what we 
currently support. 
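   The proposed guard can be sketched in a few lines. This is an illustrative sketch, not MXNet's actual Graph API; only the attribute name "PartitionedContext" comes from the comment above, and the function name and context strings are made up:

   ```python
   # Illustrative sketch: record the context used for the first partition in a
   # graph attribute named "PartitionedContext", then reject later partition or
   # bind calls that pass a different context.
   def check_partitioned_context(graph_attrs, ctx):
       prev = graph_attrs.get("PartitionedContext")
       if prev is None:
           # First partition: remember the context on the graph.
           graph_attrs["PartitionedContext"] = ctx
       elif prev != ctx:
           raise ValueError(
               "graph was partitioned for context %s, cannot use %s" % (prev, ctx))
       return graph_attrs

   attrs = {}
   check_partitioned_context(attrs, "cpu")      # first partition sets the attribute
   check_partitioned_context(attrs, "cpu")      # same context: allowed
   try:
       check_partitioned_context(attrs, "gpu")  # different context: rejected
   except ValueError as err:
       print(err)
   ```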




[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #16029: Benchmark doc fix (#15769)

2019-08-28 Thread GitBox
ChaiBapchya opened a new pull request #16029: Benchmark doc fix (#15769)
URL: https://github.com/apache/incubator-mxnet/pull/16029
 
 
   * Update pre-req for opperf
   
   * Update README.md
   
   * correct command to import binary broadcast
   
   * no such op called nd.sub, it is nd.subtract
   
   * Trigger notification
   
   * Trigger notification
   
   




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16029: Benchmark doc fix (#15769)

2019-08-28 Thread GitBox
ChaiBapchya commented on issue #16029: Benchmark doc fix (#15769)
URL: https://github.com/apache/incubator-mxnet/pull/16029#issuecomment-525840777
 
 
   @TaoLv 




[GitHub] [incubator-mxnet] leleamol edited a comment on issue #15927: sym.Variable input init need init.dumps(), but NDarray is not JSON serializable

2019-08-28 Thread GitBox
leleamol edited a comment on issue #15927: sym.Variable input init need 
init.dumps(), but NDarray is not JSON serializable
URL: 
https://github.com/apache/incubator-mxnet/issues/15927#issuecomment-525834629
 
 
   @mxnet-label-bot add [Pending Requester Info]




[GitHub] [incubator-mxnet] leleamol commented on issue #15927: sym.Variable input init need init.dumps(), but NDarray is not JSON serializable

2019-08-28 Thread GitBox
leleamol commented on issue #15927: sym.Variable input init need init.dumps(), 
but NDarray is not JSON serializable
URL: 
https://github.com/apache/incubator-mxnet/issues/15927#issuecomment-525834629
 
 
   @mxnet-label-bot add [Pending Requester Info]




[GitHub] [incubator-mxnet] samskalicky commented on issue #16028: added check for empty params file and unknown param (not arg/aux) (#1…

2019-08-28 Thread GitBox
samskalicky commented on issue #16028: added check for empty params file and 
unknown param (not arg/aux) (#1…
URL: https://github.com/apache/incubator-mxnet/pull/16028#issuecomment-525832832
 
 
   @TaoLv 




[GitHub] [incubator-mxnet] samskalicky opened a new pull request #16028: added check for empty params file and unknown param (not arg/aux) (#1…

2019-08-28 Thread GitBox
samskalicky opened a new pull request #16028: added check for empty params file 
and unknown param (not arg/aux) (#1…
URL: https://github.com/apache/incubator-mxnet/pull/16028
 
 
   …5917)
   
   * added check for empty params file and unknown param (not arg/aux)
   
   * changed exception to warning for unknown params
   
   * removed unnecessary MXNetError import
   
   * added warning message if params is empty
   
   * fixed print
   
   * fixed formatting
   
   * missing paren
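   The behavior these commits describe can be sketched roughly as follows. This is an illustrative sketch, not the exact code in `python/mxnet/model.py`; `split_saved_params` and the warning texts are hypothetical:

   ```python
   import warnings

   def split_saved_params(saved, arg_names, aux_names):
       """Split a loaded .params dict into arg/aux dicts, warning on anything odd."""
       if not saved:
           # New check: an empty params file now produces a warning.
           warnings.warn("Parameter file is empty")
           return {}, {}
       arg_params, aux_params = {}, {}
       for name, value in saved.items():
           if name in arg_names:
               arg_params[name] = value
           elif name in aux_names:
               aux_params[name] = value
           else:
               # Changed from an exception to a warning for unknown params.
               warnings.warn("Parameter %r is not an arg/aux of the symbol" % name)
       return arg_params, aux_params

   # 'stale_param' is neither an arg nor an aux, so it triggers a warning.
   args, auxs = split_saved_params(
       {"fc1_weight": 1.0, "bn_moving_mean": 0.0, "stale_param": 2.0},
       arg_names={"fc1_weight"}, aux_names={"bn_moving_mean"})
   ```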
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] leleamol commented on issue #15597: c_predict_api.h not working with mxnet amalgamation

2019-08-28 Thread GitBox
leleamol commented on issue #15597: c_predict_api.h not working with mxnet amalgamation
URL: https://github.com/apache/incubator-mxnet/issues/15597#issuecomment-525831884
 
 
   @mxnet-label-bot add [C API]



