[GitHub] gigasquid commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-01-25 Thread GitBox
gigasquid commented on a change in pull request #13993: [Clojure] Add resource 
scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r251132120
 
 

 ##
 File path: contrib/clojure-package/test/org/apache/clojure_mxnet/resource_scope_test.clj
 ##
 @@ -0,0 +1,65 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;    http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns org.apache.clojure-mxnet.resource-scope-test
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.symbol :as sym]
+            [org.apache.clojure-mxnet.resource-scope :as resource-scope]
+            [clojure.test :refer :all]))
+
+(deftest test-resource-scope-with-ndarray
+  (let [x (ndarray/ones [2 2])
+        return-val (resource-scope/using
+                    (def temp-x (ndarray/ones [3 1]))
+                    (def temp-y (ndarray/ones [3 1]))
+                    (let [z {:just-a-test (def temp-z (ndarray/ones [3 3]))}
 
 Review comment:
   I agree the `def`s are weird - do you think an atom with a map of the 
ndarrays under test, which could be `swap!`-ed in, would be better?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #13964: Addresses comments in runtime feature discovery API

2019-01-25 Thread GitBox
larroy commented on a change in pull request #13964: Addresses comments in 
runtime feature discovery API
URL: https://github.com/apache/incubator-mxnet/pull/13964#discussion_r251133342
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -208,6 +208,24 @@ MXNET_DLL const char *MXGetLastError();
 //--------------------------------------------
 // Part 0: Global State setups
 //--------------------------------------------
+
+/*!
+ * \brief Check if a feature is enabled in the runtime
+ * \param feature the feature to check (see mxruntime.h)
+ * \param out set to true if the feature is enabled, false otherwise
+ * \return 0 when success, -1 when failure happens.
+ */
+MXNET_DLL int MXRuntimeHasFeature(const mx_uint feature, bool *out);
 
 Review comment:
   I don't like two functions returning strings; testing the enum explicitly 
gives you type checking and verification that you don't get with two sets of 
strings.
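The type-safety point in the comment above can be illustrated outside of C. The sketch below is illustrative only (the names `Feature`, `has_feature`, and `has_feature_by_name` are hypothetical stand-ins, not MXNet's API): with an enum, passing a misspelled or unknown feature fails loudly, while a string-keyed lookup silently reads as "disabled".

```python
import enum

class Feature(enum.Enum):
    CUDA = 0
    CUDNN = 1
    MKLDNN = 2

_ENABLED = frozenset({Feature.CUDA})

def has_feature(feature):
    # Enum-based check: anything that is not a Feature member fails loudly.
    if not isinstance(feature, Feature):
        raise TypeError("expected a Feature member, got %r" % (feature,))
    return feature in _ENABLED

_ENABLED_NAMES = frozenset({"CUDA"})

def has_feature_by_name(name):
    # String-based check: a misspelled name just comes back False.
    return name in _ENABLED_NAMES
```

A typo like `has_feature_by_name("CUDAA")` returns `False` without complaint, which is exactly the missing verification the reviewer is pointing at.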




[GitHub] sandeep-krishnamurthy commented on a change in pull request #13802: Image normalize operator - GPU support, 3D/4D inputs

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13802: Image 
normalize operator - GPU support, 3D/4D inputs
URL: https://github.com/apache/incubator-mxnet/pull/13802#discussion_r251134919
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +146,157 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-void Normalize(const nnvm::NodeAttrs &attrs,
+// Type Inference
+inline bool NormalizeOpType(const nnvm::NodeAttrs& attrs,
+                            std::vector<int>* in_attrs,
+                            std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  // Normalized Tensor will be a float
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
+  return out_attrs->at(0) != -1;
+}
+
+template<int req>
+struct normalize_forward {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int j, DType* out_data, const DType* in_data,
+                                  const int i, const int length, const int step,
+                                  const DType mean, const DType std_dev) {
+    KERNEL_ASSIGN(out_data[step + i*length + j], req,
+                  (in_data[step + i*length + j] - mean) / std_dev);
+  }
+};
+
+template<typename xpu>
+void NormalizeImpl(const OpContext &ctx,
+                   const std::vector<TBlob> &inputs,
+                   const std::vector<TBlob> &outputs,
+                   const std::vector<OpReqType> &req,
+                   const NormalizeParam &param,
+                   const int length,
+                   const int channel,
+                   const int step = 0) {
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+
+  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+      DType* input = inputs[0].dptr<DType>();
+      DType* output = outputs[0].dptr<DType>();
+
+      for (int i = 0; i < channel; ++i) {
+        DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
+        DType std_dev = param.std[param.std.ndim() > 1 ? i : 0];
+        mxnet_op::Kernel<normalize_forward<req_type>, xpu>::Launch(
+            s, length, output, input, i, length, step, mean, std_dev);
+      }
+    });
+  });
+}
+
+template<typename xpu>
+void NormalizeOpForward(const nnvm::NodeAttrs &attrs,
                         const OpContext &ctx,
                         const std::vector<TBlob> &inputs,
                         const std::vector<OpReqType> &req,
                         const std::vector<TBlob> &outputs) {
+  CHECK_EQ(inputs.size(), 1U);
+  CHECK_EQ(outputs.size(), 1U);
+  CHECK_EQ(req.size(), 1U);
+
   const NormalizeParam &param = nnvm::get<NormalizeParam>(attrs.parsed);
 
-  int nchannels = inputs[0].shape_[0];
-  int length = inputs[0].shape_[1] * inputs[0].shape_[2];
+  // 3D input (c, h, w)
+  if (inputs[0].ndim() == 3) {
+    const int length = inputs[0].shape_[1] * inputs[0].shape_[2];
+    const int channel = inputs[0].shape_[0];
+    NormalizeImpl<xpu>(ctx, inputs, outputs, req, param, length, channel);
+  } else if (inputs[0].ndim() == 4) {
+    // 4D input (n, c, h, w)
+    const int batch_size = inputs[0].shape_[0];
+    const int length = inputs[0].shape_[2] * inputs[0].shape_[3];
+    const int channel = inputs[0].shape_[1];
+    const int step = channel * length;
+
+    #pragma omp parallel for
+    for (auto n = 0; n < batch_size; ++n) {
+      NormalizeImpl<xpu>(ctx, inputs, outputs, req, param, length, channel, n*step);
+    }
+  }
+}
 
-  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
-    DType* input = inputs[0].dptr<DType>();
-    DType* output = outputs[0].dptr<DType>();
+// Backward function
+template<int req>
+struct normalize_backward {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int j, DType* in_grad, const DType* out_grad,
+                                  const DType* in_data, const int i, const int length,
+                                  const int step, const DType std_dev) {
+    // d/dx{(x - mean) / std_dev} => (1 / std_dev)
+    KERNEL_ASSIGN(in_grad[step + i*length + j], req,
+                  out_grad[step + i*length + j] * (1.0 / std_dev));
+  }
+};
 
-    for (int i = 0; i < nchannels; ++i) {
-      DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
-      DType std = param.std[param.std.ndim() > 1 ? i : 0];
-      for (int j = 0; j < length; ++j) {
-        output[i*length + j] = (input[i*length + j] - mean) / std;
-      }
+template<typename xpu>
+void NormalizeBackwardImpl(const OpContext &ctx,
+                           const std::vector<TBlob> &inputs,
+                           const std::vector<TBlob> &outputs,
+                           const std::vector<OpReqType> &req,
+                           const NormalizeParam &param,
+                           const int length,
+                           const int channel,
+                           const int step = 0) {
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  const TBlob& out_grad = inputs[0];
+  const TBlob& in_data = inputs[1];
+  const TBlob& in_grad = outputs[0];
+  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+
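The forward and backward kernels quoted in this hunk address element `(i, j)` of one image as `step + i*length + j`, where `length` is `H*W` and, for 4D input, `step = n*channel*length` offsets into sample `n`. A minimal pure-Python sketch of that flat indexing (illustrative only, not MXNet code; `normalize_flat` and `normalize_grad_flat` are hypothetical names):

```python
def normalize_flat(data, mean, std, channel, length, step=0):
    """Forward: out[step + i*length + j] = (in[step + i*length + j] - mean_i) / std_i.
    mean/std may hold one value (broadcast to all channels) or one per channel."""
    out = list(data)
    for i in range(channel):
        m = mean[i if len(mean) > 1 else 0]
        s = std[i if len(std) > 1 else 0]
        for j in range(length):
            out[step + i * length + j] = (data[step + i * length + j] - m) / s
    return out

def normalize_grad_flat(out_grad, std, channel, length, step=0):
    """Backward: d/dx[(x - mean)/std] = 1/std, so in_grad = out_grad * (1/std_i)."""
    in_grad = list(out_grad)
    for i in range(channel):
        s = std[i if len(std) > 1 else 0]
        for j in range(length):
            in_grad[step + i * length + j] = out_grad[step + i * length + j] * (1.0 / s)
    return in_grad
```

Calling `normalize_flat` twice with `step=0` and `step=channel*length` reproduces what the `#pragma omp parallel for` loop over `batch_size` does for 4D input.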

[GitHub] sandeep-krishnamurthy commented on a change in pull request #13802: Image normalize operator - GPU support, 3D/4D inputs

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13802: Image 
normalize operator - GPU support, 3D/4D inputs
URL: https://github.com/apache/incubator-mxnet/pull/13802#discussion_r251134828
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +146,157 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-void Normalize(const nnvm::NodeAttrs &attrs,
+// Type Inference
+inline bool NormalizeOpType(const nnvm::NodeAttrs& attrs,
+                            std::vector<int>* in_attrs,
+                            std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  // Normalized Tensor will be a float
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
+  return out_attrs->at(0) != -1;
+}
+
+template<int req>
+struct normalize_forward {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int j, DType* out_data, const DType* in_data,
+                                  const int i, const int length, const int step,
+                                  const DType mean, const DType std_dev) {
+    KERNEL_ASSIGN(out_data[step + i*length + j], req,
+                  (in_data[step + i*length + j] - mean) / std_dev);
+  }
+};
+
+template<typename xpu>
+void NormalizeImpl(const OpContext &ctx,
+                   const std::vector<TBlob> &inputs,
+                   const std::vector<TBlob> &outputs,
+                   const std::vector<OpReqType> &req,
+                   const NormalizeParam &param,
+                   const int length,
+                   const int channel,
+                   const int step = 0) {
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+
+  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+      DType* input = inputs[0].dptr<DType>();
+      DType* output = outputs[0].dptr<DType>();
+
+      for (int i = 0; i < channel; ++i) {
+        DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
+        DType std_dev = param.std[param.std.ndim() > 1 ? i : 0];
 
 Review comment:
   same as above




[GitHub] sandeep-krishnamurthy commented on a change in pull request #13802: Image normalize operator - GPU support, 3D/4D inputs

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13802: Image 
normalize operator - GPU support, 3D/4D inputs
URL: https://github.com/apache/incubator-mxnet/pull/13802#discussion_r251134761
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +146,157 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-void Normalize(const nnvm::NodeAttrs &attrs,
+// Type Inference
+inline bool NormalizeOpType(const nnvm::NodeAttrs& attrs,
+                            std::vector<int>* in_attrs,
+                            std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  // Normalized Tensor will be a float
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
+  return out_attrs->at(0) != -1;
+}
+
+template<int req>
+struct normalize_forward {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int j, DType* out_data, const DType* in_data,
+                                  const int i, const int length, const int step,
+                                  const DType mean, const DType std_dev) {
+    KERNEL_ASSIGN(out_data[step + i*length + j], req,
+                  (in_data[step + i*length + j] - mean) / std_dev);
+  }
+};
+
+template<typename xpu>
+void NormalizeImpl(const OpContext &ctx,
+                   const std::vector<TBlob> &inputs,
+                   const std::vector<TBlob> &outputs,
+                   const std::vector<OpReqType> &req,
+                   const NormalizeParam &param,
+                   const int length,
+                   const int channel,
+                   const int step = 0) {
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+
+  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+      DType* input = inputs[0].dptr<DType>();
+      DType* output = outputs[0].dptr<DType>();
+
+      for (int i = 0; i < channel; ++i) {
+        DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
 
 Review comment:
   The mean param can be a single value, or the user provides a mean for each 
channel. This is also checked at line 136:
   ```
 CHECK((param.mean.ndim() == 1) || (param.mean.ndim() == nchannels))
     << "Invalid mean for input with shape " << dshape
     << ". mean must have either 1 or " << nchannels
     << " elements, but got " << param.mean;
 CHECK(param.std.ndim() == 1 || param.std.ndim() == nchannels)
     << "Invalid std for input with shape " << dshape
     << ". std must have either 1 or " << nchannels
     << " elements, but got " << param.std;
   ```
   Hence the check on whether `param.mean.ndim()` is greater than 1.
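The rule the CHECKs quoted above enforce, plus the per-channel lookup `param.mean[param.mean.ndim() > 1 ? i : 0]`, can be sketched in a few lines of Python (a hypothetical analogue for illustration; `check_mean_std` and `pick` are not MXNet functions):

```python
def check_mean_std(mean, std, nchannels):
    """mean and std must carry either 1 value (applied to all
    channels) or exactly one value per channel."""
    for name, values in (("mean", mean), ("std", std)):
        if len(values) not in (1, nchannels):
            raise ValueError(
                "Invalid %s: must have either 1 or %d elements, but got %d"
                % (name, nchannels, len(values)))

def pick(values, i):
    """Per-channel lookup: index i when per-channel, else the single value."""
    return values[i if len(values) > 1 else 0]
```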




[GitHub] sandeep-krishnamurthy commented on a change in pull request #13802: Image normalize operator - GPU support, 3D/4D inputs

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13802: Image 
normalize operator - GPU support, 3D/4D inputs
URL: https://github.com/apache/incubator-mxnet/pull/13802#discussion_r251139655
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -85,32 +85,55 @@ void ToTensor(const nnvm::NodeAttrs &attrs,
   });
 }
 
+// Normalize Operator
+// Parameter registration for image Normalize operator
 struct NormalizeParam : public dmlc::Parameter<NormalizeParam> {
   nnvm::Tuple<float> mean;
+  nnvm::Tuple<float> default_mean = {0.0f, 0.0f, 0.0f, 0.0f};
   nnvm::Tuple<float> std;
+  nnvm::Tuple<float> default_std = {1.0f, 1.0f, 1.0f, 1.0f};
+
   DMLC_DECLARE_PARAMETER(NormalizeParam) {
     DMLC_DECLARE_FIELD(mean)
-    .describe("Sequence of mean for each channel.");
+    .set_default(default_mean)
 
 Review comment:
   done




[GitHub] szha commented on issue #13964: Addresses comments in runtime feature discovery API

2019-01-25 Thread GitBox
szha commented on issue #13964: Addresses comments in runtime feature discovery 
API
URL: https://github.com/apache/incubator-mxnet/pull/13964#issuecomment-457739621
 
 
   @larroy I will need more information to offer advice on how to make it work. 
Alternatively, I can fork the branch and try to make it work. Let me know what 
you prefer.




[GitHub] sandeep-krishnamurthy commented on a change in pull request #13802: Image normalize operator - GPU support, 3D/4D inputs

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13802: Image 
normalize operator - GPU support, 3D/4D inputs
URL: https://github.com/apache/incubator-mxnet/pull/13802#discussion_r251139169
 
 

 ##
 File path: src/operator/image/image_random.cc
 ##
 @@ -49,21 +49,64 @@ NNVM_REGISTER_OP(_image_to_tensor)
 .add_argument("data", "NDArray-or-Symbol", "The input.");
 
 NNVM_REGISTER_OP(_image_normalize)
-.describe(R"code()code" ADD_FILELINE)
+.describe(R"code(Normalize a tensor of shape (C x H x W) or (N x C x H x W) with mean and
+standard deviation.
+
+Given mean `(m1, ..., mn)` and std `(s\ :sub:`1`\ , ..., s\ :sub:`n`)` for `n` channels,
+this transform normalizes each channel of the input tensor with:
+
+.. math::
+
+    output[i] = (input[i] - m\ :sub:`i`\ ) / s\ :sub:`i`
+
+If mean or std is scalar, the same value will be applied to all channels.
+
+Default value for mean is 0.0 and standard deviation is 1.0.
+
+Example:
+
+.. code-block:: python
+    image = mx.nd.random.uniform(0, 1, (3, 4, 2))
+    normalize(image, mean=(0, 1, 2), std=(3, 2, 1))
+    [[[ 0.18293785  0.19761486]
+      [ 0.23839645  0.28142193]
+      [ 0.20092112  0.28598186]
+      [ 0.18162774  0.28241724]]
+     [[-0.2881726  -0.18821815]
+      [-0.17705294 -0.30780914]
+      [-0.2812064  -0.3512327 ]
+      [-0.05411351 -0.4716435 ]]
+     [[-1.0363373  -1.7273437 ]
+      [-1.6165586  -1.5223348 ]
+      [-1.208275   -1.1878313 ]
+      [-1.4711051  -1.5200229 ]]]
+
+)code" ADD_FILELINE)
+.set_attr_parser(ParamParser<NormalizeParam>)
 .set_num_inputs(1)
 .set_num_outputs(1)
-.set_attr_parser(ParamParser<NormalizeParam>)
-.set_attr<nnvm::FInferShape>("FInferShape", NormalizeShape)
-.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<1, 1>)
+.set_attr<nnvm::FListInputNames>("FListInputNames",
+  [](const NodeAttrs& attrs) {
+    return std::vector<std::string>{"data"};
+  })
+.set_attr<nnvm::FInferShape>("FInferShape", NormalizeOpShape)
+.set_attr<nnvm::FInferType>("FInferType", NormalizeOpType)
+.set_attr<FCompute>("FCompute<cpu>", NormalizeOpForward<cpu>)
 .set_attr<nnvm::FInplaceOption>("FInplaceOption",
-  [](const NodeAttrs& attrs){
+  [](const NodeAttrs& attrs) {
     return std::vector<std::pair<int, int> >{{0, 0}};
   })
-.set_attr<FCompute>("FCompute<cpu>", Normalize)
-.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseNone{ "_copy" })
-.add_argument("data", "NDArray-or-Symbol", "The input.")
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{ "_backward_image_normalize"})
+.add_argument("data", "NDArray-or-Symbol", "Input ndarray")
 .add_arguments(NormalizeParam::__FIELDS__());
 
+NNVM_REGISTER_OP(_backward_image_normalize)
+.set_attr_parser(ParamParser<NormalizeParam>)
+.set_num_inputs(2)
 
 Review comment:
   Thanks for flagging. Not necessary; we only need 1 (out_grad). Fixed.




[GitHub] gigasquid commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-01-25 Thread GitBox
gigasquid commented on a change in pull request #13993: [Clojure] Add resource 
scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r251151544
 
 

 ##
 File path: contrib/clojure-package/test/org/apache/clojure_mxnet/resource_scope_test.clj
 ##
 @@ -0,0 +1,65 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;    http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns org.apache.clojure-mxnet.resource-scope-test
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.symbol :as sym]
+            [org.apache.clojure-mxnet.resource-scope :as resource-scope]
+            [clojure.test :refer :all]))
+
+(deftest test-resource-scope-with-ndarray
+  (let [x (ndarray/ones [2 2])
+        return-val (resource-scope/using
+                    (def temp-x (ndarray/ones [3 1]))
+                    (def temp-y (ndarray/ones [3 1]))
+                    (let [z {:just-a-test (def temp-z (ndarray/ones [3 3]))}
 
 Review comment:
   Thanks for the feedback and review @benkamphaus - I updated the tests. They 
are an improvement. I feel like they could be further improved as well, but I 
can't think of an immediate way :)




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-01-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new dceb6a4  Bump the publish timestamp.
dceb6a4 is described below

commit dceb6a4d73b0a1410772f59fe8bb955af9801f4a
Author: mxnet-ci 
AuthorDate: Fri Jan 25 20:43:35 2019 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..bfa6eb0
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Jan 25 20:43:35 UTC 2019



[GitHub] aaronmarkham commented on a change in pull request #13983: build docs with CPP package

2019-01-25 Thread GitBox
aaronmarkham commented on a change in pull request #13983: build docs with CPP 
package
URL: https://github.com/apache/incubator-mxnet/pull/13983#discussion_r251131423
 
 

 ##
 File path: docs/mxdoc.py
 ##
 @@ -87,10 +87,10 @@ def generate_doxygen(app):
 def build_mxnet(app):
 """Build mxnet .so lib"""
 if not os.path.exists(os.path.join(app.builder.srcdir, '..', 'config.mk')):
-        _run_cmd("cd %s/.. && cp make/config.mk config.mk && make -j$(nproc) DEBUG=1 USE_MKLDNN=0" %
+        _run_cmd("cd %s/.. && cp make/config.mk config.mk && make -j$(nproc) DEBUG=1 USE_MKLDNN=0 USE_CPP_PACKAGE=1" %
                  app.builder.srcdir)
     else:
-        _run_cmd("cd %s/.. && make -j$(nproc) DEBUG=1 USE_MKLDNN=0" %
+        _run_cmd("cd %s/.. && make -j$(nproc) DEBUG=1 USE_MKLDNN=0 USE_CPP_PACKAGE=1" %
 
 Review comment:
   It's always been there. Not sure of the impact of removing it. Maybe I can 
test it out and remove it in another PR.




[GitHub] larroy commented on issue #13964: Addresses comments in runtime feature discovery API

2019-01-25 Thread GitBox
larroy commented on issue #13964: Addresses comments in runtime feature 
discovery API
URL: https://github.com/apache/incubator-mxnet/pull/13964#issuecomment-457725875
 
 
   @szha I did the changes that you requested, but it doesn't work because 
libinfo.py is used before the library is loaded; it's a rabbit hole. See this 
branch: https://github.com/larroy/mxnet/tree/feature_discovery_libinfo
   
   I discarded that idea. Can we provide a path to completion?




[GitHub] benkamphaus commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-01-25 Thread GitBox
benkamphaus commented on a change in pull request #13993: [Clojure] Add 
resource scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r251135323
 
 

 ##
 File path: 
contrib/clojure-package/test/org/apache/clojure_mxnet/resource_scope_test.clj
 ##
 @@ -0,0 +1,65 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;    http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns org.apache.clojure-mxnet.resource-scope-test
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.symbol :as sym]
+            [org.apache.clojure-mxnet.resource-scope :as resource-scope]
+            [clojure.test :refer :all]))
+
+(deftest test-resource-scope-with-ndarray
+  (let [x (ndarray/ones [2 2])
+        return-val (resource-scope/using
+                    (def temp-x (ndarray/ones [3 1]))
+                    (def temp-y (ndarray/ones [3 1]))
+                    (let [z {:just-a-test (def temp-z (ndarray/ones [3 3]))}
 
 Review comment:
   Hmm. I'd say yes _cautiously_. Tradeoff IMO: it would look a bit more 
idiomatic, but also put more indirection in the tests.




[GitHub] zhreshold commented on a change in pull request #13802: Image normalize operator - GPU support, 3D/4D inputs

2019-01-25 Thread GitBox
zhreshold commented on a change in pull request #13802: Image normalize 
operator - GPU support, 3D/4D inputs
URL: https://github.com/apache/incubator-mxnet/pull/13802#discussion_r251137506
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +146,157 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-void Normalize(const nnvm::NodeAttrs &attrs,
+// Type Inference
+inline bool NormalizeOpType(const nnvm::NodeAttrs& attrs,
+                            std::vector<int>* in_attrs,
+                            std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  // Normalized Tensor will be a float
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
+  return out_attrs->at(0) != -1;
+}
+
+template<int req>
+struct normalize_forward {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int j, DType* out_data, const DType* in_data,
+                                  const int i, const int length, const int step,
+                                  const DType mean, const DType std_dev) {
+    KERNEL_ASSIGN(out_data[step + i*length + j], req,
+                  (in_data[step + i*length + j] - mean) / std_dev);
+  }
+};
+
+template<typename xpu>
+void NormalizeImpl(const OpContext &ctx,
+                   const std::vector<TBlob> &inputs,
+                   const std::vector<TBlob> &outputs,
+                   const std::vector<OpReqType> &req,
+                   const NormalizeParam &param,
+                   const int length,
+                   const int channel,
+                   const int step = 0) {
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+
+  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+      DType* input = inputs[0].dptr<DType>();
+      DType* output = outputs[0].dptr<DType>();
+
+      for (int i = 0; i < channel; ++i) {
+        DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
 
 Review comment:
   It's equivalent, but safer.




[GitHub] sandeep-krishnamurthy commented on a change in pull request #13802: Image normalize operator - GPU support, 3D/4D inputs

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13802: Image 
normalize operator - GPU support, 3D/4D inputs
URL: https://github.com/apache/incubator-mxnet/pull/13802#discussion_r251139850
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +146,157 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
-void Normalize(const nnvm::NodeAttrs &attrs,
+// Type Inference
+inline bool NormalizeOpType(const nnvm::NodeAttrs& attrs,
+                            std::vector<int>* in_attrs,
+                            std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  // Normalized Tensor will be a float
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
+  return out_attrs->at(0) != -1;
+}
+
+template<int req>
+struct normalize_forward {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int j, DType* out_data, const DType* in_data,
+                                  const int i, const int length, const int step,
+                                  const DType mean, const DType std_dev) {
+    KERNEL_ASSIGN(out_data[step + i*length + j], req,
+                  (in_data[step + i*length + j] - mean) / std_dev);
+  }
+};
+
+template<typename xpu>
+void NormalizeImpl(const OpContext &ctx,
+                   const std::vector<TBlob> &inputs,
+                   const std::vector<TBlob> &outputs,
+                   const std::vector<OpReqType> &req,
+                   const NormalizeParam &param,
+                   const int length,
+                   const int channel,
+                   const int step = 0) {
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+
+  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+      DType* input = inputs[0].dptr<DType>();
+      DType* output = outputs[0].dptr<DType>();
+
+      for (int i = 0; i < channel; ++i) {
+        DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
 
 Review comment:
   Agreed. Fixed.




[GitHub] sandeep-krishnamurthy commented on issue #13802: Image normalize operator - GPU support, 3D/4D inputs

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on issue #13802: Image normalize operator - GPU 
support, 3D/4D inputs
URL: https://github.com/apache/incubator-mxnet/pull/13802#issuecomment-457736766
 
 
   @zhreshold - Thanks! I addressed your review comments.




[GitHub] szha commented on a change in pull request #13964: Addresses comments in runtime feature discovery API

2019-01-25 Thread GitBox
szha commented on a change in pull request #13964: Addresses comments in 
runtime feature discovery API
URL: https://github.com/apache/incubator-mxnet/pull/13964#discussion_r251145001
 
 

 ##
 File path: python/mxnet/runtime.py
 ##
 @@ -0,0 +1,82 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=not-an-iterable
+
+"""runtime detection of compile time features in the native library"""
+
+import ctypes
+import enum
+from .base import _LIB, check_call, mx_uint, py_str
+
+
+def _feature_names_available():
+    """
+    :return: list of feature names compiled into the native library
+    """
+    feature_list = ctypes.POINTER(ctypes.c_char_p)()
+    feature_list_sz = ctypes.c_size_t()
+    check_call(_LIB.MXRuntimeFeatureList(ctypes.byref(feature_list_sz), ctypes.byref(feature_list)))
+    feature_names = []
+    for i in range(feature_list_sz.value):
+        feature_names.append(py_str(feature_list[i]))
+    return feature_names
+
+Feature = enum.Enum('Feature', {name: index for index, name in enumerate(_feature_names_available())})
+
+def features_available():
 
 Review comment:
   What's wrong with having a single method that returns the complete list of 
features as strings, the mask number, and a binary state indicator for each 
feature on whether it is enabled?
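The single-method shape suggested above can be sketched without the native library. This is an illustration only, not runtime.py's actual API: `_feature_names_available` is replaced by a fixed name list standing in for the `MXRuntimeFeatureList` ctypes call, and the `enabled` parameter stands in for the native `MXRuntimeHasFeature` query.

```python
import enum

def _feature_names_available():
    # Stand-in for the MXRuntimeFeatureList ctypes call: a fixed name list.
    return ["CUDA", "CUDNN", "MKLDNN", "OPENMP"]

# Same construction as in runtime.py: an Enum keyed by feature name.
Feature = enum.Enum(
    "Feature",
    {name: index for index, name in enumerate(_feature_names_available())})

def features_available(enabled=("CUDA", "OPENMP")):
    # One method returning every feature name with its enabled state.
    return {f.name: f.name in enabled for f in Feature}
```

Building the Enum from the name list keeps feature lookups type-checked (a typo raises `AttributeError` on `Feature.X`), while the returned dict gives the per-feature enabled indicator in one call.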




[GitHub] sandeep-krishnamurthy commented on issue #12781: Fixed issue #12745

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on issue #12781: Fixed issue #12745
URL: https://github.com/apache/incubator-mxnet/pull/12781#issuecomment-457783989
 
 
   @LuckyPigeon - When you update, please let me know, we can take this to 
completion.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy merged pull request #12983: Sample python bilinear initializer at integral points in y-direction

2019-01-25 Thread GitBox
sandeep-krishnamurthy merged pull request #12983: Sample python bilinear 
initializer at integral points in y-direction
URL: https://github.com/apache/incubator-mxnet/pull/12983
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Sample python bilinear initializer at integral points in y-direction (#12983)

2019-01-25 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 28c20fb  Sample python bilinear initializer at integral points in 
y-direction (#12983)
28c20fb is described below

commit 28c20fb6c1ad66548336c817d07031ccbbccf435
Author: vlado 
AuthorDate: Fri Jan 25 17:54:10 2019 -0700

Sample python bilinear initializer at integral points in y-direction 
(#12983)

* Sample python bilinear initializer at integral points in y-direction

* Add unit test for bilinear initializer
---
 python/mxnet/initializer.py| 4 ++--
 tests/python/unittest/test_init.py | 9 +
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/python/mxnet/initializer.py b/python/mxnet/initializer.py
index b67ab62..611592a 100755
--- a/python/mxnet/initializer.py
+++ b/python/mxnet/initializer.py
@@ -217,7 +217,7 @@ class Initializer(object):
         c = (2 * f - 1 - f % 2) / (2. * f)
         for i in range(np.prod(shape)):
             x = i % shape[3]
-            y = (i / shape[3]) % shape[2]
+            y = (i // shape[3]) % shape[2]
             weight[i] = (1 - abs(x / f - c)) * (1 - abs(y / f - c))
         arr[:] = weight.reshape(shape)
 
@@ -657,7 +657,7 @@ class Bilinear(Initializer):
         c = (2 * f - 1 - f % 2) / (2. * f)
         for i in range(np.prod(shape)):
             x = i % shape[3]
-            y = (i / shape[3]) % shape[2]
+            y = (i // shape[3]) % shape[2]
             weight[i] = (1 - abs(x / f - c)) * (1 - abs(y / f - c))
         arr[:] = weight.reshape(shape)
 
diff --git a/tests/python/unittest/test_init.py b/tests/python/unittest/test_init.py
index efd6ef3..c8bf01f 100644
--- a/tests/python/unittest/test_init.py
+++ b/tests/python/unittest/test_init.py
@@ -60,8 +60,17 @@ def test_rsp_const_init():
     check_rsp_const_init(mx.initializer.Zero(), 0.)
     check_rsp_const_init(mx.initializer.One(), 1.)
 
+def test_bilinear_init():
+    bili = mx.init.Bilinear()
+    bili_weight = mx.ndarray.empty((1,1,4,4))
+    bili._init_weight(None, bili_weight)
+    bili_1d = np.array([[1/float(4), 3/float(4), 3/float(4), 1/float(4)]])
+    bili_2d = bili_1d * np.transpose(bili_1d)
+    assert (bili_2d == bili_weight.asnumpy()).all()
+
 if __name__ == '__main__':
     test_variable_init()
     test_default_init()
     test_aux_init()
     test_rsp_const_init()
+    test_bilinear_init()
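The effect of this one-character fix can be checked standalone with NumPy. Under
Python 3, `/` is true division, so the old expression produced a float row index
and wrong weights; floor division recovers the 4x4 bilinear kernel the new unit
test expects. The loop below mirrors the initializer code in the diff, with `f`
computed as the ceiling of half the width (that line is outside the diff
context, so this is an assumption about the surrounding code).

```python
import numpy as np

shape = (1, 1, 4, 4)                    # (N, C, H, W), as in the unit test
weight = np.zeros(int(np.prod(shape)), dtype='float32')
f = np.ceil(shape[3] / 2.)              # assumed definition of f; here f = 2
c = (2 * f - 1 - f % 2) / (2. * f)      # c = 0.75
for i in range(int(np.prod(shape))):
    x = i % shape[3]
    y = (i // shape[3]) % shape[2]      # floor division keeps y an integer
    weight[i] = (1 - abs(x / f - c)) * (1 - abs(y / f - c))
kernel = weight.reshape(shape)[0, 0]

# Outer product of the 1-D bilinear profile, as in the new unit test.
bili_1d = np.array([[0.25, 0.75, 0.75, 0.25]])
assert np.allclose(kernel, bili_1d * bili_1d.T)
```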



[GitHub] sandeep-krishnamurthy commented on issue #13226: [MXNet-1211] Factor and "Like" modes in BilinearResize2D operator

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on issue #13226: [MXNet-1211] Factor and "Like" 
modes in BilinearResize2D operator
URL: https://github.com/apache/incubator-mxnet/pull/13226#issuecomment-457784651
 
 
   @apeforest - Can you please help review this PR?
   @lobanov-m - Can you please rebase? Thanks for your contributions


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: [MXNET-703] Minor refactor of TensorRT code (#13311)

2019-01-25 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 3df0917  [MXNET-703] Minor refactor of TensorRT code (#13311)
3df0917 is described below

commit 3df091705587eb83991dd9346a230085b585a2cc
Author: Kellen Sunderland 
AuthorDate: Fri Jan 25 16:59:21 2019 -0800

[MXNET-703] Minor refactor of TensorRT code (#13311)
---
 src/executor/onnx_to_tensorrt.cc|  4 ++--
 src/executor/trt_graph_executor.cc  |  7 +++
 src/operator/contrib/nnvm_to_onnx-inl.h | 14 +++---
 src/operator/contrib/nnvm_to_onnx.cc|  4 ++--
 4 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/src/executor/onnx_to_tensorrt.cc b/src/executor/onnx_to_tensorrt.cc
index c37b856..f7fbc8f 100644
--- a/src/executor/onnx_to_tensorrt.cc
+++ b/src/executor/onnx_to_tensorrt.cc
@@ -100,8 +100,8 @@ nvinfer1::ICudaEngine* onnxToTrtCtx(
   }
 
   if ( !trt_parser->parse(onnx_model.c_str(), onnx_model.size()) ) {
-  int nerror = trt_parser->getNbErrors();
-  for ( int i=0; i < nerror; ++i ) {
+  size_t nerror = trt_parser->getNbErrors();
+  for ( size_t i=0; i < nerror; ++i ) {
 nvonnxparser::IParserError const* error = trt_parser->getError(i);
 if ( error->node() != -1 ) {
   ::ONNX_NAMESPACE::NodeProto const& node =
diff --git a/src/executor/trt_graph_executor.cc 
b/src/executor/trt_graph_executor.cc
index 92bdcab..85ce168 100644
--- a/src/executor/trt_graph_executor.cc
+++ b/src/executor/trt_graph_executor.cc
@@ -133,7 +133,7 @@ void TrtGraphExecutor::Init(nnvm::Symbol symbol,
   }
 
   auto trt_groups = GetTrtCompatibleSubsets(g, shared_buffer);
-  for (auto trt_group : trt_groups) {
+  for (const auto &trt_group : trt_groups) {
 if (trt_group.size() > 1) {
   g = ReplaceSubgraph(std::move(g), trt_group, shared_buffer);
   g = ReinitGraph(std::move(g), default_ctx, ctx_map, in_arg_ctxes, 
arg_grad_ctxes,
@@ -142,7 +142,6 @@ void TrtGraphExecutor::Init(nnvm::Symbol symbol,
 }
   }
 
-
   InitArguments(g.indexed_graph(), g.GetAttr("shape"),
 g.GetAttr("dtype"),
 g.GetAttr("storage_type"),
@@ -188,7 +187,7 @@ void TrtGraphExecutor::InitArguments(const 
nnvm::IndexedGraph& idx,
 const uint32_t eid = idx.entry_id(nid, 0);
 const TShape& inferred_shape = inferred_shapes[eid];
 const int inferred_dtype = inferred_dtypes[eid];
-const NDArrayStorageType inferred_stype = (NDArrayStorageType) 
inferred_stypes[eid];
+const auto inferred_stype = (NDArrayStorageType) inferred_stypes[eid];
 const std::string& arg_name = idx[nid].source->attrs.name;
 // aux_states
 if (mutable_nodes.count(nid)) {
@@ -427,7 +426,7 @@ Executor *TrtGraphExecutor::TensorRTBind(nnvm::Symbol 
symbol,
  std::unordered_map *shared_buffer,
  Executor *shared_exec) {
   auto exec = new exec::TrtGraphExecutor();
-  exec->Init(symbol, default_ctx, group2ctx,
+  exec->Init(std::move(symbol), default_ctx, group2ctx,
  in_arg_ctxes, arg_grad_ctxes, aux_state_ctxes,
  arg_shape_map, arg_dtype_map, arg_stype_map,
  grad_req_types, param_names,
diff --git a/src/operator/contrib/nnvm_to_onnx-inl.h 
b/src/operator/contrib/nnvm_to_onnx-inl.h
index e0c4d93..0994f7e 100644
--- a/src/operator/contrib/nnvm_to_onnx-inl.h
+++ b/src/operator/contrib/nnvm_to_onnx-inl.h
@@ -70,7 +70,7 @@ struct ONNXParam : public dmlc::Parameter<ONNXParam> {
   nnvm_to_onnx::InferenceMap_t output_map;
   ::onnx::ModelProto onnx_pb_graph;
 
-  ONNXParam() {}
+  ONNXParam() = default;
 
   ONNXParam(const ::onnx::ModelProto& onnx_graph,
const nnvm_to_onnx::InferenceMap_t& input_map,
@@ -104,14 +104,14 @@ std::unordered_map 
GetOutputLookup(const nnvm::IndexedGra
 void ConvertPlaceholder(
   const std::string& node_name,
   const std::unordered_map& placeholder_shapes,
-  GraphProto* const graph_proto);
+  GraphProto* graph_proto);
 
-void ConvertConstant(GraphProto* const graph_proto,
+void ConvertConstant(GraphProto* graph_proto,
   const std::string& node_name,
-  std::unordered_map* const shared_buffer);
+  std::unordered_map* shared_buffer);
 
-void ConvertOutput(op::nnvm_to_onnx::InferenceMap_t* const trt_output_map,
-   GraphProto* const graph_proto,
+void ConvertOutput(op::nnvm_to_onnx::InferenceMap_t* trt_output_map,
+   GraphProto* graph_proto,
const std::unordered_map::iterator& 
out_iter,
const std::string& node_name,
const nnvm::Graph& g,
@@ -169,7 +169,7 @@ void ConvertElementwiseAdd(NodeProto *node_proto,
 
 ONNXParam ConvertNnvmGraphToOnnx(
 const nnvm::Graph ,
-std::unordered_map *const shared_buffer);
+std::unordered_map* shared_buffer);
 
 

[GitHub] sandeep-krishnamurthy merged pull request #13311: [MXNET-703] Minor refactor of TensorRT code

2019-01-25 Thread GitBox
sandeep-krishnamurthy merged pull request #13311: [MXNET-703] Minor refactor of 
TensorRT code
URL: https://github.com/apache/incubator-mxnet/pull/13311
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zachgk opened a new pull request #13995: [MXNET-1287][WIP] Scala compiler warnings

2019-01-25 Thread GitBox
zachgk opened a new pull request #13995: [MXNET-1287][WIP] Scala compiler 
warnings
URL: https://github.com/apache/incubator-mxnet/pull/13995
 
 
   ## Description ##
   Address and resolve Scala compiler warnings
   
   @lanking520 @andrewfayres @piyushghai 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Scala version mismatch warnings addressed where possible (spark has 
inconsistent versioning which can't be addressed without upgrading to new 
breaking major version)
   - [x] Upgrade scala compiler plugin
   - [x] Enable flag to expand and address feature warnings
   - [x] Enable flag to expand and address deprecation warnings
 - [x] Warnings due to deprecation of provideLabel and provideData
 - [ ] Warnings due to use of Any in macros
   - [x] assembly plugin warnings


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zachgk commented on issue #13995: [MXNET-1287][WIP] Scala compiler warnings

2019-01-25 Thread GitBox
zachgk commented on issue #13995: [MXNET-1287][WIP] Scala compiler warnings
URL: https://github.com/apache/incubator-mxnet/pull/13995#issuecomment-457785109
 
 
   @mxnet-label-bot add [Scala, Maven]


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yuyijie1995 opened a new issue #13996: Make error

2019-01-25 Thread GitBox
yuyijie1995 opened a new issue #13996: Make error 
URL: https://github.com/apache/incubator-mxnet/issues/13996
 
 
   When I try to build the cpp package, the error below occurs.
   g++ -std=c++11 -DMSHADOW_FORCE_STREAM -Wall -Wsign-compare -O3 -DNDEBUG=1 
-I/home/users/yijie.yu/yuyijie/mxnet/3rdparty/mshadow/ 
-I/home/users/yijie.yu/yuyijie/mxnet/3rdparty/dmlc-core/include -fPIC 
-I/home/users/yijie.yu/yuyijie/mxnet/3rdparty/tvm/nnvm/include 
-I/home/users/yijie.yu/yuyijie/mxnet/3rdparty/dlpack/include 
-I/home/users/yijie.yu/yuyijie/mxnet/3rdparty/tvm/include -Iinclude 
-funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas 
-Wno-unused-local-typedefs -msse3 -mf16c -I/usr/local/cuda-8.0/include 
-DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 
-I/home/users/yijie.yu/yuyijie/mxnet/3rdparty/mkldnn/build/install/include 
-DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMSHADOW_USE_PASCAL=0 
-DMXNET_USE_MKLDNN=1 -DUSE_MKL=1 
-I/home/users/yijie.yu/yuyijie/mxnet/src/operator/nn/mkldnn/ 
-I/home/users/yijie.yu/yuyijie/mxnet/3rdparty/mkldnn/build/install/include 
-DMXNET_USE_OPENCV=1 -I/usr/include/opencv   -fopenmp 
-DMXNET_USE_OPERATOR_TUNING=1 -DMSHADOW_USE_CUDNN=1 -fno-builtin-malloc 
-fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free  
-I/usr/include/openblas/ -I/home/users/yijie.yu/yuyijie/mxnet/3rdparty/cub 
-DMXNET_ENABLE_CUDA_RTC=1 -DMXNET_USE_NCCL=0 -DMXNET_USE_LIBJPEG_TURBO=0 
-Icpp-package/include -o build/cpp-package/example/lenet_with_mxdataiter 
cpp-package/example/lenet_with_mxdataiter.cpp -pthread -lm -lcudart -lcublas 
-lcurand -lcusolver -L/usr/local/cuda-8.0/lib64 -L/usr/local/cuda-8.0/lib 
-Wl,--as-needed -lmklml_intel -lmklml_gnu -liomp5 
-L/home/users/yijie.yu/yuyijie/mxnet/3rdparty/mkldnn/build/install/lib/ 
-lopenblas -fopenmp -lrt -L/usr/local/hadoop-2.7.2/lib/native -lhdfs 
-L/usr/lib/jvm/java-1.7.0/jre/lib/amd64/server -ljvm 
-Wl,-rpath=/usr/lib/jvm/java-1.7.0/jre/lib/amd64/server 
-L/home/users/yijie.yu/yuyijie/mxnet/3rdparty/mkldnn/build/install/lib -lmkldnn 
-Wl,-rpath,'${ORIGIN}' -lopencv_calib3d -lopencv_contrib -lopencv_core 
-lopencv_features2d -lopencv_flann -lopencv_highgui -lopencv_imgproc 
-lopencv_legacy -lopencv_ml -lopencv_objdetect -lopencv_photo 
-lopencv_stitching -lopencv_superres -lopencv_video -lopencv_videostab -lcudnn 
/usr/lib64/libtcmalloc.so  -lcufft -lcuda -lnvrtc -L/usr/local/cuda/lib64/stubs 
-L/home/users/yijie.yu/yuyijie/mxnet/lib -lmxnet
   make: *** [build/cpp-package/example/mlp_csv] Error 1
   make: *** Waiting for unfinished jobs
   But I still got the MxnetCpp.h file. How can I solve this, or should I just ignore it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #13679: add crop gluon transform

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13679: add crop 
gluon transform
URL: https://github.com/apache/incubator-mxnet/pull/13679#discussion_r251179545
 
 

 ##
 File path: python/mxnet/gluon/data/vision/transforms.py
 ##
 @@ -168,6 +168,57 @@ def hybrid_forward(self, F, x):
         return F.image.normalize(x, self._mean, self._std)
 
 
+class Crop(HybridBlock):
+    """Crop the input image and optionally resize it.
+
+    Makes a crop of the original image, then optionally resizes it to the
+    specified size.
+
+    Parameters
+    ----------
+    x0 : int
+        Left boundary of the cropping area
+    y0 : int
+        Top boundary of the cropping area
+    w : int
+        Width of the cropping area
+    h : int
+        Height of the cropping area
+    size : int or tuple of (w, h)
+        Optional, resize to new size after cropping
+    interp : int, optional
+        Optional, interpolation method. See opencv for details.
+
+    Inputs:
+        - **data**: input tensor with (H x W x C) or (N x H x W x C) shape.
+
+    Outputs:
+        - **out**: output tensor with (H x W x C) or (N x H x W x C) shape.
+
+    Examples
+    --------
+    >>> transformer = vision.transforms.Crop(0, 0, 100, 100)
+    >>> image = mx.nd.random.uniform(0, 255, (224, 224, 3)).astype(dtype=np.uint8)
+    >>> transformer(image)
+
+    >>> image = mx.nd.random.uniform(0, 255, (3, 224, 224, 3)).astype(dtype=np.uint8)
+
+    >>> transformer = vision.transforms.Crop(0, 0, 100, 100, (50, 50), 1)
+    >>> transformer(image)
+
+    """
+    def __init__(self, x0, y0, width, height, size=None, interpolation=None):
+        super(Crop, self).__init__()
+        self._x0 = x0
+        self._y0 = y0
+        self._width = width
+        self._height = height
+        self._size = size
+        self._interpolation = interpolation
+
+    def hybrid_forward(self, F, x):
+        return F.image.crop(x, self._x0, self._y0, self._width, self._height,
+                            self._size, self._interpolation)
 
 Review comment:
   @stu1130 - I think this is a good suggestion. What do you suggest?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-01-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new bfc28e7  Bump the publish timestamp.
bfc28e7 is described below

commit bfc28e7e11a45ae39dd03fcc5314fb4d15eeae70
Author: mxnet-ci 
AuthorDate: Sat Jan 26 01:02:25 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..2f38359
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Jan 26 01:02:25 UTC 2019



[GitHub] mxnet-label-bot commented on issue #13996: Make error

2019-01-25 Thread GitBox
mxnet-label-bot commented on issue #13996: Make error 
URL: 
https://github.com/apache/incubator-mxnet/issues/13996#issuecomment-457785656
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Installation, Build


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy closed pull request #13918: Create test

2019-01-25 Thread GitBox
sandeep-krishnamurthy closed pull request #13918: Create test
URL: https://github.com/apache/incubator-mxnet/pull/13918
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on issue #13918: Create test

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on issue #13918: Create test
URL: https://github.com/apache/incubator-mxnet/pull/13918#issuecomment-457785871
 
 
   @gzherb - Please reopen if you have a specific reason for this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on issue #13884: [CI][URGENT] Fixes for docker cache generation

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on issue #13884: [CI][URGENT] Fixes for docker 
cache generation
URL: https://github.com/apache/incubator-mxnet/pull/13884#issuecomment-457785970
 
 
   @marcoabreu  @larroy - What is the next step here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on issue #13906: [MXNET-703] Update onnx-tensorrt for fp16 support

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on issue #13906: [MXNET-703] Update 
onnx-tensorrt for fp16 support
URL: https://github.com/apache/incubator-mxnet/pull/13906#issuecomment-457786110
 
 
   @Roshrini @vandanavk @marcoabreu - Can you please take a look at this PR?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on issue #13923: Cannot install MXNet R (cannot open URL)

2019-01-25 Thread GitBox
ChaiBapchya commented on issue #13923: Cannot install MXNet R (cannot open URL)
URL: 
https://github.com/apache/incubator-mxnet/issues/13923#issuecomment-457787971
 
 
   @Kurokokoro Great to know it's working for you. Should we close this issue in 
that case?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yuyijie1995 commented on issue #13996: Make error

2019-01-25 Thread GitBox
yuyijie1995 commented on issue #13996: Make error 
URL: 
https://github.com/apache/incubator-mxnet/issues/13996#issuecomment-457788975
 
 
   When I #include "mxnet-cpp/MxNetCpp.h", this error happens: fatal 
error: mxnet/c_api.h: No such file or directory...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yuyijie1995 commented on issue #9763: Too many header files need to be included when using C++ api

2019-01-25 Thread GitBox
yuyijie1995 commented on issue #9763: Too many header files need to be included 
when using C++ api
URL: 
https://github.com/apache/incubator-mxnet/issues/9763#issuecomment-457789469
 
 
   @meanmee  I met the same problem as you. Have you solved it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yuyijie1995 commented on issue #9763: Too many header files need to be included when using C++ api

2019-01-25 Thread GitBox
yuyijie1995 commented on issue #9763: Too many header files need to be included 
when using C++ api
URL: 
https://github.com/apache/incubator-mxnet/issues/9763#issuecomment-457789914
 
 
   @nicklhy  Hi, I have the same problem as you, but I do not understand 
“Thus, I have to add a lot more dirs to avoid the similar errors:”. I mean, 
where should I add this information? I am still confused about it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya closed pull request #13855: Fix symbolic example for rand_zipfian function

2019-01-25 Thread GitBox
ChaiBapchya closed pull request #13855: Fix symbolic example for rand_zipfian 
function
URL: https://github.com/apache/incubator-mxnet/pull/13855
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on issue #12440: Add stable nrm2 for L2 normalization

2019-01-25 Thread GitBox
sandeep-krishnamurthy commented on issue #12440: Add stable nrm2 for L2 
normalization
URL: https://github.com/apache/incubator-mxnet/pull/12440#issuecomment-457766584
 
 
   @TD - Can you please take a look at @ZhennanQin's comment? I think we are 
closer to getting this merged. Thanks for your contributions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on issue #13965: mxnet_coreml_converter unsupported type error

2019-01-25 Thread GitBox
ChaiBapchya commented on issue #13965: mxnet_coreml_converter unsupported type 
error
URL: 
https://github.com/apache/incubator-mxnet/issues/13965#issuecomment-457773097
 
 
   @matthewberryman 
   Upon digging in the file 
https://github.com/apache/incubator-mxnet/blob/master/tools/coreml/converter/_mxnet_converter.py
   
   I found that the most likely reason it's giving `unsupported type error` 
is that your __image-classification-symbol.json__ mentions
   ```
   "op" : "_copy"
   ```
   
   Your error is thrown because of this line 
   
https://github.com/apache/incubator-mxnet/blob/5dc138d95f6ac302e3f0e1c9dc9dcb774d83f69e/tools/coreml/converter/_mxnet_converter.py#L98
   
   which is triggered when the layer variable isn't part of the MXNet layer 
registry (i.e. permissible layers)
   
https://github.com/apache/incubator-mxnet/blob/5dc138d95f6ac302e3f0e1c9dc9dcb774d83f69e/tools/coreml/converter/_mxnet_converter.py#L95
   
   Permissible layers are
   
https://github.com/apache/incubator-mxnet/blob/5dc138d95f6ac302e3f0e1c9dc9dcb774d83f69e/tools/coreml/converter/_mxnet_converter.py#L28
   ```
   'FullyConnected' : _layers.convert_dense,
   'Activation' : _layers.convert_activation,
   'SoftmaxOutput'  : _layers.convert_softmax,
   'Convolution': _layers.convert_convolution,
   'Pooling': _layers.convert_pooling,
   'Flatten': _layers.convert_flatten,
   'transpose'  : _layers.convert_transpose,
   'Concat' : _layers.convert_concat,
   'BatchNorm'  : _layers.convert_batchnorm,
   'elemwise_add'   : _layers.convert_elementwise_add,
   'Reshape': _layers.convert_reshape,
   'Deconvolution'  : _layers.convert_deconvolution,
   ```
   
   Hope this helps.
   
   Solution: use only the permitted layers listed above.
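   A quick way to check a model before converting is to scan its symbol JSON 
for ops outside that registry. The helper below is hypothetical (not part of 
the converter); `'null'` is added to the allowed set because symbol JSON uses 
it for input and weight variables.

   ```python
   import json

   # Ops the converter handles (from the registry above), plus 'null',
   # which symbol JSON uses for input/weight variables.
   SUPPORTED = {'FullyConnected', 'Activation', 'SoftmaxOutput', 'Convolution',
                'Pooling', 'Flatten', 'transpose', 'Concat', 'BatchNorm',
                'elemwise_add', 'Reshape', 'Deconvolution', 'null'}

   def unsupported_ops(symbol_json):
       """Return the sorted list of ops in a symbol JSON string that the
       CoreML converter would reject."""
       nodes = json.loads(symbol_json)['nodes']
       return sorted({node['op'] for node in nodes} - SUPPORTED)

   # A toy symbol containing the problematic '_copy' op:
   toy = json.dumps({'nodes': [{'op': 'null'}, {'op': 'Convolution'},
                               {'op': '_copy'}]})
   print(unsupported_ops(toy))  # ['_copy']
   ```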


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini merged pull request #12399: ONNX export: Add Crop, Deconvolution and fix the default stride of Pooling to 1

2019-01-25 Thread GitBox
Roshrini merged pull request #12399: ONNX export: Add Crop, Deconvolution and 
fix the default stride of Pooling to 1
URL: https://github.com/apache/incubator-mxnet/pull/12399
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini commented on issue #12399: ONNX export: Add Crop, Deconvolution and fix the default stride of Pooling to 1

2019-01-25 Thread GitBox
Roshrini commented on issue #12399: ONNX export: Add Crop, Deconvolution and 
fix the default stride of Pooling to 1
URL: https://github.com/apache/incubator-mxnet/pull/12399#issuecomment-457776417
 
 
   Thank you for working on this ops @ptrendx :) I will add test case for crop 
in my PR as I am already adding tests for export ops in this PR: 
https://github.com/apache/incubator-mxnet/pull/13981


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini closed issue #12807: No conversion function registered for op type Deconvolution yet

2019-01-25 Thread GitBox
Roshrini closed issue #12807: No conversion function registered for op type 
Deconvolution yet
URL: https://github.com/apache/incubator-mxnet/issues/12807
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on issue #13944: how to Compute the eigenvalues and eigenvectors of ndarray/hidden layer?

2019-01-25 Thread GitBox
ChaiBapchya commented on issue #13944: how to Compute the eigenvalues and 
eigenvectors of ndarray/hidden layer? 
URL: 
https://github.com/apache/incubator-mxnet/issues/13944#issuecomment-457781413
 
 
   @weihua04 
   Our discussion forum is a good place for these questions. But regardless, 
here's the answer
   On our documentation - 
https://mxnet.incubator.apache.org/api/python/ndarray/linalg.html
   `mxnet.ndarray.linalg.syevd` seems to be the function for your usecase.
   
   Documentation on the website also gives the following 2 examples
   ```
   // Single symmetric eigendecomposition
   A = [[1., 2.], [2., 4.]]
   U, L = syevd(A)
   U = [[0.89442719, -0.4472136],
[0.4472136, 0.89442719]]
   L = [0., 5.]
   
   // Batch symmetric eigendecomposition
   A = [[[1., 2.], [2., 4.]],
[[1., 2.], [2., 5.]]]
   U, L = syevd(A)
   U = [[[0.89442719, -0.4472136],
 [0.4472136, 0.89442719]],
[[0.92387953, -0.38268343],
 [0.38268343, 0.92387953]]]
   L = [[0., 5.],
[0.17157288, 5.82842712]]
   ```
   
   Hope this helps. 




[GitHub] ChaiBapchya edited a comment on issue #13944: how to Compute the eigenvalues and eigenvectors of ndarray/hidden layer?

2019-01-25 Thread GitBox
ChaiBapchya edited a comment on issue #13944: how to Compute the eigenvalues 
and eigenvectors of ndarray/hidden layer? 
URL: 
https://github.com/apache/incubator-mxnet/issues/13944#issuecomment-457781413
 
 
   @weihua04 
   Our discussion forum is a good place for these questions, but regardless, 
here's the answer.
   In our documentation 
(https://mxnet.incubator.apache.org/api/python/ndarray/linalg.html), 
`mxnet.ndarray.linalg.syevd` seems to be the function for your use case.
   
   The documentation on the website also gives the following two examples:
   ```
   // Single symmetric eigendecomposition
   A = [[1., 2.], [2., 4.]]
   U, L = syevd(A)
   U = [[0.89442719, -0.4472136],
[0.4472136, 0.89442719]]
   L = [0., 5.]
   
   // Batch symmetric eigendecomposition
   A = [[[1., 2.], [2., 4.]],
[[1., 2.], [2., 5.]]]
   U, L = syevd(A)
   U = [[[0.89442719, -0.4472136],
 [0.4472136, 0.89442719]],
[[0.92387953, -0.38268343],
 [0.38268343, 0.92387953]]]
   L = [[0., 5.],
[0.17157288, 5.82842712]]
   ```
   where `U` is the orthonormal matrix of eigenvectors and `L` is the vector 
of eigenvalues.
   Hope this helps. 
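As a quick sanity check of the documented example, the same symmetric eigendecomposition can be reproduced with NumPy's `numpy.linalg.eigh` (a sketch, for readers without an MXNet install; note that MXNet's `syevd` returns eigenvectors as the rows of `U`, while NumPy returns them as columns):

```python
import numpy as np

# Symmetric matrix from the first documentation example above.
A = np.array([[1., 2.], [2., 4.]])

# eigh returns eigenvalues L in ascending order and the eigenvectors
# as the columns of U, so A == U @ diag(L) @ U.T for symmetric input.
L, U = np.linalg.eigh(A)

assert np.allclose(L, [0., 5.])                # matches L = [0., 5.] above
assert np.allclose(U @ np.diag(L) @ U.T, A)    # decomposition reconstructs A
```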




[GitHub] ChaiBapchya commented on issue #13855: Fix symbolic example for rand_zipfian function

2019-01-25 Thread GitBox
ChaiBapchya commented on issue #13855: Fix symbolic example for rand_zipfian 
function
URL: https://github.com/apache/incubator-mxnet/pull/13855#issuecomment-457760645
 
 
   Never mind. The other PR, #13978, got merged into the repo (despite being a 
duplicate of this PR). 
   Anyway, closing this PR. 




[GitHub] ChaiBapchya edited a comment on issue #13965: mxnet_coreml_converter unsupported type error

2019-01-25 Thread GitBox
ChaiBapchya edited a comment on issue #13965: mxnet_coreml_converter 
unsupported type error
URL: 
https://github.com/apache/incubator-mxnet/issues/13965#issuecomment-457773097
 
 
   @matthewberryman 
   
   _Short answer_ - every `op` value needs to name a layer that is in 
MXNET_LAYER_REGISTRY.
   
   Why?
   _Long answer_ -
   After digging into the file 
https://github.com/apache/incubator-mxnet/blob/master/tools/coreml/converter/_mxnet_converter.py
   
   I found the most likely reason it's giving the `unsupported type error`: 
your __image-classification-symbol.json__ mentions
   ```
   "op" : "_copy"
   ```
   
   Your error is thrown by this line:
   
https://github.com/apache/incubator-mxnet/blob/5dc138d95f6ac302e3f0e1c9dc9dcb774d83f69e/tools/coreml/converter/_mxnet_converter.py#L98
   
   which is triggered when the layer isn't part of the MXNet layer registry 
(i.e. the permissible layers):
   
https://github.com/apache/incubator-mxnet/blob/5dc138d95f6ac302e3f0e1c9dc9dcb774d83f69e/tools/coreml/converter/_mxnet_converter.py#L95
   
   Permissible layers are
   
https://github.com/apache/incubator-mxnet/blob/5dc138d95f6ac302e3f0e1c9dc9dcb774d83f69e/tools/coreml/converter/_mxnet_converter.py#L28
   ```
   'FullyConnected' : _layers.convert_dense,
   'Activation' : _layers.convert_activation,
   'SoftmaxOutput'  : _layers.convert_softmax,
   'Convolution': _layers.convert_convolution,
   'Pooling': _layers.convert_pooling,
   'Flatten': _layers.convert_flatten,
   'transpose'  : _layers.convert_transpose,
   'Concat' : _layers.convert_concat,
   'BatchNorm'  : _layers.convert_batchnorm,
   'elemwise_add'   : _layers.convert_elementwise_add,
   'Reshape': _layers.convert_reshape,
   'Deconvolution'  : _layers.convert_deconvolution,
   ```
   
   Solution - use only the permitted layers listed above.
   
   Hope this helps.
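   A minimal sketch of that check, using only the standard library (the helper name `unsupported_ops` is illustrative, not part of the converter): load the `*-symbol.json`, collect the distinct `op` values, and report any that the registry quoted above doesn't cover, before running the converter.

```python
import json

# Supported ops, mirroring the layer registry keys quoted above.
# 'null' is the placeholder op for inputs/weights and is skipped.
SUPPORTED = {
    'FullyConnected', 'Activation', 'SoftmaxOutput', 'Convolution',
    'Pooling', 'Flatten', 'transpose', 'Concat', 'BatchNorm',
    'elemwise_add', 'Reshape', 'Deconvolution',
}

def unsupported_ops(symbol_json_text):
    """Return the set of ops in an MXNet *-symbol.json that the
    CoreML converter does not know how to translate."""
    graph = json.loads(symbol_json_text)
    ops = {node['op'] for node in graph['nodes'] if node['op'] != 'null'}
    return ops - SUPPORTED

# Example: a tiny graph containing the offending "_copy" op.
example = json.dumps({'nodes': [
    {'op': 'null', 'name': 'data'},
    {'op': 'Convolution', 'name': 'conv0'},
    {'op': '_copy', 'name': 'copy0'},
]})
print(unsupported_ops(example))  # {'_copy'}
```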




[incubator-mxnet] branch master updated: ONNX export: Add Crop, Deconvolution and fix the default stride of Pooling to 1 (#12399)

2019-01-25 Thread roshrini
This is an automated email from the ASF dual-hosted git repository.

roshrini pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 25e915b  ONNX export: Add Crop, Deconvolution and fix the default stride of Pooling to 1 (#12399)
25e915b is described below

commit 25e915bd401f7ac4c639c935f775deccebec96d3
Author: Przemyslaw Tredak 
AuthorDate: Fri Jan 25 16:04:56 2019 -0800

ONNX export: Add Crop, Deconvolution and fix the default stride of Pooling to 1 (#12399)

* Added Deconvolution and Crop to ONNX exporter

* Added default for pool_type
---
 .../mxnet/contrib/onnx/mx2onnx/_op_translations.py | 66 +-
 tests/python-pytest/onnx/test_cases.py |  3 +-
 2 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py b/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
index 51deb4f..8e3c46d 100644
--- a/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
+++ b/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
@@ -219,6 +219,68 @@ def convert_convolution(node, **kwargs):
 return [conv_node]
 
 
+@mx_op.register("Deconvolution")
+def convert_deconvolution(node, **kwargs):
+"""Map MXNet's deconvolution operator attributes to onnx's ConvTranspose operator
+and return the created node.
+"""
+name, inputs, attrs = get_inputs(node, kwargs)
+
+kernel_dims = list(parse_helper(attrs, "kernel"))
+stride_dims = list(parse_helper(attrs, "stride", [1, 1]))
+pad_dims = list(parse_helper(attrs, "pad", [0, 0]))
+num_group = int(attrs.get("num_group", 1))
+dilations = list(parse_helper(attrs, "dilate", [1, 1]))
+adj_dims = list(parse_helper(attrs, "adj", [0, 0]))
+
+pad_dims = pad_dims + pad_dims
+
+deconv_node = onnx.helper.make_node(
+"ConvTranspose",
+inputs=inputs,
+outputs=[name],
+kernel_shape=kernel_dims,
+strides=stride_dims,
+dilations=dilations,
+output_padding=adj_dims,
+pads=pad_dims,
+group=num_group,
+name=name
+)
+
+return [deconv_node]
+
+
+@mx_op.register("Crop")
+def convert_crop(node, **kwargs):
+"""Map MXNet's crop operator attributes to onnx's Crop operator
+and return the created node.
+"""
+name, inputs, attrs = get_inputs(node, kwargs)
+num_inputs = len(inputs)
+
+y, x = list(parse_helper(attrs, "offset", [0, 0]))
+h, w = list(parse_helper(attrs, "h_w", [0, 0]))
+if num_inputs > 1:
+h, w = kwargs["out_shape"][-2:]
+border = [x, y, x + w, y + h]
+
+crop_node = onnx.helper.make_node(
+"Crop",
+inputs=[inputs[0]],
+outputs=[name],
+border=border,
+scale=[1, 1],
+name=name
+)
+
+logging.warning(
+"Using an experimental ONNX operator: Crop. " \
+"Its definition can change.")
+
+return [crop_node]
+
+
 @mx_op.register("FullyConnected")
 def convert_fully_connected(node, **kwargs):
 """Map MXNet's FullyConnected operator attributes to onnx's Gemm operator
@@ -583,8 +645,8 @@ def convert_pooling(node, **kwargs):
 name, input_nodes, attrs = get_inputs(node, kwargs)
 
 kernel = eval(attrs["kernel"])
-pool_type = attrs["pool_type"]
-stride = eval(attrs["stride"]) if attrs.get("stride") else None
+pool_type = attrs["pool_type"] if attrs.get("pool_type") else "max"
+stride = eval(attrs["stride"]) if attrs.get("stride") else (1, 1)
 global_pool = get_boolean_attribute_value(attrs, "global_pool")
 p_value = attrs.get('p_value', 'None')
 
diff --git a/tests/python-pytest/onnx/test_cases.py b/tests/python-pytest/onnx/test_cases.py
index 6ec3709..b20db23 100644
--- a/tests/python-pytest/onnx/test_cases.py
+++ b/tests/python-pytest/onnx/test_cases.py
@@ -113,7 +113,8 @@ BASIC_MODEL_TESTS = {
  'test_Softmax',
  'test_softmax_functional',
  'test_softmax_lastdim',
- ]
+ ],
+'export': ['test_ConvTranspose2d']
 }
 
 STANDARD_MODEL = {
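Two small conventions in the converter above, sketched in plain Python (the helper names are illustrative, not from the patch): `pad_dims + pad_dims` turns MXNet's per-axis symmetric padding into ONNX's `[begin..., end...]` layout, and the Crop `border` packs offset and size into `[x1, y1, x2, y2]`.

```python
def onnx_pads(pad):
    # MXNet stores one symmetric pad per spatial axis, e.g. (ph, pw);
    # ONNX ConvTranspose expects begin and end pads concatenated,
    # i.e. [ph, pw, ph, pw], hence pad_dims + pad_dims in the patch.
    pad = list(pad)
    return pad + pad

def crop_border(offset, h_w):
    # offset is (y, x) and h_w is (h, w), as parsed in convert_crop;
    # the ONNX Crop border is [x1, y1, x2, y2] = left, top, right, bottom.
    y, x = offset
    h, w = h_w
    return [x, y, x + w, y + h]

print(onnx_pads((1, 2)))                 # [1, 2, 1, 2]
print(crop_border((0, 0), (224, 224)))   # [0, 0, 224, 224]
```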



[GitHub] zboldyga commented on issue #13227: Missing return value documentation for nd.random.* and sym.random.*

2019-01-25 Thread GitBox
zboldyga commented on issue #13227: Missing return value documentation for 
nd.random.* and sym.random.*
URL: 
https://github.com/apache/incubator-mxnet/issues/13227#issuecomment-457782632
 
 
   Just filed it: https://github.com/apache/incubator-mxnet/pull/13994  




[GitHub] zboldyga opened a new pull request #13994: Return value docs for nd.random.* and sym.random.*

2019-01-25 Thread GitBox
zboldyga opened a new pull request #13994:  Return value docs for nd.random.* 
and sym.random.*
URL: https://github.com/apache/incubator-mxnet/pull/13994
 
 
   ## Description ##
   Documented return values & types for nd.random.* and sym.random.*
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   **NO JIRA, PR CHANGE IS TINY (DOCUMENTATION ADDITIONS ONLY)**
   - [YES ] Changes are complete (i.e. I finished coding on this PR)
   
   ### Changes ###
   - nd.random.* , documentation for return types
   - sym.random.* , documentation for return types
   
   ## Comments ##
   Based on other parts of the Python API documentation as of V 1.3.1, I wasn't 
100% clear on how detailed return type documentation should be. I also wasn't 
sure of the preferred terms to use for certain aspects of the MXNet Symbol 
library. For instance, is saying that a Symbol 'resolves' to shape (m,n,x,y) an 
accurate description? Given the complexity of some of the .random.* APIs, it 
seems useful to get into the details of the structure of the output. 
   
   Happy to make changes if someone more central to the project has preferences!
   
   Also, I was not able to build the docs on OSX or Ubuntu; this seemed to be a 
general problem with any version of the MXNet project. I'm going to lean on the 
CI tools for the moment.
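   For reference, a numpydoc-style `Returns` section of the kind this PR adds looks roughly like this (the function signature and body here are a stand-in for illustration, not MXNet's implementation):

```python
def uniform(low=0, high=1, shape=None):
    """Draw random samples from a uniform distribution.

    Returns
    -------
    NDArray
        An NDArray of shape `shape` whose entries are drawn
        uniformly from the half-open interval [low, high).
    """
    raise NotImplementedError  # documentation sketch only
```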
   
   
   



