[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-11 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344615755
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -2294,6 +2294,26 @@ def hybrid_forward(self, F, x):
 assert_almost_equal(mx_ret.asnumpy(), np_ret, atol=1e-4, rtol=1e-3, use_broadcast=False)
 
 
+@with_seed()
+@use_np
+def test_npx_random_bernoulli():
+    shapes = [(), (1,), (2, 3), (4, 0, 5), 6, (7, 8), None]
+    dtypes = ['float16', 'float32', 'float64', 'int32', 'bool']
+    epsilon = 1e-4
+    for shape, dtype in itertools.product(shapes, dtypes):
+        prob = np.random.uniform(size=shape)
+        logit = np.log(prob) - np.log(1 - prob)
+        expected_shape = shape
+        if not isinstance(shape, tuple):
+            expected_shape = () if shape is None else (shape,)
+        out_prob = npx.random.bernoulli(prob=prob, size=shape, dtype=dtype)
+        assert out_prob.shape == expected_shape
+        assert int((out_prob.asnumpy() == 0).sum() + (out_prob.asnumpy() == 1).sum()) == out_prob.size
+        out_logit = npx.random.bernoulli(logit=logit, size=shape, dtype=dtype)
+        assert out_logit.shape == expected_shape
+        assert int((out_logit.asnumpy() == 0).sum() + (out_logit.asnumpy() == 1).sum()) == out_logit.size
 
 Review comment:
   Sure, I will add a corresponding test case.
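   Purely as an illustration (the follow-up test itself is not shown in this thread), a check of the mutually exclusive `prob`/`logit` arguments could look like the sketch below. It assumes the `ValueError` check shown later in this thread for the symbol wrapper also applies to the user-facing `npx.random.bernoulli` call.

```python
from mxnet import np, npx
npx.set_np()

prob = np.random.uniform(size=(2, 2))
logit = np.log(prob) - np.log(1 - prob)

# Passing neither or both parameterizations should raise ValueError
# (hypothetical test sketch, not the test that was actually added).
for kwargs in ({'prob': None, 'logit': None},
               {'prob': prob, 'logit': logit}):
    try:
        npx.random.bernoulli(**kwargs)
        raise AssertionError('expected ValueError')
    except ValueError:
        pass
```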


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-11 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344610223
 
 

 ##
 File path: src/operator/numpy/random/np_bernoulli_op.cc
 ##
 @@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_bernoulli_op.cc
+ * \brief Operator for numpy sampling from bernoulli distributions
+ */
+
+#include "./np_bernoulli_op.h"
+#include "./dist_common.h"
+
+namespace mxnet {
+namespace op {
+
+DMLC_REGISTER_PARAMETER(NumpyBernoulliParam);
+
+NNVM_REGISTER_OP(_npi_bernoulli)
+.describe("Sample from bernoulli distribution")
+.set_num_inputs(
+  [](const nnvm::NodeAttrs& attrs) {
+    const NumpyBernoulliParam& param = nnvm::get<NumpyBernoulliParam>(attrs.parsed);
+    int num_inputs = 1;
+    if (param.logit.has_value() || param.prob.has_value()) {
+      num_inputs -= 1;
+    }
+    return num_inputs;
+  }
+)
+.set_num_outputs(1)
+.set_attr<nnvm::FListInputNames>("FListInputNames",
+  [](const NodeAttrs& attrs) {
+    const NumpyBernoulliParam& param = nnvm::get<NumpyBernoulliParam>(attrs.parsed);
+    int num_inputs = 1;
+    if (param.logit.has_value() || param.prob.has_value()) {
+      num_inputs -= 1;
+    }
+    if (num_inputs == 0) return std::vector<std::string>();
+    return std::vector<std::string>{"input1"};
 
 Review comment:
   @sxjscience  Using one if-else would cause a lint error, if my memory serves.
   @haojin2 's solution is a feasible one; I will switch to the ternary expression form.




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-11 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344610882
 
 

 ##
 File path: src/operator/numpy/random/np_bernoulli_op.h
 ##
 @@ -0,0 +1,201 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_bernoulli_op.h
+ * \brief Operator for numpy sampling from bernoulli distribution.
+ */
+#ifndef MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+#define MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../../elemwise_op_common.h"
+#include "../../mshadow_op.h"
+#include "../../mxnet_op.h"
+#include "../../operator_common.h"
+#include "../../tensor/elemwise_binary_broadcast_op.h"
+#include "./dist_common.h"
+
+namespace mxnet {
+namespace op {
+
+struct NumpyBernoulliParam : public dmlc::Parameter<NumpyBernoulliParam> {
+  dmlc::optional<float> prob;
+  dmlc::optional<float> logit;
+  std::string ctx;
+  int dtype;
+  bool is_logit;
+  dmlc::optional<mxnet::Tuple<int>> size;
+  DMLC_DECLARE_PARAMETER(NumpyBernoulliParam) {
+    DMLC_DECLARE_FIELD(prob);
+    DMLC_DECLARE_FIELD(logit);
+    DMLC_DECLARE_FIELD(size)
+        .set_default(dmlc::optional<mxnet::Tuple<int>>())
+        .describe(
+            "Output shape. If the given shape is, "
+            "e.g., (m, n, k), then m * n * k samples are drawn. "
+            "Default is None, in which case a single value is returned.");
+    DMLC_DECLARE_FIELD(ctx).set_default("cpu").describe(
+        "Context of output, in format [cpu|gpu|cpu_pinned](n)."
+        " Only used for imperative calls.");
+    DMLC_DECLARE_FIELD(dtype)
+        .add_enum("uint8", mshadow::kUint8)
+        .add_enum("int32", mshadow::kInt32)
+        .add_enum("float32", mshadow::kFloat32)
+        .add_enum("float64", mshadow::kFloat64)
+        .add_enum("float16", mshadow::kFloat16)
+        .add_enum("bool", mshadow::kBool)
+        .set_default(mshadow::kFloat32)
+        .describe(
+            "DType of the output in case this can't be inferred. "
+            "Defaults to float32 if not defined (dtype=None).");
+    DMLC_DECLARE_FIELD(is_logit);
+  }
+};
+
+inline bool NumpyBernoulliOpType(const nnvm::NodeAttrs &attrs,
+                                 std::vector<int> *in_attrs,
+                                 std::vector<int> *out_attrs) {
+  const NumpyBernoulliParam &param = nnvm::get<NumpyBernoulliParam>(attrs.parsed);
+  int otype = param.dtype;
+  (*out_attrs)[0] = otype;
+  return true;
+}
+
+namespace mxnet_op {
+
+struct prob_to_logit {
+  MSHADOW_XINLINE static void Map(index_t i, float* uniforms) {
+    float prob = uniforms[i];
+    uniforms[i] = log(prob) - log(1 - prob);
+  }
+};
+
+template <int ndim, typename IType, typename OType>
+struct bernoulli_kernel {
+  MSHADOW_XINLINE static void Map(index_t i,
+                                  const Shape<ndim> &stride,
+                                  const Shape<ndim> &oshape,
+                                  IType *inputs, float *threshold, OType *out) {
+    Shape<ndim> coord = unravel(i, oshape);
+    auto idx = static_cast<index_t>(dot(coord, stride));
+    out[i] = inputs[idx] > threshold[i] ? OType(1) : OType(0);
+  }
+};
+
+template <typename OType>
+struct scalar_bernoulli_kernel {
+  MSHADOW_XINLINE static void Map(index_t i, float inputs, float *threshold,
+                                  OType *out) {
+    out[i] = inputs > threshold[i] ? OType(1) : OType(0);
+  }
+};
+
+template <typename IType>
+struct check_legal_prob_kernel {
+  MSHADOW_XINLINE static void Map(index_t i, IType *scalar, float* flag) {
+    if (scalar[i] < 0.0 || scalar[i] > 1.0) {
+      flag[0] = -1.0;
+    }
+  }
+};
+
+}  // namespace mxnet_op
+
+template <typename xpu>
+void NumpyBernoulliForward(const nnvm::NodeAttrs &attrs,
+                           const OpContext &ctx,
+                           const std::vector<TBlob> &inputs,
+                           const std::vector<OpReqType> &req,
+                           const std::vector<TBlob> &outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  const NumpyBernoulliParam &param = nnvm::get<NumpyBernoulliParam>(attrs.parsed);
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+  index_t output_len = outputs[0].Size();
+  Random<xpu, float> *prnd = ctx.requested[0].get_random<xpu, float>(s);
+  Tensor<xpu, 1, float> workspace =
+      ctx.requested[1].get_space_typed<xpu, 1, float>(Shape1(outp

[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-11 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344610709
 
 

 ##
 File path: src/operator/numpy/random/np_bernoulli_op.cc
 ##
 @@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_bernoulli_op.cc
+ * \brief Operator for numpy sampling from bernoulli distributions
+ */
+
+#include "./np_bernoulli_op.h"
+#include "./dist_common.h"
+
+namespace mxnet {
+namespace op {
+
+DMLC_REGISTER_PARAMETER(NumpyBernoulliParam);
+
+NNVM_REGISTER_OP(_npi_bernoulli)
+.describe("Sample from bernoulli distribution")
 
 Review comment:
   I will just remove it.




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-10 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344488896
 
 

 ##
 File path: python/mxnet/ndarray/numpy_extension/random.py
 ##
 @@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Namespace for operators used in Gluon dispatched by F=ndarray."""
+from __future__ import absolute_import
+from ...context import current_context
+from ..numpy import _internal as _npi
+
+
+__all__ = ['bernoulli']
+
+
+def bernoulli(prob, logit, size, dtype, ctx, out):
+    """Creates a Bernoulli distribution parameterized by :attr:`prob`
+    or :attr:`logit` (but not both).
+
+    Samples are binary (0 or 1). They take the value `1` with probability `p`
+    and `0` with probability `1 - p`.
+
+    Parameters
+    ----------
+    prob : float, ndarray
+        The probability of sampling '1'.
+    logit : float, ndarray
+        The log-odds of sampling '1'.
 
 Review comment:
   > We should clarify that we should set either `logits` or `prob`. If we set `logits`, the prob is `sigmoid(logits)`.
   
   I think this is a little bit redundant: the term `log-odds` already explains the relation between logit and prob. Also, the docstring is copied from torch.distributions.
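   For readers of this thread, a minimal sketch of the relation under discussion (assuming the `npx.random.bernoulli` front end added in this PR): `prob = sigmoid(logit)`, so either argument parameterizes the same distribution.

```python
from mxnet import np, npx
npx.set_np()

prob = np.random.uniform(size=(3, 3))
logit = np.log(prob) - np.log(1 - prob)   # log-odds of sampling 1
recovered = 1 / (1 + np.exp(-logit))      # sigmoid(logit) gives prob back (up to rounding)

# Either parameterization may be passed, but not both:
s_from_prob = npx.random.bernoulli(prob=prob)
s_from_logit = npx.random.bernoulli(logit=logit)
```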




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-09 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344434146
 
 

 ##
 File path: src/operator/numpy/random/np_bernoulli_op.h
 ##
 @@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_bernoulli_op.h
+ * \brief Operator for numpy sampling from bernoulli distribution.
+ */
+#ifndef MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+#define MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../../elemwise_op_common.h"
+#include "../../mshadow_op.h"
+#include "../../mxnet_op.h"
+#include "../../operator_common.h"
+#include "../../tensor/elemwise_binary_broadcast_op.h"
+#include "./dist_common.h"
+
+namespace mxnet {
+namespace op {
+
+struct NumpyBernoulliParam : public dmlc::Parameter<NumpyBernoulliParam> {
+  dmlc::optional<float> prob;
+  dmlc::optional<float> logit;
+  std::string ctx;
+  int dtype;
+  bool is_logit;
+  dmlc::optional<mxnet::Tuple<int>> size;
+  DMLC_DECLARE_PARAMETER(NumpyBernoulliParam) {
+    DMLC_DECLARE_FIELD(prob);
+    DMLC_DECLARE_FIELD(logit);
+    DMLC_DECLARE_FIELD(size)
+        .set_default(dmlc::optional<mxnet::Tuple<int>>())
+        .describe(
+            "Output shape. If the given shape is, "
+            "e.g., (m, n, k), then m * n * k samples are drawn. "
+            "Default is None, in which case a single value is returned.");
+    DMLC_DECLARE_FIELD(ctx).set_default("cpu").describe(
+        "Context of output, in format [cpu|gpu|cpu_pinned](n)."
+        " Only used for imperative calls.");
+    DMLC_DECLARE_FIELD(dtype)
+        .add_enum("uint8", mshadow::kUint8)
+        .add_enum("int32", mshadow::kInt32)
+        .add_enum("float32", mshadow::kFloat32)
+        .add_enum("float64", mshadow::kFloat64)
+        .add_enum("float16", mshadow::kFloat16)
+        .set_default(mshadow::kFloat32)
+        .describe(
+            "DType of the output in case this can't be inferred. "
+            "Defaults to float32 if not defined (dtype=None).");
+    DMLC_DECLARE_FIELD(is_logit);
+  }
+};
+
+inline bool NumpyBernoulliOpType(const nnvm::NodeAttrs &attrs,
+                                 std::vector<int> *in_attrs,
+                                 std::vector<int> *out_attrs) {
+  const NumpyBernoulliParam &param = nnvm::get<NumpyBernoulliParam>(attrs.parsed);
+  int otype = param.dtype;
+  if (otype != -1) {
+    (*out_attrs)[0] = otype;
+  } else {
+    // Following torch.distributions,
+    // the default type will be float32.
+    (*out_attrs)[0] = mshadow::kFloat32;
+  }
+  return true;
+}
+
+namespace mxnet_op {
+
+struct prob_to_logit {
+  MSHADOW_XINLINE static void Map(index_t i, float* uniforms) {
+    float prob = uniforms[i];
+    uniforms[i] = log(prob) - log(1 - prob);
+  }
+};
+
+template <int ndim, typename IType, typename OType>
+struct bernoulli_kernel {
+  MSHADOW_XINLINE static void Map(index_t i,
+                                  const Shape<ndim> &stride,
+                                  const Shape<ndim> &oshape,
+                                  IType *inputs, float *threshold, OType *out) {
+    Shape<ndim> coord = unravel(i, oshape);
+    auto idx = static_cast<index_t>(dot(coord, stride));
+    out[i] = inputs[idx] > threshold[i] ? OType(1) : OType(0);
+  }
+};
+
+template <typename OType>
+struct scalar_bernoulli_kernel {
+  MSHADOW_XINLINE static void Map(index_t i, float inputs, float *threshold,
+                                  OType *out) {
+    out[i] = inputs > threshold[i] ? OType(1) : OType(0);
+  }
+};
+
+template <typename IType>
+struct check_legal_prob_kernel {
+  MSHADOW_XINLINE static void Map(index_t i, IType *scalar, float* flag) {
+    if (scalar[i] < 0.0 || scalar[i] > 1.0) {
+      flag[0] = -1.0;
+    }
+  }
+};
+
+}  // namespace mxnet_op
+
+template <typename xpu>
+void NumpyBernoulliForward(const nnvm::NodeAttrs &attrs,
+                           const OpContext &ctx,
+                           const std::vector<TBlob> &inputs,
+                           const std::vector<OpReqType> &req,
+                           const std::vector<TBlob> &outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  const NumpyBernoulliParam &param = nnvm::get<NumpyBernoulliParam>(attrs.parsed);
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+  index_t output_len = outputs[0].Size();
+

[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-09 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344434456
 
 

 ##
 File path: src/operator/numpy/random/np_bernoulli_op.h
 ##
 @@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_bernoulli_op.h
+ * \brief Operator for numpy sampling from bernoulli distribution.
+ */
+#ifndef MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+#define MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../../elemwise_op_common.h"
+#include "../../mshadow_op.h"
+#include "../../mxnet_op.h"
+#include "../../operator_common.h"
+#include "../../tensor/elemwise_binary_broadcast_op.h"
+#include "./dist_common.h"
+
+namespace mxnet {
+namespace op {
+
+struct NumpyBernoulliParam : public dmlc::Parameter<NumpyBernoulliParam> {
+  dmlc::optional<float> prob;
+  dmlc::optional<float> logit;
+  std::string ctx;
+  int dtype;
+  bool is_logit;
+  dmlc::optional<mxnet::Tuple<int>> size;
 
 Review comment:
   They are, in essence, the same thing, I believe.
   I actually copied the declaration from
   https://github.com/apache/incubator-mxnet/blob/a783d8119260fa370c8b12d3154ebc27e8e7284d/src/operator/numpy/random/np_multinomial_op.h#L53
   As multinomial is the first re-implemented random op in Deep Numpy, the random operators added after multinomial all declare `size` as `dmlc::optional<mxnet::Tuple<int>>`.
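   To illustrate what the optional `size` field accepts at the Python level, here is a small sketch (assuming the `npx.random.bernoulli` front end from this PR; `size` may be `None`, an int, or a tuple of ints):

```python
from mxnet import np, npx
npx.set_np()

# Output shape follows the `size` argument, as in the test case above.
print(npx.random.bernoulli(prob=0.5, size=None).shape)    # () : a single sample
print(npx.random.bernoulli(prob=0.5, size=6).shape)       # (6,)
print(npx.random.bernoulli(prob=0.5, size=(2, 3)).shape)  # (2, 3)
```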
   




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-09 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344434247
 
 

 ##
 File path: src/operator/numpy/random/np_bernoulli_op.h
 ##
 @@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_bernoulli_op.h
+ * \brief Operator for numpy sampling from bernoulli distribution.
+ */
+#ifndef MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+#define MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../../elemwise_op_common.h"
+#include "../../mshadow_op.h"
+#include "../../mxnet_op.h"
+#include "../../operator_common.h"
+#include "../../tensor/elemwise_binary_broadcast_op.h"
+#include "./dist_common.h"
+
+namespace mxnet {
+namespace op {
+
+struct NumpyBernoulliParam : public dmlc::Parameter<NumpyBernoulliParam> {
+  dmlc::optional<float> prob;
+  dmlc::optional<float> logit;
+  std::string ctx;
+  int dtype;
+  bool is_logit;
+  dmlc::optional<mxnet::Tuple<int>> size;
+  DMLC_DECLARE_PARAMETER(NumpyBernoulliParam) {
+    DMLC_DECLARE_FIELD(prob);
+    DMLC_DECLARE_FIELD(logit);
+    DMLC_DECLARE_FIELD(size)
+        .set_default(dmlc::optional<mxnet::Tuple<int>>())
+        .describe(
+            "Output shape. If the given shape is, "
+            "e.g., (m, n, k), then m * n * k samples are drawn. "
+            "Default is None, in which case a single value is returned.");
+    DMLC_DECLARE_FIELD(ctx).set_default("cpu").describe(
+        "Context of output, in format [cpu|gpu|cpu_pinned](n)."
+        " Only used for imperative calls.");
+    DMLC_DECLARE_FIELD(dtype)
+        .add_enum("uint8", mshadow::kUint8)
+        .add_enum("int32", mshadow::kInt32)
+        .add_enum("float32", mshadow::kFloat32)
+        .add_enum("float64", mshadow::kFloat64)
+        .add_enum("float16", mshadow::kFloat16)
+        .set_default(mshadow::kFloat32)
 
 Review comment:
   Support for the boolean type seems quite useful; I will look into its feasibility later.
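   For reference, once boolean output is supported the call would look like the sketch below (this assumes the `dtype='bool'` enum that later revisions in this thread add, together with the matching test case).

```python
from mxnet import np, npx
npx.set_np()

prob = np.random.uniform(size=(2, 2))
# Hypothetical usage once 'bool' is accepted as an output dtype.
samples = npx.random.bernoulli(prob=prob, dtype='bool')
print(samples.dtype)   # bool
```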




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-08 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344433042
 
 

 ##
 File path: python/mxnet/ndarray/numpy_extension/random.py
 ##
 @@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Namespace for operators used in Gluon dispatched by F=ndarray."""
+from __future__ import absolute_import
+from ...context import current_context
+from ..numpy import _internal as _npi
+
+
+__all__ = ['bernoulli']
+
+
+def bernoulli(prob, logit, size, dtype, ctx, out):
+    """Creates a Bernoulli distribution parameterized by :attr:`prob`
+    or :attr:`logit` (but not both).
+
+    Samples are binary (0 or 1). They take the value `1` with probability `p`
+    and `0` with probability `1 - p`.
+
+    Parameters
+    ----------
+    prob : float, ndarray
+        The probability of sampling '1'.
+    logit : float, ndarray
+        The log-odds of sampling '1'.
 
 Review comment:
   You are right, I shall also add information regarding `ValueError` to the docstring.




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-08 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344432954
 
 

 ##
 File path: python/mxnet/symbol/numpy_extension/random.py
 ##
 @@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Namespace for operators used in Gluon dispatched by F=symbol."""
+
+from __future__ import absolute_import
+from ...context import current_context
+from ..numpy import _internal as _npi
+
+__all__ = ['bernoulli']
+
+
+def bernoulli(prob=None, logit=None, size=None, dtype=None, ctx=None, out=None):
+    """Creates a Bernoulli distribution parameterized by :attr:`prob`
+    or :attr:`logit` (but not both).
+
+    Samples are binary (0 or 1). They take the value `1` with probability `p`
+    and `0` with probability `1 - p`.
+
+    Parameters
+    ----------
+    prob : float, _Symbol
+        The probability of sampling '1'.
+    logit : float, _Symbol
+        The log-odds of sampling '1'.
+    size : int or tuple of ints, optional
+        Output shape.  If the given shape is, e.g., ``(m, n, k)``, then
+        ``m * n * k`` samples are drawn.  Default is None, in which case a
+        single value is returned.
+    dtype : dtype, optional
+        Desired dtype of the result. All dtypes are determined by their
+        name, i.e., 'int64', 'int', etc, so byteorder is not available
+        and a specific precision may have different C types depending
+        on the platform. The default value is 'np.float32'.
+    ctx : Context, optional
+        Device context of output. Default is current context.
+    out : symbol, optional
+        The output symbol (default is `None`).
+
+    Returns
+    -------
+    out : _Symbol
+        Drawn samples from the parameterized bernoulli distribution.
+
+    Examples
+    --------
+    >>> prob = np.random.uniform(size=(4,4))
+    >>> logit = np.log(prob) - np.log(1 - prob)
+    >>> npx.random.bernoulli(logit=logit)
+    array([[0., 1., 1., 1.],
+           [0., 1., 1., 1.],
+           [0., 1., 0., 0.],
+           [1., 0., 1., 0.]])
+
+    >>> npx.random.bernoulli(prob=prob)
+    array([[0., 1., 0., 1.],
+           [1., 1., 1., 1.],
+           [1., 1., 1., 0.],
+           [1., 0., 1., 0.]])
+    """
+    from ..numpy import _Symbol as np_symbol
+    tensor_type_name = np_symbol
+    if (prob is None) == (logit is None):
+        raise ValueError(
+            "Either `prob` or `logit` must be specified, but not both.")
 
 Review comment:
   Good suggestion! 




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-08 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344432925
 
 

 ##
 File path: python/mxnet/ndarray/numpy_extension/random.py
 ##
 @@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Namespace for operators used in Gluon dispatched by F=ndarray."""
+from __future__ import absolute_import
+from ...context import current_context
+from ..numpy import _internal as _npi
+
+
+__all__ = ['bernoulli']
+
+
+def bernoulli(prob, logit, size, dtype, ctx, out):
+    """Creates a Bernoulli distribution parameterized by :attr:`prob`
+    or :attr:`logit` (but not both).
+
+    Samples are binary (0 or 1). They take the value `1` with probability `p`
+    and `0` with probability `1 - p`.
+
+    Parameters
+    ----------
+    prob : float, ndarray
+        The probability of sampling '1'.
 
 Review comment:
   The docstring format here follows: 
   
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/numpy/random.py#L88




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-08 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344430963
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -2294,6 +2294,26 @@ def hybrid_forward(self, F, x):
 assert_almost_equal(mx_ret.asnumpy(), np_ret, atol=1e-4, rtol=1e-3, use_broadcast=False)
 
 
+@with_seed()
+@use_np
+def test_npx_bernoulli():
 
 Review comment:
   resolved, thx!




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-08 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344430960
 
 

 ##
 File path: src/operator/numpy/random/np_bernoulli_op.h
 ##
 @@ -0,0 +1,208 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_bernoulli_op.h
+ * \brief Operator for numpy sampling from bernoulli distribution.
+ */
+#ifndef MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+#define MXNET_OPERATOR_NUMPY_RANDOM_NP_BERNOULLI_OP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../../elemwise_op_common.h"
+#include "../../mshadow_op.h"
+#include "../../mxnet_op.h"
+#include "../../operator_common.h"
+#include "../../tensor/elemwise_binary_broadcast_op.h"
+#include "./dist_common.h"
+
+namespace mxnet {
+namespace op {
+
+struct NumpyBernoulliParam : public dmlc::Parameter<NumpyBernoulliParam> {
+  dmlc::optional<float> prob;
+  dmlc::optional<float> logit;
+  std::string ctx;
+  int dtype;
+  bool is_logit;
+  dmlc::optional<mxnet::Tuple<int>> size;
+  DMLC_DECLARE_PARAMETER(NumpyBernoulliParam) {
+    DMLC_DECLARE_FIELD(prob);
+    DMLC_DECLARE_FIELD(logit);
+    DMLC_DECLARE_FIELD(size)
+        .set_default(dmlc::optional<mxnet::Tuple<int>>())
+        .describe(
+            "Output shape. If the given shape is, "
+            "e.g., (m, n, k), then m * n * k samples are drawn. "
+            "Default is None, in which case a single value is returned.");
+    DMLC_DECLARE_FIELD(ctx).set_default("cpu").describe(
+        "Context of output, in format [cpu|gpu|cpu_pinned](n)."
+        " Only used for imperative calls.");
+    DMLC_DECLARE_FIELD(dtype)
+        .add_enum("uint8", mshadow::kUint8)
+        .add_enum("int32", mshadow::kInt32)
+        .add_enum("float32", mshadow::kFloat32)
+        .add_enum("float64", mshadow::kFloat64)
+        .add_enum("float16", mshadow::kFloat16)
+        .set_default(mshadow::kFloat32)
+        .describe(
+            "DType of the output in case this can't be inferred. "
+            "Defaults to float32 if not defined (dtype=None).");
+    DMLC_DECLARE_FIELD(is_logit);
+  }
+};
+
+inline bool NumpyBernoulliOpType(const nnvm::NodeAttrs &attrs,
+                                 std::vector<int> *in_attrs,
+                                 std::vector<int> *out_attrs) {
+  const NumpyBernoulliParam &param = nnvm::get<NumpyBernoulliParam>(attrs.parsed);
+  int otype = param.dtype;
+  if (otype != -1) {
+    (*out_attrs)[0] = otype;
+  } else {
+    // Following torch.distributions,
+    // the default type will be float32.
+    (*out_attrs)[0] = mshadow::kFloat32;
+  }
+  return true;
+}
+
+namespace mxnet_op {
+
+struct prob_to_logit {
+  MSHADOW_XINLINE static void Map(index_t i, float* uniforms) {
+    float prob = uniforms[i];
+    uniforms[i] = log(prob) - log(1 - prob);
+  }
+};
+
+template <int ndim, typename IType, typename OType>
+struct bernoulli_kernel {
+  MSHADOW_XINLINE static void Map(index_t i,
+                                  const Shape<ndim> &stride,
+                                  const Shape<ndim> &oshape,
+                                  IType *inputs, float *threshold, OType *out) {
+    Shape<ndim> coord = unravel(i, oshape);
+    auto idx = static_cast<index_t>(dot(coord, stride));
+    out[i] = inputs[idx] > threshold[i] ? OType(1) : OType(0);
+  }
+};
+
+template <typename OType>
+struct scalar_bernoulli_kernel {
+  MSHADOW_XINLINE static void Map(index_t i, float inputs, float *threshold,
+                                  OType *out) {
+    out[i] = inputs > threshold[i] ? OType(1) : OType(0);
+  }
+};
+
+template <typename IType>
+struct check_legal_prob_kernel {
+  MSHADOW_XINLINE static void Map(index_t i, IType *scalar, float* flag) {
+    if (scalar[i] < 0.0 || scalar[i] > 1.0) {
+      flag[0] = -1.0;
+    }
+  }
+};
+
+}  // namespace mxnet_op
+
+template <typename xpu>
+void NumpyBernoulliForward(const nnvm::NodeAttrs &attrs,
+                           const OpContext &ctx,
+                           const std::vector<TBlob> &inputs,
+                           const std::vector<OpReqType> &req,
+                           const std::vector<TBlob> &outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  const NumpyBernoulliParam &param = nnvm::get<NumpyBernoulliParam>(attrs.parsed);
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+  index_t output_len = outputs[0].Size();
+

[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16638: [Numpy] Add sampling method for bernoulli

2019-11-08 Thread GitBox
xidulu commented on a change in pull request #16638: [Numpy] Add sampling 
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r344430950
 
 

 ##
 File path: src/operator/numpy/random/dist_common.h
 ##
 @@ -172,10 +171,36 @@ inline bool TwoparamsDistOpShape(const nnvm::NodeAttrs &attrs,
 } else if (in_attrs->size() == 0) {
   // Two scalar case.
   SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape(0, -1))
-  return true;
 }
   }
-  return out_attrs->at(0).ndim() != 0U;
+  return shape_is_known(out_attrs->at(0));
+}
+
+template 
+inline bool UnaryDistOpShape(const nnvm::NodeAttrs &attrs,
+                             std::vector<TShape> *in_attrs,
 
 Review comment:
   resolved, thx

