mikemwx commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r299279487
##########
File path: src/operator/tensor/ordering_op-inl.h
##########
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs& attrs,
const std::vector<OpReqType>& req,
const std::vector<TBlob>& outputs) {
const ArgSortParam& param = nnvm::get<ArgSortParam>(attrs.parsed);
- TopKParam topk_param;
- topk_param.axis = param.axis;
- topk_param.is_ascend = param.is_ascend;
- topk_param.k = 0;
- topk_param.dtype = param.dtype;
- topk_param.ret_typ = topk_enum::kReturnIndices;
- MXNET_NO_FLOAT16_TYPE_SWITCH(inputs[0].type_flag_, DType, {
- MSHADOW_TYPE_SWITCH(param.dtype, IDType, {
- TopKImpl<xpu, DType, IDType>(ctx.run_ctx,
-     ctx.requested[0], req, inputs[0], outputs,
-     topk_param);
+
+ if (inputs[0].shape_.ndim() == 0) {
+ // A scalar tensor only accepts an axis of 0, -1, or None
+ CHECK(!static_cast<bool>(param.axis) || param.axis.value() == -1 ||
+       param.axis.value() == 0)
+     << "Axis can only be -1 or 0 for a scalar tensor";
+ MSHADOW_TYPE_SWITCH(param.dtype, DType, {
+ Stream<xpu> *s = ctx.get_stream<xpu>();
+ Tensor<xpu, 1, DType> outdata =
+     outputs[0].get_with_shape<xpu, 1, DType>(Shape1(1), s);
+ ASSIGN_DISPATCH(outdata, OpReqType::kWriteTo, 0);
+ });
- });
+ } else if (inputs[0].shape_.Size() == 0) {
+ if (static_cast<bool>(param.axis)) {
+ int axis = param.axis.value();
+ if (axis < 0) axis += inputs[0].shape_.ndim();
+ CHECK(axis >= 0 && axis < inputs[0].shape_.ndim())
+ << "Axis must be within the range of input tensor's dimension";
Review comment:
> What happens if all checks pass inside this else if()?
This `else if` branch handles zero-size tensors.
When the input has zero size, no actual computation is needed: the zero-size
output is already prepared during shape inference, so in the forward pass we
only need to verify that the axis falls within the input's shape. If it does
(the CHECK passes), we are done.
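For intuition, this mirrors NumPy's own behavior on zero-size inputs: `argsort` returns an empty index array of the input's shape when the axis is valid, and raises when it is not. A minimal illustration in plain NumPy (not the MXNet operator itself):

```python
import numpy as np

# Zero-size input: argsort needs no computation, only axis validation.
a = np.zeros((0, 3))
out = np.argsort(a, axis=1)
print(out.shape)  # same zero-size shape as the input: (0, 3)

# An out-of-range axis is rejected, mirroring the CHECK in the branch above.
# (NumPy's AxisError subclasses IndexError, so this catch works across versions.)
try:
    np.argsort(a, axis=2)
except IndexError as e:
    print("invalid axis:", e)
```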
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services