reminisce commented on a change in pull request #10889: [MXNET-382] Shape and
Size Operator
URL: https://github.com/apache/incubator-mxnet/pull/10889#discussion_r195605034
##########
File path: src/operator/tensor/elemwise_unary_op_basic.cu
##########
@@ -77,6 +77,56 @@ NNVM_REGISTER_OP(_identity_with_attr_like_rhs)
NNVM_REGISTER_OP(reshape_like)
.set_attr<FCompute>("FCompute<gpu>", UnaryOp::IdentityCompute<gpu>);
+template<>
+void ShapeCompute<gpu>(const nnvm::NodeAttrs& attrs,
+ const OpContext& ctx,
+ const std::vector<TBlob>& inputs,
+ const std::vector<OpReqType>& req,
+ const std::vector<TBlob>& outputs) {
+ using namespace mshadow;
+ CHECK_EQ(inputs.size(), 1U);
+ CHECK_EQ(outputs.size(), 1U);
+ CHECK_EQ(req.size(), 1U);
+ const TBlob& in_data = inputs[0];
+ const TBlob& out_data = outputs[0];
+ mshadow::Stream<gpu> *s = ctx.get_stream<gpu>();
+ const TShape& in_shape = in_data.shape_;
+ Shape<10> temp_shape;
+ for (size_t i = 0; i < in_shape.ndim(); ++i) {
+ temp_shape[i] = in_shape[i];
+ }
+
+ MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+ mxnet_op::Kernel<mshadow_op::shape_kernel, gpu>::Launch(
Review comment:
Same as above: this is simply copying a tiny amount of data from a CPU array to
a GPU array. Launching a kernel is expensive and a waste of resources. You can
just call `cudaMemcpyAsync` instead to avoid that overhead.
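A minimal sketch of what the reviewer is suggesting, assuming the surrounding `ShapeCompute<gpu>` context from the diff (`in_shape`, `out_data`, and the stream `s`): instead of launching a kernel, stage the shape values in a host buffer and issue a single `cudaMemcpyAsync` on the operator's stream. The variable names below are taken from the diff; the staging buffer `host_shape` is hypothetical.

```cuda
// Sketch only, not the PR's implementation. Replaces the Kernel<...>::Launch
// call with a host-to-device copy of ndim() scalars.
MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
  // Stage the shape on the host in the output's dtype.
  std::vector<DType> host_shape(in_shape.ndim());
  for (int i = 0; i < in_shape.ndim(); ++i) {
    host_shape[i] = static_cast<DType>(in_shape[i]);
  }
  // For pageable host memory, cudaMemcpyAsync copies through a staging
  // buffer before returning, so host_shape may safely go out of scope.
  cudaMemcpyAsync(out_data.dptr<DType>(),
                  host_shape.data(),
                  in_shape.ndim() * sizeof(DType),
                  cudaMemcpyHostToDevice,
                  mshadow::Stream<gpu>::GetStream(s));
});
```

The copy is enqueued on the same stream the operator already uses, so ordering with respect to other work on that stream is preserved without any extra synchronization.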
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services