eric-haibin-lin commented on a change in pull request #7947: [WIP] Refactor infer storage function for sparse operators.
URL: https://github.com/apache/incubator-mxnet/pull/7947#discussion_r141201987
##########
File path: include/mxnet/op_attr_types.h
##########
@@ -228,17 +241,24 @@ using FCompute = std::function<void (const nnvm::NodeAttrs& attrs,
/*!
 * \brief Register an NDArray compute function for a simple stateless forward-only operator
 *
- * \note Register under "FComputeEx<xpu, default>" and "FComputeEx<xpu, non-default>"
- * Dispatched only when operators process non-default storage inputs or outputs
+ * \note Register under "FComputeEx<xpu>"
+ * Dispatched only when the inferred dispatch_mode is FDispatchComputeEx
 */
using FComputeEx = std::function<void (const nnvm::NodeAttrs& attrs,
const OpContext& ctx,
const std::vector<NDArray>& inputs,
const std::vector<OpReqType>& req,
const std::vector<NDArray>& outputs)>;
+/*!
+ * \brief Register a storage and dispatch mode inference function based on
+ * the storage types of the inputs and outputs, and the dev_mask for the operator.
+ *
+ * \note Register under "FInferStorageType"
+ */
using FInferStorageType = std::function<bool (const NodeAttrs& attrs,
- const Context& ctx,
+ const int dev_mask,
Review comment:
Yes, we only need to know whether it's on CPU or GPU for MKL.
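
To illustrate the point, here is a minimal standalone sketch of what an inference function following the new dev_mask-based signature might look like. The enum values, names, and the `InferStorageSketch` function are hypothetical stand-ins for illustration only, not the actual MXNet definitions:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-ins for the real MXNet enums (values illustrative only).
enum StorageType { kDefaultStorage = 0, kRowSparseStorage = 1 };
enum DispatchMode { kDispatchUndefined = -1, kDispatchFCompute = 0,
                    kDispatchFComputeEx = 1 };
enum DevMask { kCPU = 1, kGPU = 2 };

// Sketch of an FInferStorageType-style function: given the device mask and
// the input storage types, infer output storage types and pick a dispatch
// mode. Only the cpu/gpu distinction in dev_mask is consulted, which is why
// an int mask suffices in place of a full Context object.
bool InferStorageSketch(const int dev_mask,
                        DispatchMode* dispatch_mode,
                        std::vector<int>* in_attrs,
                        std::vector<int>* out_attrs) {
  bool all_default = true;
  for (int st : *in_attrs) {
    if (st != kDefaultStorage) all_default = false;
  }
  if (all_default) {
    // All-dense inputs: produce dense outputs, dispatch via plain FCompute.
    for (int& st : *out_attrs) st = kDefaultStorage;
    *dispatch_mode = kDispatchFCompute;
  } else {
    // A sparse input is present: keep sparse outputs and dispatch via
    // FComputeEx (a CPU backend such as MKL would branch on dev_mask here).
    for (int& st : *out_attrs) st = kRowSparseStorage;
    *dispatch_mode = kDispatchFComputeEx;
  }
  return true;
}
```

The design choice under discussion is visible in the signature: the function never needs stream handles or device ids from a `Context`, so passing just the integer device mask keeps the storage-inference stage decoupled from execution state.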
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services