WsqRichards commented on issue #16154: URL: https://github.com/apache/tvm/issues/16154#issuecomment-1829757841
From the MIOpen convolution documentation: https://rocm.docs.amd.com/projects/MIOpen/en/latest/convolution.html#miopenconvolutionforward

> [miopenStatus_t](https://rocm.docs.amd.com/projects/MIOpen/en/latest/handle.html#_CPPv414miopenStatus_t) miopenConvolutionForward(miopenHandle_t handle, const void \*alpha, const miopenTensorDescriptor_t xDesc, const void \*x, const miopenTensorDescriptor_t wDesc, const void \*w, const miopenConvolutionDescriptor_t convDesc, [miopenConvFwdAlgorithm_t](https://rocm.docs.amd.com/projects/MIOpen/en/latest/convolution.html#_CPPv424miopenConvFwdAlgorithm_t) algo, const void \*beta, const miopenTensorDescriptor_t yDesc, void \*y, void \*workSpace, size_t workSpaceSize)
>
> Execute a forward convolution layer. Runs the forward convolution layer based on the selected algorithm. The function [miopenFindConvolutionForwardAlgorithm()](https://rocm.docs.amd.com/projects/MIOpen/en/latest/convolution.html#group__convolutions_1gaca2f3b99b04393beebaee41e3d990f68) must have been executed previously to determine the required memory needed for the workspace and the best convolution algorithm. If using Group/Depthwise convolution mode, call [miopenSetConvolutionGroupCount()](https://rocm.docs.amd.com/projects/MIOpen/en/latest/convolution.html#group__convolutions_1gad1bdda28a9f5a4a8ea9b718681ac72c2) before running this.
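The required call order from the quoted documentation can be sketched as follows. This is a minimal, illustrative C++ sketch, not code from the issue or from TVM: it assumes `handle`, the descriptors (`xDesc`, `wDesc`, `convDesc`, `yDesc`), and the device buffers `d_x`, `d_w`, `d_y` have already been created and populated, and it abbreviates error checking to a single macro.

```cpp
#include <miopen/miopen.h>
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

#define CHECK(status)                                              \
    if ((status) != miopenStatusSuccess) {                         \
        std::fprintf(stderr, "MIOpen error at line %d\n", __LINE__); \
        std::exit(1);                                              \
    }

// Sketch: query workspace, search for the best algorithm, then run the
// forward convolution with the algorithm the search returned.
void run_conv_forward(miopenHandle_t handle,
                      miopenTensorDescriptor_t xDesc, const void* d_x,
                      miopenTensorDescriptor_t wDesc, const void* d_w,
                      miopenConvolutionDescriptor_t convDesc,
                      miopenTensorDescriptor_t yDesc, void* d_y) {
    // 1. Query how much workspace the forward convolution may need.
    size_t ws_size = 0;
    CHECK(miopenConvolutionForwardGetWorkSpaceSize(
        handle, wDesc, xDesc, convDesc, yDesc, &ws_size));

    void* d_ws = nullptr;
    if (ws_size > 0) hipMalloc(&d_ws, ws_size);

    // 2. Mandatory search step: must run before miopenConvolutionForward().
    miopenConvAlgoPerf_t perf;
    int returned = 0;
    CHECK(miopenFindConvolutionForwardAlgorithm(
        handle, xDesc, d_x, wDesc, d_w, convDesc, yDesc, d_y,
        /*requestAlgoCount=*/1, &returned, &perf,
        d_ws, ws_size, /*exhaustiveSearch=*/false));

    // 3. Run the forward convolution with the selected algorithm.
    const float alpha = 1.0f, beta = 0.0f;
    CHECK(miopenConvolutionForward(
        handle, &alpha, xDesc, d_x, wDesc, d_w, convDesc,
        perf.fwd_algo, &beta, yDesc, d_y, d_ws, ws_size));

    if (d_ws) hipFree(d_ws);
}
```

For grouped or depthwise convolutions, `miopenSetConvolutionGroupCount()` would additionally be called on `convDesc` before any of the steps above, as the quoted documentation notes.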
