srkreddy1238 commented on code in PR #12711:
URL: https://github.com/apache/tvm/pull/12711#discussion_r964461276
##########
src/runtime/contrib/clml/clml_runtime.cc:
##########
@@ -286,21 +265,26 @@ class CLMLRuntime : public JSONRuntimeBase {
}
for (size_t i = 0; i < this->layer_.function.size(); ++i) {
- this->evts->resize(this->evts->size() + 1);
- cl_event* evt = &(this->evts->back());
- result = h_ClmlIntf->clEnqueueMLOpQCOM(queue, this->layer_.function[i],
- this->layer_.descriptorSet, 0, NULL, evt);
+ if (getenv("CLML_PROFILING")) {
Review Comment:
CLML profiling is about profiling the ML ops within the CLML sub-graph (within
BYOC).
[isProfiling](https://github.com/apache/tvm/blob/main/src/runtime/opencl/opencl_common.h#L278-L286)
is controlled by OpenCLTimer whenever someone wants to profile OpenCL kernels
(generated by the OpenCL codegen). CLML doesn't have any kernels (no
clEnqueueNDRangeKernel calls here); instead it uses an extension API.
More details:
Ideally, CLML can have its own workspace (context & queue) and operate
independently. The only dependency on TVM's OpenCL workspace is having the
buffers allocated on the same queue, so that we can do a hardware-level copy
when context-switching from TVM's OpenCL sub-graph to the CLML sub-graph. Too
tight an integration here may lead to unexpected functionality breaks, as those
who enhance the OpenCL runtime may not pay attention to CLML component
dependencies.
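As a hedged sketch of the pattern the diff introduces (the name
`IsClmlProfilingEnabled` is illustrative, not from the TVM source; the real
logic lives in clml_runtime.cc), the profiling path is gated on an environment
variable rather than on OpenCLTimer's isProfiling flag:

```cpp
#include <cstdlib>

// Illustrative sketch, not the actual TVM implementation: CLML profiling is
// toggled by the CLML_PROFILING environment variable. When it is set, the
// runtime records one cl_event per clEnqueueMLOpQCOM call; timestamps can
// later be read back with clGetEventProfilingInfo
// (CL_PROFILING_COMMAND_START / CL_PROFILING_COMMAND_END).
inline bool IsClmlProfilingEnabled() {
  return std::getenv("CLML_PROFILING") != nullptr;
}
```

Keeping the toggle independent of the OpenCL runtime's profiling flag matches
the decoupling argued for above: changes to the OpenCL runtime cannot silently
alter CLML's profiling behavior.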
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]