FrozenGene commented on a change in pull request #6412:
URL: https://github.com/apache/incubator-tvm/pull/6412#discussion_r487709072
##########
File path: src/runtime/opencl/opencl_common.h
##########
@@ -216,7 +216,8 @@ class OpenCLWorkspace : public DeviceAPI {
// Initialzie the device.
void Init(const std::string& type_key, const std::string& device_type,
const std::string& platform_name = "");
- virtual void Init() { Init("opencl", "gpu"); }
+ // Find accelerator first, TODO: Make a proper way to chose between device types
+ virtual void Init() { Init("opencl", "accelerator"); }
Review comment:
When we lift `accelerator` to the highest priority, one big problem is that
we will use the CUDA schedule, as you have mentioned. I suspect we will
sometimes get wrong results, which is worse than poor performance (I need your
expert experience to help me confirm this).
The current issue is that we don't have an elegant way to accept a device type
in `Init`. If we want to support that, the better way is to expose it and let
users choose. However, I prefer the current Intel FPGA / Xilinx approach. Intel
FPGA creates `AOCL` like this:
```cpp
void AOCLWorkspace::Init() {
  OpenCLWorkspace::Init("aocl", "accelerator", "Intel(R) FPGA SDK for OpenCL(TM)");
}
```
Xilinx is:
```cpp
void SDAccelWorkspace::Init() {
  OpenCLWorkspace::Init("sdaccel", "accelerator", "Xilinx");
}
```
You can do all the device-specific things in your own device API, e.g. `class
YourOwnTarget : public OpenCLWorkspace`.
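To make the pattern concrete, here is a minimal self-contained sketch of the
inheritance approach, using mock classes rather than the real TVM runtime
(`YourOwnWorkspace`, the `"yourown"` type key, and the platform string are
hypothetical; the real `OpenCLWorkspace::Init` performs actual platform and
device discovery instead of just recording the arguments):

```cpp
#include <string>

// Mock of OpenCLWorkspace. The real class performs OpenCL platform/device
// discovery in the three-argument Init; here we only record the choice so
// the virtual-dispatch pattern can be shown in isolation.
class OpenCLWorkspace {
 public:
  virtual ~OpenCLWorkspace() = default;

  void Init(const std::string& type_key, const std::string& device_type,
            const std::string& platform_name = "") {
    type_key_ = type_key;
    device_type_ = device_type;
    platform_name_ = platform_name;
  }

  // Default: plain OpenCL targeting a GPU device.
  virtual void Init() { Init("opencl", "gpu"); }

  std::string type_key_, device_type_, platform_name_;
};

// Hypothetical vendor workspace following the AOCL/SDAccel pattern: the
// override picks the accelerator device type and a vendor platform name,
// leaving the base class's default untouched for everyone else.
class YourOwnWorkspace : public OpenCLWorkspace {
 public:
  void Init() override {
    OpenCLWorkspace::Init("yourown", "accelerator", "Your Vendor Platform");
  }
};
```

The design point is that the base class keeps a safe default, and each vendor
backend opts into `accelerator` (plus its platform filter) in its own
subclass, instead of changing the default for all OpenCL users.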