Hecmay commented on issue #6634:
URL: https://github.com/apache/incubator-tvm/issues/6634#issuecomment-704469969
Hi @comaniac Cody, thanks for your reply.
I just used the default setting of 10 trials. All 10 trials returned the
same error during evaluation. The finally returned schedule (from
`Autoschedule(policy, options)`) is the unoptimized one (with no thread or
block binding).
```shell
] (0) /home/sx/dlcb/build/tvm/build/libtvm.so(+0x1791844) [0x7f6429edf844]
File "/home/sx/dlcb/build/tvm/src/runtime/cuda/cuda_device_api.cc", line
115
CUDA: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading:
initialization error
, all_cost:0.55, Tstamp:1602008338.18)
```
I looked into the problematic function (`AllocateDataSpace`, line 108) in
`cuda_device_api.cc`. It seems that `cudaSetDevice()` is unable to find the
device, and the program builder simply aborts there.
```c++
if (ctx.device_type == kDLCPUPinned) {
  CUDA_CALL(cudaMallocHost(&ret, nbytes));
} else {
  CUDA_CALL(cudaSetDevice(ctx.device_id));
  CUDA_CALL(cudaMalloc(&ret, nbytes));
}
```
This is quite weird. Everything runs smoothly using the Python API, but
when using the C++ APIs, the runtime cannot detect the device or context, and
thus cannot find usable schedules.
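To narrow this down, a minimal standalone diagnostic (hypothetical, not part of TVM) can check whether the CUDA runtime can enumerate and select a device from the same process/environment as the failing C++ driver. It only uses the standard CUDA runtime calls `cudaGetDeviceCount`, `cudaSetDevice`, and `cudaGetErrorString`, and assumes the CUDA toolkit is installed (compile with `nvcc device_check.cu -o device_check`):

```c++
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  // First check whether the runtime sees any device at all; an
  // "initialization error" here points at the process environment
  // (e.g. driver/toolkit mismatch, or CUDA touched before a fork)
  // rather than at the TVM code itself.
  int count = 0;
  cudaError_t e = cudaGetDeviceCount(&count);
  if (e != cudaSuccess) {
    std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(e));
    return 1;
  }
  std::printf("visible CUDA devices: %d\n", count);

  // Then try the same call that fails inside AllocateDataSpace.
  for (int i = 0; i < count; ++i) {
    e = cudaSetDevice(i);
    std::printf("cudaSetDevice(%d): %s\n", i,
                e == cudaSuccess ? "ok" : cudaGetErrorString(e));
  }
  return 0;
}
```

If this program succeeds while the C++ driver still fails, the difference is likely in how the driver process is set up (forked workers, environment variables, library paths) rather than in the device itself.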