FrozenGene commented on a change in pull request #10108:
URL: https://github.com/apache/tvm/pull/10108#discussion_r802287909
##########
File path: src/auto_scheduler/search_task.cc
##########
@@ -104,8 +104,32 @@ HardwareParams HardwareParamsNode::GetDefaultHardwareParams(const Target& target
max_threads_per_block, max_vthread_extent,
warp_size);
} else {
// add other opencl target
-      auto target_device = target->GetAttr<String>("device", "");
-      LOG(FATAL) << "No default hardware parameters for opencl target device: " << target_device;
+ auto dev = Device{static_cast<DLDeviceType>(device_type), 0};
+ auto device_name = "device_api.opencl";
+ auto func = tvm::runtime::Registry::Get(device_name);
+ ICHECK(func != nullptr) << "Cannot find OpenCL device_api in registry";
+      auto device_api = static_cast<tvm::runtime::DeviceAPI*>(((*func)()).operator void*());
+
+ tvm::runtime::TVMRetValue ret;
+      device_api->GetAttr(dev, tvm::runtime::DeviceAttrKind::kMaxSharedMemoryPerBlock, &ret);
+ int max_shared_memory_per_block = ret;
+
+ int max_local_memory_per_block = INT32_MAX;
+
+      device_api->GetAttr(dev, tvm::runtime::DeviceAttrKind::kMaxThreadsPerBlock, &ret);
+ int max_threads_per_block = ret;
+
+ device_api->GetAttr(dev, tvm::runtime::DeviceAttrKind::kWarpSize, &ret);
+ int warp_size = ret;
+
+ if (warp_size == 1) {
+        LOG(WARNING) << "Warp size 1 is not recommended for OpenCL devices. Tuning might crash or get stuck";
+ }
+
+ int max_vthread_extent = warp_size / 4;
Review comment:
Sorry, I just came back from vacation. I want to check `max_vthread_extent` here. As I wrote in the tutorial https://github.com/apache/tvm/blob/main/gallery/how_to/tune_with_autoscheduler/tune_network_mali.py#L188-L194: `max_vthread_extent = int(dev.warp_size / 4) if int(dev.warp_size / 4) > 1 else dev.warp_size`. If `warp_size` is 1, the current code sets `max_vthread_extent` to 0, and previous experiments showed tuning then gets stuck or crashes. @masahi
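For reference, the tutorial's fallback can be sketched as a small helper (the function name `compute_max_vthread_extent` is illustrative, not part of TVM's API; it mirrors the one-line expression quoted above, guarding against a zero extent when `warp_size` is small):

```python
def compute_max_vthread_extent(warp_size: int) -> int:
    """Mirror of the tune_network_mali.py expression:
    use warp_size // 4, but fall back to warp_size itself when the
    quotient is 0 or 1, so the extent can never become 0 (which the
    review above reports makes tuning hang or crash)."""
    candidate = warp_size // 4
    return candidate if candidate > 1 else warp_size
```

With this guard, a Mali-like device with `warp_size == 1` gets an extent of 1 instead of the 0 produced by a bare `warp_size / 4`.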