ZQPei opened a new pull request #9544: URL: https://github.com/apache/tvm/pull/9544
This PR aims to provide a cleaner way to set the CUDA arch and to register the CUDA compilation callback function. The current way to set the CUDA arch is awkward: according to the troubleshooting topic [Nvcc fatal : Value 'sm_86' is not defined for option 'gpu-architecture'](https://discuss.tvm.apache.org/t/nvcc-fatal-value-sm-86-is-not-defined-for-option-gpu-architecture/11422/7) on the TVM Discuss forum, the only way to set the CUDA arch is `from tvm.autotvm.measure.measure_methods import set_cuda_target_arch` followed by `set_cuda_target_arch('sm_80')`; otherwise the CUDA arch defaults to the maximum compute capability of the GPU device. This is inconvenient, and setting the CUDA arch has nothing to do with autotvm. This PR therefore separates `set_cuda_target_arch` and related functions from autotvm in three steps:

1. `set_cuda_target_arch`, `tvm_callback_cuda_compile`, and related functions are moved to a new file, `cuda_scope.py`, under `tvm.target`. In addition, a `get_cuda_target_arch` function is added to query the current CUDA arch, instead of reading `AutotvmGlobalScope.current.cuda_target_arch` directly.
2. To make setting the CUDA arch more convenient, an `arch` attribute is added to the `tvm.target.Target` class, and an `enter_cuda_scope` call is added to `__enter__`, so the CUDA arch can be set simply via `with target:` or `target.enter_cuda_scope()`.
3. To ensure TVM still runs correctly, the relevant code across TVM is updated to be compatible with these changes.

With this PR, the CUDA arch can be set simply by creating `target = tvm.target.Target("cuda -model=3090 -arch=sm_86")` and then using `with target:` or `target.enter_cuda_scope()`. The `set_cuda_target_arch` function is retained so the CUDA arch can still be set manually. This is more convenient and elegant than the old style.
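The scoped-arch design described above can be sketched in plain Python. This is a standalone illustration of the mechanism, not TVM code: the names `Target`, `set_cuda_target_arch`, `get_cuda_target_arch`, and `enter_cuda_scope`/`exit_cuda_scope` mirror the PR, but the bodies here are a minimal mock, assuming a module-level global holds the current arch and the context manager saves and restores it.

```python
# Minimal sketch of the scoped CUDA-arch design: a module-level global
# holds the current arch, and a Target used as a context manager sets it
# on entry and restores the previous value on exit. Illustrative only.

_current_cuda_arch = None  # stands in for the global scope in cuda_scope.py


def set_cuda_target_arch(arch):
    """Set the current CUDA arch (e.g. 'sm_86') manually."""
    global _current_cuda_arch
    _current_cuda_arch = arch


def get_cuda_target_arch():
    """Query the current CUDA arch instead of reading a global directly."""
    return _current_cuda_arch


class Target:
    """Mock of tvm.target.Target with the PR's proposed `arch` attribute."""

    def __init__(self, arch=None):
        self.arch = arch
        self._saved = None

    def enter_cuda_scope(self):
        # Save the outer arch so nesting works, then install our own.
        self._saved = get_cuda_target_arch()
        if self.arch is not None:
            set_cuda_target_arch(self.arch)

    def exit_cuda_scope(self):
        # Restore whatever arch was active before this scope.
        set_cuda_target_arch(self._saved)

    def __enter__(self):
        self.enter_cuda_scope()
        return self

    def __exit__(self, *exc):
        self.exit_cuda_scope()
        return False


target = Target(arch="sm_86")
with target:
    print(get_cuda_target_arch())  # prints sm_86 inside the scope
print(get_cuda_target_arch())      # prints None: outer value restored
```

Saving and restoring the previous arch in `enter_cuda_scope`/`exit_cuda_scope` is what makes `with target:` safe to nest, which a bare global setter like the old autotvm approach does not guarantee.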
