comaniac commented on a change in pull request #7063:
URL: https://github.com/apache/tvm/pull/7063#discussion_r539516443



##########
File path: python/tvm/contrib/nvcc.py
##########
@@ -269,15 +270,24 @@ def have_int8(compute_version):
     return False
 
 
-def have_tensorcore(compute_version):
+def have_tensorcore(compute_version=None):
     """Either TensorCore support is provided in the compute capability or not
 
     Parameters
     ----------
     compute_version : str
         compute capability of a GPU (e.g. "7.0")
     """
+    if compute_version is None:
+        if tvm.gpu(0).exist:
+            compute_version = tvm.gpu(0).compute_version
+        else:
+            compute_version = AutotvmGlobalScope.current.cuda_target_arch

Review comment:
       Hmm, this is a good point. Putting the CUDA target arch into the target definitely makes more sense. Then the solution becomes:
   
   ```python
   target = Target("cuda -arch=sm_80")
   ...
   cuda_target_arch = target.attrs["arch"] if "arch" in target.attrs else "sm_75"
   ...
   ```
   
   I found that the CUDA target already has this attribute, so the above solution actually works now.
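   The attribute-lookup-with-fallback logic proposed above can be sketched in isolation. Note this is a minimal sketch: a plain dict stands in for `target.attrs` (the real object is a TVM `Target`), `resolve_cuda_arch` is a hypothetical helper name, and `sm_75` as the default comes from the snippet above:

   ```python
   def resolve_cuda_arch(target_attrs, default_arch="sm_75"):
       """Return the CUDA arch recorded on the target, or a fallback default."""
       # Mirrors: target.attrs["arch"] if "arch" in target.attrs else "sm_75"
       return target_attrs["arch"] if "arch" in target_attrs else default_arch

   # A Target("cuda -arch=sm_80") would carry {"arch": "sm_80"} in its attrs.
   print(resolve_cuda_arch({"arch": "sm_80"}))  # -> sm_80
   print(resolve_cuda_arch({}))                 # -> sm_75
   ```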




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

