gemini-code-assist[bot] commented on code in PR #96:
URL: https://github.com/apache/tvm-ffi/pull/96#discussion_r2422430184


##########
python/tvm_ffi/_optional_torch_c_dlpack.py:
##########
@@ -508,21 +531,45 @@ def load_torch_c_dlpack_extension() -> Any:
     *out = at::toDLPackImpl<DLManagedTensorVersioned>(tensor);
     return 0;
   } catch (const std::exception& e) {
-    SetError(error_ctx, "TorchDLPackTensorAllocator", e.what());
+    SetError(error_ctx, "TorchDLPackManagedTensorAllocator", e.what());
     return -1;
   }
 }
 
-int64_t TorchDLPackFromPyObjectPtr() {
-  return reinterpret_cast<int64_t>(TorchDLPackFromPyObject);
+int TorchDLPackCurrentWorkStream(DLDeviceType device_type, int32_t device_id, void** out_stream) {
+  try {
+#ifdef BUILD_WITH_CUDA
+    if (device_type != kDLCPU) {
+      *out_stream = at::cuda::getCurrentCUDAStream(device_id).stream();
+    }

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The condition `device_type != kDLCPU` is too broad: it assumes that every non-CPU device is a CUDA-like device for which `at::cuda::getCurrentCUDAStream` is valid. That function is specific to CUDA and ROCm/HIP. For other device types that PyTorch supports (e.g., XPU and Metal, which map to `kDLOneAPI` and `kDLMetal`), calling it is incorrect and could lead to undefined behavior.
   
   The check should be restricted to the device types that are actually compatible with `at::cuda::getCurrentCUDAStream`:
   
   ```suggestion
       if (device_type == kDLCUDA || device_type == kDLROCM) {
         *out_stream = at::cuda::getCurrentCUDAStream(device_id).stream();
       }
   ```
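
   For reference, a minimal sketch of how the whole function could look with the narrowed check. This is not the PR's code: the signature follows the truncated diff above, the includes are assumptions, and the error path is only hinted at since the diff cuts off before the `catch` block.

   ```cpp
   #include <cstdint>           // int32_t
   #include <exception>         // std::exception
   #include <dlpack/dlpack.h>   // DLDeviceType, kDLCUDA, kDLROCM
   #ifdef BUILD_WITH_CUDA
   #include <ATen/cuda/CUDAContext.h>  // at::cuda::getCurrentCUDAStream
   #endif

   int TorchDLPackCurrentWorkStream(DLDeviceType device_type, int32_t device_id,
                                    void** out_stream) {
     try {
       *out_stream = nullptr;  // CPU and unsupported devices expose no stream
   #ifdef BUILD_WITH_CUDA
       // getCurrentCUDAStream is valid only for CUDA, and for HIP when PyTorch
       // is built against ROCm (where kDLROCM applies); backends such as
       // kDLOneAPI (XPU) or kDLMetal must never reach this call.
       if (device_type == kDLCUDA || device_type == kDLROCM) {
         *out_stream = at::cuda::getCurrentCUDAStream(device_id).stream();
       }
   #endif
       return 0;
     } catch (const std::exception& e) {
       (void)e;  // the real extension presumably reports e.what() through its
                 // error context, as the allocators above do
       return -1;
     }
   }
   ```

   Keeping `*out_stream = nullptr` as the default means every backend the CUDA API cannot serve still gets a well-defined "no stream" result instead of undefined behavior.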



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

