gemini-code-assist[bot] commented on code in PR #110: URL: https://github.com/apache/tvm-ffi/pull/110#discussion_r2427088676
##########
python/tvm_ffi/_optional_torch_c_dlpack.py:
##########
@@ -607,3 +607,24 @@ def load_torch_c_dlpack_extension() -> Any:
 # keep alive
 _mod = load_torch_c_dlpack_extension()
+
+
+def patch_torch_cuda_stream_protocol() -> Any:
+    """Load the torch cuda stream protocol for older versions of torch."""
+    try:
+        import torch  # noqa: PLC0415
+
+        if not torch.cuda.is_available():
+            return
+        if not hasattr(torch.cuda.Stream, "__cuda_stream__"):
+
+            def __torch_cuda_stream__(self: torch.cuda.Stream) -> tuple[int, torch.cuda.Stream]:
+                """Return the device id and the cuda stream."""
+                return (0, self.cuda_stream)

Review Comment:
The implementation of `__torch_cuda_stream__` has a couple of issues:

1. **Hardcoded Device ID**: It hardcodes the device ID to `0`. This is incorrect for streams created on devices other than `cuda:0` and will lead to errors or incorrect behavior in multi-GPU setups. You should use `self.device.index` to retrieve the correct device ID from the stream object.
2. **Incorrect Return Type Hint**: The return type hint is `tuple[int, torch.cuda.Stream]`. However, `self.cuda_stream` is an integer (a pointer to the CUDA stream), not a `torch.cuda.Stream` object. The correct type hint should be `tuple[int, int]`.

These issues can cause significant problems for interoperability with other libraries that rely on the `__cuda_stream__` protocol.

```suggestion
            def __torch_cuda_stream__(self: torch.cuda.Stream) -> tuple[int, int]:
                """Return the device id and the cuda stream."""
                return (self.device.index, self.cuda_stream)
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
