gemini-code-assist[bot] commented on PR #18787:
URL: https://github.com/apache/tvm/pull/18787#issuecomment-3909653131

   ## Summary of Changes
   
   Hello @mshr-h, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull 
request and will post my feedback shortly. In the meantime, here's a summary to 
help you and other reviewers quickly get up to speed!
   
   This pull request aims to resolve recent OpenCL test failures observed in the CI pipeline, which are believed to stem from an incompatibility introduced by a recent CUDA version upgrade. The changes revert the CUDA base image to a known stable version and downgrade the dependent libraries (PyTorch, ONNX, and ONNX Runtime) to versions compatible with it, restoring the reliability of the continuous integration tests.
   
   ### Highlights
   
   * **CUDA Version Reversion**: The base CUDA image used in the CI GPU 
Dockerfile was reverted from version 12.8.1 to 12.4.1 to address suspected 
compatibility issues causing test failures.
   * **PyTorch and ONNX Downgrade**: The installation script for ONNX and 
PyTorch was updated to install specific, older versions of ONNX, ONNX Runtime, 
ONNX Optimizer, PyTorch, and torchvision. These versions are now conditionally 
selected based on the Python version (3.9, 3.11, or other) and the target 
device (CUDA/CPU) to ensure compatibility with the reverted CUDA version.
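   The base-image revert above amounts to a one-line change in the Dockerfile. The surrounding context is an assumption, but given the image tags listed in the changelog, the relevant line would read roughly:

   ```dockerfile
   # Revert the CI GPU base image to the known-good CUDA 12.4.1 tag
   FROM nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04
   ```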
   
   🧠 **New Feature in Public Preview:** You can now enable **Memory** to help 
**Gemini Code Assist** learn from your team's feedback. This makes future code 
reviews more consistent and personalized to your project's style. **Click 
[here](https://codeassist.google/code-review/login) to enable Memory in your 
admin console.**
   
   <details>
   <summary><b>Changelog</b></summary>
   
   * **docker/Dockerfile.ci_gpu**
       * Reverted the CUDA base image from 
`nvidia/cuda:12.8.1-cudnn-devel-ubuntu22.04` to 
`nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04`.
   * **docker/install/ubuntu_install_onnx.sh**
       * Introduced conditional installation logic for ONNX, ONNX Runtime, ONNX 
Optimizer, PyTorch, and torchvision based on the detected Python version (3.9, 
3.11, or other) and the specified device (cuda/cpu).
       * Downgraded ONNX to `1.16.0` (for Python 3.9), `1.17.0` (for Python 
3.11), or `1.12.0` (for other Python versions).
       * Downgraded ONNX Runtime to `1.19.2` (for Python 3.9), `1.20.1` (for 
Python 3.11), or `1.12.1` (for other Python versions).
       * Downgraded ONNX Optimizer to `0.2.7` across all conditional 
installation paths.
       * Downgraded PyTorch to `2.7.0` (for Python 3.9 and 3.11) or `2.4.1` 
(for other Python versions).
       * Downgraded torchvision to `0.22.0` (for Python 3.9 and 3.11) or 
`0.19.1` (for other Python versions).
       * Added `pip3 install future` to the installation steps.
       * Added comments explaining the rationale behind fixing ONNX versions 
due to potential test breakage.
   </details>
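   The conditional version-selection logic described in the changelog can be sketched in shell. The function name `select_onnx_version` and the exact branch structure are illustrative assumptions, not the actual contents of `ubuntu_install_onnx.sh`, but the pins match the ones listed above:

   ```shell
   #!/usr/bin/env bash
   # Hypothetical sketch of the per-Python-version pinning pattern;
   # a real script would derive the version from `python3 --version`
   # and apply the same pattern to onnxruntime, torch, and torchvision.
   set -euo pipefail

   select_onnx_version() {
     local py_version="$1"
     case "${py_version}" in
       3.9)  echo "1.16.0" ;;   # Python 3.9 pin
       3.11) echo "1.17.0" ;;   # Python 3.11 pin
       *)    echo "1.12.0" ;;   # fallback for other Python versions
     esac
   }

   # Example usage:
   # pip3 install "onnx==$(select_onnx_version 3.11)"
   ```

   Pinning through a single helper like this keeps the per-version table in one place, so a future CUDA bump only needs to touch the `case` arms.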
   
   <details>
   <summary><b>Using Gemini Code Assist</b></summary>
   <br>
   
   The full guide for Gemini Code Assist can be found on our [documentation page](https://developers.google.com/gemini-code-assist/docs/review-github-code); here are some quick tips.
   
   <b>Invoking Gemini</b>
   
   You can request assistance from Gemini at any point by creating a comment 
using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a 
summary of the supported commands on the current page.
   
   Feature | Command | Description
   --- | --- | ---
   Code Review | `/gemini review` | Performs a code review for the current pull 
request in its current state.
   Pull Request Summary | `/gemini summary` | Provides a summary of the current 
pull request in its current state.
   Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments.
   Help | `/gemini help` | Displays a list of available commands.
   
   <b>Customization</b>
   
   To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found [here](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github).
   
   <b>Limitations & Feedback</b>
   
   Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up [here](https://google.qualtrics.com/jfe/form/SV_2cyuGuTWsEw84yG).
   
   <b>You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the [Gemini Code Assist IDE Extension](https://cloud.google.com/products/gemini/code-assist).</b>
   </details>
   
   
   [^1]: Review the [Privacy Notices](https://policies.google.com/privacy), [Generative AI Prohibited Use Policy](https://policies.google.com/terms/generative-ai/use-policy), [Terms of Service](https://policies.google.com/terms), and learn how to configure Gemini Code Assist in GitHub [here](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github). Gemini can make mistakes, so double-check it and [use code with caution](https://support.google.com/legal/answer/13505487).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

