gemini-code-assist[bot] commented on PR #18719:
URL: https://github.com/apache/tvm/pull/18719#issuecomment-3862366896

   ## Summary of Changes
   
   Hello @tqchen, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull 
request and will post my feedback shortly. In the meantime, here's a summary to 
help you and other reviewers quickly get up to speed!
   
   This pull request undertakes a large-scale refactoring across the codegen test files for many backends (GPU, AArch64, ARM, C-host, LLVM, Metal, OpenCL, ROCm, Vulkan, x86). Its core purpose is to modernize the test suite by migrating the test definitions from the legacy `tvm.te` and `s_tir.Schedule` APIs to the more declarative `tvmscript` syntax; a minimal before/after sketch follows this paragraph. The shift is intended to streamline test logic, reduce boilerplate, and improve the readability and long-term maintainability of the codegen tests.
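
   To make the migration concrete, here is a minimal before/after sketch. This is illustrative only, not code taken from this PR: the kernel, shapes, and binding are invented, and the exact module path of the schedule API may differ on this branch.

   ```python
   # Legacy style (sketch): compute defined with tvm.te, then scheduled
   # imperatively on the resulting IRModule.
   import tvm
   from tvm import te

   A = te.placeholder((128,), name="A", dtype="float32")
   B = te.compute((128,), lambda i: A[i] + 1.0, name="B")
   mod = tvm.IRModule({"main": te.create_prim_func([A, B])})
   sch = tvm.tir.Schedule(mod)
   (i,) = sch.get_loops(sch.get_block("B"))
   sch.bind(i, "threadIdx.x")

   # TVMScript style (sketch): the same kernel written declaratively, with
   # the thread binding embedded directly in the script, so no separate
   # schedule step is needed.
   from tvm.script import ir as I
   from tvm.script import tir as T

   @I.ir_module
   class AddOneModule:
       @T.prim_func
       def main(A: T.Buffer((128,), "float32"), B: T.Buffer((128,), "float32")):
           for i in T.thread_binding(128, thread="threadIdx.x"):
               with T.block("B"):
                   vi = T.axis.spatial(128, i)
                   B[vi] = A[vi] + T.float32(1.0)
   ```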
   
   ### Highlights
   
   * **Migration to TVMScript**: All codegen tests have been refactored to 
explicitly use `tvmscript` (`I.ir_module` and `T.prim_func`) for defining 
primitive functions and their schedules. This replaces the older `tvm.te` 
(Tensor Expression) API and imperative `s_tir.Schedule` manipulations.
   * **Improved Test Maintainability**: The change aims to make tests more 
unit-like, reduce indirect dependencies, and enhance maintainability by 
embedding scheduling directives directly within the `tvmscript` definitions, 
making them more declarative and self-contained.
   * **Introduction of Helper Modules**: New helper functions (e.g., `_reduce_sum_module`, `_binary_op_module`) have been introduced in several test files to encapsulate common `tvmscript` module patterns, promoting code reuse and clarity; a hypothetical sketch of this pattern follows this list.
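
   A hypothetical sketch of that helper pattern — the name `_reduce_sum_module` appears in this summary, but the body, signature, and schedule below are assumptions for illustration, not code from the PR:

   ```python
   from tvm.script import ir as I
   from tvm.script import tir as T


   def _reduce_sum_module(n: int = 128):
       """Return an I.ir_module computing a row-wise sum, with the spatial
       loop already bound to blockIdx.x (illustrative schedule, not the
       PR's actual one)."""

       @I.ir_module
       class Module:
           @T.prim_func
           def main(A: T.Buffer((n, n), "float32"), B: T.Buffer((n,), "float32")):
               for i in T.thread_binding(n, thread="blockIdx.x"):
                   for k in range(n):
                       with T.block("B"):
                           vi, vk = T.axis.remap("SR", [i, k])
                           with T.init():
                               B[vi] = T.float32(0)
                           B[vi] = B[vi] + A[vi, vk]

       return Module
   ```

   Because the TVMScript parser captures enclosing Python variables, a helper like this can parameterize the module (here by `n`) while keeping each test's definition declarative and self-contained.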
   
   
   <details>
   <summary><b>Changelog</b></summary>
   
   * **tests/python/codegen/test_gpu_codegen_allreduce.py**
        * Replaced direct `@T.prim_func` definitions with `I.ir_module` definitions wrapped in helper functions (`_reduce_sum_module`, `_reduce_max_module`).
       * Scheduling logic (e.g., thread binding) is now directly expressed 
within the `T.prim_func` body using `T.thread_binding`.
   * **tests/python/codegen/test_target_codegen_aarch64.py**
       * Removed `tvm.te` imports.
       * Introduced several `_op_module` helper functions to generate 
`I.ir_module` definitions for binary, ternary, unary, boolean, and gather 
operations using `tvmscript`.
   * **tests/python/codegen/test_target_codegen_arm.py**
       * Removed `tvm.te` imports.
       * Updated `test_popcount` and `test_vmlal_s16` to define their compute 
and scheduling logic directly within `tvmscript` `I.ir_module`.
   * **tests/python/codegen/test_target_codegen_bool.py**
       * Removed `tvm.te` imports and `tvm.testing.fixture` decorators.
       * Replaced with `GPUModule` and `CPUModule` defined using `tvmscript` 
`I.ir_module` and `T.prim_func`, embedding scheduling directly.
   * **tests/python/codegen/test_target_codegen_c_host.py**
       * Removed `tvm.te` imports.
       * Updated `test_add`, `test_reinterpret`, `test_ceil`, `test_floor`, and 
`test_round` to use `tvmscript` `I.ir_module` and `T.prim_func`.
   * **tests/python/codegen/test_target_codegen_cross_llvm.py**
       * Removed `tvm.te` imports.
       * Introduced `AddModule` using `tvmscript` `I.ir_module` and 
`T.prim_func` with embedded scheduling, and updated `test_llvm_add_pipeline` to 
use it.
   * **tests/python/codegen/test_target_codegen_cuda.py**
       * Removed `tvm.te` imports.
       * Updated numerous CUDA codegen tests to define their compute and 
scheduling logic directly within `tvmscript` `I.ir_module` and `T.prim_func`.
   * **tests/python/codegen/test_target_codegen_cuda_fp4.py**
       * Removed `tvm.te` imports.
       * Updated various FP4/FP8 related CUDA codegen tests to use `tvmscript` 
`I.ir_module` and `T.prim_func` for module definitions.
   * **tests/python/codegen/test_target_codegen_cuda_fp8.py**
       * Updated FP8 related CUDA codegen tests to use `tvmscript` 
`I.ir_module` and `T.prim_func` for module definitions.
   * **tests/python/codegen/test_target_codegen_device.py**
       * Removed `tvm.te` imports.
       * Updated `test_large_uint_imm` and `test_add_pipeline` to use 
`tvmscript` `I.ir_module` and `T.prim_func`.
   * **tests/python/codegen/test_target_codegen_gpu_common.py**
       * Removed `tvm.te` imports.
       * Updated `test_int_intrin` to use `tvmscript` `I.ir_module` and 
`T.prim_func`.
   * **tests/python/codegen/test_target_codegen_hexagon.py**
       * Removed `tvm.te` imports.
       * Updated `test_basic`, `test_llvm_target_features`, and 
`test_llvm_options` to use `tvmscript` `I.ir_module` and `T.prim_func`.
   * **tests/python/codegen/test_target_codegen_llvm.py**
       * Removed `tvm.te` and `tvm.tir` imports.
       * Converted almost all LLVM codegen tests to use `tvmscript` 
`I.ir_module` and `T.prim_func` for defining test modules, including complex 
scheduling and intrinsic calls.
   * **tests/python/codegen/test_target_codegen_metal.py**
       * Removed `tvm.te` imports.
       * Updated `test_metal_inf_nan` and `test_metal_erf` to use `tvmscript` 
`I.ir_module` and `T.prim_func`.
   * **tests/python/codegen/test_target_codegen_opencl.py**
       * Removed `tvm.te` imports.
       * Updated various OpenCL codegen tests to use `tvmscript` `I.ir_module` 
and `T.prim_func`.
   * **tests/python/codegen/test_target_codegen_rocm.py**
       * Removed `tvm.te` imports.
       * Updated `test_rocm_inf_nan`, `test_rocm_copy`, and 
`test_rocm_vectorize_add` to use `tvmscript` `I.ir_module` and `T.prim_func`.
   * **tests/python/codegen/test_target_codegen_vulkan.py**
       * Removed `tvm.te`, `tvm.tir`, `tvm.topi.math.cast`, 
`tvm.tir.TensorIntrin`, `tvm.tir.IntImm`, `tvm.tir.Cast` imports.
       * Updated various Vulkan codegen tests to use `tvmscript` `I.ir_module` 
and `T.prim_func`.
   * **tests/python/codegen/test_target_codegen_x86.py**
       * Removed `tvm.te` imports.
       * Updated `test_fp16_to_fp32` to use `tvmscript` `I.ir_module` and 
`T.prim_func`.
   </details>
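
   Taken together, a refactored test built on a helper like the `_reduce_sum_module` sketch above might drive a module roughly as follows. This is a hypothetical sketch: the build entry point, function name, target, and tolerances are assumptions, not code from the PR.

   ```python
   import numpy as np
   import tvm
   import tvm.testing


   def check_reduce_sum(target: str = "cuda"):
       # Compile the pre-scheduled TVMScript module; no imperative
       # schedule step remains because the binding lives in the script.
       mod = _reduce_sum_module(128)
       lib = tvm.compile(mod, target=target)
       dev = tvm.device(target, 0)

       a_np = np.random.rand(128, 128).astype("float32")
       a = tvm.nd.array(a_np, dev)
       b = tvm.nd.array(np.zeros(128, dtype="float32"), dev)
       lib["main"](a, b)
       tvm.testing.assert_allclose(b.numpy(), a_np.sum(axis=1), rtol=1e-5)
   ```

   In the actual test files, target availability would be handled by the usual `tvm.testing` decorators; the point of the sketch is only that the module arrives at the build step already scheduled.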
   
   
   
   <details>
   <summary><b>Activity</b></summary>
   
   * The pull request was initiated by tqchen.
   * The primary activity involves a large-scale refactoring of test 
definitions across numerous codegen test files.
   * This refactoring systematically replaces the older `tvm.te` and 
`s_tir.Schedule` APIs with `tvmscript`'s `I.ir_module` and `T.prim_func` 
constructs.
   * The changes reflect a significant modernization effort to align the test 
infrastructure with current TVM best practices for defining and scheduling 
Tensor IR.
   </details>
   
   
   <details>
   <summary><b>Using Gemini Code Assist</b></summary>
   <br>
   
   The full guide for Gemini Code Assist can be found on our [documentation page](https://developers.google.com/gemini-code-assist/docs/review-github-code). Here are some quick tips.
   
   <b>Invoking Gemini</b>
   
   You can request assistance from Gemini at any point by creating a comment 
using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a 
summary of the supported commands on the current page.
   
   Feature | Command | Description
   --- | --- | ---
   Code Review | `/gemini review` | Performs a code review for the current pull 
request in its current state.
   Pull Request Summary | `/gemini summary` | Provides a summary of the current 
pull request in its current state.
   Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments.
   Help | `/gemini help` | Displays a list of available commands.
   
   <b>Customization</b>
   
   To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found [here](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github).
   
   <b>Limitations & Feedback</b>
   
   Gemini Code Assist may make mistakes. Please leave feedback on any suggestions that are incorrect or counterproductive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up [here](https://google.qualtrics.com/jfe/form/SV_2cyuGuTWsEw84yG).
   
   <b>You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the [Gemini Code Assist IDE Extension](https://cloud.google.com/products/gemini/code-assist).</b>
   </details>
   
   
   
   
   [^1]: Review the [Privacy Notices](https://policies.google.com/privacy), 
[Generative AI Prohibited Use 
Policy](https://policies.google.com/terms/generative-ai/use-policy), [Terms of 
Service](https://policies.google.com/terms), and learn how to configure Gemini 
Code Assist in GitHub 
[here](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github).
 Gemini can make mistakes, so double check it and [use code with 
caution](https://support.google.com/legal/answer/13505487).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

