This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly
in repository https://gitbox.apache.org/repos/asf/tvm.git


    from 3c7adfb1f7 Use `packaging.version.parse` instead of `distutils.version.LooseVersion` (#17173)
     add e5bf56d1f4 [Relay][FQ2I]: Use appropriate dtype while quantizing relay.op.nn.pad… (#17177)
     add 18ff9ff89b [MetaSchedule] Add a testcase for padded conv2d in meta_schedule (#17171)
     add 5d5edd2fd8 [Relax] Integrate cuDNN attention (#17157)
     add 929b8f49ac [Relax][PyTorch] Add support for torch.permute (#17184)
     add 91e9c63b42 [FFI] Add python signal handler for ctypes FFI (#17181)
     add 9b09984636 [Hexagon] [CMake] Fix v66 build issue (#17169)
     add 432f305ce1 Add `packaging` to `python/gen_requirements.py` (#17188)
     add 162d43a997 [Relax][PyTorch] Add support for torch.einsum (#17186)

No new revisions were added by this update.

Summary of changes:
 apps/hexagon_api/CMakeLists.txt                    |   7 +-
 cmake/config.cmake                                 |   7 +
 cmake/modules/CUDA.cmake                           |  16 ++
 cmake/modules/Hexagon.cmake                        |  44 ++--
 python/gen_requirements.py                         |   1 +
 python/tvm/_ffi/_ctypes/packed_func.py             |   2 +
 python/tvm/contrib/cutlass/build.py                |  32 +--
 python/tvm/contrib/cutlass/gen_tensor_op.py        |   4 +-
 python/tvm/relax/backend/contrib/cudnn.py          |  99 +++++++-
 python/tvm/relax/backend/contrib/cutlass.py        |  18 +-
 python/tvm/relax/backend/patterns.py               |  32 ++-
 python/tvm/relax/frontend/nn/op.py                 |   9 +-
 python/tvm/relax/frontend/torch/fx_translator.py   |  13 +
 python/tvm/relax/testing/__init__.py               |   1 +
 python/tvm/relax/testing/attention.py              | 148 ++++++++++++
 .../transform/fake_quantization_to_integer.py      |   2 +-
 python/tvm/topi/testing/__init__.py                |   1 +
 python/tvm/topi/testing/attention_python.py        | 122 ++++++++++
 src/relax/backend/contrib/cudnn/codegen.cc         |  47 ++++
 src/relax/transform/allocate_workspace.cc          |   9 +-
 src/relax/transform/fuse_ops.cc                    |  19 +-
 .../contrib/cudnn/cudnn_frontend/attention.cc      | 124 ++++++++++
 .../contrib/cudnn/cudnn_frontend/attention.h       |  83 +++++++
 src/runtime/contrib/cudnn/cudnn_json_runtime.cc    | 267 ++++++++++++---------
 src/tir/schedule/transform.cc                      |   4 +-
 .../test_meta_schedule_schedule_rule_mlt_tc.py     | 152 ++++++++++++
 tests/python/relax/test_codegen_cudnn.py           |  65 ++++-
 tests/python/relax/test_codegen_cutlass.py         | 213 +++++-----------
 tests/python/relax/test_frontend_from_fx.py        |  52 +++-
 .../relax/test_transform_allocate_workspace.py     |   3 +-
 .../test_transform_merge_composite_functions.py    |   5 +-
 .../test_pass_fake_quantization_to_integer.py      |  14 ++
 32 files changed, 1281 insertions(+), 334 deletions(-)
 create mode 100644 python/tvm/relax/testing/attention.py
 create mode 100644 python/tvm/topi/testing/attention_python.py
 create mode 100644 src/runtime/contrib/cudnn/cudnn_frontend/attention.cc
 create mode 100644 src/runtime/contrib/cudnn/cudnn_frontend/attention.h
