discuss-archive
Messages by Thread
Re: [I] [Bug] Error converting operator Attention [tvm]
via GitHub
Re: [I] [Bug] Error converting operator Attention [tvm]
via GitHub
Re: [I] [Bug] Error converting operator Attention [tvm]
via GitHub
Re: [I] [Bug] Error converting operator Attention [tvm]
via GitHub
[PR] [Relax][Torch] Fix from_exported_program crash with FakeTensor and lifted tensors [tvm]
via GitHub
Re: [PR] [Relax][Torch] Fix from_exported_program crash with FakeTensor and lifted tensors [tvm]
via GitHub
Re: [PR] [Relax][Torch] Fix from_exported_program crash with FakeTensor and lifted tensors [tvm]
via GitHub
Re: [PR] [Relax][Torch] Fix from_exported_program crash with FakeTensor and lifted tensors [tvm]
via GitHub
Re: [PR] [Relax][Torch] Fix from_exported_program crash with FakeTensor and lifted tensors [tvm]
via GitHub
Re: [PR] [Relax][Torch] Fix from_exported_program crash with FakeTensor and lifted tensors [tvm]
via GitHub
Re: [PR] [Relax][Torch] Fix from_exported_program crash with FakeTensor and lifted tensors [tvm]
via GitHub
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/embed-cubin-v2): Workflow run "CI" failed!
GitBox
[PR] [WIP] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
Re: [PR] [Enhancement] Refactor cubin launcher [tvm-ffi]
via GitHub
[PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
[PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
Re: [PR] [Docs] Fix e2e_opt_model tutorial for GPU deployment [tvm]
via GitHub
[GH] (tvm-ffi/torch-c-dlpack-ext-windows-action): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/torch-c-dlpack-ext-windows-action): Workflow run "CI" failed!
GitBox
[PR] [ADDON] Merge torch c dlpack actions [tvm-ffi]
via GitHub
Re: [PR] [ADDON] Merge torch c dlpack actions [tvm-ffi]
via GitHub
Re: [PR] [ADDON] Merge torch c dlpack actions [tvm-ffi]
via GitHub
Re: [I] [Bug][relax.frontend.torch] Segfault in relax::Tuple when importing/building a torch.export program that uses batch-wise advanced indexing write (aten.index_put_) [tvm]
via GitHub
[GH] (tvm/support-rev-in-cpp-lint): Workflow run "CI" is working again!
GitBox
[GH] (tvm/support-rev-in-cpp-lint): Workflow run "CI" failed!
GitBox
[PR] [CI] Update cpplint script to support revision-based linting [tvm]
via GitHub
Re: [PR] [CI] Update cpplint script to support revision-based linting [tvm]
via GitHub
Re: [PR] [CI] Update cpplint script to support revision-based linting [tvm]
via GitHub
[GH] (tvm-ffi/stl-fix): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/stl-fix): Workflow run "CI" failed!
GitBox
[PR] [TEST] Further fix potential use after free issue [tvm-ffi]
via GitHub
Re: [PR] [TEST] Further fix potential use after free issue [tvm-ffi]
via GitHub
Re: [PR] [TEST] Further fix potential use after free issue [tvm-ffi]
via GitHub
Re: [PR] [TEST] Further fix potential use after free issue [tvm-ffi]
via GitHub
Re: [PR] [TEST] Further fix potential use after free issue [tvm-ffi]
via GitHub
Re: [PR] [TEST] Further fix potential use after free issue [tvm-ffi]
via GitHub
Re: [PR] [TEST] Further fix potential use after free issue [tvm-ffi]
via GitHub
[Apache TVM Discuss] [Questions] How to bring an ASIC Accelerator to TVM
Luna via Apache TVM Discuss
[Apache TVM Discuss] [Questions] How to bring an ASIC Accelerator to TVM
Luna via Apache TVM Discuss
[Apache TVM Discuss] [Questions] How to bring an ASIC Accelerator to TVM
markii via Apache TVM Discuss
[PR] [ARITH] Fix InternalError: Check failed: (eval_vec_) is false [tvm]
via GitHub
Re: [PR] [ARITH] Fix InternalError: Check failed: (eval_vec_) is false [tvm]
via GitHub
Re: [PR] [ARITH] Fix InternalError: Check failed: (eval_vec_) is false [tvm]
via GitHub
Re: [PR] [ARITH] Fix InternalError: Check failed: (eval_vec_) is false [tvm]
via GitHub
Re: [PR] [ARITH] Fix InternalError: Check failed: (eval_vec_) is false [tvm]
via GitHub
Re: [PR] [ARITH] Fix InternalError: Check failed: (eval_vec_) is false [tvm]
via GitHub
[PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for masked_select [tvm]
via GitHub
Re: [I] [Bug] Segfault when instantiating abstract `SearchStrategy()` in `TuneContext` [tvm]
via GitHub
[Apache TVM Discuss] [Questions] Error with tune_tir
A333222 via Apache TVM Discuss
[Apache TVM Discuss] [Questions] Does tvm Bring Tensorflow support to tvm.relax.frontend from v0.21.0
Mankala Srujan via Apache TVM Discuss
[GH] (tvm-ffi/2025-11-29/init): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-11-29/init): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-11-29/init): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-11-30/segfault-workaround): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-11-30/segfault-workaround): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-11-30/segfault-workaround): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-11-30/segfault-workaround): Workflow run "CI" failed!
GitBox
[PR] Workaround Segfaults and clang-tidy warnings [tvm-ffi]
via GitHub
Re: [PR] Workaround Segfaults and clang-tidy warnings [tvm-ffi]
via GitHub
Re: [PR] Workaround Segfaults and clang-tidy warnings [tvm-ffi]
via GitHub
Re: [PR] Workaround Segfaults and clang-tidy warnings [tvm-ffi]
via GitHub
Re: [PR] Workaround Segfaults and clang-tidy warnings [tvm-ffi]
via GitHub
[GH] (tvm-ffi/2025-11-30/version-bump): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-11-30/version-bump): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-11-30/version-bump): Workflow run "CI" failed!
GitBox
[PR] chore(release): Version bump after v0.1.4 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.4 release [tvm-ffi]
via GitHub
[GH] (tvm-ffi/2025-11-29/init): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-11-29/init): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-11-29/init): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-11-29/init): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-11-29/init): Workflow run "CI" failed!
GitBox
[PR] [WIP][Stubgen] Generating `_ffi_api.py` and `__init__.py` [tvm-ffi]
via GitHub
Re: [PR] [WIP][Stubgen] Introduce --init-path for package-level generation [tvm-ffi]
via GitHub
Re: [PR] [WIP][Stubgen] Introduce --init-path for package-level generation [tvm-ffi]
via GitHub
Re: [PR] feat(stubgen): Package generation with `--init-*` flags [tvm-ffi]
via GitHub
Re: [PR] feat(stubgen): Package generation with `--init-*` flags [tvm-ffi]
via GitHub
Re: [PR] feat(stubgen): Package generation with `--init-*` flags [tvm-ffi]
via GitHub
[I] [RESULT][VOTE] Release Apache TVM FFI v0.1.4-rc0 [tvm-ffi]
via GitHub
Re: [I] [RESULT][VOTE] Release Apache TVM FFI v0.1.4-rc0 [tvm-ffi]
via GitHub
[PR] [Bugfix] Prevent segfault when instantiating abstract SearchStrategy [tvm]
via GitHub
Re: [PR] [Bugfix] Prevent segfault when instantiating abstract SearchStrategy [tvm]
via GitHub
Re: [PR] [Bugfix] Prevent segfault when instantiating abstract SearchStrategy [tvm]
via GitHub
Re: [PR] [Bugfix] Prevent segfault when instantiating abstract SearchStrategy [tvm]
via GitHub
[PR] fix: Use-after-move when handling std::tuple [tvm-ffi]
via GitHub
Re: [PR] fix: Use-after-move when handling std::tuple [tvm-ffi]
via GitHub
Re: [PR] fix: Use-after-move when handling std::tuple [tvm-ffi]
via GitHub
Re: [PR] fix: Use-after-move when handling std::tuple [tvm-ffi]
via GitHub
[PR] [Relax][PyTorch] Fix index_put with broadcast indices [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix index_put with broadcast indices [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix index_put with broadcast indices [tvm]
via GitHub
Re: [I] [Bug] [Relax][DLPack] 0-D `bool` from PyTorch via DLPack mis-typed at VM entry → dtype check failure [tvm]
via GitHub
Re: [I] [Bug] [Relax][DLPack] 0-D `bool` from PyTorch via DLPack mis-typed at VM entry → dtype check failure [tvm]
via GitHub
Re: [I] [Bug] [Relax][DLPack] 0-D `bool` from PyTorch via DLPack mis-typed at VM entry → dtype check failure [tvm]
via GitHub
[Apache TVM Discuss] [Questions] New releases on PyPI?
Yunho Cho via Apache TVM Discuss
[Apache TVM Discuss] [Questions] New releases on PyPI?
tqchen via Apache TVM Discuss
Re: [I] [bug][relax.frontend.torch] from_exported_program KeyError for non-persistent buffer bert.embeddings.position_ids (HuggingFace BERT via torch.export) [tvm]
via GitHub
[PR] [Relax][PyTorch] Implement bidirectional GRU [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for bidirectional GRU [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for bidirectional GRU [tvm]
via GitHub
[I] [Docs] IRModule Tutorial Error [tvm]
via GitHub
Re: [I] [Docs] IRModule Tutorial Error [tvm]
via GitHub
Re: [I] [Docs] IRModule Tutorial Error [tvm]
via GitHub
Re: [I] [Docs] IRModule Tutorial Error [tvm]
via GitHub
Re: [I] [Docs] IRModule Tutorial Error [tvm]
via GitHub
[PR] [Relax][PyTorch] Implement boolean tensor support for max operation and add corresponding test case [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add boolean tensor support for max operation and corresponding test case [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add boolean tensor support for max operation and corresponding test case [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add boolean tensor support for max operation and corresponding test case [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add boolean tensor support for max operation and corresponding test case [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add boolean tensor support for max operation and corresponding test case [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add boolean tensor support for max operation and corresponding test case [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add boolean tensor support for max operation and corresponding test case [tvm]
via GitHub
[PR] [Relax][PyTorch] Add support for binary scalar operations in ExportedProgram frontend and corresponding tests [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for binary scalar operations in ExportedProgram frontend and corresponding tests [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for binary scalar operations in ExportedProgram frontend and corresponding tests [tvm]
via GitHub
[PR] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Fix mma tensorize error [tvm]
via GitHub
[PR] [Relax][PyTorch] Add support for non-persistent buffers in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for non-persistent buffers in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for non-persistent buffers in ExportedProgram frontend [tvm]
via GitHub
[I] [Bug] RPCError: Cannot convert from type `DLTensor*` to `ffi.Shape` when running "How to: Cross Compilation and RPC" [tvm]
via GitHub