discuss-archive
Messages by Thread
Re: [PR] [DLPACK] Upgrade DLPack Exchange API to pass by capsule [tvm-ffi]
via GitHub
Re: [PR] [DLPACK] Upgrade DLPack Exchange API to pass by capsule [tvm-ffi]
via GitHub
[PR] [TIR] Fix tir.LowerIntrin check failed additional_info.size() == new_size (34 vs. 33) [tvm]
via GitHub
Re: [PR] [TIR] Fix tir.LowerIntrin check failed additional_info.size() == new_size [tvm]
via GitHub
Re: [PR] [TIR] Fix tir.LowerIntrin check failed additional_info.size() == new_size [tvm]
via GitHub
Re: [I] [Bug] Incorrect example for Schedule.decompose_reduction [tvm]
via GitHub
[PR] [MISC] Remove unused TVMC configs [tvm]
via GitHub
Re: [PR] [MISC] Remove unused TVMC configs [tvm]
via GitHub
Re: [PR] [MISC] Remove unused TVMC configs [tvm]
via GitHub
[PR] [Pass] Add DumpIR pass instrument to save IR snapshots [tvm]
via GitHub
Re: [PR] [Pass] Add DumpIR pass instrument to save IR snapshots [tvm]
via GitHub
Re: [PR] [Pass] Add DumpIR pass instrument to save IR snapshots [tvm]
via GitHub
Re: [PR] [Pass] Add DumpIR pass instrument to save IR snapshots [tvm]
via GitHub
Re: [PR] [Pass] Add DumpIR pass instrument to save IR snapshots [tvm]
via GitHub
Re: [PR] [Pass] Add DumpIR pass instrument to save IR snapshots [tvm]
via GitHub
[PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
[PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
[PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add Clipping pattern support [tvm]
via GitHub
[PR] [MISC] Fix compilation warnings [tvm]
via GitHub
Re: [PR] [MISC] Fix compilation warnings [tvm]
via GitHub
Re: [PR] [MISC] Fix compilation warnings [tvm]
via GitHub
Re: [I] [feature][relax.frontend.torch] Missing lowering for anti-aliased bilinear resize (aten::_upsample_bilinear2d_aa.vec) in from_exported_program [tvm]
via GitHub
[PR] Support specifying decimals for _round [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Support specifying decimals for _round [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Support specifying decimals for _round [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Support specifying decimals for _round [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Support specifying decimals for _round [tvm]
via GitHub
[PR] [Relax][PyTorch] Enhance data type handling in FX graph translator [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enhance data type handling in FX graph translator [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enhance data type handling in FX graph translator [tvm]
via GitHub
[PR] Add cuda DeviceGuard [tvm-ffi]
via GitHub
Re: [PR] Add cuda DeviceGuard [tvm-ffi]
via GitHub
Re: [PR] Add cuda DeviceGuard [tvm-ffi]
via GitHub
Re: [PR] Add cuda DeviceGuard [tvm-ffi]
via GitHub
[GH] (tvm-ffi/overload): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/overload): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/overload): Workflow run "CI" failed!
GitBox
[PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
Re: [PR] [Feature] Support dynamic-style overload for FFI object types [tvm-ffi]
via GitHub
[I] !!!! When calling a C++ backend from Python, FFI is 50% to 100% slower than pybind11. [tvm-ffi]
via GitHub
Re: [I] !!!! When calling a C++ backend from Python, FFI is 50% to 100% slower than pybind11. [tvm-ffi]
via GitHub
Re: [I] !!!! When calling a C++ backend from Python, FFI is 50% to 100% slower than pybind11. [tvm-ffi]
via GitHub
Re: [I] [Bug] InternalError: Squeeze dimension check too strict compared to PyTorch behavior [tvm]
via GitHub
[PR] [TIR] Update function signatures for decompose_reduction [tvm]
via GitHub
Re: [PR] [TIR] Update function signatures for decompose_reduction [tvm]
via GitHub
Re: [PR] [TIR] Update function signatures for decompose_reduction [tvm]
via GitHub
[PR] [TVMScript] Add test for TIR macro block name suffix handling [tvm]
via GitHub
Re: [PR] [TVMScript] Add test for TIR macro block name suffix handling [tvm]
via GitHub
Re: [PR] [TVMScript] Add test for TIR macro block name suffix handling [tvm]
via GitHub
[PR] [Relax][PyTorch] Add extra_buffers support for exported program [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add extra_buffers support for exported program [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add extra_buffers support for exported program [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add extra_buffers support for exported program [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add extra_buffers support for exported program [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add extra_buffers support for exported program [tvm]
via GitHub
[PR] Fix Hugging Face issue in exported_program_translator [tvm]
via GitHub
Re: [PR] Fix Hugging Face issue in exported_program_translator [tvm]
via GitHub
Re: [PR] Fix Hugging Face issue in exported_program_translator [tvm]
via GitHub
Re: [PR] Fix Hugging Face issue in exported_program_translator [tvm]
via GitHub
[GH] (tvm/fix/fuse-reduction-epilogue-relu): Workflow run "PR" is working again!
GitBox
[PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
[PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
Re: [PR] [TIR][Schedule] FuseReductionEpilogue: Add ReLU support [tvm]
via GitHub
[Apache TVM Discuss] [Questions] TVM building issue on AArch64 architecture
andy via Apache TVM Discuss
[Apache TVM Discuss] [Questions] TVM building issue on AArch64 architecture
alen via Apache TVM Discuss
[Apache TVM Discuss] [Questions] TVM building issue on AArch64 architecture
andy via Apache TVM Discuss
Re: [I] [Bug] "Duplicated block name" when invoking tune_tir with tvm.script.tir.Macro [tvm]
via GitHub
[PR] Add support for antialiased bilinear upsampling [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for antialiased bilinear upsampling [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for antialiased bilinear upsampling [tvm]
via GitHub
Re: [I] [Bug] VerifyStream::Verify causes dereferencing an invalid pointer [tvm]
via GitHub
Re: [I] [Bug] VerifyStream::Verify causes dereferencing an invalid pointer [tvm]
via GitHub
[PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for sparse matrix multiplication [tvm]
via GitHub
Re: [PR] [RELAX][PASS] Annotate Custom Scope layout pass for Adreno GPU [tvm]
via GitHub
Re: [PR] [RELAX][PASS] Annotate Custom Scope layout pass for Adreno GPU [tvm]
via GitHub
[PR] [Fix] Cast Shape from DLTensor* [tvm-ffi]
via GitHub
Re: [PR] [Fix] Cast Shape from DLTensor* [tvm-ffi]
via GitHub
Re: [PR] [Fix] Cast Shape from DLTensor* [tvm-ffi]
via GitHub
Re: [PR] [Fix] Cast Shape from DLTensor* [tvm-ffi]
via GitHub
Re: [PR] [Fix] Cast Shape from DLTensor* [tvm-ffi]
via GitHub
Re: [PR] [Fix] Cast Shape from DLTensor* [tvm-ffi]
via GitHub
[GH] (tvm/fix-binary-op-promotion): Workflow run "CI" failed!
GitBox
[PR] [CI] Use glob for `conda/build-environment.yaml` in cache key [tvm]
via GitHub
Re: [PR] [CI] Use glob for `conda/build-environment.yaml` in cache key [tvm]
via GitHub
Re: [PR] [CI] Use glob for `conda/build-environment.yaml` in cache key [tvm]
via GitHub
[PR] [Relax][PyTorch] Add binary operation dtype promotion following PyTorch rules in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add binary operation dtype promotion following PyTorch rules in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add binary operation dtype promotion following PyTorch rules in ExportedProgram frontend [tvm]
via GitHub
[GH] (tvm/ep-mul-op): Workflow run "CI" failed!
GitBox
[PR] [Relax][PyTorch] Add support for `mul` operator for ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `mul` operator in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `mul` operator in ExportedProgram frontend [tvm]
via GitHub
[GH] (tvm/main): Workflow run "CI" is working again!
GitBox
[GH] (tvm/main): Workflow run "CI" is working again!
GitBox
[GH] (tvm/main): Workflow run "CI" is working again!
GitBox
[GH] (tvm/main): Workflow run "CI" is working again!
GitBox
[GH] (tvm/main): Workflow run "CI" is working again!
GitBox
[GH] (tvm/investigate-macos-ci-error): Workflow run "CI" is working again!
GitBox
[GH] (tvm/investigate-macos-ci-error): Workflow run "CI" is working again!
GitBox
[GH] (tvm/investigate-macos-ci-error): Workflow run "CI" is working again!
GitBox
[GH] (tvm/scatter-negative-slicing): Workflow run "CI" failed!
GitBox
[GH] (tvm/scatter-negative-slicing): Workflow run "CI" failed!
GitBox
[GH] (tvm/add-copy-broadcast-support): Workflow run "CI" failed!
GitBox
[GH] (tvm/add-copy-broadcast-support): Workflow run "CI" failed!
GitBox
[PR] [CI] Update `actions/cache` to v4 in setup action [tvm]
via GitHub
Re: [PR] [CI] Update `actions/cache` to v4 in setup action [tvm]
via GitHub
Re: [PR] [CI] Update `actions/cache` to v4 in setup action [tvm]
via GitHub
Re: [PR] [CI] Update `actions/cache` to v4 in setup action [tvm]
via GitHub
[PR] [Relax][PyTorch] Add negative slicing support in `slice_scatter` operation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add negative slicing support in `slice_scatter` operation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add negative slicing support in `slice_scatter` operation [tvm]
via GitHub
[PR] [Relax][PyTorch] Add broadcast support for copy operation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add broadcast support for `copy` operation [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add broadcast support for `copy` operation [tvm]
via GitHub
[PR] Fix BufferError when converting PyTorch models with sparse tensors [tvm]
via GitHub
Re: [PR] Fix BufferError when converting PyTorch models with sparse tensors [tvm]
via GitHub
Re: [PR] Fix BufferError when converting PyTorch models with sparse tensors [tvm]
via GitHub
Re: [PR] Fix BufferError when converting PyTorch models with sparse tensors [tvm]
via GitHub
[PR] [#17876]Fix TVM crashes with default relax pipeline when opt_level=1: InternalError: Check failed: (slot->value_computed) is false [tvm]
via GitHub
Re: [PR] [Relax][Backend] Fix TVM crashes with default relax pipeline when opt_level=1: InternalError: Check failed: (slot->value_computed) is false [tvm]
via GitHub
Re: [PR] [Relax][Backend] Fix TVM crashes with default relax pipeline when opt_level=1: InternalError: Check failed: (slot->value_computed) is false [tvm]
via GitHub
Re: [PR] [Relax][Backend] Fix TVM crashes with default relax pipeline when opt_level=1: InternalError: Check failed: (slot->value_computed) is false [tvm]
via GitHub
Re: [PR] [Relax][Backend] Fix TVM crashes with default relax pipeline when opt_level=1: InternalError: Check failed: (slot->value_computed) is false [tvm]
via GitHub
Re: [I] [Bug] Installation process incorrect/not working [tvm]
via GitHub
Re: [I] [Bug] Installation process incorrect/not working [tvm]
via GitHub
Re: [I] [Bug] Installation process incorrect/not working [tvm]
via GitHub
[GH] (tvm/main): Workflow run "tvm-bot" failed!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" failed!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" failed!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" failed!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" failed!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" failed!
GitBox
[GH] (tvm/add-as_strided-ep): Workflow run "CI" failed!
GitBox
[GH] (tvm/add-as_strided-ep): Workflow run "CI" failed!
GitBox
[GH] (tvm/add-as_strided-ep): Workflow run "CI" failed!
GitBox
[GH] (tvm/add-as_strided-ep): Workflow run "CI" failed!
GitBox
[PR] [Relax][PyTorch] Add support for `as_strided` operator in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `as_strided` operator in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `as_strided` operator in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `as_strided` operator in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `as_strided` operator in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `as_strided` operator in ExportedProgram frontend [tvm]
via GitHub
[GH] (tvm/main): Workflow run "CI" failed!
GitBox
[GH] (tvm/main): Workflow run "CI" failed!
GitBox
[GH] (tvm/main): Workflow run "CI" failed!
GitBox
[GH] (tvm/main): Workflow run "CI" failed!
GitBox
[GH] (tvm/main): Workflow run "CI" failed!
GitBox
[GH] (tvm/main): Workflow run "CI" failed!
GitBox
[GH] (tvm/main): Workflow run "CI" failed!
GitBox
Re: [I] [Bug] Segmentation fault when converting PyTorch index assignment operation with torch.export [tvm]
via GitHub
[GH] (tvm/support-count_include_pad-avg_pool2d): Workflow run "CI" failed!
GitBox
[PR] [Relax][PyTorch] Enhance handling of unbounded upper bound constraints [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enhance handling of unbounded upper bound constraints [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enhance handling of unbounded upper bound constraints [tvm]
via GitHub
[PR] Enhance index_put support for multi-dimensional indices and broadcasting [tvm]
via GitHub