discuss-archive
Messages by Thread
[PR] [REFACTOR][NODE] Use fn_repr inside kRepr lambdas, not ffi::ReprPrint [tvm]
via GitHub
Re: [PR] [REFACTOR][NODE] Use fn_repr inside kRepr lambdas, not ffi::ReprPrint [tvm]
via GitHub
[PR] [REFACTOR][NODE] Migrate ReprPrinter to tvm-ffi __ffi_repr__ mechanism [tvm]
via GitHub
Re: [PR] [REFACTOR][NODE] Migrate ReprPrinter to tvm-ffi __ffi_repr__ mechanism [tvm]
via GitHub
Re: [PR] [REFACTOR][NODE] Migrate ReprPrinter to tvm-ffi __ffi_repr__ mechanism [tvm]
via GitHub
Re: [PR] [REFACTOR][NODE] Migrate ReprPrinter to tvm-ffi __ffi_repr__ mechanism [tvm]
via GitHub
Re: [PR] [REFACTOR][NODE] Migrate ReprPrinter to tvm-ffi __ffi_repr__ mechanism [tvm]
via GitHub
Re: [PR] [REFACTOR][NODE] Migrate ReprPrinter to tvm-ffi __ffi_repr__ mechanism [tvm]
via GitHub
Re: [PR] [REFACTOR][NODE] Migrate ReprPrinter to tvm-ffi __ffi_repr__ mechanism [tvm]
via GitHub
[PR] [REFACTOR][S-TIR] Minimize src/support/ by relocating s_tir-private headers [tvm]
via GitHub
Re: [PR] [REFACTOR][S-TIR] Minimize src/support/ by relocating s_tir-private headers [tvm]
via GitHub
Re: [PR] [REFACTOR][S-TIR] Minimize src/support/ by relocating s_tir-private headers [tvm]
via GitHub
Re: [PR] [REFACTOR][S-TIR] Minimize src/support/ by relocating s_tir-private headers [tvm]
via GitHub
[GH] (tvm/issue-19419): Workflow run "CI" is working again!
GitBox
[PR] [REFACTOR] Phase out src/support/ffi_testing.cc [tvm]
via GitHub
Re: [PR] [REFACTOR] Phase out src/support/ffi_testing.cc [tvm]
via GitHub
[GH] (tvm/macro-cleanup-maybe-unused-tvm-dll): Workflow run "CI" failed!
GitBox
[PR] [REFACTOR] Phase out unreachable contrib/rust_extension.cc [tvm]
via GitHub
Re: [PR] [REFACTOR] Phase out unreachable contrib/rust_extension.cc [tvm]
via GitHub
[PR] [REFACTOR][RUNTIME] Macro cleanup — TVM_DLL alignment, [[maybe_unused]], logging.h legacy macros [tvm]
via GitHub
Re: [PR] [REFACTOR][RUNTIME] Macro cleanup — TVM_DLL alignment, [[maybe_unused]], logging.h legacy macros [tvm]
via GitHub
Re: [PR] [REFACTOR][RUNTIME] Macro cleanup — TVM_DLL alignment, [[maybe_unused]], logging.h legacy macros [tvm]
via GitHub
Re: [PR] [REFACTOR][RUNTIME] Macro cleanup — TVM_DLL alignment, [[maybe_unused]], logging.h legacy macros [tvm]
via GitHub
Re: [PR] [REFACTOR][RUNTIME] Macro cleanup — TVM_DLL alignment, [[maybe_unused]], logging.h legacy macros [tvm]
via GitHub
[PR] [REFACTOR] Move source_utils.h into runtime/opencl [tvm]
via GitHub
Re: [PR] [REFACTOR] Move source_utils.h into runtime/opencl [tvm]
via GitHub
Re: [PR] [REFACTOR] Move source_utils.h into runtime/opencl [tvm]
via GitHub
Re: [PR] [REFACTOR] Move source_utils.h into runtime/opencl [tvm]
via GitHub
[PR] [REFACTOR][RUNTIME] Phase out profiling.h heavy types, rename to timer.h [tvm]
via GitHub
Re: [PR] [REFACTOR][RUNTIME] Phase out profiling.h heavy types, rename to timer.h [tvm]
via GitHub
Re: [PR] [REFACTOR][RUNTIME] Phase out profiling.h heavy types, rename to timer.h [tvm]
via GitHub
Re: [PR] [REFACTOR][RUNTIME] Phase out profiling.h heavy types, rename to timer.h [tvm]
via GitHub
Re: [PR] [REFACTOR][RUNTIME] Phase out profiling.h heavy types, rename to timer.h [tvm]
via GitHub
[PR] [REFACTOR][CODEGEN] Phase out tvm_global_barrier_state and tvm_prepare_global_barrier [tvm]
via GitHub
Re: [PR] [REFACTOR][CODEGEN] Phase out tvm_global_barrier_state and tvm_prepare_global_barrier [tvm]
via GitHub
Re: [PR] [REFACTOR][CODEGEN] Phase out tvm_global_barrier_state and tvm_prepare_global_barrier [tvm]
via GitHub
Re: [PR] [REFACTOR][CODEGEN] Phase out tvm_global_barrier_state and tvm_prepare_global_barrier [tvm]
via GitHub
Re: [PR] [REFACTOR][CODEGEN] Phase out tvm_global_barrier_state and tvm_prepare_global_barrier [tvm]
via GitHub
[PR] [S-TIR][Dlight] Add layered fall back strategy to handle missing attr `max_shared_memory_per_block` [tvm]
via GitHub
Re: [PR] [S-TIR][Dlight] Add layered fall back strategy to handle missing attr `max_shared_memory_per_block` [tvm]
via GitHub
[GH] (tvm-ffi/orcjit-refactor): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/orcjit-refactor): Workflow run "CI" failed!
GitBox
[PR] [REFACTOR][OrcJIT] Isolate LLVM patches under llvm_patches/ [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR][OrcJIT] Isolate LLVM patches under llvm_patches/ [tvm-ffi]
via GitHub
[I] [Bug] `R.isnan`, `R.isinf`, `R.isfinite` crash at build time — CodeGenVM intrinsic not implemented [tvm]
via GitHub
[I] [Bug]`scatter_elements` and `scatter_nd` fail to compile for CUDA target [tvm]
via GitHub
[PR] [BugFix][Relax][ONNX] Honor auto_pad in ConvTranspose converter [tvm]
via GitHub
Re: [PR] [BugFix][Relax][ONNX] Honor auto_pad in ConvTranspose converter [tvm]
via GitHub
Re: [PR] [BugFix][Relax][ONNX] Honor auto_pad in ConvTranspose converter [tvm]
via GitHub
[PR] [REFACTOR] Use FFI types in runtime inline module-create wrapper signatures [tvm]
via GitHub
Re: [PR] [REFACTOR] Use FFI types in runtime inline module-create wrapper signatures [tvm]
via GitHub
Re: [PR] [REFACTOR] Use FFI types in runtime inline module-create wrapper signatures [tvm]
via GitHub
Re: [PR] [REFACTOR] Use FFI types in runtime inline module-create wrapper signatures [tvm]
via GitHub
[PR] [REFACTOR] Use FFI types in runtime inline module-create wrapper signatures [tvm]
via GitHub
Re: [PR] [REFACTOR] Use FFI types in runtime inline module-create wrapper signatures [tvm]
via GitHub
[GH] (tvm-ffi/handle-rawstr-bytearrayptr-callback): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/handle-rawstr-bytearrayptr-callback): Workflow run "CI" is working again!
GitBox
[PR] [FIX] Handle kTVMFFIRawStr / kTVMFFIByteArrayPtr in callback args path [tvm-ffi]
via GitHub
Re: [PR] [FIX] Handle kTVMFFIRawStr / kTVMFFIByteArrayPtr in callback args path [tvm-ffi]
via GitHub
[GH] (tvm-ffi/lang-module-registration): Workflow run "CI" failed!
GitBox
[PR] Lang module registration [tvm-ffi]
via GitHub
Re: [PR] Lang module registration [tvm-ffi]
via GitHub
Re: [PR] Lang module registration [tvm-ffi]
via GitHub
[PR] [REFACTOR] Isolate backend module creation via ffi.Module.create.<kind> registry [tvm]
via GitHub
Re: [PR] [REFACTOR] Isolate backend module creation via ffi.Module.create.<kind> registry [tvm]
via GitHub
Re: [PR] [REFACTOR] Isolate backend module creation via ffi.Module.create.<kind> registry [tvm]
via GitHub
[PR] [release][Dont Squash] Update version to 0.24.0 and 0.25.0.dev on main branch [tvm]
via GitHub
Re: [PR] [release][Dont Squash] Update version to 0.24.0 and 0.25.0.dev on main branch [tvm]
via GitHub
Re: [PR] [release][Dont Squash] Update version to 0.24.0 and 0.25.0.dev on main branch [tvm]
via GitHub
Re: [PR] [release][Dont Squash] Update version to 0.24.0 and 0.25.0.dev on main branch [tvm]
via GitHub
[GH] (tvm/main): Workflow run "npm_and_yarn in /web for underscore - Update #1337517797" failed!
GitBox
[I] Class-decorator form of prim_func_pass broken with tvm-ffi >= 0.1.8 (__slots__ on Object) [tvm-ffi]
via GitHub
Re: [I] Class-decorator form of prim_func_pass broken with tvm-ffi >= 0.1.10 (__slots__ on Object) [tvm-ffi]
via GitHub
Re: [I] Class-decorator form of prim_func_pass broken with tvm-ffi >= 0.1.10 (__slots__ on Object) [tvm-ffi]
via GitHub
Re: [I] Class-decorator form of prim_func_pass broken with tvm-ffi >= 0.1.10 (__slots__ on Object) [tvm-ffi]
via GitHub
[GH] (tvm/jenkins-s3-bundle-refactor): Workflow run "CI" is working again!
GitBox
[GH] (tvm/jenkins-s3-bundle-refactor): Workflow run "CI" is working again!
GitBox
[PR] [CI][REFACTOR] Decouple data.py from Jenkins script and docker images [tvm]
via GitHub
Re: [PR] [CI][REFACTOR] Decouple data.py from Jenkins script and docker images [tvm]
via GitHub
Re: [PR] [CI][REFACTOR] Decouple data.py from Jenkins script and docker images [tvm]
via GitHub
Re: [PR] [CI][REFACTOR] Decouple data.py from Jenkins script and docker images [tvm]
via GitHub
[PR] [REFACTOR] libinfo: anchor dev fallback on calling package root [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: anchor dev fallback on calling package root [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: anchor dev fallback on calling package root [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: anchor dev fallback on calling package root [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] libinfo: add extra_lib_paths parameter for foreign-caller support [tvm-ffi]
via GitHub
[GH] (tvm/split-libtvm-runtime-compiler): Workflow run "CI" failed!
GitBox
[GH] (tvm/split-libtvm-runtime-compiler): Workflow run "CI" failed!
GitBox
[GH] (tvm/split-libtvm-runtime-compiler): Workflow run "CI" failed!
GitBox
[GH] (tvm/split-libtvm-runtime-compiler): Workflow run "CI" failed!
GitBox
[GH] (tvm/split-libtvm-runtime-compiler): Workflow run "CI" failed!
GitBox
[PR] [CMAKE][REFACTOR] Split libtvm.so into libtvm_runtime.so and libtvm_compiler.so [tvm]
via GitHub
Re: [PR] [CMAKE][REFACTOR] Split libtvm.so into libtvm_runtime.so and libtvm_compiler.so [tvm]
via GitHub
Re: [PR] [CMAKE][REFACTOR] Split libtvm.so into libtvm_runtime.so and libtvm_compiler.so [tvm]
via GitHub
[PR] [CMAKE][REFACTOR] Split libtvm.so into libtvm_runtime.so and libtvm_compiler.so [tvm]
via GitHub
Re: [PR] [CMAKE][REFACTOR] Split libtvm.so into libtvm_runtime.so and libtvm_compiler.so [tvm]
via GitHub
Re: [PR] [CMAKE][REFACTOR] Split libtvm.so into libtvm_runtime.so and libtvm_compiler.so [tvm]
via GitHub
Re: [PR] [CMAKE][REFACTOR] Split libtvm.so into libtvm_runtime.so and libtvm_compiler.so [tvm]
via GitHub
Re: [PR] [CMAKE][REFACTOR] Split libtvm.so into libtvm_runtime.so and libtvm_compiler.so [tvm]
via GitHub
[PR] [REFACTOR] Remove tvm.runtime.packed_func and container shims; route via tvm_ffi [tvm]
via GitHub
Re: [PR] [REFACTOR] Remove tvm.runtime.packed_func and container shims; route via tvm_ffi [tvm]
via GitHub
Re: [PR] [REFACTOR] Remove tvm.runtime.packed_func and container shims; route via tvm_ffi [tvm]
via GitHub
Re: [PR] [REFACTOR] Remove tvm.runtime.packed_func and container shims; route via tvm_ffi [tvm]
via GitHub
[PR] [REFACTOR] Phase out include/tvm/runtime/module.h [tvm]
via GitHub
Re: [PR] [REFACTOR] Phase out include/tvm/runtime/module.h [tvm]
via GitHub
[PR] [REFACTOR] Remove runtime/object.py shim and route Object via tvm_ffi [tvm]
via GitHub
Re: [PR] [REFACTOR] Remove runtime/object.py shim and route Object via tvm_ffi [tvm]
via GitHub
[GH] (tvm/doc500): Workflow run "PR" is working again!
GitBox
[PR] [Docs]Refactor BYOC example NPU tutorial [tvm]
via GitHub
Re: [PR] [Docs] Refactor BYOC example NPU tutorial [tvm]
via GitHub
Re: [PR] [Docs] Refactor BYOC example NPU tutorial [tvm]
via GitHub
[GH] (tvm-ffi/main): Workflow run "Publish wheel" is working again!
GitBox
[GH] (tvm-ffi/benchmark-pycall): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/benchmark-pycall): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/benchmark-pycall): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/benchmark-pycall): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/benchmark-pycall): Workflow run "CI" failed!
GitBox
[PR] [REFACTOR] Optimized Python callback path via TVMFFIPyCallbackArgSetter [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] Optimized Python callback path via TVMFFIPyCallbackArgSetter [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] Optimized Python callback path via TVMFFIPyCallbackArgSetter [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] Optimized Python callback path via TVMFFIPyCallbackArgSetter [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] Optimized Python callback path via TVMFFIPyCallbackArgSetter [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] Optimized Python callback path via TVMFFIPyCallbackArgSetter [tvm-ffi]
via GitHub
Re: [PR] [REFACTOR] Optimized Python callback path via TVMFFIPyCallbackArgSetter [tvm-ffi]
via GitHub
[GH] (tvm-ffi/main): Workflow run "Publish wheel" failed!
GitBox
[PR] [S-TIR][MetaSchedule] Make evolutionary search resilient to trace replay failures [tvm]
via GitHub
Re: [PR] [S-TIR][MetaSchedule] Make evolutionary search resilient to trace replay failures [tvm]
via GitHub
Re: [PR] [S-TIR][MetaSchedule] Make evolutionary search resilient to trace replay failures [tvm]
via GitHub
Re: [PR] [S-TIR][MetaSchedule] Make evolutionary search resilient to trace replay failures [tvm]
via GitHub
Re: [PR] [S-TIR][MetaSchedule] Make evolutionary search resilient to trace replay failures [tvm]
via GitHub
Re: [PR] [S-TIR][MetaSchedule] Make evolutionary search resilient to trace replay failures [tvm]
via GitHub
Re: [PR] [S-TIR][MetaSchedule] Make evolutionary search resilient to trace replay failures [tvm]
via GitHub
[GH] (tvm-ffi/fix-typed-method): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/junrushao/2026-04-23/enum-enhance): Workflow run "CI" failed!
GitBox
[PR] feat(enum): add payload enum compatibility behavior [tvm-ffi]
via GitHub
Re: [PR] feat(enum): add payload enum compatibility behavior [tvm-ffi]
via GitHub
Re: [PR] feat(enum): add payload enum compatibility behavior [tvm-ffi]
via GitHub
Re: [PR] feat(enum): add payload enum compatibility behavior [tvm-ffi]
via GitHub
[I] [Bug] [Relax][ONNX] CumSum ignores axis parameter, always reduces along axis 0 [tvm]
via GitHub
[I] [Bug] [Relax][ONNX] Gather with negative indices reads out-of-bounds memory [tvm]
via GitHub
[I] [Bug] [Relax][ONNX] ScatterElements ignores `reduction` attribute, silently produces wrong results [tvm]
via GitHub
[GH] (tvm-ffi/fix-typed-method): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/fix-typed-method): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/fix-typed-method): Workflow run "CI" failed!
GitBox
[PR] [Bug fix] Added typed method registration for py_class [tvm-ffi]
via GitHub
Re: [PR] [Bug fix] Added typed method registration for py_class [tvm-ffi]
via GitHub
Re: [PR] [Bug fix] Added typed method registration for py_class [tvm-ffi]
via GitHub
Re: [PR] [Bug fix] Added typed method registration for py_class [tvm-ffi]
via GitHub
Re: [PR] feat: Added typed method registration for py_class [tvm-ffi]
via GitHub
[PR] [Relax][Frontend][TFLite] Add CUMSUM operator mapping [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Add CUMSUM operator mapping [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Add CUMSUM operator mapping [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Add CUMSUM operator mapping [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Add CUMSUM operator mapping [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Add CUMSUM operator mapping [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Add CUMSUM operator mapping [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Add CUMSUM operator mapping [tvm]
via GitHub
[GH] (tvm/tflite-nms-v5-soft-nms-19412): Workflow run "CI" is working again!
GitBox
[PR] doc: clarify structural_eq semantics and py_class eq/hash interaction [tvm-ffi]
via GitHub
Re: [PR] doc: clarify structural_eq semantics and py_class eq/hash interaction [tvm-ffi]
via GitHub
Re: [PR] doc: clarify structural_eq semantics and py_class eq/hash interaction [tvm-ffi]
via GitHub
[GH] (tvm/tflite-nms-v5-soft-nms-19412): Workflow run "CI" failed!
GitBox
[PR] [Relax][Frontend][TFLite] Fix dynamic FILL/SPLIT_V partial implementations [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Fix dynamic FILL/SPLIT_V partial implementations [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Fix dynamic FILL/SPLIT_V partial implementations [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Fix dynamic FILL/SPLIT_V partial implementations [tvm]
via GitHub
Re: [PR] [Relax][Frontend][TFLite] Fix dynamic FILL/SPLIT_V partial implementations [tvm]
via GitHub
[PR] fix: version mismatch of CUDA symbols. [tvm]
via GitHub
Re: [PR] fix: version mismatch of CUDA symbols. [tvm]
via GitHub
Re: [PR] fix: version mismatch of CUDA symbols [tvm]
via GitHub
Re: [PR] fix: version mismatch of CUDA symbols [tvm]
via GitHub
Re: [PR] fix: version mismatch of CUDA symbols [tvm]
via GitHub
Re: [PR] fix: version mismatch of CUDA symbols [tvm]
via GitHub
Re: [PR] fix: version mismatch of CUDA symbols [tvm]
via GitHub
Re: [PR] [Fix][CUDA] Version compatibility of CUDA symbols [tvm]
via GitHub
Re: [I] [Bug] [CUDA] not compilable with CUDA 11.4 due to missing symbols [tvm]
via GitHub
[PR] [Relax][Frontend][KVCache] Extend masked sequence prefill to causal left-padding [tvm]
via GitHub
Re: [PR] [Relax][Frontend][KVCache] Extend masked sequence prefill to causal left-padding [tvm]
via GitHub
Re: [PR] [Relax][Frontend][KVCache] Extend masked sequence prefill to causal left-padding [tvm]
via GitHub
Re: [PR] [Relax][Frontend][KVCache] Extend masked sequence prefill to causal left-padding [tvm]
via GitHub
Re: [PR] [Relax][Frontend][KVCache] Extend masked sequence prefill to causal left-padding [tvm]
via GitHub
Re: [PR] [Relax][Frontend][KVCache] Extend masked sequence prefill to causal left-padding [tvm]
via GitHub
[PR] [Relax][NN] Use int64 for RoPE apply flag [tvm]
via GitHub
Re: [PR] [Relax][NN] Use int64 for RoPE apply flag [tvm]
via GitHub
Re: [PR] [Relax][NN] Use int64 for RoPE apply flag [tvm]
via GitHub
[PR] Bump poetry from 1.1.13 to 2.3.4 in /docker/python [tvm]
via GitHub
Re: [PR] [Backend][Relax] Add Intel GNA backend for NPU support [tvm]
via GitHub