commits
Messages by Thread
Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]
via GitHub
Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]
via GitHub
Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]
via GitHub
[I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]
via GitHub
[PR] [TVMJS] Check DataType.NUMPY2STR when saving array [tvm]
via GitHub
Re: [PR] [TVMJS] Check DataType.NUMPY2STR when saving array [tvm]
via GitHub
(tvm) branch main updated: [Relax] [ONNX] Add support for Sign and Not (#17167)
tqchen
(tvm) branch main updated: [Meta Schedule][XGBoost] enable custom callback func test with xgboost>=1.6.0 (#17168)
syfeng
(tvm) branch main updated: [Relax][BugFix] Fix a bug about the IR construction in test file (#17121)
syfeng
Re: [PR] [Relax][BugFix] Fix a bug about the IR construction in test file [tvm]
via GitHub
Re: [PR] [Relax][BugFix] Fix a bug about the IR construction in test file [tvm]
via GitHub
Re: [PR] [Relax][BugFix] Fix a bug about the IR construction in test file [tvm]
via GitHub
[PR] Use `packaging.version.parse` instead of `distutils.version.LooseVersion` [tvm]
via GitHub
Re: [PR] Use `packaging.version.parse` instead of `distutils.version.LooseVersion` [tvm]
via GitHub
[PR] [MetaSchedule]Add a testcase for padded conv2d in meta_schedule [tvm]
via GitHub
Re: [PR] [MetaSchedule]Add a testcase for padded conv2d in meta_schedule [tvm]
via GitHub
Re: [PR] [MetaSchedule]Add a testcase for padded conv2d in meta_schedule [tvm]
via GitHub
Re: [PR] [MetaSchedule]Add a testcase for padded conv2d in meta_schedule [tvm]
via GitHub
[PR] Pass to eliminate redundant branch and overcompute [tvm]
via GitHub
Re: [PR] Pass to eliminate redundant branch and overcompute [tvm]
via GitHub
Re: [PR] Pass to eliminate redundant branch and overcompute [tvm]
via GitHub
Re: [PR] Pass to eliminate redundant branch and overcompute [tvm]
via GitHub
Re: [PR] [Hexagon] [CMake] Fix v66 build issue [tvm]
via GitHub
Re: [PR] [Hexagon] [CMake] Fix v66 build issue [tvm]
via GitHub
Re: [PR] [Hexagon] [CMake] Fix v66 build issue [tvm]
via GitHub
Re: [PR] [Hexagon] [CMake] Fix v66 build issue [tvm]
via GitHub
Re: [PR] [Hexagon] [CMake] Fix v66 build issue [tvm]
via GitHub
Re: [PR] [Hexagon] [CMake] Fix v66 build issue [tvm]
via GitHub
Re: [PR] [Hexagon] [CMake] Fix v66 build issue [tvm]
via GitHub
Re: [PR] [Hexagon] [CMake] Fix v66 build issue [tvm]
via GitHub
[PR] [Meta Schedule][XGBoost] enable callback func test with xgboost>=1.6.0 [tvm]
via GitHub
Re: [PR] [Meta Schedule][XGBoost] enable custom callback func test with xgboost>=1.6.0 [tvm]
via GitHub
[PR] [Relax] [ONNX] Add support for Sign and Not [tvm]
via GitHub
Re: [PR] [Relax] [ONNX] Add support for Sign and Not [tvm]
via GitHub
Re: [PR] [Relax] [ONNX] Add support for Sign and Not [tvm]
via GitHub
Re: [PR] [CI][AArch64] Enable ONNX and PyTorch tests on AArch64 [tvm]
via GitHub
Re: [PR] [docs] Add tvm.driver.tvmc module to Python documentation [tvm]
via GitHub
(tvm) branch nightly updated (b654852b15 -> 73078f11dc)
github-bot
(tvm) branch main updated (51d7c5e47a -> 73078f11dc)
wuwei
(tvm) branch main updated: [Hexagon] Support RPC execution of existing shared lib (#17162)
cbalint13
(tvm) branch main updated: [Relax] Fix fuseOps via pattern (#17160)
tqchen
Re: [PR] [Relax] Fix fuseOps via pattern [tvm]
via GitHub
[I] [Bug] [Relax] [TIR] FuseOps and FuseTIR cannot fuse no-op reshape into other `PrimFunc`s, nor can they eliminate that [tvm]
via GitHub
[I] [Bug] tvm/src/contrib/torch/tvm_module_wrapper/RuntimeModuleWrapperTVM.cc:32:10: fatal error: ../../support/base64.h: No such file or directory [tvm]
via GitHub
Re: [I] [Bug] tvm/src/contrib/torch/tvm_module_wrapper/RuntimeModuleWrapperTVM.cc:32:10: fatal error: ../../support/base64.h: No such file or directory [tvm]
via GitHub
Re: [I] [Bug] tvm/src/contrib/torch/tvm_module_wrapper/RuntimeModuleWrapperTVM.cc:32:10: fatal error: ../../support/base64.h: No such file or directory [tvm]
via GitHub
Re: [I] [Bug] tvm/src/contrib/torch/tvm_module_wrapper/RuntimeModuleWrapperTVM.cc:32:10: fatal error: ../../support/base64.h: No such file or directory [tvm]
via GitHub
[I] [Bug] Building for qualcomm hexagon dsp V66 architeture [tvm]
via GitHub
Re: [I] [Bug] Building for qualcomm hexagon dsp V66 architeture [tvm]
via GitHub
Re: [I] [Bug] Building for qualcomm hexagon dsp V66 architeture [tvm]
via GitHub
Re: [I] [Bug] Building for qualcomm hexagon dsp V66 architeture [tvm]
via GitHub
Re: [I] [Bug] Building for qualcomm hexagon dsp V66 architeture [tvm]
via GitHub
Re: [PR] [Hexagon] Support RPC execution of existing shared lib [tvm]
via GitHub
Re: [PR] [Hexagon] Support RPC execution of existing shared lib [tvm]
via GitHub
Re: [PR] [Hexagon] remove #if defined(__hexagon__) where it is no longer needed [tvm]
via GitHub
[PR] Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
Re: [PR] [TIR]Fix Inlining of Non-Output Consumers in TileWithTensorIntrin with Padding [tvm]
via GitHub
(tvm) tag v0.18.dev0 created (now 9a9386de08)
ysh329
(tvm) tag v0.17.0.rc0 created (now eeebcfa0ad)
ysh329
(tvm) branch v0.17.0 created (now eeebcfa0ad)
ysh329
(tvm) branch nightly updated (f60b08c9a4 -> b654852b15)
github-bot
Re: [PR] [Relay][Pytorch] Add support for `aten::scaled_dot_product_attention` [tvm]
via GitHub
[PR] [KVCache] PagedKVCache Quantization [tvm]
via GitHub
[PR] [TIR][Analyzer] Simplify `x==x` expressions for all dtypes [tvm]
via GitHub
Re: [PR] [TIR][Analyzer] Simplify `x==x` expressions for all dtypes [tvm]
via GitHub
Re: [PR] [TIR][Analyzer] Simplify `x==x` expressions for all dtypes [tvm]
via GitHub
Re: [PR] [TIR][Analyzer] Simplify `x==x` expressions for all dtypes [tvm]
via GitHub
Re: [PR] [TIR][Analyzer] Simplify `x==x` expressions for all dtypes [tvm]
via GitHub
Re: [PR] [TIR][Analyzer] Simplify `x==x` expressions for all dtypes [tvm]
via GitHub
(tvm) branch main updated: [Bugfix] Allow import of TVM when current directory is read-only (#17142)
wuwei
(tvm) branch main updated (f60b08c9a4 -> 9a9386de08)
wuwei
[PR] [Relax] Integrate cuDNN attention [tvm]
via GitHub
Re: [PR] [Relax] Integrate cuDNN attention [tvm]
via GitHub
Re: [PR] [Relax] Integrate cuDNN attention [tvm]
via GitHub
Re: [PR] [Relax] Integrate cuDNN attention [tvm]
via GitHub
Re: [PR] [Relax] Integrate cuDNN attention [tvm]
via GitHub
[PR] [release][Dont Squash] Update version to 0.17.0 and 0.18.0.dev on main branch [tvm]
via GitHub
Re: [PR] [release][Dont Squash] Update version to 0.17.0 and 0.18.0.dev on main branch [tvm]
via GitHub
Re: [I] Features Discussion before Next Release v0.17.0 [tvm]
via GitHub
(tvm) branch nightly updated (32e9a48b1f -> f60b08c9a4)
github-bot
(tvm) branch main updated: [QoL][IR] Provide default constructor for NameSupply/GlobalVarSupply (#17135)
masahi
(tvm) branch main updated: [Utils] Define line-length for "ruff format" (#17125)
sunggg
Re: [PR] [Utils] Define line-length for "ruff format" [tvm]
via GitHub
(tvm) branch main updated (32e9a48b1f -> 641ce71b3c)
masahi
[PR] [CI] Remove lint step from `unity/pr-head` step [tvm]
via GitHub
Re: [PR] [CI] Remove lint step from `unity/pr-head` step [tvm]
via GitHub
Re: [PR] [CI] Remove lint step from `unity/pr-head` step [tvm]
via GitHub
Re: [PR] [CI] Remove lint step from `unity/pr-head` step [tvm]
via GitHub
[PR] [TVMScript] Print small constant arrays as in-line R.const [tvm]
via GitHub
[I] [BUG]: Relay.transform.FoldScaleAxis() Error in folding the SqueezeExcitation structure of mobilenetv3. [tvm]
via GitHub
Re: [I] [BUG]: Relay.transform.FoldScaleAxis() Error in folding the SqueezeExcitation structure of mobilenetv3. [tvm]
via GitHub
(tvm) branch nightly updated (fd7c81de3b -> 32e9a48b1f)
github-bot
(tvm) branch main updated: [WebGPU] Fall back to 256MB for maxBufferSize if needed (#17150)
ruihangl
(tvm) branch main updated: [Fix][TIR] Fix outdated call to create extern buffer in make_extern (#17138)
wuwei
Re: [PR] [WebGPU] Fall back to 256MB for maxBufferSize if needed [tvm]
via GitHub
Re: [PR] [WebGPU] Fall back to 256MB for maxBufferSize if needed [tvm]
via GitHub
[PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
Re: [PR] [Relax] Implement Rewriter class for pattern-rewrite [tvm]
via GitHub
[PR] [Bugfix][Relax] Preserve existing DataflowBlock in ConvertToDataflow [tvm]
via GitHub
Re: [PR] [Bugfix][Relax] Preserve existing DataflowBlock in ConvertToDataflow [tvm]
via GitHub
Re: [PR] [Relax] Fix cublas dispatch for corner cases [tvm]
via GitHub
(tvm) branch main updated: [Relax] Fix cublas dispatch for corner cases (#17139)
tqchen
(tvm) branch main updated: [DOC] Fix typo for the "We utilize the intermediate representation of nn.Graph to convert the OneFlow model to Reley." (#17146)
tqchen
(tvm) branch main updated: [Backend][ROCm] Fix error when building TVM with LLVM 19 (#17141)
tqchen
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
Re: [PR] [Relax] Implement R.ensure_aligned and update memory planning for R.view [tvm]
via GitHub
[I] [Docs] A typo in tvm.relay.frontend [tvm]
via GitHub
Re: [I] [Docs] A typo in tvm.relay.frontend [tvm]
via GitHub
Re: [I] [Docs] A typo in tvm.relay.frontend [tvm]
via GitHub
[PR] [DOC] Fix typo for the "We utilize the intermediate representation of nn.Graph to convert the OneFlow model to Reley." [tvm]
via GitHub
Re: [PR] [DOC] Fix typo for the "We utilize the intermediate representation of nn.Graph to convert the OneFlow model to Reley." [tvm]
via GitHub
(tvm) branch nightly updated (0fc047c98b -> fd7c81de3b)
github-bot
Re: [PR] GraphExecutor: Fix wild pointer assign when input and output are reshape [tvm]
via GitHub
Re: [PR] GraphExecutor: Fix wild pointer assign when input and output are reshape [tvm]
via GitHub
Re: [PR] GraphExecutor: Fix wild pointer assign when input and output are reshape [tvm]
via GitHub
Re: [PR] GraphExecutor: Fix wild pointer assign when input and output are reshape [tvm]
via GitHub
Re: [PR] GraphExecutor: Fix wild pointer assign when input and output are reshape [tvm]
via GitHub
[PR] GraphExecutor: Fix wild pointer assign when input and output are reshape [tvm]
via GitHub
Re: [PR] GraphExecutor: Fix wild pointer assign when input and output are reshape [tvm]
via GitHub
Re: [PR] GraphExecutor: Fix wild pointer assign when input and output are reshape [tvm]
via GitHub
(tvm) branch dependabot/pip/apps/microtvm/zipp-3.19.1 created (now 438b2903ff)
github-bot
[PR] Bump zipp from 3.15.0 to 3.19.1 in /apps/microtvm [tvm]
via GitHub
(tvm) branch main updated: [TIR][Schedule] Remove `@type_check` for `set_axis_separator` (#17134)
lunderberg
(tvm) branch main updated: [TOPI] Add dense schedule for fp16 and fp32 using gemm (#17091)
ekalda
[PR] [Bugfix] Allow import of TVM when current directory is read-only [tvm]
via GitHub
Re: [PR] [Bugfix] Allow import of TVM when current directory is read-only [tvm]
via GitHub
[PR] [Backend][ROCm] fix error when building with llvm>=19 [tvm]
via GitHub
Re: [PR] [Backend][ROCm] Fix error when building TVM with LLVM 19 [tvm]
via GitHub
Re: [PR] [Backend][ROCm] Fix error when building TVM with LLVM 19 [tvm]
via GitHub
[I] [Bug] Unable to build TVM with LLVM 19 [tvm]
via GitHub
Re: [I] [Bug] Unable to build TVM with LLVM 19 [tvm]
via GitHub
[PR] [Fix][TIR] Fix outdated call to create extern buffer in make_extern [tvm]
via GitHub
Re: [PR] [Fix][TIR] Fix outdated call to create extern buffer in make_extern [tvm]
via GitHub
[PR] Bump certifi from 2022.12.7 to 2024.7.4 in /apps/microtvm [tvm]
via GitHub
(tvm) branch dependabot/pip/apps/microtvm/certifi-2024.7.4 created (now 9932085905)
github-bot
(tvm) branch nightly updated (0df4103675 -> 0fc047c98b)
github-bot
Re: [PR] [Compute-inline] Prefer T.where for reverse compute-inlined block with predicate [tvm]
via GitHub
(tvm) branch main updated: [Compute-inline] Prefer T.where for reverse compute-inlined block with predicate (#17128)
syfeng
Re: [I] [Tracking Issue] [WebGPU] Supporting DP4A in WebGPU backend [tvm]
via GitHub
Re: [I] [Tracking Issue] [WebGPU] Supporting DP4A in WebGPU backend [tvm]
via GitHub
(tvm) branch main updated: [WebGPU] Implement `tir.dp4a` with WGSL built-in function `dot4I8Packed` (#16976)
tqchen
[I] [Bug] Compilation fails for ARM Cortex-M4 tvmgen_default.h(36): error: conflicting types for 'tvmgen_default_run' [tvm]
via GitHub
Re: [I] [Bug] MicroTVM Compilation fails for ARM Cortex-M4 ,with CMSIS-NN=false [tvm]
via GitHub
Re: [I] [Bug] MicroTVM Compilation fails for ARM Cortex-M4 ,with CMSIS-NN=false [tvm]
via GitHub
Re: [I] [Bug] MicroTVM Compilation fails for ARM Cortex-M4 ,with CMSIS-NN=false [tvm]
via GitHub
Re: [I] [Bug] MicroTVM Compilation fails for ARM Cortex-M4 ,with CMSIS-NN=false [tvm]
via GitHub
Re: [I] [Bug] MicroTVM Compilation fails for ARM Cortex-M4 ,with CMSIS-NN=false [tvm]
via GitHub
Re: [I] [Bug] MicroTVM Compilation fails for ARM Cortex-M4 ,with CMSIS-NN=false [tvm]
via GitHub
Re: [I] [Bug] MicroTVM Compilation fails for ARM Cortex-M4 ,with CMSIS-NN=false [tvm]
via GitHub
[PR] [QoL][IR] Provide default constructor for NameSupply/GlobalVarSupply [tvm]
via GitHub
Re: [PR] [QoL][IR] Provide default constructor for NameSupply/GlobalVarSupply [tvm]
via GitHub
Re: [PR] [QoL][IR] Provide default constructor for NameSupply/GlobalVarSupply [tvm]
via GitHub
[PR] [TIR][Schedule] Remove `@type_check` decorator for `set_axis_separator` [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Remove `@type_check` decorator for `set_axis_separator` [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Remove `@type_check` decorator for `set_axis_separator` [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Remove `@type_check` for `set_axis_separator` [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Remove `@type_check` for `set_axis_separator` [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Remove `@type_check` for `set_axis_separator` [tvm]
via GitHub
Re: [PR] [TIR][Schedule] Remove `@type_check` for `set_axis_separator` [tvm]
via GitHub
[PR] [TIR] Improve Lower cross thread Pass to handle block reduction [tvm]
via GitHub
Re: [PR] [TIR] Enhance Lower cross thread Pass [tvm]
via GitHub
Re: [PR] [TIR] Enhance Lower cross thread Pass [tvm]
via GitHub
Re: [PR] [TIR] Enhance Lower cross thread Pass [tvm]
via GitHub