Messages by Thread
(tvm) branch main updated (cdfdd0e4ec -> e738f1d4f1)
tqchen
[I] [Bug] Init block not discoverable after sch.blockize [tvm]
via GitHub
Re: [I] [Bug] Init block not discoverable after sch.blockize [tvm]
via GitHub
Re: [I] [Bug] Init block not discoverable after sch.blockize [tvm]
via GitHub
[PR] [CUBLAS][FP8] Support e4m3 gemm in cuBLAS BYOC [tvm]
via GitHub
Re: [PR] [CUBLAS][FP8] Support e4m3 gemm in cuBLAS BYOC [tvm]
via GitHub
[PR] [Contrib] Enable fp16 for thrust [tvm]
via GitHub
Re: [PR] [Contrib] Enable fp16 for thrust sort [tvm]
via GitHub
Re: [PR] [Relax][Frontend] Fix sort, argsort and topk in nn module [tvm]
via GitHub
Re: [PR] [Relax][Frontend] Fix sort, argsort and topk in nn module [tvm]
via GitHub
(tvm) branch nightly updated (a64d1f1cc3 -> d4056ca795)
github-bot
Re: [I] [Bug] InitCCLPerWorker Fails when using AMD GPU Bridge [tvm]
via GitHub
[PR] Bump sqlparse from 0.4.3 to 0.5.0 in /apps/microtvm [tvm]
via GitHub
(tvm) branch dependabot/pip/apps/microtvm/sqlparse-0.5.0 created (now 824003e6f5)
github-bot
[PR] [dlight] Add check for matmul dtype and fix reduction rule [tvm]
via GitHub
Re: [PR] [dlight] Add check for matmul dtype and fix reduction rule [tvm]
via GitHub
(tvm) branch main updated (f267691fa4 -> d4056ca795)
ekalda
(tvm) branch main updated (a64d1f1cc3 -> f267691fa4)
tqchen
[PR] [Relax] Stabilize relax pass mutation order [tvm]
via GitHub
Re: [PR] [Relax] Stabilize relax pass mutation order [tvm]
via GitHub
(tvm) branch nightly updated (64911ab5da -> a64d1f1cc3)
github-bot
(tvm) branch main updated: [TIR] Make T.reinterpret nop when dtype is the same (#16879)
tqchen
[PR] [Codegen][Debug] fix unnumbered reshape in graph executor [tvm]
via GitHub
(tvm) branch nightly updated (0a3fe22208 -> 64911ab5da)
github-bot
(tvm) tag v0.16.0.rc0 created (now 64969035fd)
ysh329
(tvm) tag v0.17.dev0 created (now d0cbb02e1d)
ysh329
(tvm) branch main updated: [Runtime] Implemented Datatype.itemsize() (#16880)
tqchen
(tvm) branch main updated (5c80691c81 -> d0cbb02e1d)
wuwei
(tvm) 02/02: [release] Update version to 0.17.dev0 on main branch
wuwei
(tvm) 01/02: [release] Update version to 0.16.0 on main branch
wuwei
(tvm) branch main updated: [Dlight] Enhance vectorization loading weight for gemv (#16878)
tqchen
Re: [PR] [Dlight] Enhance vectorization loading weight for gemv [tvm]
via GitHub
[PR] [WIP][release][Dont Squash] Update version to 0.16.0 and 0.17.0.dev on main branch [tvm]
via GitHub
Re: [PR] [release][Dont Squash] Update version to 0.16.0 and 0.17.0.dev on main branch [tvm]
via GitHub
(tvm) branch nightly updated (88a1c6560c -> 0a3fe22208)
github-bot
[PR] [Runtime] Implemented Datatype.itemsize() [tvm]
via GitHub
Re: [PR] [Runtime] Implemented Datatype.itemsize() [tvm]
via GitHub
Re: [PR] [Runtime] Implemented Datatype.itemsize() [tvm]
via GitHub
[PR] [TIR] Make T.reinterpret nop when dtype is the same [tvm]
via GitHub
Re: [PR] [TIR] Make T.reinterpret nop when dtype is the same [tvm]
via GitHub
[PR] [TVMScript][Bug] Add test case for missing symbolic bounds [tvm]
via GitHub
Re: [PR] [TVMScript][Bug] Add test case for missing symbolic bounds [tvm]
via GitHub
(tvm) branch main updated: [Relax] Enhance symbolic expr estimation in memory planning (#16872)
tqchen
[PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
Re: [PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
Re: [PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
Re: [PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
Re: [PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
(tvm) branch main updated: [Thrust] Fix thrust workspace allocation (#16873)
tqchen
(tvm) branch nightly updated (f9e36fcbf8 -> 88a1c6560c)
github-bot
(tvm) branch dependabot/pip/apps/microtvm/idna-3.7 created (now 557d185544)
github-bot
[PR] Bump idna from 3.4 to 3.7 in /apps/microtvm [tvm]
via GitHub
(tvm) branch dependabot/pip/docker/python/idna-3.7 created (now 4fdd576c8a)
github-bot
[PR] Bump idna from 3.3 to 3.7 in /docker/python [tvm]
via GitHub
(tvm) branch main updated: [3rdparty] Bump flashinfer (#16868)
tqchen
(tvm) branch main updated: [PageKV] allow PopN to pop all the tokens in last block (#16871)
tqchen
[PR] [Thrust] Fix thrust workspace allocation [tvm]
via GitHub
Re: [PR] [Thrust] Fix thrust workspace allocation [tvm]
via GitHub
Re: [PR] [Thrust] Fix thrust workspace allocation [tvm]
via GitHub
[PR] [Relax] Enhance symbolic expr estimation in memory planning [tvm]
via GitHub
Re: [PR] [Relax] Enhance symbolic expr estimation in memory planning [tvm]
via GitHub
[PR] [RFC] Add NNEF frontend #108 [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
[PR] [PageKV] allow PopN to pop all the tokens in last block [tvm]
via GitHub
Re: [PR] [PageKV] allow PopN to pop all the tokens in last block [tvm]
via GitHub
(tvm) branch main updated: [OpenCL] Add OpenCL device for automatic target detection (#16854)
tqchen
[I] [Bug] Inconsistent Results between Direct Optimization and Sequential Optimization in TVM [tvm]
via GitHub
Re: [I] [Bug] Inconsistent Results between Direct Optimization and Sequential Optimization in TVM [tvm]
via GitHub
[I] [Bug] Error in compiling model after applying LazyGradientInit optimization [tvm]
via GitHub
(tvm) branch main updated: [BugFix][Target] Added null check to fix segfault at ->defined() in cpu.cc DetectSystemTriple() (#16766)
lukhut
Re: [I] [Bug] VTA FSIM MacOS incompatibility [tvm]
via GitHub
Re: [I] [Bug] VTA FSIM MacOS incompatibility [tvm]
via GitHub
(tvm) branch nightly updated (4d4f0508a2 -> f9e36fcbf8)
github-bot
(tvm) branch main updated: [3rdparty] Bump FlashInfer (#16866)
wuwei
Re: [PR] [3rdparty] Bump FlashInfer [tvm]
via GitHub
[PR] [3rdparty] Bump flashinfer [tvm]
via GitHub
Re: [PR] [3rdparty] Bump flashinfer [tvm]
via GitHub
Re: [PR] [3rdparty] Bump flashinfer [tvm]
via GitHub
(tvm) branch main updated: [Relax] Dispatch sort/scan for non-cuda gpu backends (#16867)
yongwww
Re: [PR] [Relax] Dispatch sort/scan for non-cuda gpu backends [tvm]
via GitHub
(tvm) branch main updated (2829b59e1c -> 6748215b42)
tqchen
(tvm) branch main updated: [TVMScript] Add parser and printer support for e4m3/e5m2 fp8 (#16864)
tqchen
(tvm) branch main updated (95cb0de27a -> a482b4c191)
tqchen
(tvm) branch main updated: [VULKAN] Fix CLZ support for Vulkan (#16858)
tqchen
(tvm) branch nightly updated (a309b6b857 -> 4d4f0508a2)
github-bot
[PR] Feat/fp8 broadcast [tvm]
via GitHub
Re: [PR] [Codegen, CUDA] Add handling of fp8 broadcast / const [tvm]
via GitHub
Re: [PR] [Codegen, CUDA] Add handling of fp8 broadcast / const [tvm]
via GitHub
Re: [PR] [Codegen, CUDA] Add handling of fp8 broadcast / const [tvm]
via GitHub
[PR] [TVMScript] Add parser and printer support for e4m3/e5m2 fp8 [tvm]
via GitHub
Re: [PR] [TVMScript] Add parser and printer support for e4m3/e5m2 fp8 [tvm]
via GitHub
[PR] [Picojson] Let the key of objects in json be ordered by default [tvm]
via GitHub
Re: [PR] [Picojson] Let the key of objects in json be ordered by default [tvm]
via GitHub
Re: [PR] [Picojson] Let the key of objects in json be ordered by default [tvm]
via GitHub
Re: [PR] [Picojson] Let the key of objects in json be ordered by default [tvm]
via GitHub
[PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
[PR] [QoL][Relax] Infer StructInfo for relax::Tuple on construction [tvm]
via GitHub
Re: [PR] [QoL][Relax] Infer StructInfo for relax::Tuple on construction [tvm]
via GitHub
[PR] [QoL][Relax] Return well-formed IR from relax::Function::CreateEmpty [tvm]
via GitHub
Re: [PR] [QoL][Relax] Return well-formed IR from relax::Function::CreateEmpty [tvm]
via GitHub
(tvm) branch main updated: [SVE] Support scalable vectors in LoopVectorizer (#16782)
lukhut
[PR] [VULKAN] Fix CLZ support for Vulkan [tvm]
via GitHub
Re: [PR] [VULKAN] Fix CLZ support for Vulkan [tvm]
via GitHub
Re: [PR] [VULKAN] Fix CLZ support for Vulkan [tvm]
via GitHub
Re: [PR] [VULKAN] Fix CLZ support for Vulkan [tvm]
via GitHub
[I] [Release] v0.16.0 release schedule [tvm]
via GitHub
Re: [I] [Release] v0.16.0 release schedule [tvm]
via GitHub
Re: [I] [Release] v0.16.0 release schedule [tvm]
via GitHub
Re: [I] [Release] v0.16.0 release schedule [tvm]
via GitHub
Re: [I] [Release] v0.16.0 release schedule [tvm]
via GitHub
(tvm) branch nightly updated (81a850693d -> a309b6b857)
github-bot
(tvm) branch main updated: [Thrust] Use pointer to tls pool to prevent creating new pool (#16856)
wuwei
(tvm) branch main updated: [ONNX] Fix interpreting auto_pad parameters in ConvTranspose operator (#16001)
yongwww
[PR] [Thrust] Use pointer to tls pool to prevent creating new pool [tvm]
via GitHub
Re: [PR] [Thrust] Use pointer to tls pool to prevent creating new pool [tvm]
via GitHub
(tvm) branch main updated (81a850693d -> d1e24ca721)
tqchen
[I] [Bug] https://github.com/apache/tvm/blob/main/src/te/schedule/message_passing.cc#L415 [tvm]
via GitHub
Re: [I] [Bug] https://github.com/apache/tvm/blob/main/src/te/schedule/message_passing.cc#L417 [tvm]
via GitHub
Re: [PR] [ONNX] Fix interpreting auto_pad parameters in ConvTranspose operator [tvm]
via GitHub
Re: [PR] [ONNX] Fix interpreting auto_pad parameters in ConvTranspose operator [tvm]
via GitHub
Re: [PR] [ONNX] Fix interpreting auto_pad parameters in ConvTranspose operator [tvm]
via GitHub
(tvm) branch nightly updated (a156181ee3 -> 81a850693d)
github-bot
[PR] [OpenCL] Add OpenCL device for automatic target detection. [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection. [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection [tvm]
via GitHub
Re: [PR] [OpenCL] Add OpenCL device for automatic target detection [tvm]
via GitHub
(tvm) branch main updated: [TIR] Use constructor for new PrimFunc in TransformLayout (#16832)
sanirudh
(tvm) branch main updated: Fixing probability comment (#16850)
sanirudh
(tvm) branch main updated: [KVCache] Initialize one extra page than specified (#16849)
tqchen
(tvm) branch nightly updated (9b5a7a457f -> a156181ee3)
github-bot
(tvm) branch main updated: [Relax] Fix EliminiateCommonSubexpr removing alloc tensor (#16852)
tqchen
(tvm) branch main updated: [Relax,Topi] Allow passing workspace to thrust to avoid allocations (#16851)
tqchen
Re: [I] How does tvm decide the number of threads and blocks to launch the cuda kernels? [tvm]
via GitHub
(tvm) branch nightly updated (cd08356e66 -> 9b5a7a457f)
github-bot
[PR] [Relax] Fix EliminiateCommonSubexpr removing alloc tensor [tvm]
via GitHub
Re: [PR] [Relax] Fix EliminiateCommonSubexpr removing alloc tensor [tvm]
via GitHub
[PR] [Relax,Topi] Allow passing workspace to thrust to avoid allocations [tvm]
via GitHub
Re: [PR] [Relax,Topi] Allow passing workspace to thrust to avoid allocations [tvm]
via GitHub
(tvm) branch main updated: [IR] Provide well-formed intermediate in ApplyPassToFunction (#16843)
lunderberg
[PR] Fixing probability comment [tvm]
via GitHub
Re: [PR] Fixing probability comment [tvm]
via GitHub
[PR] [KVCache] Initialize one extra page than specified [tvm]
via GitHub
Re: [PR] [KVCache] Initialize one extra page than specified [tvm]
via GitHub
Re: [PR] [MSC][M5.3] Support torch.dynamo for dynamic models [tvm]
via GitHub
(tvm) branch main updated: [TVMScript] Produce empty DictAttrs when R.func_attrs is absent (#16844)
lunderberg
(tvm) branch main updated: [DLight] Fix a corner case for reduction rule (#16848)
syfeng
(tvm) branch main updated: [CI] Disable flaky unit test (#16837)
sanirudh
(tvm) branch main updated: [Meta-Schedule][OpenCL] Enable MS tuning for Android OpenCL (#16846)
echuraev
(tvm) branch nightly updated (6f74762743 -> cd08356e66)
github-bot
(tvm) branch main updated: [TIR] Fix segfaults from ordering of Let/Assert in MakePackedAPI (#16543)
lunderberg
[PR] [DLight] Fix a corner case for reduction rule [tvm]
via GitHub
Re: [PR] [DLight] Fix a corner case for reduction rule [tvm]
via GitHub
[PR] [relay][feature] save relay IR as onnx for visualize [tvm]
via GitHub
Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]
via GitHub
Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]
via GitHub
Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]
via GitHub
Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]
via GitHub
Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]
via GitHub
Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]
via GitHub
Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]
via GitHub
(tvm) branch main updated: [Debug][Disco] Check if a PackedFunc exists before calling it (#16845)
tqchen
[PR] [Meta-Schedule][OpenCL] Enable MS tuning for Android OpenCL [tvm]
via GitHub
Re: [PR] [Meta-Schedule][OpenCL] Enable MS tuning for Android OpenCL [tvm]
via GitHub
(tvm) branch dependabot/pip/apps/microtvm/pillow-10.3.0 updated (c0de939622 -> 5b7db0cdec)
github-bot
(tvm) branch dependabot/pip/apps/microtvm/cmsisnn/pillow-10.3.0 deleted (was 78f4bc6434)
lukhut
(tvm) branch main updated (c84f6bb4fd -> dd384906e3)
lukhut
(tvm) branch dependabot/pip/apps/microtvm/ethosu/pillow-10.3.0 deleted (was 61558f1e91)
lukhut
(tvm) branch main updated (6f74762743 -> c84f6bb4fd)
lukhut
(tvm) branch nightly updated (ef80af65dd -> 6f74762743)
github-bot
(tvm) branch main updated: [Relax] Provide well-formed output in `transform.LazyGetInput` (#16841)
lunderberg
[PR] [Debug][Disco] Check if a PackedFunc exists before calling it [tvm]
via GitHub
Re: [PR] [Debug][Disco] Check if a PackedFunc exists before calling it [tvm]
via GitHub
[PR] [TVMScript] Produce empty DictAttrs when R.func_attrs is absent [tvm]
via GitHub
Re: [PR] [TVMScript] Produce empty DictAttrs when R.func_attrs is absent [tvm]
via GitHub
Re: [PR] [TVMScript] Produce empty DictAttrs when R.func_attrs is absent [tvm]
via GitHub
Re: [PR] [TVMScript] Produce empty DictAttrs when R.func_attrs is absent [tvm]
via GitHub
[PR] [IR] Provide well-formed intermediate in ApplyPassToFunction [tvm]
via GitHub
Re: [PR] [IR] Provide well-formed intermediate in ApplyPassToFunction [tvm]
via GitHub
Re: [PR] [IR] Provide well-formed intermediate in ApplyPassToFunction [tvm]
via GitHub
Re: [PR] [IR] Provide well-formed intermediate in ApplyPassToFunction [tvm]
via GitHub
Re: [PR] [IR] Provide well-formed intermediate in ApplyPassToFunction [tvm]
via GitHub