Messages by Thread
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
Re: [PR] [TOPI] improve inclusive_scan for thrust [tvm]
via GitHub
(tvm) branch main updated (ff3716b83a -> c2c579bb0a)
ekalda
Re: [PR] [BugFix][FFI] Add a missing default for datatype lanes [tvm]
via GitHub
[PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] Update the export method of PaddlePaddle Softmax [tvm]
via GitHub
(tvm) branch nightly updated (b3fa6cb873 -> ff3716b83a)
github-bot
[PR] [Frontend][PaddlePaddle] PaddlePaddle model with NHWC data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
Re: [PR] [Frontend][PaddlePaddle] PaddlePaddle model with NCHW data format that supports quantization [tvm]
via GitHub
[PR] [Web] Revert back to the non-parallel version to avoid cache.add() error [tvm]
via GitHub
Re: [PR] [Web] Revert back to the non-parallel version to avoid cache.add() error [tvm]
via GitHub
Re: [PR] [Web] Revert back to the non-parallel version to avoid cache.add() error [tvm]
via GitHub
Re: [PR] [Web] Revert back to the non-parallel version to avoid cache.add() error [tvm]
via GitHub
Re: [PR] [Web] Revert back to the non-parallel version to avoid cache.add() error [tvm]
via GitHub
Re: [PR] [Web] Separate parallel shard download and iterative shard loading [tvm]
via GitHub
(tvm) branch main updated (563ef9587c -> ff3716b83a)
tqchen
Re: [I] [Tracking Issue] [WebGPU] Supporting DP4A in WebGPU backend [tvm]
via GitHub
Re: [I] [Tracking Issue] [WebGPU] Supporting DP4A in WebGPU backend [tvm]
via GitHub
Re: [I] [Tracking Issue] [WebGPU] Supporting DP4A in WebGPU backend [tvm]
via GitHub
Re: [I] [Tracking Issue] [WebGPU] Supporting DP4A in WebGPU backend [tvm]
via GitHub
Re: [I] [Tracking Issue] [WebGPU] Supporting DP4A in WebGPU backend [tvm]
via GitHub
Re: [I] [Tracking Issue] [WebGPU] Supporting DP4A in WebGPU backend [tvm]
via GitHub
(tvm) branch main updated: [SVE] Add support for scalable data type strings (#16612)
ekalda
(tvm) branch nightly updated (7e269dcfc8 -> b3fa6cb873)
github-bot
[I] [Bug][QNN][QNNX-Frontend] Error reading zero_point parameter in per-channel quantization. [tvm]
via GitHub
(tvm) branch main updated: [AOT][Testing] Print output values on test failure (#16611)
ekalda
[I] [Docs][Datatypes] Minimal example for supporting custom datatypes [tvm]
via GitHub
[PR] [Relax][Frontend][Onnx] Fix name supply bug [tvm]
via GitHub
Re: [PR] [Relax][Frontend][Onnx] Fix name supply bug [tvm]
via GitHub
Re: [PR] [Relax][Frontend][Onnx] Fix name supply bug [tvm]
via GitHub
Re: [PR] [Relax][Frontend][Onnx] Fix name supply bug [tvm]
via GitHub
(tvm) branch main updated: [Disco] Expose functions to query the per-worker device/rank (#16639)
masahi
(tvm) branch main updated: [Disco] Implement `Session.import_python_module` method (#16617)
masahi
Re: [I] [Bug] Unable to build TVM with LLVM 12.0.0 [tvm]
via GitHub
[I] [Bug] int8 slower than float32 on GPU [tvm]
via GitHub
(tvm) branch nightly updated (fc4abee022 -> 7e269dcfc8)
github-bot
(tvm) branch main updated: [RUNTIME][RPC] Enable RPCObjectRef over multi-hop RPC (#16635)
yongwww
(tvm) branch main updated: [Runtime] Add TVM_DLL to threading backend funcs (#16630)
yongwww
[PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]
via GitHub
[PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]
via GitHub
[PR] [TVMScript] Represent tir::builtin::ret() using python "return" [tvm]
via GitHub
Re: [PR] [TVMScript] Represent tir::builtin::ret() using python "return" [tvm]
via GitHub
(tvm) branch main updated (89cc09c621 -> 2ca8f3131e)
masahi
[PR] [Disco] Expose functions to query the per-worker device/rank [tvm]
via GitHub
Re: [PR] [Disco] Expose functions to query the per-worker device/rank [tvm]
via GitHub
[PR] [CMAKE][CUTLASS] Improve dependency management [tvm]
via GitHub
Re: [PR] [CMAKE][CUTLASS] Improve dependency management [tvm]
via GitHub
Re: [PR] [CMAKE][CUTLASS] Improve dependency management [tvm]
via GitHub
Re: [PR] [CMAKE][CUTLASS] Improve dependency management [tvm]
via GitHub
Re: [PR] [CMAKE][CUTLASS] Improve dependency management [tvm]
via GitHub
Re: [PR] [CMAKE][CUTLASS] Improve dependency management [tvm]
via GitHub
Re: [PR] [CMAKE][CUTLASS] Improve dependency management [tvm]
via GitHub
Re: [PR] [CMAKE][CUTLASS] Improve dependency management [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [BugFix] Fix Crash Cases Caused by "__tvm_meta__ = None" [tvm]
via GitHub
Re: [PR] [Bugfix][Transform] Preserve symbolic variables in FuseOps [tvm]
via GitHub
Re: [PR] [Bugfix][Transform] Preserve symbolic variables in FuseOps [tvm]
via GitHub
[I] Not able to pass the NDArray from Java to PackedFunc. [tvm]
via GitHub
Re: [I] Not able to pass the NDArray from Java to PackedFunc. [tvm]
via GitHub
Re: [I] Not able to pass the NDArray from Java to PackedFunc. [tvm]
via GitHub
[PR] [RUNTIME][RPC] Enable RPCObjectRef over multi-hop RPC [tvm]
via GitHub
Re: [PR] [RUNTIME][RPC] Enable RPCObjectRef over multi-hop RPC [tvm]
via GitHub
(tvm) branch main updated: [Unity][Transform] Handle dynamic shapes in CombineParallelMatmul (#16591)
lunderberg
(tvm) branch main updated: [Transform] De-duplicate MatchCast nodes in EliminateCommonSubexpr (#16599)
lunderberg
(tvm) branch main updated: [Relax][Transform] Preserve param names in LiftTransformParams (#16594)
lunderberg
(tvm) branch main updated: [Transform] Implement relax.transform.ReorderPermuteDimsAfterConcat (#16596)
lunderberg
(tvm) branch main updated: [Unity][SLM] GPU sampling (#16575)
tqchen
(tvm) branch main updated: [Unity][Analysis] Include impure call in VerifyWellFormed errors (#16585)
lunderberg
(tvm) branch main updated: [Relax] Additional unit tests for RemoveUnusedParameters (#16574)
lunderberg
(tvm) branch main updated: [Unity][Transform] Raise error in FuseOpsByPattern for SSA violation (#16421)
lunderberg
(tvm) branch main updated: [TIR] Expand debug symbol output for CodeGenLLVM (#16544)
lunderberg
(tvm) branch main updated: [Web] Fix NDArrayCache loading report callback (#16631)
tqchen
[I] [CI Problem] Crashed Cases Haven't Been Caught by CI [tvm]
via GitHub
Re: [I] [CI Problem] Crashed Cases Haven't Been Caught by CI [tvm]
via GitHub
Re: [I] [CI Problem] Crashed Cases Haven't Been Caught by CI [tvm]
via GitHub
(tvm) branch main updated: [Relay][ONNX] Fix the attribute mode parse of operator Upsample (#16622)
echuraev
(tvm) branch main updated: [Relay][ONNX] Fix the Resize operator in ONNX frontend (#16626)
echuraev
(tvm) branch nightly updated (ff0b99c5ce -> fc4abee022)
github-bot
[PR] [Web] Fix NDArrayCache loading report callback [tvm]
via GitHub
Re: [PR] [Web] Fix NDArrayCache loading report callback [tvm]
via GitHub
Re: [PR] [Web] Fix NDArrayCache loading report callback [tvm]
via GitHub
(tvm) branch main updated (4b7d78d157 -> fc4abee022)
syfeng
[PR] [Runtime] Add TVM_DLL to threading backend funcs [tvm]
via GitHub
Re: [PR] [Runtime] Add TVM_DLL to threading backend funcs [tvm]
via GitHub
[PR] [Relax] Fix error message in BlockBuilder [tvm]
via GitHub
Re: [PR] [Relax] Fix error message in BlockBuilder [tvm]
via GitHub
(tvm) branch main updated: [Relax] Handle dynamic arguments in legalization of nn.attention (#16592)
lunderberg
(tvm) branch main updated: [Unity][Transform] Check for permute_dims in ExpandMatmulOfSum (#16590)
lunderberg
(tvm) branch main updated: [Transform] Allow explicit name of bundled model parameters (#16597)
lunderberg
Re: [PR] [Transform] Allow explicit name of bundled model parameters [tvm]
via GitHub