commits
Messages by Thread
(tvm) branch nightly updated (45a2a4082e -> 0701aaba4b)
github-bot
svn commit: r80752 - dev/tvm/tvm-ffi-v0.1.3-rc0
syfeng
(tvm-ffi) tag v0.1.3-rc0 created (now 70d6bf0)
syfeng
(tvm-ffi) branch main updated: Add Siyuan Feng's GPG key to KEYS file (#269)
tqchen
(tvm-ffi) branch main updated: feat(stubgen): Generate `__all__` for proper exporting (#268)
junrushao
(tvm) branch main updated: [Relax][PyTorch]: Fix the sqrt operation requires float dtype but receives int64 in attention scaling (#18454)
tlopex
(tvm) branch main updated: [CI] Fix crash when grep finds no matches (#18457)
tqchen
(tvm) branch main updated: [Relax][PyTorch] Fix MultiheadAttention compile (#18459)
tlopex
(tvm) branch main updated: [Relax][PyTorch] Add decomposed operator support for normalization (#18460)
tlopex
(tvm) branch nightly updated (b6ac0721a0 -> 45a2a4082e)
github-bot
(tvm-ffi) branch main updated: feat(stubgen): Refactor into staged pipeline, Introduce directive `import` (#259)
tqchen
(tvm) branch main updated: [Relax][PyTorch] Add decomposed operator support for Binary (#18458)
tlopex
(tvm) branch nightly updated (506a0bbc3f -> b6ac0721a0)
github-bot
(tvm) branch main updated: [DataType] Update to use explicit Bool Type Aligning with DLPack (#18453)
tlopex
(tvm-ffi) branch main updated: [DTYPE] Include dtype literals (#263)
tqchen
(tvm-ffi) branch main updated: [DTYPE] Align bool parsing to align with DLPack (#262)
tqchen
(tvm) branch tvm-ffi-bool updated (b19a4c7d24 -> 0067e1a3d6)
tqchen
(tvm) 01/01: [DataType] Update to use explicit Bool Type Aligning with DLPack
tqchen
(tvm) branch tvm-ffi-bool updated (3458c475f1 -> b19a4c7d24)
tqchen
(tvm) 01/01: [DataType] Update to use explicit Bool Type Aligning with DLPack
tqchen
(tvm-site) branch asf-site updated: deploying docs (apache/tvm@f8471f820a121e9d20ab56f5f26139b461f9afea)
tqchen
(tvm) branch main updated (6c7ed243e5 -> f8471f820a)
tlopex
(tvm-site) branch asf-site updated: deploying docs (apache/tvm@11347788478724395fe1b2c0cac268411e3c5c37)
tqchen
(tvm) branch main updated: [DOCS] Update the merge setting (#18451)
tlopex
[PR] [DOCS] Update the merge setting [tvm]
via GitHub
Re: [PR] [DOCS] Update the merge setting [tvm]
via GitHub
Re: [PR] [DOCS] Update the merge setting [tvm]
via GitHub
Re: [PR] [DOCS] Update the merge setting [tvm]
via GitHub
(tvm) branch main updated: [DOCS] Remove prebuilt package references and disable Colab button at tutorials (#18436)
tqchen
(tvm) branch tvm-ffi-bool updated (a7821b8a8c -> 3458c475f1)
tqchen
(tvm) 01/01: [DataType] Update to use explicit Bool Type Aligning with DLPack
tqchen
(tvm) branch tvm-ffi-bool updated (a975381e85 -> a7821b8a8c)
tqchen
(tvm) 01/01: [DataType] Update to use explicit Bool Type Aligning with DLPack
tqchen
(tvm) branch main updated: [CI] Update pre-commit configuration (#18448)
tqchen
[PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
Re: [PR] fix(web): update progress reporting when loading from cache [tvm]
via GitHub
(tvm) branch main updated: [Relax][PyTorch] Add lower bound support for range constraints (#18447)
mshr
[PR] [Relax][PyTorch] Add decomposed operator support for Pad [tvm]
via GitHub
[PR] [CI] Update pre-commit configuration [tvm]
via GitHub
Re: [PR] [CI] Update pre-commit configuration [tvm]
via GitHub
Re: [PR] [CI] Update pre-commit configuration [tvm]
via GitHub
(tvm-site) branch asf-site updated: deploying docs (apache/tvm@0754ad82d6669af048effcf019cb549ed342605c)
tqchen
[PR] [Relax][PyTorch] Add lower bound support for range constraints [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add lower bound support for range constraints [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add lower bound support for range constraints [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add lower bound support for range constraints [tvm]
via GitHub
Re: [I] [Bug][FRONTEND][ONNX] Error converting operator Expand: TVMError: broadcast_to expects the input tensor shape is broadcastable to the target shape. [tvm]
via GitHub
Re: [I] [FRONTEND][ONNX] Error converting operator Transpose: TVMError: PermuteDims expects the number of input axes to equal the ndim of the input tensor. [tvm]
via GitHub
Re: [I] [FRONTEND][ONNX] Error converting operator Transpose: TVMError: PermuteDims expects the number of input axes to equal the ndim of the input tensor. [tvm]
via GitHub
(tvm) branch main updated: [FRONTEND][ONNX] Fix operator Transpose: TVMError: PermuteDims expects the number of input axes to equal the ndim of the input tensor (#18435)
tlopex
(tvm) branch main updated: [Relax][PyTorch] Add decomposed operator support for MaxPool (#18446)
tlopex
(tvm) branch nightly updated (4b555a964f -> 506a0bbc3f)
github-bot
[PR] [Relax][PyTorch] Add decomposed operator support for MaxPool [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add decomposed operator support for MaxPool [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add decomposed operator support for MaxPool [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add decomposed operator support for MaxPool [tvm]
via GitHub
[I] relax export json [tvm]
via GitHub
Re: [I] relax export json [tvm]
via GitHub
Re: [I] relax export json [tvm]
via GitHub
[PR] Add CPU Sample device API implementation [tvm]
via GitHub
Re: [PR] Add CPU Sample device API implementation [tvm]
via GitHub
Re: [PR] Add CPU Sample device API implementation [tvm]
via GitHub
Re: [PR] Add CPU Sample device API implementation [tvm]
via GitHub
(tvm) branch tvm-ffi-bool updated (11f324d9ea -> a975381e85)
tqchen
(tvm) 01/01: [DataType] Update to use explicit Bool Type Aligning with DLPack
tqchen
(tvm-site) branch asf-site updated: deploying docs (apache/tvm@506a0bbc3f37bbee4bca5ce45972eefb6dc0288c)
tqchen
(tvm) branch main updated: [Relax][PyTorch] Add decomposed operator support for AdaptiveAvgPool (#18437)
tlopex
[I] [Bug] InternalError: sqrt operation requires float dtype but receives int64 in attention scaling [tvm]
via GitHub
[I] [Bug] InternalError: Squeeze dimension check too strict compared to PyTorch behavior [tvm]
via GitHub
[I] [Bug] InternalError: PermuteDims dimension mismatch when converting scaled_dot_product_attention with 2D inputs [tvm]
via GitHub
[I] [Bug] InternalError: cannot make uint from negative value -inf when compiling model with MultiheadAttention [tvm]
via GitHub
[I] [Bug] KeyError: 'dtype' when converting PyTorch model with gradient checkpointing using torch.export [tvm]
via GitHub
[I] [Bug] Segmentation fault when converting PyTorch index assignment operation with torch.export [tvm]
via GitHub
[PR] [Relax][PyTorch] Add decomposed operator support for AdaptiveAvgPool [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add decomposed operator support for AdaptiveAvgPool [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add decomposed operator support for AdaptiveAvgPool [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add decomposed operator support for AdaptiveAvgPool [tvm]
via GitHub
(tvm) branch main updated (4b555a964f -> 6785c8f131)
tlopex
[PR] [DOC] Remove prebuilt package references and disable Colab button at tutorials [tvm]
via GitHub
Re: [PR] [DOC] Remove prebuilt package references and disable Colab button at tutorials [tvm]
via GitHub
Re: [PR] [DOC] Remove prebuilt package references and disable Colab button at tutorials [tvm]
via GitHub
Re: [PR] [DOCS] Remove prebuilt package references and disable Colab button at tutorials [tvm]
via GitHub
Re: [PR] [DOCS] Remove prebuilt package references and disable Colab button at tutorials [tvm]
via GitHub
(tvm) branch nightly updated (f574031657 -> 4b555a964f)
github-bot
(tvm) branch tvm-ffi-bool updated: Update
tqchen
(tvm-ffi) branch main updated: [DLPack] Leverage exchange api when possible (#260)
yaxingcai
(tvm) branch main updated: Adjusted Longrope embedding function to match Huggingface Implementation (#18422)
ruihangl
(tvm) branch tvm-ffi-bool updated (ee4270e209 -> 4b1670421f)
tqchen
(tvm) 01/01: [DataType] Update to use explicit Bool Type Aligning with DLPack
tqchen
(tvm-site) branch asf-site updated: deploying docs (apache/tvm@d013dad06d38cf0e011ac065815db2609d2c3efe)
tqchen
(tvm) branch tvm-ffi-bool created (now ee4270e209)
tqchen
(tvm) 01/01: [DataType] Update to use explicit Bool Type Aligning with DLPack
tqchen
(tvm) branch main updated: [TOPI] Support integer type input for log and log2 (#18426)
ruihangl
(tvm) branch main updated: [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests (#18433)
tlopex
[PR] [FRONTEND][ONNX] Fix operator Transpose: TVMError: PermuteDims expects the number of input axes to equal the ndim of the input tensor [tvm]
via GitHub
Re: [PR] [FRONTEND][ONNX] Fix operator Transpose: TVMError: PermuteDims expects the number of input axes to equal the ndim of the input tensor [tvm]
via GitHub
(tvm-ffi) branch main updated: [ADDON] Add torch c dlpack for macos (#256)
tqchen
[I] [Docs] Remove Google Analytics from the TVM Website [tvm]
via GitHub
Re: [I] [Docs] Remove Google Analytics from the TVM Website [tvm]
via GitHub
Re: [I] [Docs] Remove Google Analytics from the TVM Website [tvm]
via GitHub
Re: [I] [Docs] Remove Google Analytics from the TVM Website [tvm]
via GitHub
Re: [I] [Docs] Remove Google Analytics from the TVM Website [tvm]
via GitHub
(tvm) branch main updated: [Relax][Pytorch] Support basic range constraints (#18429)
mshr
Re: [I] [Bug] 3 tests fail [tvm]
via GitHub
Re: [I] [Bug] 3 tests fail [tvm]
via GitHub
Re: [I] [Bug] 3 tests fail [tvm]
via GitHub
Re: [I] [Bug] 3 tests fail [tvm]
via GitHub
(tvm) branch nightly updated (26db8bfd7e -> f574031657)
github-bot
[PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests [tvm]
via GitHub
(tvm-ffi) branch main updated: [Fix] Enhanced check for CUDA availability (#258)
ruihangl
svn commit: r80636 - release/tvm/tvm-ffi-v0.1.2
tqchen
(tvm-ffi) tag v0.1.2 created (now c1df05f)
tqchen
(tvm) branch main updated: [TEST][CODEGEN] Fix the test scripts tries to tell numpy a dtype name that it cannot recognise (#18430)
tlopex
[PR] Enable username checks in PR title and body [tvm]
via GitHub
Re: [PR] [CI] Enable username checks in PR title and body [tvm]
via GitHub
Re: [PR] [CI] Enable username checks in PR title and body [tvm]
via GitHub
Re: [PR] [CI] Enable username checks in PR title and body [tvm]
via GitHub
(tvm) branch nightly updated (33fa9262fa -> 26db8bfd7e)
github-bot
(tvm-site) branch asf-site updated: deploying docs (apache/tvm@26db8bfd7e527198f43f3cc379f404c7513a82ef)
tqchen
(tvm-ffi) branch main updated: [ADDON] Add torch c dlpack ext wheels for windows (#252)
tqchen
(tvm-ffi) branch main updated: [PyTorch] Allow tensor conversion on rocm backend (#253)
tqchen
(tvm) branch dependabot/pip/docker/python/pip-25.3 created (now 872198ecc5)
github-bot
[PR] Bump pip from 22.1.1 to 25.3 in /docker/python [tvm]
via GitHub
(tvm) branch dependabot/pip/docker/python/pip-25.2 deleted (was 80b1bc229f)
github-bot
(tvm) branch main updated: [Web] Fix arrayDecodeStorage scope issue for q0f32 models (#18415)
tqchen
[PR] [TEST][CODEGEN] Fix the test scripts tries to tell numpy a dtype name that it cannot recognise [tvm]
via GitHub
Re: [PR] [TEST][CODEGEN] Fix the test scripts tries to tell numpy a dtype name that it cannot recognise [tvm]
via GitHub
[PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
Re: [PR] [Relax][Pytorch] Support basic range constraints [tvm]
via GitHub
(tvm) branch main updated: [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(8) (#18428)
mshr
[PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(8) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(8) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(8) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(8) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(8) [tvm]
via GitHub
(tvm-ffi) branch main updated: [misc] support various arg types (#229)
tqchen
(tvm-ffi) branch main updated: feat: Introduce `tvm.registry.get_registered_type_keys()` (#249)
tqchen
(tvm-ffi) branch main updated: chore(cython): Specify `--module-name` when building Cython (#248)
tqchen
(tvm) branch nightly updated (3d136588d9 -> 33fa9262fa)
github-bot
(tvm) branch main updated: [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(7) (#18427)
mshr
(tvm-ffi) branch main updated: fix(cython): Make sure `TypeInfo` is properly registered by all classes (#246)
junrushao
(tvm-ffi) branch main updated: feat(cython): Expose TypeAttr via `_lookup_type_attr` (#247)
junrushao
svn commit: r80574 - dev/tvm/tvm-ffi-v0.1.2-rc0
tqchen
(tvm-ffi) tag v0.1.2.rc0 created (now c1df05f)
tqchen
(tvm-ffi) branch main updated: [PYTHON] Further streamline number handling (#242)
tqchen
(tvm-ffi) branch main updated (5a87749 -> 8ee0e49)
tqchen
(tvm-ffi) branch main updated: [CYTHON] Improve fallback and dtype convert behavior (#241)
tqchen
(tvm-ffi) branch main updated: [FIX] Fix missing static registration for DLTensor* (#239)
junrushao
(tvm-ffi) branch main updated: [ERROR] Make Error conform more to std (#240)
junrushao
[PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(7) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(7) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(7) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(7) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(7) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(7) [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for decomposed operators and fix IR of ops tests(7) [tvm]
via GitHub
(tvm-ffi) branch main updated: [STREAM] Enable compact with cuda-python driver stream (#236)
junrushao
(tvm-ffi) branch main updated: Allow handling of load-bearing compiler flags for dlpack (#231)
junrushao
(tvm-ffi) branch main updated: [CYTHON] Fix ctypes.c_void_p for nullptr (#235)
tqchen
Re: [PR] feat(relax/frontend/torch): Add basic range constraint support [tvm]
via GitHub
Re: [PR] feat(relax/frontend/torch): Add basic range constraint support [tvm]
via GitHub
Re: [PR] feat(relax/frontend/torch): Add basic range constraint support [tvm]
via GitHub
Re: [PR] feat(relax/frontend/torch): Add basic range constraint support [tvm]
via GitHub
[PR] Support integer type input for log and log2 [tvm]
via GitHub
Re: [PR] Support integer type input for log and log2 [tvm]
via GitHub