discuss-archive
Messages by Thread
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
Re: [PR] [Feature] Add cubin launcher utility as an extra header [tvm-ffi]
via GitHub
[PR] [Relax][PyTorch] Add count_include_pad support to avg_pool2d [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `count_include_pad` support to `avg_pool2d` in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add `count_include_pad` support to `avg_pool2d` in PyTorch frontend [tvm]
via GitHub
[GH] (tvm/main): Workflow run "pip in /docker/python for certifi - Update #1161509587" failed!
GitBox
[GH] (tvm/main): Workflow run "npm_and_yarn in /web for underscore.string - Update #1161509572" failed!
GitBox
[GH] (tvm-ffi/torch-compact): Workflow run "CI" is working again!
GitBox
[PR] [Relax][PyTorch] Fix batch_norm.default in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix `batch_norm.default` args handling in ExportedProgram frontend [tvm]
via GitHub
[GH] (tvm-ffi/torch-compact): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/torch-compact): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/torch-compact): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/torch-compact): Workflow run "CI" failed!
GitBox
[PR] [TORCH] Fix precise version of f8e8m0 support [tvm-ffi]
via GitHub
Re: [PR] [TORCH] Fix precise version of f8e8m0 support [tvm-ffi]
via GitHub
Re: [PR] [TORCH] Fix precise version of f8e8m0 support [tvm-ffi]
via GitHub
Re: [PR] [TORCH] Fix precise version of f8e8m0 support [tvm-ffi]
via GitHub
[PR] [Relax][PyTorch] Add dynamic shape support to `torch.ops.aten.sym_size.int` in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add dynamic shape support to `torch.ops.aten.sym_size.int` in ExportedProgram frontend [tvm]
via GitHub
[PR] [CI] Update file patterns for specific linting hooks [tvm]
via GitHub
Re: [PR] [CI] Update file patterns for specific linting hooks [tvm]
via GitHub
Re: [PR] [CI] Update file patterns for specific linting hooks [tvm]
via GitHub
Re: [PR] [CI] Update file patterns for specific linting hooks [tvm]
via GitHub
Re: [PR] [CI] Update file patterns for specific linting hooks [tvm]
via GitHub
Re: [PR] [CI] Update file patterns for specific linting hooks [tvm]
via GitHub
Re: [PR] [CI] Update file patterns for specific linting hooks [tvm]
via GitHub
[PR] Add support for grid_sample operator [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for grid_sample operator [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for grid_sample operator [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for grid_sample operator [tvm]
via GitHub
[PR] [Relax][PyTorch] Add support for gumbel_softmax [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for gumbel_softmax [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for gumbel_softmax [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for gumbel_softmax [tvm]
via GitHub
[I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
Re: [I] [Question] Kindly ask for metal example. [tvm-ffi]
via GitHub
[I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
Re: [I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
Re: [I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
Re: [I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
Re: [I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
Re: [I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
Re: [I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
Re: [I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
Re: [I] [Docs] The end-to-end optimization tutorial doesn't work on the latest code [tvm]
via GitHub
[I] [RESULT][VOTE] Release Apache TVM FFI v0.1.3.rc1 [tvm-ffi]
via GitHub
Re: [I] [RESULT][VOTE] Release Apache TVM FFI v0.1.3.rc1 [tvm-ffi]
via GitHub
[PR] [Web] Bump web runtime version 0.23.0-dev1 [tvm]
via GitHub
Re: [PR] [Web] Bump web runtime version 0.23.0-dev1 [tvm]
via GitHub
Re: [PR] [Web] Bump web runtime version 0.23.0-dev1 [tvm]
via GitHub
Re: [PR] [Web] Bump web runtime version 0.23.0-dev1 [tvm]
via GitHub
[PR] fix: clang tidy issue [tvm-ffi]
via GitHub
Re: [PR] fix: clang tidy issue [tvm-ffi]
via GitHub
Re: [PR] fix: clang tidy issue [tvm-ffi]
via GitHub
[GH] (tvm/issue-17798): Workflow run "PR" is working again!
GitBox
[GH] (tvm/main): Workflow run "pip in /docker/python for certifi - Update #1160439268" failed!
GitBox
[GH] (tvm/main): Workflow run "npm_and_yarn in /web for underscore.string - Update #1160439267" failed!
GitBox
[PR] [TIR]: Fix VerifyStream::Verify causes dereferencing an invalid pointer [tvm]
via GitHub
Re: [PR] [TIR]: Fix VerifyStream::Verify causes dereferencing an invalid pointer [tvm]
via GitHub
Re: [PR] [TIR]: Fix VerifyStream::Verify causes dereferencing an invalid pointer [tvm]
via GitHub
Re: [PR] [BugFix][Codegen, CUDA] Fix faulty codegen for FP8 [tvm]
via GitHub
[PR] [Relax] Fix the squeeze operator to behave consistently with torch [tvm]
via GitHub
Re: [PR] [Relax] Fix the squeeze operator to behave consistently with torch [tvm]
via GitHub
Re: [PR] [Relax] Fix the squeeze operator to behave consistently with torch [tvm]
via GitHub
Re: [PR] [Relax] Fix the squeeze operator to behave consistently with torch [tvm]
via GitHub
Re: [PR] [Relax] Fix the squeeze operator to behave consistently with torch [tvm]
via GitHub
Re: [PR] [Relax] Fix the squeeze operator to behave consistently with torch [tvm]
via GitHub
[I] [Feature Request] Support for Gumbel Softmax and related stochastic operations in PyTorch frontend [tvm]
via GitHub
Re: [I] [Feature Request] Support for Gumbel Softmax and related stochastic operations in PyTorch frontend [tvm]
via GitHub
[I] [Feature Request] Support for sparse matrix multiplication and random number generation in PyTorch frontend [tvm]
via GitHub
Re: [I] [Feature Request] Support for sparse matrix multiplication and random number generation in PyTorch frontend [tvm]
via GitHub
Re: [I] [Feature Request] Support for sparse matrix multiplication and random number generation in PyTorch frontend [tvm]
via GitHub
[I] [Feature Request] Support for Spatial Transformer Network operations in PyTorch frontend [tvm]
via GitHub
Re: [I] [Feature Request] Support for Spatial Transformer Network operations in PyTorch frontend [tvm]
via GitHub
[I] [Bug] BufferError: Can't export tensors with layout other than torch.strided when model contains sparse tensors [tvm]
via GitHub
Re: [I] [Bug] BufferError: Can't export tensors with layout other than torch.strided when model contains sparse tensors [tvm]
via GitHub
[PR] [Relax][PyTorch] Add support for `torch.ops.aten.sym_size.int` in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for `torch.ops.aten.sym_size.int` in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for `torch.ops.aten.sym_size.int` in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Add support for `torch.ops.aten.sym_size.int` in ExportedProgram frontend [tvm]
via GitHub
Re: [PR] [Optimization][Operator] Implement and enable Conv2d-Reshape-Add-ReLU fusion [tvm]
via GitHub
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
[GH] (tvm/main): Workflow run "tvm-bot" is working again!
GitBox
Re: [I] [Bug] Data Type Mismatch (int64 vs int32) in T.match_buffer when Working with Scalar Buffers in TIR [tvm]
via GitHub
Re: [I] [Bug] Data Type Mismatch (int64 vs int32) in T.match_buffer when Working with Scalar Buffers in TIR [tvm]
via GitHub
[GH] (tvm-ffi/feat_cpp_stl): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" is working again!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" is working again!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "TVM-FFI-OrcJIT CI Tests" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/orcjit): Workflow run "CI" failed!
GitBox
[I] [Bug] [tvm]
via GitHub
Re: [I] [Bug] mma MultiLevelTilingTensorCore failed in meta schedule [tvm]
via GitHub
Re: [I] [Bug] mma MultiLevelTilingTensorCore failed in meta schedule [tvm]
via GitHub
[I] [Bug] [tvm]
via GitHub
Re: [I] [Bug] [tvm]
via GitHub
Re: [I] [Bug] [tvm]
via GitHub
Re: [I] [Bug] [tvm]
via GitHub
[PR] [Relax][PyTorch] Enable run_ep_decomposition by default [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enable run_ep_decomposition by default [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enable run_ep_decomposition by default [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enable run_ep_decomposition by default [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enable run_ep_decomposition by default [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enable run_ep_decomposition by default [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enable run_ep_decomposition by default [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Enable run_ep_decomposition by default [tvm]
via GitHub
[PR] [CI] Enhance python linting scripts to support revision-based checks [tvm]
via GitHub
Re: [PR] [CI] Enhance python linting scripts to support revision-based checks [tvm]
via GitHub
Re: [PR] [CI] Enhance python linting scripts to support revision-based checks [tvm]
via GitHub
[PR] [Contrib] Update RandomFill to use StreamSync for CUDA synchronization [tvm]
via GitHub
Re: [PR] [Contrib] Update RandomFill to use StreamSync for CUDA synchronization [tvm]
via GitHub
Re: [PR] [Contrib] Update RandomFill to use StreamSync for CUDA synchronization [tvm]
via GitHub
Re: [PR] [Contrib] Update RandomFill to use StreamSync for CUDA synchronization [tvm]
via GitHub
[I] [Bug] 'TVMSynchronize' was not declared in this scope [tvm]
via GitHub
Re: [I] [Bug] 'TVMSynchronize' was not declared in this scope [tvm]
via GitHub
[PR] [Web] Replace string with TVMFFIByteArray* to avoid memory issues [tvm]
via GitHub
Re: [PR] [Web] Replace string with TVMFFIByteArray* to avoid memory issues [tvm]
via GitHub
Re: [PR] [Web] Replace string with TVMFFIByteArray* to avoid memory issues [tvm]
via GitHub
Re: [PR] [Web] Replace string with TVMFFIByteArray* to avoid memory issues [tvm]
via GitHub
Re: [PR] [Web] Replace string with TVMFFIByteArray* to avoid memory issues [tvm]
via GitHub
[GH] (tvm/main): Workflow run "Ping Reviewers" is working again!
GitBox
[GH] (tvm/main): Workflow run "Ping Reviewers" is working again!
GitBox
[GH] (tvm/main): Workflow run "Ping Reviewers" failed!
GitBox
[GH] (tvm/main): Workflow run "Ping Reviewers" failed!
GitBox
[GH] (tvm/main): Workflow run "Ping Reviewers" failed!
GitBox
[GH] (tvm/main): Workflow run "Ping Reviewers" failed!
GitBox
[GH] (tvm/main): Workflow run "Ping Reviewers" failed!
GitBox
[GH] (tvm/main): Workflow run "Ping Reviewers" failed!
GitBox
[PR] [TIR] Fix Data Type Mismatch (int64 vs int32) in T.match_buffer when Working with Scalar Buffers in TIR [tvm]
via GitHub
Re: [PR] [TIR] Fix Data Type Mismatch (int64 vs int32) in T.match_buffer when Working with Scalar Buffers in TIR [tvm]
via GitHub
Re: [PR] [Web] Fix progress reporting when loading from cache [tvm]
via GitHub
[GH] (tvm-ffi/build-hide): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/build-hide): Workflow run "CI" failed!
GitBox
[PR] [chore] Hide symbols by default for extension build [tvm-ffi]
via GitHub
Re: [PR] [chore] Hide symbols by default for extension build [tvm-ffi]
via GitHub
Re: [PR] [chore] Hide symbols by default for extension build [tvm-ffi]
via GitHub
Re: [PR] [chore] Hide symbols by default for extension build [tvm-ffi]
via GitHub
[PR] [TVMScript] Add block name suffix management for TIR macros [tvm]
via GitHub
Re: [PR] [TVMScript] Add block name suffix management for TIR macros [tvm]
via GitHub
Re: [PR] [TVMScript] Add block name suffix management for TIR macros [tvm]
via GitHub
Re: [PR] [TVMScript] Add block name suffix management for TIR macros [tvm]
via GitHub
Re: [PR] [TVMScript] Add block name suffix management for TIR macros [tvm]
via GitHub