This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch publications
in repository https://gitbox.apache.org/repos/asf/tvm.git
commit 82db5e472eda2c7a42bdc8106020f6e601580a7e
Author: Zhi <[email protected]>
AuthorDate: Wed Apr 27 07:59:10 2022 +0800

    [docs] Update publication list

    This PR updates the list with publications that use or build on top of TVM.
---
 docs/reference/publications.rst | 53 ++++++++++++++++++++++++++++++++++++-----
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git a/docs/reference/publications.rst b/docs/reference/publications.rst
index 3a90a3ad3c..1b80de54dc 100644
--- a/docs/reference/publications.rst
+++ b/docs/reference/publications.rst
@@ -22,10 +22,51 @@ TVM is developed as part of peer-reviewed research in machine learning compiler
 framework for CPUs, GPUs, and machine learning accelerators.
 
 This document includes references to publications describing the research,
-results, and design underlying TVM.
+results, and design underlying TVM, as well as work that uses or builds on top of TVM.
 
-* `TVM: An Automated End-to-End Optimizing Compiler for Deep Learning <https://arxiv.org/abs/1802.04799>`_
-* `Learning to Optimize Tensor Programs <https://arxiv.org/pdf/1805.08166.pdf>`_
-* `Ansor: Generating High-Performance Tensor Programs for Deep Learning <https://arxiv.org/abs/2006.06762>`_
-* `Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference
-  <https://arxiv.org/abs/2006.03031>`_
+2018
+
+* `TVM: An Automated End-to-End Optimizing Compiler for Deep Learning`__, [Slides_]
+.. __: https://arxiv.org/abs/1802.04799
+.. _Slides: https://www.usenix.org/system/files/osdi18-chen.pdf
+
+* `Learning to Optimize Tensor Programs`__, [Slides]
+.. __: https://arxiv.org/pdf/1805.08166.pdf
+
+2020
+
+* `Ansor: Generating High-Performance Tensor Programs for Deep Learning`__, [Slides__] [Tutorial__]
+.. __: https://arxiv.org/abs/2006.06762
+.. __: https://www.usenix.org/sites/default/files/conference/protected-files/osdi20_slides_zheng.pdf
+.. __: https://tvm.apache.org/2021/03/03/intro-auto-scheduler
+
+2021
+
+* `Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference`__, [Slides__]
+.. __: https://arxiv.org/abs/2006.03031
+.. __: https://shenhaichen.com/slides/nimble_mlsys.pdf
+
+* `Cortex: A Compiler for Recursive Deep Learning Models`__, [Slides__]
+.. __: https://arxiv.org/pdf/2011.01383.pdf
+.. __: https://mlsys.org/media/mlsys-2021/Slides/1507.pdf
+
+* `UNIT: Unifying Tensorized Instruction Compilation`__, [Slides]
+.. __: https://arxiv.org/abs/2101.08458
+
+* `Lorien: Efficient Deep Learning Workloads Delivery`__, [Slides]
+.. __: https://assets.amazon.science/c2/46/2481c9064a8bbaebcf389dd5ad75/lorien-efficient-deep-learning-workloads-delivery.pdf
+
+* `Bring Your Own Codegen to Deep Learning Compiler`__, [Slides] [Tutorial__]
+.. __: https://arxiv.org/abs/2105.03215
+.. __: https://tvm.apache.org/2020/07/15/how-to-bring-your-own-codegen-to-tvm
+
+2022
+
+* `DietCode: Automatic Optimization for Dynamic Tensor Programs`__, [Slides]
+.. __: https://proceedings.mlsys.org/paper/2022/file/fa7cdfad1a5aaf8370ebeda47a1ff1c3-Paper.pdf
+
+* `Bolt: Bridging the Gap between Auto-tuners and Hardware-native Performance`__, [Slides]
+.. __: https://proceedings.mlsys.org/paper/2022/file/38b3eff8baf56627478ec76a704e9b52-Paper.pdf
+
+* `The CoRa Tensor Compiler: Compilation for Ragged Tensors with Minimal Padding`__, [Slides]
+.. __: https://arxiv.org/abs/2110.10221
