This is an automated email from the ASF dual-hosted git repository.
ruihangl pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git
The following commit(s) were added to refs/heads/main by this push:
new c9fb8cd3cd [Docs] Clean up stale references from recent refactors (#18908)
c9fb8cd3cd is described below
commit c9fb8cd3cd481fd77de1aeb18e40cc197e4f59cf
Author: Shushi Hong <[email protected]>
AuthorDate: Wed Mar 18 13:57:14 2026 -0400
[Docs] Clean up stale references from recent refactors (#18908)
This is a follow-up PR to #18906.
- **Removed "Unity" branding**: Replaced "Apache TVM Unity" with "Apache TVM" across all docs (the Unity branch has been merged into main)
- **Fixed stale Python APIs**: `tvm.instrument` → `tvm.ir.instrument`,
`tvm.transform` → `tvm.ir.transform`, `tvm.module.Module` →
`tvm.runtime.Module`, `tvm.convert` → `tvm.runtime.convert`,
`tvm.runtime.load` → `tvm.runtime.load_module`
- **Fixed S-TIR migration**: `tvm.tir.transform.DefaultGPUSchedule` →
`tvm.s_tir.transform.DefaultGPUSchedule`; added missing
`s_tir/transform` API doc page
- **Removed references to deleted features**: AutoTVM/AutoScheduler
(removed), FewShotTuning (phased out in #18864), i386 CI
(removed in #18737), VTA (removed), TensorFlow frontend (not in Relax),
MXNet/Gluon (archived)
- **Fixed stale C++ references**: `NodeRef` → `ObjectRef`, `make_node` →
`ffi::make_object`, removed unused `using
tvm::runtime::Registry`, `src/relax/transforms/` →
`src/relax/transform/`
- **Updated CI docs**: Added GitHub Actions lint workflow info to
`ci.rst`
- **Fixed broken links**: dead tutorial links in `pass_infra.rst`,
`gallery/` → `docs/how_to/tutorials/`, `how-to/index.rst` →
`how_to/dev/index.rst`, `discuss.tvm.ai` → `discuss.tvm.apache.org`
- **Updated overview**: TensorFlow → ONNX in supported framework list,
AutoTVM → RPC in security docs
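The API renames above can be expressed as a simple table. As a rough sketch (the `RENAMES` table and `find_stale` helper below are illustrative, not part of this commit), a check for leftover stale references in doc text might look like:

```python
# Hypothetical helper, not part of this commit: detect leftover stale
# references using the rename table from the commit message above.
import re

# Old reference -> current replacement (from the bullet list above).
RENAMES = {
    "tvm.instrument": "tvm.ir.instrument",
    "tvm.transform": "tvm.ir.transform",
    "tvm.module.Module": "tvm.runtime.Module",
    "tvm.convert": "tvm.runtime.convert",
    "tvm.runtime.load": "tvm.runtime.load_module",
    "src/relax/transforms/": "src/relax/transform/",
    "discuss.tvm.ai": "discuss.tvm.apache.org",
}

def find_stale(text):
    """Return (old, new) pairs for each stale reference present in `text`."""
    hits = []
    for old, new in RENAMES.items():
        pattern = re.escape(old)
        if old[-1].isalnum() or old[-1] == "_":
            # Don't flag "tvm.runtime.load_module" when looking for "tvm.runtime.load".
            pattern += r"(?!\w)"
        if old[0].isalnum() or old[0] == "_":
            # Don't match inside a longer dotted name.
            pattern = r"(?<![\w.])" + pattern
        if re.search(pattern, text):
            hits.append((old, new))
    return hits
```

The word-boundary guards matter because several new names contain the old ones as prefixes (e.g. `tvm.runtime.load` vs. `tvm.runtime.load_module`).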
---
docs/Makefile | 1 -
docs/README.md | 4 ++--
docs/arch/index.rst | 6 ++---
docs/arch/introduction_to_module_serialization.rst | 4 ++--
docs/arch/pass_infra.rst | 25 ++++++++++---------
docs/arch/runtime.rst | 2 +-
docs/contribute/ci.rst | 28 +++++++++++++---------
docs/contribute/document.rst | 8 +++----
docs/contribute/error_handling.rst | 2 +-
docs/contribute/pull_request.rst | 2 +-
docs/contribute/release_process.rst | 2 +-
docs/deep_dive/relax/abstraction.rst | 2 +-
docs/deep_dive/relax/index.rst | 2 +-
docs/deep_dive/relax/learning.rst | 2 +-
docs/deep_dive/tensor_ir/tutorials/tir_creation.py | 4 ++--
docs/get_started/overview.rst | 2 +-
docs/get_started/tutorials/ir_module.py | 14 +++++------
docs/how_to/tutorials/customize_opt.py | 4 ++--
docs/how_to/tutorials/e2e_opt_model.py | 4 ++--
docs/reference/api/python/index.rst | 1 +
docs/reference/api/python/instrument.rst | 6 ++---
.../reference/api/python/{ => s_tir}/transform.rst | 11 +++++----
docs/reference/api/python/transform.rst | 6 ++---
docs/reference/security.rst | 2 +-
24 files changed, 75 insertions(+), 69 deletions(-)
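For a cleanup touching this many files, the mechanical part of the renames could in principle be scripted. The sketch below is illustrative only (this is not how the commit was produced, and `apply_renames` is a hypothetical helper); note that naive `str.replace` needs a carefully ordered table, since some old names are prefixes of their replacements:

```python
# Illustrative only: bulk-apply simple textual renames across a docs tree,
# e.g. tvm.module.Module -> tvm.runtime.Module. Prefix-overlapping renames
# (tvm.runtime.load vs. tvm.runtime.load_module) need manual review.
from pathlib import Path

def apply_renames(root, renames, exts=(".rst", ".py", ".md")):
    """Rewrite files under `root` in place; return the sorted list of changed files."""
    changed = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        text = original = path.read_text(encoding="utf-8")
        for old, new in renames.items():
            text = text.replace(old, new)
        if text != original:
            path.write_text(text, encoding="utf-8")
            changed.append(str(path))
    return sorted(changed)
```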
diff --git a/docs/Makefile b/docs/Makefile
index 0d0b9468bf..8a869c5cf8 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -75,7 +75,6 @@ clean:
rm -rf gen_modules
rm -rf user_tutorials
rm -rf tutorials
- rm -rf vta/tutorials
staging:
# Prepare the staging directory. Sphinx gallery automatically
diff --git a/docs/README.md b/docs/README.md
index 572b72fc3c..f708afd768 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -77,10 +77,10 @@ This will cause failure in some cases when certain machines do not have necessary
environment. You can set `TVM_TUTORIAL_EXEC_PATTERN` to only execute
the path that matches the regular expression pattern.
-For example, to only build tutorials under `/vta/tutorials`, run
+For example, to only build tutorials under `/get_started/tutorials`, run
```bash
-python tests/scripts/ci.py docs --tutorial-pattern=/vta/tutorials
+python tests/scripts/ci.py docs --tutorial-pattern=/get_started/tutorials
```
To only build one specific file, do
diff --git a/docs/arch/index.rst b/docs/arch/index.rst
index 186227a95b..9677f994ce 100644
--- a/docs/arch/index.rst
+++ b/docs/arch/index.rst
@@ -102,7 +102,7 @@ focus on optimizations that are not covered by them.
cross-level transformations
^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Apache TVM brings a unity strategy to optimize the end-to-end models. As the IRModule includes both relax and tir functions, the cross-level transformations are designed to mutate
+Apache TVM enables cross-level optimization of end-to-end models. As the IRModule includes both relax and tir functions, the cross-level transformations are designed to mutate
the IRModule by applying different transformations to these two types of
functions.
For example, ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax
operators, adding corresponding TIR PrimFunc into the IRModule, and replacing
the relax operators
@@ -337,7 +337,7 @@ While possible to construct operators directly via TIR or tensor expressions (TE
tvm/s_tir/meta_schedule
-----------------------
-MetaSchedule is a system for automated search-based program optimization. It is designed to be a drop-in replacement for AutoTVM and AutoScheduler,
+MetaSchedule is a system for automated search-based program optimization,
and can be used to optimize TensorIR schedules. Note that MetaSchedule only
works with static-shape workloads.
tvm/dlight
@@ -346,6 +346,6 @@ tvm/dlight
DLight is a set of pre-defined, easy-to-use, and performant TIR schedules.
DLight aims:
- Fully support **dynamic shape workloads**.
-- **Light weight**. DLight schedules provides tuning-free or (very few-shots tuning) schedule with reasonable performance.
+- **Light weight**. DLight schedules provide tuning-free schedules with reasonable performance.
- **Robust**. DLight schedules are designed to be robust and general-purpose
for a single rule. And if the rule is not applicable,
DLight not raise any error and switch to the next rule automatically.
diff --git a/docs/arch/introduction_to_module_serialization.rst b/docs/arch/introduction_to_module_serialization.rst
index ec6d262f33..1dfc9a1678 100644
--- a/docs/arch/introduction_to_module_serialization.rst
+++ b/docs/arch/introduction_to_module_serialization.rst
@@ -27,7 +27,7 @@ serialization format standard and implementation details.
Serialization
*************
-The entrance API is ``export_library`` of ``tvm.module.Module``.
+The entrance API is ``export_library`` of ``tvm.runtime.Module``.
Inside this function, we will do the following steps:
1. Collect all DSO modules (LLVM modules and C modules)
@@ -143,7 +143,7 @@ support arbitrary modules to import ideally.
Deserialization
****************
-The entrance API is ``tvm.runtime.load``. This function
+The entrance API is ``tvm.runtime.load_module``. This function
actually calls ``_LoadFromFile``. If we dig it a little deeper, this is
``Module::LoadFromFile``. In our example, the file is ``deploy.so``,
according to the function logic, we will call ``module.loadfile_so`` in
diff --git a/docs/arch/pass_infra.rst b/docs/arch/pass_infra.rst
index 8826251ba8..0d2043a66c 100644
--- a/docs/arch/pass_infra.rst
+++ b/docs/arch/pass_infra.rst
@@ -45,11 +45,11 @@ will contain hundreds of individual passes. Often external users will want to
have custom passes correctly scheduled without having to modify a single
handcrafted pass order.
-Similarly, modern deep learning frameworks, such as Pytorch and MXNet
-Gluon, also have the tendency to enable pass-style layer construction
-scheme through `Sequential`_ and `Block`_, respectively. With such constructs,
-these modern frameworks are able to conveniently add modules/layers to their
-containers and build up neural networks easily.
+Similarly, modern deep learning frameworks, such as PyTorch, also have
+the tendency to enable pass-style layer construction scheme through
+`Sequential`_. With such constructs, these modern frameworks are able to
+conveniently add modules/layers to their containers and build up neural
+networks easily.
The design of the TVM pass infra is largely inspired by the hierarchical
pass manager used in LLVM and the block-style containers used in the popular
@@ -132,7 +132,7 @@ Python APIs to create a compilation pipeline using pass context.
ffi::Array<instrument::PassInstrument> instruments;
};
- class PassContext : public NodeRef {
+ class PassContext : public ObjectRef {
public:
TVM_DLL static PassContext Create();
TVM_DLL static PassContext Current();
@@ -158,7 +158,7 @@ Python APIs to create a compilation pipeline using pass context.
/*! \brief The current pass context. */
std::stack<PassContext> context_stack;
PassContextThreadLocalEntry() {
- default_context = PassContext(make_node<PassContextNode>());
+ default_context = PassContext(ffi::make_object<PassContextNode>());
}
};
@@ -300,7 +300,6 @@ pass is registered with an API endpoint as we will show later.
.. code:: c++
Pass GetPass(const std::string& pass_name) {
- using tvm::runtime::Registry;
std::string fpass_name = "relax.transform." + pass_name;
const std::optional<tvm::ffi::Function> f =
tvm::ffi::Function::GetGlobal(fpass_name);
TVM_FFI_ICHECK(f.has_value()) << "Cannot find " << fpass_name
@@ -341,7 +340,7 @@ We've covered the concept of different level of passes and the context used for
compilation. It would be interesting to see how easily users can register
a pass. Let's take const folding as an example. This pass has already been
implemented to fold constants in a Relax function (found in
-`src/relax/transforms/fold_constant.cc`_).
+`src/relax/transform/fold_constant.cc`_).
An API was provided to perform the ``Expr`` to ``Expr`` transformation.
@@ -635,7 +634,7 @@ new ``PassInstrument`` are called.
.. _Sequential: https://pytorch.org/docs/stable/nn.html?highlight=sequential#torch.nn.Sequential
-.. _Block: https://mxnet.apache.org/api/python/docs/api/gluon/block.html#gluon-block
+.. _Block: https://pytorch.org/docs/stable/generated/torch.nn.Module.html
.. _include/tvm/ir/transform.h: https://github.com/apache/tvm/blob/main/include/tvm/ir/transform.h
@@ -647,7 +646,7 @@ new ``PassInstrument`` are called.
.. _src/ir/instrument.cc: https://github.com/apache/tvm/blob/main/src/ir/instrument.cc
-.. _src/relax/transforms/fold_constant.cc: https://github.com/apache/tvm/blob/main/src/relax/transforms/fold_constant.cc
+.. _src/relax/transform/fold_constant.cc: https://github.com/apache/tvm/blob/main/src/relax/transform/fold_constant.cc
.. _python/tvm/relax/transform/transform.py: https://github.com/apache/tvm/blob/main/python/tvm/relax/transform/transform.py
@@ -659,6 +658,6 @@ new ``PassInstrument`` are called.
.. _src/tir/transform/unroll_loop.cc: https://github.com/apache/tvm/blob/main/src/tir/transform/unroll_loop.cc
-.. _use pass infra: https://github.com/apache/tvm/blob/main/tutorials/dev/use_pass_infra.py
+.. _use pass infra: https://github.com/apache/tvm/blob/main/docs/how_to/tutorials/customize_opt.py
-.. _use pass instrument: https://github.com/apache/tvm/blob/main/tutorials/dev/use_pass_instrument.py
+.. _use pass instrument: https://github.com/apache/tvm/blob/main/docs/how_to/dev/index.rst
diff --git a/docs/arch/runtime.rst b/docs/arch/runtime.rst
index 72b47705ec..25199b57eb 100644
--- a/docs/arch/runtime.rst
+++ b/docs/arch/runtime.rst
@@ -128,7 +128,7 @@ we can pass functions from python (as PackedFunc) to C++.
print(msg)
# convert to PackedFunc
- f = tvm.convert(callback)
+ f = tvm.runtime.convert(callback)
callhello = tvm.get_global_func("callhello")
# prints hello world
callhello(f)
diff --git a/docs/contribute/ci.rst b/docs/contribute/ci.rst
index ef64856ab0..6fdb0cd187 100644
--- a/docs/contribute/ci.rst
+++ b/docs/contribute/ci.rst
@@ -23,12 +23,16 @@ Using TVM's CI
.. contents::
:local:
-TVM primarily uses Jenkins for running Linux continuous integration (CI) tests on
-`branches <https://ci.tlcpack.ai/job/tvm/>`_
-`pull requests <https://ci.tlcpack.ai/job/tvm/view/change-requests/>`_ through a
-build configuration specified in a `Jenkinsfile <https://github.com/apache/tvm/blob/main/ci/jenkins/templates/>`_.
-Jenkins is the only CI step that is codified to block merging. TVM is also tested minimally
-against Windows and MacOS using GitHub Actions.
+TVM uses a combination of Jenkins and GitHub Actions for continuous integration (CI).
+
+- **Jenkins** runs Linux CI tests on
+  `branches <https://ci.tlcpack.ai/job/tvm/>`_ and
+  `pull requests <https://ci.tlcpack.ai/job/tvm/view/change-requests/>`_ through a
+  build configuration specified in `Jenkinsfile templates <https://github.com/apache/tvm/blob/main/ci/jenkins/templates/>`_.
+  Jenkins is the primary CI step that is codified to block merging.
+- **GitHub Actions** runs `linting <https://github.com/apache/tvm/blob/main/.github/workflows/lint.yml>`_
+  (via pre-commit hooks) on pushes and pull requests, as well as minimal
+  Windows and macOS build-and-test jobs.
This page describes how contributors and committers can use TVM's CI to verify their code. You can
read more about the design of TVM CI in the `tlc-pack/ci <https://github.com/tlc-pack/ci>`_ repo.
@@ -36,10 +40,11 @@ read more about the design of TVM CI in the `tlc-pack/ci <https://github.com/tlc
For Contributors
----------------
-A standard CI run looks something like this viewed in `Jenkins' BlueOcean viewer <https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/activity>`_.
+A standard Jenkins CI run looks something like this viewed in `Jenkins' BlueOcean viewer <https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/activity>`_.
CI runs usually take a couple hours to complete and pull requests (PRs) cannot
be merged before CI
has successfully completed. To diagnose failing steps, click through to the
failing
-pipeline stage then to the failing step to see the output logs.
+pipeline stage then to the failing step to see the output logs. For GitHub Actions jobs (lint,
+Windows, macOS), check the "Actions" tab on the pull request or repository page.
.. image::
https://github.com/tlc-pack/web-data/raw/main/images/contribute/ci.png
:width: 800
@@ -142,12 +147,13 @@ Skipping CI
^^^^^^^^^^^
For reverts and trivial forward fixes, adding ``[skip ci]`` to the revert's
-PR title will cause CI to shortcut and only run lint. Committers should
+PR title will cause Jenkins CI to shortcut and only run lint. Committers should
take care that they only merge CI-skipped PRs to fix a failure on ``main`` and
not in cases where the submitter wants to shortcut CI to merge a change faster.
-The PR title is checked when the build is first run (specifically during the lint
+The PR title is checked when the build is first run (specifically during the Jenkins lint
step, so changes after that has run do not affect CI and will require the job to
-be re-triggered by another ``git push``).
+be re-triggered by another ``git push``). Note that GitHub Actions lint always
+runs independently via pre-commit hooks.
.. code-block:: bash
diff --git a/docs/contribute/document.rst b/docs/contribute/document.rst
index 70499e2a71..8dcb541107 100644
--- a/docs/contribute/document.rst
+++ b/docs/contribute/document.rst
@@ -61,7 +61,7 @@ How-to Guides
These are step by step guides on how to solve particular problems. The user can
ask meaningful questions, and the documents provide answers. An examples of
this type of document might be, "how do I compile an optimized model for ARM
-architecture?" or "how do I compile and optimize a TensorFlow model?" These
+architecture?" or "how do I compile and optimize a PyTorch model?" These
documents should be open enough that a user could see how to apply it to a new
use case. Practical usability is more important than completeness. The title
should tell the user what problem the how-to is solving.
@@ -209,8 +209,8 @@ Sphinx Gallery How-Tos
----------------------
We use `sphinx-gallery <https://sphinx-gallery.github.io/>`_ to build many
-Python how-tos. You can find the source code under `gallery
-<https://github.com/apache/tvm/tree/main/gallery>`_.
+Python how-tos. You can find the source code under `docs/how_to/tutorials
+<https://github.com/apache/tvm/tree/main/docs/how_to/tutorials>`_.
One thing that worth noting is that the comment blocks are written in
reStructuredText instead of markdown so be aware of the syntax.
@@ -222,7 +222,7 @@ existing environment to demonstrate the usage.
If you add a new categorization of how-to, you will need to add references to
`conf.py <https://github.com/apache/tvm/tree/main/docs/conf.py>`_ and the
-`how-to index <https://github.com/apache/tvm/tree/main/docs/how-to/index.rst>`_
+`how-to index <https://github.com/apache/tvm/tree/main/docs/how_to/dev/index.rst>`_
Refer to Another Location in the Document
-----------------------------------------
diff --git a/docs/contribute/error_handling.rst b/docs/contribute/error_handling.rst
index c580977ddb..754602c421 100644
--- a/docs/contribute/error_handling.rst
+++ b/docs/contribute/error_handling.rst
@@ -84,7 +84,7 @@ error messages when necessary.
def preferred():
# Very clear about what is being raised and what is the error message.
-    raise OpNotImplemented("Operator relu is not implemented in the MXNet frontend")
+    raise OpNotImplemented("Operator relu is not implemented in the ONNX frontend")
def _op_not_implemented(op_name):
    return OpNotImplemented("Operator {} is not implemented.".format(op_name))
diff --git a/docs/contribute/pull_request.rst b/docs/contribute/pull_request.rst
index c20be2df38..f8128ce297 100644
--- a/docs/contribute/pull_request.rst
+++ b/docs/contribute/pull_request.rst
@@ -189,7 +189,7 @@ Docker (recommended)
``tests/scripts/ci.py`` replicates the CI environment locally and provides a
user-friendly interface.
The same Docker images and scripts used in CI are used directly to run tests.
It also deposits builds
in different folders so you can maintain multiple test environments without
rebuilding from scratch
-each time (e.g. you can test a change in CPU and i386 while retaining incremental rebuilds).
+each time (e.g. you can test a change in CPU and GPU while retaining incremental rebuilds).
.. code:: bash
diff --git a/docs/contribute/release_process.rst b/docs/contribute/release_process.rst
index f29f82a35e..a38fffe127 100644
--- a/docs/contribute/release_process.rst
+++ b/docs/contribute/release_process.rst
@@ -49,7 +49,7 @@ The release manager role in TVM means you are responsible for a few different th
Prepare the Release Notes
-------------------------
-Release note contains new features, improvement, bug fixes, known issues and deprecation, etc. TVM provides `monthly dev report <https://discuss.tvm.ai/search?q=TVM%20Monthly%20%23Announcement>`_ collects developing progress each month. It could be helpful to who writes the release notes.
+Release notes contain new features, improvements, bug fixes, known issues, deprecations, etc. TVM provides a `monthly dev report <https://discuss.tvm.apache.org/search?q=TVM%20Monthly%20%23Announcement>`_ that collects development progress each month. It can be helpful to whoever writes the release notes.
It is recommended to open a GitHub issue to collect feedbacks for the release
note draft before cutting the release branch. See the scripts in
``tests/scripts/release`` for some starting points.
diff --git a/docs/deep_dive/relax/abstraction.rst b/docs/deep_dive/relax/abstraction.rst
index 2b9ee8b5d7..8705d6dc1d 100644
--- a/docs/deep_dive/relax/abstraction.rst
+++ b/docs/deep_dive/relax/abstraction.rst
@@ -52,7 +52,7 @@ relationships between different parts of the model.
Key Features of Relax
~~~~~~~~~~~~~~~~~~~~~
-Relax, the graph representation utilized in Apache TVM's Unity strategy,
+Relax, the graph representation utilized in Apache TVM,
facilitates end-to-end optimization of ML models through several crucial
features:
diff --git a/docs/deep_dive/relax/index.rst b/docs/deep_dive/relax/index.rst
index 2b7c4ea599..d6c44d2e63 100644
--- a/docs/deep_dive/relax/index.rst
+++ b/docs/deep_dive/relax/index.rst
@@ -20,7 +20,7 @@
Relax
=====
Relax is a high-level abstraction for graph optimization and transformation in
Apache TVM stack.
-Additionally, Apache TVM combine Relax and TensorIR together as a unity strategy for cross-level
+Additionally, Apache TVM combines Relax and TensorIR together for cross-level
optimization. Hence, Relax is usually working closely with TensorIR for
representing and optimizing
the whole IRModule
diff --git a/docs/deep_dive/relax/learning.rst b/docs/deep_dive/relax/learning.rst
index 6c16ff944b..72dc21186b 100644
--- a/docs/deep_dive/relax/learning.rst
+++ b/docs/deep_dive/relax/learning.rst
@@ -19,7 +19,7 @@
Understand Relax Abstraction
============================
-Relax is a graph abstraction used in Apache TVM Unity strategy, which
+Relax is a graph abstraction used in Apache TVM, which
helps to end-to-end optimize ML models. The principal objective of Relax
is to depict the structure and data flow of ML models, including the
dependencies and relationships between different parts of the model, as
diff --git a/docs/deep_dive/tensor_ir/tutorials/tir_creation.py b/docs/deep_dive/tensor_ir/tutorials/tir_creation.py
index 0254dbb3a0..48b6dcc627 100644
--- a/docs/deep_dive/tensor_ir/tutorials/tir_creation.py
+++ b/docs/deep_dive/tensor_ir/tutorials/tir_creation.py
@@ -22,7 +22,7 @@
TensorIR Creation
-----------------
In this section, we will introduce the methods to write a TensorIR function
-in Apache TVM Unity. This tutorial presumes familiarity with the fundamental concepts of TensorIR.
+in Apache TVM. This tutorial presumes familiarity with the fundamental concepts of TensorIR.
If not already acquainted, please refer to :ref:`tir-learning` initially.
.. note::
@@ -233,7 +233,7 @@ print(evaluate_dynamic_shape(dyn_shape_lib, m=64, n=64, k=128))
# Tensor Expression comprises two components within the TVM stack: the
expression and the
# schedule. The expression is the domain-specific language embodying the
computation pattern,
# precisely what we're addressing in this section. Conversely, the TE
schedule is the legacy
-# scheduling method, has been superseded by the TensorIR schedule in the TVM Unity stack.
+# scheduling method, and has been superseded by the TensorIR schedule in the current TVM stack.
#
# Create Static-Shape Functions
# *****************************
diff --git a/docs/get_started/overview.rst b/docs/get_started/overview.rst
index 5931837d16..6d775b5de1 100644
--- a/docs/get_started/overview.rst
+++ b/docs/get_started/overview.rst
@@ -47,7 +47,7 @@ please refer to :ref:`quick_start`
1. **Import/construct an ML model**
-   TVM supports importing models from various frameworks, such as PyTorch, TensorFlow for generic ML models. Meanwhile, we can create models directly using Relax frontend for scenarios of large language models.
+   TVM supports importing models from various frameworks, such as PyTorch and ONNX, for generic ML models. Meanwhile, we can create models directly using the Relax frontend for scenarios such as large language models.
2. **Perform composable optimization** transformations via ``pipelines``
diff --git a/docs/get_started/tutorials/ir_module.py b/docs/get_started/tutorials/ir_module.py
index e47e4fba4b..49a7882853 100644
--- a/docs/get_started/tutorials/ir_module.py
+++ b/docs/get_started/tutorials/ir_module.py
@@ -21,7 +21,7 @@
IRModule
========
-This tutorial presents the core abstraction of Apache TVM Unity, the IRModule.
+This tutorial presents the core abstraction of Apache TVM, the IRModule.
The IRModule encompasses the **entirety** of the ML models, incorporating the
computational graph, tensor programs, and potential calls to external
libraries.
@@ -49,7 +49,7 @@ from tvm.relax.frontend.torch import from_exported_program
# Import from existing models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
# The most common way to initialize an IRModule is to import from an existing
-# model. Apache TVM Unity accommodates imports from a range of frameworks,
+# model. Apache TVM accommodates imports from a range of frameworks,
# such as PyTorch and ONNX. This tutorial solely demonstrates the import
process
# from PyTorch.
@@ -86,7 +86,7 @@ mod_from_torch.show()
######################################################################
# Write with Relax NN Module
# ~~~~~~~~~~~~~~~~~~~~~~~~~~
-# Apache TVM Unity also provides a set of PyTorch-liked APIs, to help users
+# Apache TVM also provides a set of PyTorch-like APIs to help users
# write the IRModule directly.
from tvm.relax.frontend import nn
@@ -170,7 +170,7 @@ assert mod[gv] == mod["main"]
######################################################################
# Transformations on IRModules
# ----------------------------
-# Transformations are the import component of Apache TVM Unity. One transformation
+# Transformations are an important component of Apache TVM. One transformation
# takes in an IRModule and outputs another IRModule. We can apply a sequence of
# transformations to an IRModule to obtain a new IRModule. That is the common
way to
# optimize a model.
@@ -195,7 +195,7 @@ mod.show()
print(mod.get_global_vars())
######################################################################
-# Next, Apache TVM Unity provides a set of default transformation pipelines for users,
+# Next, Apache TVM provides a set of default transformation pipelines for users,
# to simplify the transformation process. We can then apply the default
pipeline to the module.
# The default **zero** pipeline contains very fundamental transformations,
including:
#
@@ -225,7 +225,7 @@ mod.show()
# Deploy the IRModule Universally
# -------------------------------
# After the optimization, we can compile the model into a TVM runtime module.
-# Notably, Apache TVM Unity provides the ability of universal deployment, which means
+# Notably, Apache TVM provides the ability of universal deployment, which means
# we can deploy the same IRModule on different backends, including CPU, GPU,
and other emerging
# backends.
#
@@ -279,7 +279,7 @@ assert np.allclose(cpu_out, gpu_out, atol=1e-3)
######################################################################
# Deploy on Other Backends
# ~~~~~~~~~~~~~~~~~~~~~~~~
-# Apache TVM Unity also supports other backends, such as different kinds of GPUs
+# Apache TVM also supports other backends, such as different kinds of GPUs
# (Metal, ROCm, Vulkan and OpenCL), different kinds of CPUs (x86, ARM), and
other
# emerging backends (e.g., WebAssembly). The deployment process is similar to
the
# GPU backend.
diff --git a/docs/how_to/tutorials/customize_opt.py b/docs/how_to/tutorials/customize_opt.py
index 342184dc15..4872b324d9 100644
--- a/docs/how_to/tutorials/customize_opt.py
+++ b/docs/how_to/tutorials/customize_opt.py
@@ -61,11 +61,11 @@ from tvm.relax.frontend import nn
######################################################################
# Composable IRModule Optimization
# --------------------------------
-# Apache TVM Unity provides a flexible way to optimize the IRModule. Everything centered
+# Apache TVM provides a flexible way to optimize the IRModule. Everything centered
# around IRModule optimization can be composed with existing pipelines. Note
that each optimization
# can focus on **part of the computation graph**, enabling partial lowering or
partial optimization.
#
-# In this tutorial, we will demonstrate how to optimize a model with Apache TVM Unity.
+# In this tutorial, we will demonstrate how to optimize a model with Apache TVM.
######################################################################
# Prepare a Relax Module
diff --git a/docs/how_to/tutorials/e2e_opt_model.py b/docs/how_to/tutorials/e2e_opt_model.py
index 6bd1be512c..bdb7ac0c91 100644
--- a/docs/how_to/tutorials/e2e_opt_model.py
+++ b/docs/how_to/tutorials/e2e_opt_model.py
@@ -89,7 +89,7 @@ if not IS_IN_CI:
######################################################################
# IRModule Optimization
# ---------------------
-# Apache TVM Unity provides a flexible way to optimize the IRModule. Everything centered
+# Apache TVM provides a flexible way to optimize the IRModule. Everything centered
# around IRModule optimization can be composed with existing pipelines. Note
that each
# transformation can be combined as an optimization pipeline via
``tvm.ir.transform.Sequential``.
#
@@ -141,7 +141,7 @@ if not IS_IN_CI:
if not IS_IN_CI:
with target:
- mod = tvm.tir.transform.DefaultGPUSchedule()(mod)
+ mod = tvm.s_tir.transform.DefaultGPUSchedule()(mod)
ex = tvm.compile(mod, target=target)
dev = tvm.device("cuda", 0)
vm = relax.VirtualMachine(ex, dev)
diff --git a/docs/reference/api/python/index.rst b/docs/reference/api/python/index.rst
index 1752b9d99c..a89938367e 100644
--- a/docs/reference/api/python/index.rst
+++ b/docs/reference/api/python/index.rst
@@ -63,6 +63,7 @@ Python API
:caption: tvm.s_tir
s_tir/schedule
+ s_tir/transform
s_tir/dlight
.. toctree::
diff --git a/docs/reference/api/python/instrument.rst b/docs/reference/api/python/instrument.rst
index 270a19690b..56897b2cfa 100644
--- a/docs/reference/api/python/instrument.rst
+++ b/docs/reference/api/python/instrument.rst
@@ -15,8 +15,8 @@
specific language governing permissions and limitations
under the License.
-tvm.instrument
---------------
-.. automodule:: tvm.instrument
+tvm.ir.instrument
+------------------
+.. automodule:: tvm.ir.instrument
:members:
:imported-members:
diff --git a/docs/reference/api/python/transform.rst b/docs/reference/api/python/s_tir/transform.rst
similarity index 86%
copy from docs/reference/api/python/transform.rst
copy to docs/reference/api/python/s_tir/transform.rst
index d200dfdd11..b5835280d4 100644
--- a/docs/reference/api/python/transform.rst
+++ b/docs/reference/api/python/s_tir/transform.rst
@@ -15,8 +15,9 @@
specific language governing permissions and limitations
under the License.
-tvm.transform
--------------
-.. automodule:: tvm.transform
- :members:
- :imported-members:
+tvm.s_tir.transform
+-------------------
+.. automodule:: tvm.s_tir.transform
+ :members:
+ :imported-members:
+ :no-index:
diff --git a/docs/reference/api/python/transform.rst b/docs/reference/api/python/transform.rst
index d200dfdd11..62f7cac389 100644
--- a/docs/reference/api/python/transform.rst
+++ b/docs/reference/api/python/transform.rst
@@ -15,8 +15,8 @@
specific language governing permissions and limitations
under the License.
-tvm.transform
--------------
-.. automodule:: tvm.transform
+tvm.ir.transform
+-----------------
+.. automodule:: tvm.ir.transform
:members:
:imported-members:
diff --git a/docs/reference/security.rst b/docs/reference/security.rst
index cb18a1df07..8380b6c15a 100644
--- a/docs/reference/security.rst
+++ b/docs/reference/security.rst
@@ -47,5 +47,5 @@ All of the TVM APIs are designed to be used by trusted users, for APIs that invo
we expect users to put in only trusted URLs.
-AutoTVM data exchange between the tracker, server and client are in plain-text.
+RPC data exchange between the tracker, server and client is in plain text.
It is recommended to use them under trusted networking environment or
encrypted channels.