This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
     new 926a315  [DOCS] Update to reflect the repo name change (#6967)
926a315 is described below

commit 926a3153721882121522deee06e8c67fa1aaf689
Author: Tianqi Chen <tqc...@users.noreply.github.com>
AuthorDate: Tue Nov 24 12:28:36 2020 -0500

    [DOCS] Update to reflect the repo name change (#6967)
---
 CONTRIBUTORS.md                                    |   2 +-
 README.md                                          |   4 +-
 apps/android_deploy/README.md                      |   4 +-
 apps/android_rpc/README.md                         |   6 +-
 apps/benchmark/README.md                           |   4 +-
 apps/microtvm/reference-vm/zephyr/pyproject.toml   |   2 +-
 apps/wasm-standalone/wasm-graph/Cargo.toml         |   2 +-
 apps/wasm-standalone/wasm-runtime/Cargo.toml       |   2 +-
 docker/Dockerfile.demo_android                     |   2 +-
 docker/Dockerfile.demo_opencl                      |   2 +-
 docker/install/install_tvm_cpu.sh                  |   2 +-
 docker/install/install_tvm_gpu.sh                  |   2 +-
 docs/conf.py                                       |  10 +-
 docs/contribute/community.rst                      |   2 +-
 docs/contribute/document.rst                       |   4 +-
 docs/contribute/release_process.rst                |  49 ++++----
 docs/deploy/android.rst                            |   4 +-
 docs/deploy/cpp_deploy.rst                         |  10 +-
 docs/deploy/index.rst                              |   2 +-
 docs/deploy/vitis_ai.rst                           | 128 ++++++++++-----------
 docs/dev/convert_layout.rst                        |   4 +-
 docs/dev/frontend/tensorflow.rst                   |   4 +-
 docs/dev/inferbound.rst                            |   8 +-
 docs/dev/pass_infra.rst                            |  20 ++--
 docs/dev/relay_add_pass.rst                        |   6 +-
 docs/dev/relay_bring_your_own_codegen.rst          |   2 +-
 docs/dev/runtime.rst                               |  22 ++--
 docs/dev/virtual_machine.rst                       |  16 +--
 docs/install/docker.rst                            |   4 +-
 docs/install/from_source.rst                       |   2 +-
 docs/install/nnpack.rst                            |   2 +-
 docs/langref/relay_adt.rst                         |   2 +-
 docs/langref/relay_pattern.rst                     |   2 +-
 docs/vta/install.rst                               |   4 +-
 jvm/README.md                                      |   2 +-
 jvm/pom.xml                                        |   8 +-
 python/setup.py                                    |   2 +-
 python/tvm/relay/qnn/op/legalizations.py           |   2 +-
 python/tvm/topi/x86/conv2d.py                      |   2 +-
 python/tvm/topi/x86/conv2d_avx_1x1.py              |   2 +-
 rust/tvm-graph-rt/Cargo.toml                       |   2 +-
 rust/tvm-macros/Cargo.toml                         |   2 +-
 rust/tvm-rt/Cargo.toml                             |   4 +-
 rust/tvm-rt/README.md                              |   2 +-
 rust/tvm-rt/src/lib.rs                             |   2 +-
 rust/tvm-sys/src/context.rs                        |   2 +-
 rust/tvm/Cargo.toml                                |   4 +-
 rust/tvm/README.md                                 |   4 +-
 rust/tvm/src/lib.rs                                |   2 +-
 src/parser/tokenizer.h                             |   2 +-
 tests/python/frontend/tflite/test_forward.py       |   2 +-
 tests/python/relay/test_op_level2.py               |   2 +-
 .../topi/python/test_topi_conv2d_nhwc_pack_int8.py |   2 +-
 tests/python/topi/python/test_topi_vision.py       |   2 +-
 .../unittest/test_autotvm_graph_tuner_core.py      |   2 +-
 .../unittest/test_autotvm_graph_tuner_utils.py     |   2 +-
 tutorials/autotvm/tune_relay_arm.py                |   4 +-
 tutorials/autotvm/tune_relay_cuda.py               |   2 +-
 tutorials/autotvm/tune_relay_mobile_gpu.py         |   4 +-
 tutorials/dev/bring_your_own_datatypes.py          |   4 +-
 tutorials/dev/use_pass_infra.py                    |   2 +-
 tutorials/frontend/deploy_model_on_android.py      |   6 +-
 tutorials/frontend/deploy_model_on_rasp.py         |   4 +-
 tutorials/frontend/from_mxnet.py                   |   2 +-
 tutorials/get_started/cross_compilation_and_rpc.py |   2 +-
 web/README.md                                      |   4 +-
 66 files changed, 209 insertions(+), 222 deletions(-)
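
Note on migrating local checkouts: GitHub redirects the old apache/incubator-tvm
path after a rename, so existing clones keep working, but remotes can be pointed
at the new name explicitly. A minimal sketch, assuming a remote named "origin"
(the redirect behavior is GitHub's, not something this commit changes):

    git remote -v                                               # inspect current remote URLs
    git remote set-url origin https://github.com/apache/tvm.git # switch to the renamed repository
    git fetch origin                                            # confirm the new URL resolves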

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 5f01340..650d1bc 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -139,7 +139,7 @@ We do encourage everyone to work anything they are interested in.
 - [Lianmin Zheng](https://github.com/merrymercy): @merrymercy
 
 ## List of Contributors
-- [Full List of Contributors](https://github.com/apache/incubator-tvm/graphs/contributors)
+- [Full List of Contributors](https://github.com/apache/tvm/graphs/contributors)
   - To contributors: please add your name to the list.
 - [Qiao Zhang](https://github.com/zhangqiaorjc)
 - [Haolong Zhang](https://github.com/haolongzhangm)
diff --git a/README.md b/README.md
index 779487e..13a04f6 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-<img src=https://raw.githubusercontent.com/apache/incubator-tvm-site/main/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
+<img src=https://raw.githubusercontent.com/apache/tvm-site/main/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
 ==============================================
 [Documentation](https://tvm.apache.org/docs) |
 [Contributors](CONTRIBUTORS.md) |
@@ -23,7 +23,7 @@
 [Release Notes](NEWS.md)
 
 [![Build Status](https://ci.tlcpack.ai/buildStatus/icon?job=tvm/main)](https://ci.tlcpack.ai/job/tvm/job/main/)
-[![WinMacBuild](https://github.com/apache/incubator-tvm/workflows/WinMacBuild/badge.svg)](https://github.com/apache/incubator-tvm/actions?query=workflow%3AWinMacBuild)
+[![WinMacBuild](https://github.com/apache/tvm/workflows/WinMacBuild/badge.svg)](https://github.com/apache/tvm/actions?query=workflow%3AWinMacBuild)
 
 Apache TVM (incubating) is a compiler stack for deep learning systems. It is designed to close the gap between the
 productivity-focused deep learning frameworks, and the performance- and efficiency-focused hardware backends.
diff --git a/apps/android_deploy/README.md b/apps/android_deploy/README.md
index d5efba8..32e6018 100644
--- a/apps/android_deploy/README.md
+++ b/apps/android_deploy/README.md
@@ -34,7 +34,7 @@ Alternatively, you may execute Docker image we provide which contains the requir
 
 ### Build APK
 
-Before you build the Android application, please refer to [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/main/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary.
+Before you build the Android application, please refer to [TVM4J Installation Guide](https://github.com/apache/tvm/blob/main/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary.
 
 ```
 dependencies {
@@ -124,7 +124,7 @@ If everything goes well, you will find compile tools in /opt/android-toolchain-
 
 Follow the instructions to get a compiled model for the android target [here.](https://tvm.apache.org/docs/deploy/android.html)
 
-Copy these compiled model files deploy_lib.so, deploy_graph.json and deploy_param.params to apps/android_deploy/app/src/main/assets/ and modify the TVM flavor settings in [java](https://github.com/apache/incubator-tvm/blob/main/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L81)
+Copy these compiled model files deploy_lib.so, deploy_graph.json and deploy_param.params to apps/android_deploy/app/src/main/assets/ and modify the TVM flavor settings in [java](https://github.com/apache/tvm/blob/main/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L81)
 
 `CPU Version flavor`
 ```
diff --git a/apps/android_rpc/README.md b/apps/android_rpc/README.md
index 29962d3..c5e21ec 100644
--- a/apps/android_rpc/README.md
+++ b/apps/android_rpc/README.md
@@ -28,7 +28,7 @@ You will need JDK, [Android NDK](https://developer.android.com/ndk) and an Andro
 
 We use [Gradle](https://gradle.org) to build. Please follow [the installation instruction](https://gradle.org/install) for your operating system.
 
-Before you build the Android application, please refer to [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/main/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary.
+Before you build the Android application, please refer to [TVM4J Installation Guide](https://github.com/apache/tvm/blob/main/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary.
 
 ```
 dependencies {
@@ -146,7 +146,7 @@ android   1      1     0
 ```
 
 
-Then checkout [android\_rpc/tests/android\_rpc\_test.py](https://github.com/apache/incubator-tvm/blob/main/apps/android_rpc/tests/android_rpc_test.py) and run,
+Then checkout [android\_rpc/tests/android\_rpc\_test.py](https://github.com/apache/tvm/blob/main/apps/android_rpc/tests/android_rpc_test.py) and run,
 
 ```bash
 # Specify the RPC tracker
@@ -157,7 +157,7 @@ export TVM_NDK_CC=/opt/android-toolchain-arm64/bin/aarch64-linux-android-g++
 python android_rpc_test.py
 ```
 
-This will compile TVM IR to shared libraries (CPU, OpenCL and Vulkan) and run vector addition on your Android device. To verify compiled TVM IR shared libraries on the OpenCL target set `'test_opencl = True'` and on the Vulkan target set `'test_vulkan = True'` in [tests/android_rpc_test.py](https://github.com/apache/incubator-tvm/blob/main/apps/android_rpc/tests/android_rpc_test.py); by default it will execute on the CPU target.
+This will compile TVM IR to shared libraries (CPU, OpenCL and Vulkan) and run vector addition on your Android device. To verify compiled TVM IR shared libraries on the OpenCL target set `'test_opencl = True'` and on the Vulkan target set `'test_vulkan = True'` in [tests/android_rpc_test.py](https://github.com/apache/tvm/blob/main/apps/android_rpc/tests/android_rpc_test.py); by default it will execute on the CPU target.
 On my test device, it gives the following results.
 
 ```bash
diff --git a/apps/benchmark/README.md b/apps/benchmark/README.md
index 920033f..43d93d9 100644
--- a/apps/benchmark/README.md
+++ b/apps/benchmark/README.md
@@ -20,7 +20,7 @@
 
 ## Results
 
-See results on wiki page https://github.com/apache/incubator-tvm/wiki/Benchmark
+See results on wiki page https://github.com/apache/tvm/wiki/Benchmark
 
 ## How to Reproduce
 
@@ -78,7 +78,7 @@ python3 -m tvm.exec.rpc_tracker
   `python3 -m tvm.exec.rpc_server --tracker=10.77.1.123:9190 --key=rk3399`, where 10.77.1.123 is the IP address of the tracker.
 
 * For Android device
-   * Build and install tvm RPC apk on your device [Help](https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc).
+   * Build and install tvm RPC apk on your device [Help](https://github.com/apache/tvm/tree/main/apps/android_rpc).
     Make sure you can pass the android rpc test. Then you already know how to register.
 
 3. Verify the device registration
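
   For step 3, a sketch of one way to verify the registration, assuming the
   tracker from the steps above listens on port 9190 (query_rpc_tracker is
   part of TVM's RPC tooling):

      # list the devices currently registered with the tracker
      python3 -m tvm.exec.query_rpc_tracker --host=0.0.0.0 --port=9190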
diff --git a/apps/microtvm/reference-vm/zephyr/pyproject.toml b/apps/microtvm/reference-vm/zephyr/pyproject.toml
index d273b25..f1c15ee 100644
--- a/apps/microtvm/reference-vm/zephyr/pyproject.toml
+++ b/apps/microtvm/reference-vm/zephyr/pyproject.toml
@@ -47,7 +47,7 @@ exclude = '''
 )
 '''
 [tool.poetry]
-name = "incubator-tvm"
+name = "tvm"
 version = "0.1.0"
 description = ""
 authors = ["Your Name <y...@example.com>"]
diff --git a/apps/wasm-standalone/wasm-graph/Cargo.toml b/apps/wasm-standalone/wasm-graph/Cargo.toml
index 9cdc8f5..cea491b 100644
--- a/apps/wasm-standalone/wasm-graph/Cargo.toml
+++ b/apps/wasm-standalone/wasm-graph/Cargo.toml
@@ -22,7 +22,7 @@ authors = ["TVM Contributors"]
 edition = "2018"
 description = "WebAssembly graph to deep learning frameworks using TVM"
 readme = "README.md"
-repository = "https://github.com/apache/incubator-tvm"
+repository = "https://github.com/apache/tvm"
 license = "Apache-2.0"
 keywords = ["wasm", "machine learning", "tvm"]
 
diff --git a/apps/wasm-standalone/wasm-runtime/Cargo.toml b/apps/wasm-standalone/wasm-runtime/Cargo.toml
index db00a55..99f6db5 100644
--- a/apps/wasm-standalone/wasm-runtime/Cargo.toml
+++ b/apps/wasm-standalone/wasm-runtime/Cargo.toml
@@ -21,7 +21,7 @@ version = "0.1.0"
 authors = ["TVM Contributors"]
 edition = "2018"
 description = "WebAssembly runtime to deep learning frameworks using wasmtime"
-repository = "https://github.com/apache/incubator-tvm"
+repository = "https://github.com/apache/tvm"
 license = "Apache-2.0"
 keywords = ["wasm", "machine learning", "wasmtime"]
 
diff --git a/docker/Dockerfile.demo_android b/docker/Dockerfile.demo_android
index cf13daa..039439a 100644
--- a/docker/Dockerfile.demo_android
+++ b/docker/Dockerfile.demo_android
@@ -53,7 +53,7 @@ RUN git clone https://github.com/KhronosGroup/OpenCL-Headers /usr/local/OpenCL-H
 
 # Build TVM
 RUN cd /usr && \
-    git clone --depth=1 https://github.com/apache/incubator-tvm tvm --recursive && \
+    git clone --depth=1 https://github.com/apache/tvm tvm --recursive && \
     cd /usr/tvm && \
     mkdir -p build && \
     cd build && \
diff --git a/docker/Dockerfile.demo_opencl b/docker/Dockerfile.demo_opencl
index e39ee41..2f534d8 100644
--- a/docker/Dockerfile.demo_opencl
+++ b/docker/Dockerfile.demo_opencl
@@ -62,7 +62,7 @@ RUN echo "Cloning TVM source & submodules"
 ENV TVM_PAR_DIR="/usr"
 RUN mkdir -p TVM_PAR_DIR && \
        cd ${TVM_PAR_DIR} && \
-       git clone --depth=1 https://github.com/apache/incubator-tvm tvm --recursive
+       git clone --depth=1 https://github.com/apache/tvm tvm --recursive
 #RUN git submodule update --init --recursive
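
To try this image locally, a sketch following the docker/build.sh pattern that
appears later in this diff for the demo_vitis_ai container (the demo_opencl
name is inferred from the Dockerfile suffix and is an assumption):

    # from the root of a tvm checkout
    ./docker/build.sh demo_opencl bash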
 
 
diff --git a/docker/install/install_tvm_cpu.sh b/docker/install/install_tvm_cpu.sh
index b11c979..c3a15fa 100755
--- a/docker/install/install_tvm_cpu.sh
+++ b/docker/install/install_tvm_cpu.sh
@@ -21,7 +21,7 @@ set -u
 set -o pipefail
 
 cd /usr
-git clone https://github.com/apache/incubator-tvm tvm --recursive
+git clone https://github.com/apache/tvm tvm --recursive
 cd /usr/tvm
 # checkout a hash-tag
 git checkout 4b13bf668edc7099b38d463e5db94ebc96c80470
diff --git a/docker/install/install_tvm_gpu.sh b/docker/install/install_tvm_gpu.sh
index 2dbf8e1..fe2214d 100755
--- a/docker/install/install_tvm_gpu.sh
+++ b/docker/install/install_tvm_gpu.sh
@@ -21,7 +21,7 @@ set -u
 set -o pipefail
 
 cd /usr
-git clone https://github.com/apache/incubator-tvm tvm --recursive
+git clone https://github.com/apache/tvm tvm --recursive
 cd /usr/tvm
 # checkout a hash-tag
 git checkout 4b13bf668edc7099b38d463e5db94ebc96c80470
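
Both install scripts stop after checking out the pinned revision; a typical
follow-up build, sketched from the cmake/make sequence used in the Vitis-AI
instructions later in this diff (the flags are illustrative, not part of this
commit):

    cd /usr/tvm
    mkdir -p build && cp cmake/config.cmake build && cd build
    cmake ..
    make -j$(nproc)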
diff --git a/docs/conf.py b/docs/conf.py
index e3ddae2..32bc095 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -48,7 +48,7 @@ sys.path.insert(0, os.path.join(curr_path, "../vta/python"))
 project = "tvm"
 author = "Apache Software Foundation"
 copyright = "2020, %s" % author
-github_doc_root = "https://github.com/apache/incubator-tvm/tree/main/docs/"
+github_doc_root = "https://github.com/apache/tvm/tree/main/docs/"
 
 os.environ["TVM_BUILD_DOC"] = "1"
 # Version information.
@@ -309,12 +309,6 @@ import tlcpack_sphinx_addon
 footer_copyright = "© 2020 Apache Software Foundation | All right reserved"
 footer_note = " ".join(
     """
-Apache TVM is an effort undergoing incubation at The Apache Software Foundation (ASF),
-sponsored by the Apache Incubator. Incubation is required of all newly accepted projects
-until a further review indicates that the infrastructure, communications, and decision making
-process have stabilized in a manner consistent with other successful ASF projects. While
-incubation status is not necessarily a reflection of the completeness or stability of the code,
-it does indicate that the project has yet to be fully endorsed by the ASF.
 Copyright © 2020 The Apache Software Foundation. Apache TVM, Apache, the Apache feather,
 and the Apache TVM project logo are either trademarks or registered trademarks of
 the Apache Software Foundation.""".split(
@@ -332,7 +326,7 @@ header_links = [
     ("Blog", "https://tvm.apache.org/blog";),
     ("Docs", "https://tvm.apache.org/docs";),
     ("Conference", "https://tvmconf.org";),
-    ("Github", "https://github.com/apache/incubator-tvm/";),
+    ("Github", "https://github.com/apache/tvm/";),
 ]
 
 header_dropdown = {
diff --git a/docs/contribute/community.rst b/docs/contribute/community.rst
index fd6df0f..8867202 100644
--- a/docs/contribute/community.rst
+++ b/docs/contribute/community.rst
@@ -20,7 +20,7 @@
 TVM Community Guideline
 =======================
 
-TVM adopts the Apache style model and governs by merit. We believe that it is important to create an inclusive community where everyone can use, contribute to, and influence the direction of the project. See `CONTRIBUTORS.md <https://github.com/apache/incubator-tvm/blob/main/CONTRIBUTORS.md>`_ for the current list of contributors.
+TVM adopts the Apache style model and governs by merit. We believe that it is important to create an inclusive community where everyone can use, contribute to, and influence the direction of the project. See `CONTRIBUTORS.md <https://github.com/apache/tvm/blob/main/CONTRIBUTORS.md>`_ for the current list of contributors.
 
 
 
diff --git a/docs/contribute/document.rst b/docs/contribute/document.rst
index 1bfab1e..3652a28 100644
--- a/docs/contribute/document.rst
+++ b/docs/contribute/document.rst
@@ -68,7 +68,7 @@ Be careful to leave blank lines between sections of your documents.
 In the above case, there has to be a blank line before `Parameters`, `Returns` and `Examples`
 in order for the doc to be built correctly. To add a new function to the doc,
 we need to add the `sphinx.autodoc <http://www.sphinx-doc.org/en/master/ext/autodoc.html>`_
-rules to the `docs/api/python <https://github.com/apache/incubator-tvm/tree/main/docs/api/python>`_.
+rules to the `docs/api/python <https://github.com/apache/tvm/tree/main/docs/api/python>`_.
 You can refer to the existing files under this folder on how to add the functions.
 
 
@@ -96,7 +96,7 @@ to add comments about code logics to improve readability.
 Write Tutorials
 ---------------
 We use the `sphinx-gallery <https://sphinx-gallery.github.io/>`_ to build python tutorials.
-You can find the source code under `tutorials <https://github.com/apache/incubator-tvm/tree/main/tutorials>`_ quite self explanatory.
+You can find the source code under `tutorials <https://github.com/apache/tvm/tree/main/tutorials>`_ quite self explanatory.
 One thing worth noting is that the comment blocks are written in reStructuredText instead of markdown so be aware of the syntax.
 
 The tutorial code will run on our build server to generate the document page.
diff --git a/docs/contribute/release_process.rst b/docs/contribute/release_process.rst
index 0f1e515..f330a7d 100644
--- a/docs/contribute/release_process.rst
+++ b/docs/contribute/release_process.rst
@@ -17,8 +17,8 @@
 
 .. _release_process:
 
-Apache TVM (incubating) Release Process
-=======================================
+Apache TVM Release Process
+==========================
 
 The release manager role in TVM means you are responsible for a few different things:
 
@@ -64,13 +64,13 @@ The last step is to update the KEYS file with your code signing key https://www.
 .. code-block:: bash
 
        # the --depth=files will avoid checkout existing folders
-       svn co --depth=files "https://dist.apache.org/repos/dist/dev/incubator/tvm" svn-tvm
+       svn co --depth=files "https://dist.apache.org/repos/dist/dev/tvm" svn-tvm
        cd svn-tvm
        # edit KEYS file
        svn ci --username $ASF_USERNAME --password "$ASF_PASSWORD" -m "Update KEYS"
        # update downloads.apache.org
-       svn rm --username $ASF_USERNAME --password "$ASF_PASSWORD" https://dist.apache.org/repos/dist/release/incubator/tvm/KEYS -m "Update KEYS"
-       svn cp --username $ASF_USERNAME --password "$ASF_PASSWORD" https://dist.apache.org/repos/dist/dev/incubator/tvm/KEYS https://dist.apache.org/repos/dist/release/incubator/tvm/ -m "Update KEYS"
+       svn rm --username $ASF_USERNAME --password "$ASF_PASSWORD" https://dist.apache.org/repos/dist/release/tvm/KEYS -m "Update KEYS"
+       svn cp --username $ASF_USERNAME --password "$ASF_PASSWORD" https://dist.apache.org/repos/dist/dev/tvm/KEYS https://dist.apache.org/repos/dist/release/tvm/ -m "Update KEYS"
 
 
 Cut a Release Candidate
@@ -80,8 +80,8 @@ To cut a release candidate, one needs to first cut a branch using selected versi
 
 .. code-block:: bash
 
-       git clone https://github.com/apache/incubator-tvm.git
-       cd incubator-tvm/
+       git clone https://github.com/apache/tvm.git
+       cd tvm/
        git branch v0.6.0
        git push --set-upstream origin v0.6.0
 
@@ -107,8 +107,8 @@ Create source code artifacts,
 
 .. code-block:: bash
 
-       git clone g...@github.com:apache/incubator-tvm.git apache-tvm-src-v0.6.0.rc0-incubating
-       cd apache-tvm-src-v0.6.0.rc0-incubating
+       git clone g...@github.com:apache/tvm.git apache-tvm-src-v0.6.0.rc0
+       cd apache-tvm-src-v0.6.0.rc0
        git checkout v0.6
        git submodule update --init --recursive
        git checkout v0.6.0.rc0
@@ -116,7 +116,7 @@ Create source code artifacts,
        find . -name ".git*" -print0 | xargs -0 rm -rf
        cd ..
        brew install gnu-tar
-       gtar -czvf apache-tvm-src-v0.6.0.rc0-incubating.tar.gz apache-tvm-src-v0.6.0.rc0-incubating
+       gtar -czvf apache-tvm-src-v0.6.0.rc0.tar.gz apache-tvm-src-v0.6.0.rc0
 
 Use your GPG key to sign the created artifact. First make sure your GPG is set to use the correct private key,
 
@@ -129,8 +129,8 @@ Create GPG signature as well as the hash of the file,
 
 .. code-block:: bash
 
-       gpg --armor --output apache-tvm-src-v0.6.0.rc0-incubating.tar.gz.asc --detach-sig apache-tvm-src-v0.6.0.rc0-incubating.tar.gz
-       shasum -a 512 apache-tvm-src-v0.6.0.rc0-incubating.tar.gz > apache-tvm-src-v0.6.0.rc0-incubating.tar.gz.sha512
+       gpg --armor --output apache-tvm-src-v0.6.0.rc0.tar.gz.asc --detach-sig apache-tvm-src-v0.6.0.rc0.tar.gz
+       shasum -a 512 apache-tvm-src-v0.6.0.rc0.tar.gz > apache-tvm-src-v0.6.0.rc0.tar.gz.sha512
 
 
 Upload the Release Candidate
@@ -143,7 +143,7 @@ The release manager also needs to upload the artifacts to ASF SVN,
 .. code-block:: bash
 
        # the --depth=files will avoid checkout existing folders
-       svn co --depth=files "https://dist.apache.org/repos/dist/dev/incubator/tvm" svn-tvm
+       svn co --depth=files "https://dist.apache.org/repos/dist/dev/tvm" svn-tvm
        cd svn-tvm
        mkdir tvm-v0.6.0-rc0
        # copy files into it
@@ -154,9 +154,7 @@ The release manager also needs to upload the artifacts to ASF SVN,
 Call a Vote on the Release Candidate
 ------------------------------------
 
-As an incubator project, it requires voting on both dev@ and general@.
-
-The first voting takes place on the Apache TVM (incubator) developers list (d...@tvm.apache.org). To get more attention, one can create a github issue starting with "[VOTE]" instead, it will be mirrored to dev@ automatically. Look at past voting threads to see how this proceeds. The email should follow this format.
+The first voting takes place on the Apache TVM developers list (d...@tvm.apache.org). To get more attention, one can create a github issue starting with "[VOTE]" instead, it will be mirrored to dev@ automatically. Look at past voting threads to see how this proceeds. The email should follow this format.
 
 - Provide the link to the draft of the release notes in the email
 - Provide the link to the release candidate artifacts
@@ -164,14 +162,9 @@ The first voting takes place on the Apache TVM (incubator) developers list (dev@
 
 For the dev@ vote, there must be at least 3 binding +1 votes and more +1 votes than -1 votes. Once the vote is done, you should also send out a summary email with the totals, with a subject that looks something like [VOTE][RESULT] ....
 
-The voting then moves onto the gene...@incubator.apache.org. Anyone can contribute a vote, but only "Incubator PMC" (IPMC) votes are binding.
-To pass, there must be 3 binding +1 votes and more +1 votes than -1 votes.
-
 In ASF, votes are open "at least" 72hrs (3 days). If you don't get enough binding votes within that time, you cannot close the voting deadline. You need to extend it.
 
-Same as the one on dev@, send out a summary email to general@ once the vote passes.
-
-If either voting fails, the community needs to modify the release accordingly, create a new release candidate and re-run the voting process.
+If the voting fails, the community needs to modify the release accordingly, create a new release candidate and re-run the voting process.
 
 
 Post the Release
@@ -182,12 +175,12 @@ After the vote passes, to upload the binaries to Apache mirrors, you move the bi
 .. code-block:: bash
 
        export SVN_EDITOR=vim
-       svn mkdir https://dist.apache.org/repos/dist/release/incubator/tvm
-       svn mv https://dist.apache.org/repos/dist/dev/incubator/tvm/tvm-v0.6.0-rc2 https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.0
+       svn mkdir https://dist.apache.org/repos/dist/release/tvm
+       svn mv https://dist.apache.org/repos/dist/dev/tvm/tvm-v0.6.0-rc2 https://dist.apache.org/repos/dist/release/tvm/tvm-v0.6.0
 
        # If you've added your signing key to the KEYS file, also update the release copy.
-       svn co --depth=files "https://dist.apache.org/repos/dist/release/incubator/tvm" svn-tvm
-       curl "https://dist.apache.org/repos/dist/dev/incubator/tvm/KEYS" > svn-tvm/KEYS
+       svn co --depth=files "https://dist.apache.org/repos/dist/release/tvm" svn-tvm
+       curl "https://dist.apache.org/repos/dist/dev/tvm/KEYS" > svn-tvm/KEYS
        (cd svn-tvm && svn ci --username $ASF_USERNAME --password "$ASF_PASSWORD" -m"Update KEYS")
 
 Remember to create a new release TAG (v0.6.0 in this case) on Github and remove the pre-release candidate TAG.
@@ -200,10 +193,10 @@ Remember to create a new release TAG (v0.6.0 in this case) on Github and remove
 Update the TVM Website
 ----------------------
 
-The website repository is located at `https://github.com/apache/incubator-tvm-site <https://github.com/apache/incubator-tvm-site>`_. Modify the download page to include the release artifacts as well as the GPG signature and SHA hash.
+The website repository is located at `https://github.com/apache/tvm-site <https://github.com/apache/tvm-site>`_. Modify the download page to include the release artifacts as well as the GPG signature and SHA hash.
 
 
 Post the Announcement
 ---------------------
 
-Send out an announcement email to gene...@incubator.apache.org, annou...@apache.org, and d...@tvm.apache.org. The announcement should include the link to release note and download page.
+Send out an announcement email to annou...@apache.org, and d...@tvm.apache.org. The announcement should include the link to release note and download page.
diff --git a/docs/deploy/android.rst b/docs/deploy/android.rst
index e28eef3..8c8fcfb 100644
--- a/docs/deploy/android.rst
+++ b/docs/deploy/android.rst
@@ -38,5 +38,5 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
 TVM Runtime for Android Target
 ------------------------------
 
-Refer `here <https://github.com/apache/incubator-tvm/blob/main/apps/android_deploy/README.md#build-and-installation>`_ to build the CPU/OpenCL flavor of the TVM runtime for the android target.
-For loading a model and executing it via the android java TVM API, refer to this `java <https://github.com/apache/incubator-tvm/blob/main/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java>`_ sample source.
+Refer `here <https://github.com/apache/tvm/blob/main/apps/android_deploy/README.md#build-and-installation>`_ to build the CPU/OpenCL flavor of the TVM runtime for the android target.
+For loading a model and executing it via the android java TVM API, refer to this `java <https://github.com/apache/tvm/blob/main/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java>`_ sample source.
diff --git a/docs/deploy/cpp_deploy.rst b/docs/deploy/cpp_deploy.rst
index f3de69d..44df1e5 100644
--- a/docs/deploy/cpp_deploy.rst
+++ b/docs/deploy/cpp_deploy.rst
@@ -19,7 +19,7 @@
 Deploy TVM Module using C++ API
 ===============================
 
-We provide an example on how to deploy TVM modules in `apps/howto_deploy <https://github.com/apache/incubator-tvm/tree/main/apps/howto_deploy>`_
+We provide an example on how to deploy TVM modules in `apps/howto_deploy <https://github.com/apache/tvm/tree/main/apps/howto_deploy>`_
 
 To run the example, you can use the following command
 
@@ -38,17 +38,17 @@ TVM provides a minimum runtime, which costs around 300K to 600K depending on how
 In most cases, we can use ``libtvm_runtime.so`` that comes with the build.
 
 If somehow you find it is hard to build ``libtvm_runtime``, checkout
-`tvm_runtime_pack.cc <https://github.com/apache/incubator-tvm/tree/main/apps/howto_deploy/tvm_runtime_pack.cc>`_.
+`tvm_runtime_pack.cc <https://github.com/apache/tvm/tree/main/apps/howto_deploy/tvm_runtime_pack.cc>`_.
 It is an example all in one file that gives you TVM runtime.
 You can compile this file using your build system and include this into your project.
 
-You can also checkout `apps <https://github.com/apache/incubator-tvm/tree/main/apps/>`_ for example applications built with TVM on iOS, Android and others.
+You can also checkout `apps <https://github.com/apache/tvm/tree/main/apps/>`_ for example applications built with TVM on iOS, Android and others.
 
 Dynamic Library vs. System Module
 ---------------------------------
 TVM provides two ways to use the compiled library.
-You can checkout `prepare_test_libs.py <https://github.com/apache/incubator-tvm/tree/main/apps/howto_deploy/prepare_test_libs.py>`_
-on how to generate the library and `cpp_deploy.cc <https://github.com/apache/incubator-tvm/tree/main/apps/howto_deploy/cpp_deploy.cc>`_ on how to use them.
+You can checkout `prepare_test_libs.py <https://github.com/apache/tvm/tree/main/apps/howto_deploy/prepare_test_libs.py>`_
+on how to generate the library and `cpp_deploy.cc <https://github.com/apache/tvm/tree/main/apps/howto_deploy/cpp_deploy.cc>`_ on how to use them.
 
 - Store library as a shared library and dynamically load the library into your project.
 - Bundle the compiled library into your project in system module mode.
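
For the shared-library route, tvm_runtime_pack.cc can be compiled directly into
an application; a sketch with illustrative paths (TVM_ROOT and the include
directories are assumptions mirroring the howto_deploy example, not part of
this commit):

    TVM_ROOT=/path/to/tvm
    # build the app together with the all-in-one runtime translation unit
    g++ -std=c++14 -O2 -fPIC \
        -I${TVM_ROOT}/include \
        -I${TVM_ROOT}/3rdparty/dmlc-core/include \
        -I${TVM_ROOT}/3rdparty/dlpack/include \
        your_app.cc ${TVM_ROOT}/apps/howto_deploy/tvm_runtime_pack.cc \
        -o your_app -ldl -pthread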
diff --git a/docs/deploy/index.rst b/docs/deploy/index.rst
index e47b0a3..2b37f73 100644
--- a/docs/deploy/index.rst
+++ b/docs/deploy/index.rst
@@ -38,7 +38,7 @@ on a Linux based embedded system such as Raspberry Pi:
 
 .. code:: bash
 
-    git clone --recursive https://github.com/apache/incubator-tvm tvm
+    git clone --recursive https://github.com/apache/tvm tvm
     cd tvm
     mkdir build
     cp cmake/config.cmake build
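
For a runtime-only deployment this sequence typically continues with the
runtime make target, which builds just libtvm_runtime; a sketch (the target
name follows the install docs, the -j level is illustrative):

    cd build
    cmake ..
    make runtime -j4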
diff --git a/docs/deploy/vitis_ai.rst b/docs/deploy/vitis_ai.rst
index f0bd3ed..df29f16 100755
--- a/docs/deploy/vitis_ai.rst
+++ b/docs/deploy/vitis_ai.rst
@@ -101,11 +101,11 @@ Hardware setup and docker build
    .. code:: bash
 
       git clone --recurse-submodules https://github.com/Xilinx/Vitis-AI
-   
+
 2. Install Docker, and add the user to the docker group. Link the user
    to docker installation instructions from the following docker's
    website:
-   
+
 
    -  https://docs.docker.com/install/linux/docker-ce/ubuntu/
    -  https://docs.docker.com/install/linux/docker-ce/centos/
@@ -114,11 +114,11 @@ Hardware setup and docker build
 3. Download the latest Vitis AI Docker with the following command. This container runs on CPU.
 
    .. code:: bash
-      
+
       docker pull xilinx/vitis-ai:latest
-    
+
    To accelerate the quantization, you can optionally use the Vitis-AI GPU docker image. Use the below commands to build the Vitis-AI GPU docker container:
-   
+
    .. code:: bash
 
       cd Vitis-AI/docker
@@ -141,32 +141,32 @@ Hardware setup and docker build
    -  Run the following commands:
 
       .. code:: bash
-      
+
          cd Vitis-AI/alveo/packages
          sudo su
          ./install.sh
-      
+
    -  Power cycle the system.
-   
+
 5. Clone tvm repo and pyxir repo
 
    .. code:: bash
-     
-      git clone --recursive https://github.com/apache/incubator-tvm.git
+
+      git clone --recursive https://github.com/apache/tvm.git
       git clone --recursive https://github.com/Xilinx/pyxir.git
-   
+
 6. Build and start the tvm runtime Vitis-AI Docker Container.
 
    .. code:: bash
 
-      ./incubator-tvm/docker/build.sh demo_vitis_ai bash
-      ./incubator-tvm/docker/bash.sh tvm.demo_vitis_ai
-         
+      ./tvm/docker/build.sh demo_vitis_ai bash
+      ./tvm/docker/bash.sh tvm.demo_vitis_ai
+
       #Setup inside container
       source /opt/xilinx/xrt/setup.sh
       . $VAI_ROOT/conda/etc/profile.d/conda.sh
       conda activate vitis-ai-tensorflow
-      
+
 7. Install PyXIR
 
    .. code:: bash
@@ -174,27 +174,27 @@ Hardware setup and docker build
      cd pyxir
      python3 setup.py install --use_vai_rt_dpucadx8g --user
 
-   
+
 8. Build TVM inside the container with Vitis-AI
 
    .. code:: bash
 
-      cd incubator-tvm
+      cd tvm
       mkdir build
       cp cmake/config.cmake build
-      cd build  
+      cd build
       echo set\(USE_LLVM ON\) >> config.cmake
       echo set\(USE_VITIS_AI ON\) >> config.cmake
       cmake ..
       make -j$(nproc)
-   
+
 9.  Install TVM
 
     .. code:: bash
 
-      cd incubator-tvm/python
+      cd tvm/python
       pip3 install -e . --user
-      
+
 Edge (DPUCZDX8G)
 ^^^^^^^^^^^^^^^^
 
@@ -238,19 +238,19 @@ Host setup and docker build
 
    .. code:: bash
 
-      git clone --recursive https://github.com/apache/incubator-tvm.git
+      git clone --recursive https://github.com/apache/tvm.git
 2. Build and start the tvm runtime Vitis-AI Docker Container.
 
    .. code:: bash
 
-      cd incubator-tvm 
-      ./incubator-tvm/docker/build.sh demo_vitis_ai bash
-      ./incubator-tvm/docker/bash.sh tvm.demo_vitis_ai
-   
+      cd tvm
+      ./tvm/docker/build.sh demo_vitis_ai bash
+      ./tvm/docker/bash.sh tvm.demo_vitis_ai
+
       #Setup inside container
       . $VAI_ROOT/conda/etc/profile.d/conda.sh
       conda activate vitis-ai-tensorflow
-   
+
 3. Install PyXIR
 
    .. code:: bash
@@ -258,13 +258,13 @@ Host setup and docker build
       git clone --recursive https://github.com/Xilinx/pyxir.git
       cd pyxir
       python3 setup.py install --user
-   
-   
+
+
 4. Build TVM inside the container with Vitis-AI.
 
    .. code:: bash
 
-      cd incubator-tvm 
+      cd tvm
       mkdir build
       cp cmake/config.cmake build
       cd build
@@ -272,12 +272,12 @@ Host setup and docker build
       echo set\(USE_VITIS_AI ON\) >> config.cmake
       cmake ..
       make -j$(nproc)
-   
+
 5. Install TVM
 
    .. code:: bash
 
-      cd incubator-tvm/python
+      cd tvm/python
       pip3 install -e . --user
 
 Edge requirements
@@ -299,10 +299,10 @@ platform. The following development boards can be used out-of-the-box:
 
 Edge hardware setup
 ^^^^^^^^^^^^^^^^^^^
-.. note:: 
+.. note::
 
-  This section provides instructions for setting up with the `Pynq <http://www.pynq.io/>`__ platform but 
-  Petalinux based flows are also supported. 
+  This section provides instructions for setting up with the `Pynq <http://www.pynq.io/>`__ platform but
+  Petalinux based flows are also supported.
 
 1. Download the Pynq v2.5 image for your target (use Z1 or Z2 for
    Ultra96 target depending on board version) Link to image:
@@ -318,7 +318,7 @@ Edge hardware setup
    .. code:: bash
 
      python3 -c 'from pynq_dpu import DpuOverlay ; overlay = DpuOverlay("dpu.bit")'
-  
+
 6. Check whether the DPU kernel is alive:
 
    .. code:: bash
@@ -328,10 +328,10 @@ Edge hardware setup
 Edge TVM setup
 ^^^^^^^^^^^^^^
 
-.. note:: 
+.. note::
 
-  When working on Petalinux instead of Pynq, the following steps might take more manual work (e.g building 
-  hdf5 from source). Also, TVM has a scipy dependency which you then might have to build from source or 
+  When working on Petalinux instead of Pynq, the following steps might take more manual work (e.g building
+  hdf5 from source). Also, TVM has a scipy dependency which you then might have to build from source or
   circumvent. We don't depend on scipy in our flow.
 
 Building TVM depends on the Xilinx
@@ -344,7 +344,7 @@ interface between TVM and Vitis-AI tools.
 
       apt-get install libhdf5-dev
       pip3 install pydot h5py
-      
+
 2. Install PyXIR
 
    .. code:: bash
@@ -352,25 +352,25 @@ interface between TVM and Vitis-AI tools.
       git clone --recursive https://github.com/Xilinx/pyxir.git
       cd pyxir
       sudo python3 setup.py install --use_vai_rt_dpuczdx8g
-   
+
 3. Build TVM with Vitis-AI
 
    .. code:: bash
 
-      git clone --recursive https://github.com/apache/incubator-tvm
-      cd incubator-tvm
+      git clone --recursive https://github.com/apache/tvm
+      cd tvm
       mkdir build
       cp cmake/config.cmake build
       cd build
       echo set\(USE_VITIS_AI ON\) >> config.cmake
-      cmake ..     
+      cmake ..
       make
-   
+
 4. Install TVM
 
    .. code:: bash
 
-      cd incubator-tvm/python
+      cd tvm/python
       pip3 install -e . --user
 
 5. Check whether the setup was successful in the Python shell:
@@ -467,7 +467,7 @@ build call.
    tvm_target = 'llvm'
    target='DPUCADX8G'
 
-   with tvm.transform.PassContext(opt_level=3, config= {'relay.ext.vitis_ai.options.target': target}):   
+   with tvm.transform.PassContext(opt_level=3, config= {'relay.ext.vitis_ai.options.target': target}):
       lib = relay.build(mod, tvm_target, params=params)
 
 As one more step before we can accelerate a model with Vitis-AI in TVM
@@ -488,7 +488,7 @@ will take a substantial amount of time.
    # be executed on the CPU
    # This config can be changed by setting the 'PX_QUANT_SIZE' (e.g. export PX_QUANT_SIZE=64)
    for i in range(128):
-      module.set_input(input_name, inputs[i]) 
+      module.set_input(input_name, inputs[i])
       module.run()
 
 Afterwards, inference will be accelerated on the DPU.
@@ -563,11 +563,11 @@ The Vitis-AI target is DPUCZDX8G-zcu104 as we are targeting the edge DPU
 on the ZCU104 board and this target is passed as a config to the TVM
 build call. Note that different identifiers can be passed for different
 targets, see `edge targets info <#edge-requirements>`__. Additionally, we
-provide the 'export_runtime_module' config that points to a file to which we 
+provide the 'export_runtime_module' config that points to a file to which we
 can export the Vitis-AI runtime module. We have to do this because we will
 first be compiling and quantizing the model on the host machine before building
-the model for edge deployment. As you will see later on, the exported runtime 
-module will be passed to the edge build so that the Vitis-AI runtime module 
+the model for edge deployment. As you will see later on, the exported runtime
+module will be passed to the edge build so that the Vitis-AI runtime module
 can be included.
 
 .. code:: python
@@ -575,17 +575,17 @@ can be included.
    from tvm.contrib import util
 
    temp = util.tempdir()
-   
+
    tvm_target = 'llvm'
    target='DPUCZDX8G-zcu104'
    export_rt_mod_file = temp.relpath("vitis_ai.rtmod")
-  
+
    with tvm.transform.PassContext(opt_level=3, config= {'relay.ext.vitis_ai.options.target': target,
-                                                       'relay.ext.vitis_ai.options.export_runtime_module': export_rt_mod_file}):   
+                                                       'relay.ext.vitis_ai.options.export_runtime_module': export_rt_mod_file}):
       lib = relay.build(mod, tvm_target, params=params)
-      
-We will quantize and compile the model for execution on the DPU using on-the-fly 
-quantization on the host machine. This makes use of TVM inference calls 
+
+We will quantize and compile the model for execution on the DPU using on-the-fly
+quantization on the host machine. This makes use of TVM inference calls
 (module.run) to quantize the model on the host with the first N inputs.
 
 .. code:: python
@@ -596,10 +596,10 @@ quantization on the host machine. This makes use of TVM inference calls
    # be executed on the CPU
    # This config can be changed by setting the 'PX_QUANT_SIZE' (e.g. export 
PX_QUANT_SIZE=64)
    for i in range(128):
-      module.set_input(input_name, inputs[i]) 
+      module.set_input(input_name, inputs[i])
       module.run()
-      
-Save the TVM lib module so that the Vitis-AI runtime module will also be exported 
+
+Save the TVM lib module so that the Vitis-AI runtime module will also be exported
 (to the 'export_runtime_module' path we previously passed as a config).
 
 .. code:: python
@@ -609,9 +609,9 @@ Save the TVM lib module so that the Vitis-AI runtime module will also be exporte
    temp = util.tempdir()
    lib.export_library(temp.relpath("tvm_lib.so"))
 
-After quantizing and compiling the model for Vitis-AI acceleration using the 
-first N inputs we can build the model for execution on the ARM edge device. 
-Here we pass the previously exported Vitis-AI runtime module so it can be included 
+After quantizing and compiling the model for Vitis-AI acceleration using the
+first N inputs we can build the model for execution on the ARM edge device.
+Here we pass the previously exported Vitis-AI runtime module so it can be included
 in the TVM build.
 
 .. code:: python
@@ -637,7 +637,7 @@ section.
 Edge steps
 ^^^^^^^^^^
 
-After setting up TVM with Vitis-AI on the edge device, you can now load 
+After setting up TVM with Vitis-AI on the edge device, you can now load
 the TVM runtime module into memory and feed inputs for inference.
 
 .. code:: python
diff --git a/docs/dev/convert_layout.rst b/docs/dev/convert_layout.rst
index 6c9890f..490df13 100644
--- a/docs/dev/convert_layout.rst
+++ b/docs/dev/convert_layout.rst
@@ -264,5 +264,5 @@ The ordering of the layouts is defined by the implementation of `register_conver
 
 Current implementation has support for almost all the operators commonly used in image classification models. However, if one encounters too many data layout transforms in the graph, it is highly likely that there is an operator whose layouts need special handling as described in Section 3. Some pull requests that can help in such a situation are
 
-- Layout inference for `Batch Norm <https://github.com/apache/incubator-tvm/pull/4600>`_ - Batch normalization falls into the category of lightly-sensitive operator. The PR shows how to handle the layout inference for batch norm.
-- Python Callback for `Convolution <https://github.com/apache/incubator-tvm/pull/4335>`_ - For highly-sensitive operators, one might have to do python callback as well. The PR shows how to define a python callback function for Convolution operator.
+- Layout inference for `Batch Norm <https://github.com/apache/tvm/pull/4600>`_ - Batch normalization falls into the category of lightly-sensitive operator. The PR shows how to handle the layout inference for batch norm.
+- Python Callback for `Convolution <https://github.com/apache/tvm/pull/4335>`_ - For highly-sensitive operators, one might have to do python callback as well. The PR shows how to define a python callback function for Convolution operator.
diff --git a/docs/dev/frontend/tensorflow.rst b/docs/dev/frontend/tensorflow.rst
index b234ed7..dde7179 100644
--- a/docs/dev/frontend/tensorflow.rst
+++ b/docs/dev/frontend/tensorflow.rst
@@ -57,7 +57,7 @@ Export
 
 TensorFlow frontend expects a frozen protobuf (.pb) or saved model as input. It currently does not support checkpoint (.ckpt). The graphdef needed by the TensorFlow frontend can be extracted from the active session, or by using the `TFParser`_ helper class.
 
-.. _TFParser: https://github.com/apache/incubator-tvm/blob/main/python/tvm/relay/frontend/tensorflow_parser.py
+.. _TFParser: https://github.com/apache/tvm/blob/main/python/tvm/relay/frontend/tensorflow_parser.py
 
 The model should be exported with a number of transformations to prepare the model for inference. It is also important to set ```add_shapes=True```, as this will embed the output shapes of each node into the graph. Here is one function to export a model as a protobuf given a session:
 
@@ -101,7 +101,7 @@ Import the Model
 Explicit Shape:
 ~~~~~~~~~~~~~~~
 
-To ensure shapes can be known throughout the entire graph, pass the ```shape``` argument to ```from_tensorflow```. This dictionary maps input names to input shapes. Please refer to these `test cases <https://github.com/apache/incubator-tvm/blob/main/tests/python/frontend/tensorflow/test_forward.py#L36>`_ for examples.
+To ensure shapes can be known throughout the entire graph, pass the ```shape``` argument to ```from_tensorflow```. This dictionary maps input names to input shapes. Please refer to these `test cases <https://github.com/apache/tvm/blob/main/tests/python/frontend/tensorflow/test_forward.py#L36>`_ for examples.
 
 Data Layout
 ~~~~~~~~~~~
diff --git a/docs/dev/inferbound.rst b/docs/dev/inferbound.rst
index 7d0127a..010d0d4 100644
--- a/docs/dev/inferbound.rst
+++ b/docs/dev/inferbound.rst
@@ -22,7 +22,7 @@ InferBound Pass
 *******************************************
 
 
-The InferBound pass is run after normalize, and before ScheduleOps `build_module.py <https://github.com/apache/incubator-tvm/blob/main/python/tvm/driver/build_module.py>`_. The main job of InferBound is to create the bounds map, which specifies a Range for each IterVar in the program. These bounds are then passed to ScheduleOps, where they are used to set the extents of For loops, see `MakeLoopNest <https://github.com/apache/incubator-tvm/blob/main/src/te/operation/op_util.cc>`_, and to  [...]
+The InferBound pass is run after normalize, and before ScheduleOps `build_module.py <https://github.com/apache/tvm/blob/main/python/tvm/driver/build_module.py>`_. The main job of InferBound is to create the bounds map, which specifies a Range for each IterVar in the program. These bounds are then passed to ScheduleOps, where they are used to set the extents of For loops, see `MakeLoopNest <https://github.com/apache/tvm/blob/main/src/te/operation/op_util.cc>`_, and to set the sizes of all [...]
 
 The output of InferBound is a map from IterVar to Range:
 
@@ -53,9 +53,9 @@ Therefore, let's review the Range and IterVar classes:
        };
    }
 
-Note that IterVarNode also contains a Range ``dom``. This ``dom`` may or may not have a meaningful value, depending on when the IterVar was created. For example, when ``tvm.compute`` is called, an `IterVar is created <https://github.com/apache/incubator-tvm/blob/main/src/te/operation/compute_op.cc>`_ for each axis and reduce axis, with dom's equal to the shape supplied in the call to ``tvm.compute``.
+Note that IterVarNode also contains a Range ``dom``. This ``dom`` may or may not have a meaningful value, depending on when the IterVar was created. For example, when ``tvm.compute`` is called, an `IterVar is created <https://github.com/apache/tvm/blob/main/src/te/operation/compute_op.cc>`_ for each axis and reduce axis, with dom's equal to the shape supplied in the call to ``tvm.compute``.
 
-On the other hand, when ``tvm.split`` is called, `IterVars are created <https://github.com/apache/incubator-tvm/blob/main/src/te/schedule/schedule_lang.cc>`_ for the inner and outer axes, but these IterVars are not given a meaningful ``dom`` value.
+On the other hand, when ``tvm.split`` is called, `IterVars are created <https://github.com/apache/tvm/blob/main/src/te/schedule/schedule_lang.cc>`_ for the inner and outer axes, but these IterVars are not given a meaningful ``dom`` value.
 
 In any case, the ``dom`` member of an IterVar is never modified during InferBound. However, keep in mind that the ``dom`` member of an IterVar is sometimes used as default value for the Ranges InferBound computes.
 
@@ -117,7 +117,7 @@ Tensors haven't been mentioned yet, but in the context of TVM, a Tensor represen
        int value_index;
    };
 
-In the Operation class declaration above, we can see that each operation also has a list of InputTensors. Thus the stages of the schedule form a DAG, where each stage is a node in the graph. There is an edge in the graph from Stage A to Stage B, if the operation of Stage B has an input tensor whose source operation is the op of Stage A. Put simply, there is an edge from A to B, if B consumes a tensor produced by A. See the diagram below. This graph is created at the beginning of InferBou [...]
+In the Operation class declaration above, we can see that each operation also has a list of InputTensors. Thus the stages of the schedule form a DAG, where each stage is a node in the graph. There is an edge in the graph from Stage A to Stage B, if the operation of Stage B has an input tensor whose source operation is the op of Stage A. Put simply, there is an edge from A to B, if B consumes a tensor produced by A. See the diagram below. This graph is created at the beginning of InferBou [...]
 
 .. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/stage_graph.png
     :align: center
diff --git a/docs/dev/pass_infra.rst b/docs/dev/pass_infra.rst
index 898e517..3680cb8 100644
--- a/docs/dev/pass_infra.rst
+++ b/docs/dev/pass_infra.rst
@@ -528,22 +528,22 @@ optimization pipeline and debug Relay and tir passes, please refer to the
 
 .. _Sequential: https://pytorch.org/docs/stable/nn.html?highlight=sequential#torch.nn.Sequential
 
-.. _Block: https://mxnet.incubator.apache.org/api/python/docs/api/gluon/block.html#gluon-block
+.. _Block: https://mxnet.apache.org/api/python/docs/api/gluon/block.html#gluon-block
 
-.. _include/tvm/ir/transform.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/ir/transform.h
+.. _include/tvm/ir/transform.h: https://github.com/apache/tvm/blob/main/include/tvm/ir/transform.h
 
-.. _src/relay/ir/transform.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/ir/transform.cc
+.. _src/relay/ir/transform.cc: https://github.com/apache/tvm/blob/main/src/relay/ir/transform.cc
 
-.. _src/ir/transform.cc: https://github.com/apache/incubator-tvm/blob/main/src/ir/transform.cc
+.. _src/ir/transform.cc: https://github.com/apache/tvm/blob/main/src/ir/transform.cc
 
-.. _src/relay/pass/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/pass/fold_constant.cc
+.. _src/relay/pass/fold_constant.cc: https://github.com/apache/tvm/blob/main/src/relay/pass/fold_constant.cc
 
-.. _python/tvm/relay/transform.py: https://github.com/apache/incubator-tvm/blob/main/python/tvm/relay/transform.py
+.. _python/tvm/relay/transform.py: https://github.com/apache/tvm/blob/main/python/tvm/relay/transform.py
 
-.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/relay/transform.h
+.. _include/tvm/relay/transform.h: https://github.com/apache/tvm/blob/main/include/tvm/relay/transform.h
 
-.. _python/tvm/ir/transform.py: https://github.com/apache/incubator-tvm/blob/main/python/tvm/ir/transform.py
+.. _python/tvm/ir/transform.py: https://github.com/apache/tvm/blob/main/python/tvm/ir/transform.py
 
-.. _src/tir/transforms/unroll_loop.cc: https://github.com/apache/incubator-tvm/blob/main/src/tir/transforms/unroll_loop.cc
+.. _src/tir/transforms/unroll_loop.cc: https://github.com/apache/tvm/blob/main/src/tir/transforms/unroll_loop.cc
 
-.. _use pass infra: https://github.com/apache/incubator-tvm/blob/main/tutorials/dev/use_pass_infra.py
+.. _use pass infra: https://github.com/apache/tvm/blob/main/tutorials/dev/use_pass_infra.py
diff --git a/docs/dev/relay_add_pass.rst b/docs/dev/relay_add_pass.rst
index 02c0ba2..0661df0 100644
--- a/docs/dev/relay_add_pass.rst
+++ b/docs/dev/relay_add_pass.rst
@@ -399,8 +399,8 @@ information about the pass manager interface can be found in :ref:`pass-infra`.
 Relay's standard passes are listed in `include/tvm/relay/transform.h`_ and implemented
 in `src/relay/pass/`_.
 
-.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/relay/transform.h
+.. _include/tvm/relay/transform.h: https://github.com/apache/tvm/blob/main/include/tvm/relay/transform.h
 
-.. _src/relay/pass/: https://github.com/apache/incubator-tvm/tree/main/src/relay/pass
+.. _src/relay/pass/: https://github.com/apache/tvm/tree/main/src/relay/pass
 
-.. _src/relay/transforms/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/transforms/fold_constant.cc
+.. _src/relay/transforms/fold_constant.cc: https://github.com/apache/tvm/blob/main/src/relay/transforms/fold_constant.cc
diff --git a/docs/dev/relay_bring_your_own_codegen.rst b/docs/dev/relay_bring_your_own_codegen.rst
index a4d4ebd..3fcd336 100644
--- a/docs/dev/relay_bring_your_own_codegen.rst
+++ b/docs/dev/relay_bring_your_own_codegen.rst
@@ -137,7 +137,7 @@ Here we highlight the notes marked in the above code:
 
 * **Note 3** is a TVM runtime compatible wrapper function. It accepts a list of input tensors and one output tensor (the last argument), casts them to the right data type, and invokes the subgraph function described in Note 2. In addition, ``TVM_DLL_EXPORT_TYPED_FUNC`` is a TVM macro that generates another function ``gcc_0`` with unified function arguments by packing all tensors to ``TVMArgs``. As a result, the TVM runtime can directly invoke ``gcc_0`` to execute the subgraph without [...]
 
-In the rest of this section, we will implement a codegen step-by-step to generate the above code. Your own codegen has to be located at ``src/relay/backend/contrib/<your-codegen-name>/``. In our example, we name our codegen "codegen_c" and put it under `/src/relay/backend/contrib/codegen_c/ <https://github.com/apache/incubator-tvm/blob/main/src/relay/backend/contrib/codegen_c/codegen.cc>`_. Feel free to check this file for a complete implementation.
+In the rest of this section, we will implement a codegen step-by-step to generate the above code. Your own codegen has to be located at ``src/relay/backend/contrib/<your-codegen-name>/``. In our example, we name our codegen "codegen_c" and put it under `/src/relay/backend/contrib/codegen_c/ <https://github.com/apache/tvm/blob/main/src/relay/backend/contrib/codegen_c/codegen.cc>`_. Feel free to check this file for a complete implementation.
 
 Specifically, we are going to implement two classes in this file and here is their relationship:
 
diff --git a/docs/dev/runtime.rst b/docs/dev/runtime.rst
index 91b19ee..c77b693 100644
--- a/docs/dev/runtime.rst
+++ b/docs/dev/runtime.rst
@@ -45,7 +45,7 @@ PackedFunc
 `PackedFunc`_ is a simple but elegant solution
 we find to solve the challenges listed. The following code block provides an example in C++
 
-.. _PackedFunc: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/packed_func.h
+.. _PackedFunc: https://github.com/apache/tvm/blob/main/include/tvm/runtime/packed_func.h
 
 .. code:: c
 
@@ -131,9 +131,9 @@ which allows us to embed the PackedFunc into any languages. Besides python, so f
 `java`_ and `javascript`_.
 This philosophy of embedded API is very like Lua, except that we don't have a new language but use C++.
 
-.. _minimum C API: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/c_runtime_api.h
-.. _java: https://github.com/apache/incubator-tvm/tree/main/jvm
-.. _javascript: https://github.com/apache/incubator-tvm/tree/main/web
+.. _minimum C API: https://github.com/apache/tvm/blob/main/include/tvm/runtime/c_runtime_api.h
+.. _java: https://github.com/apache/tvm/tree/main/jvm
+.. _javascript: https://github.com/apache/tvm/tree/main/web
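
As a minimal Python sketch of the registration flow described above (the
function name ``my_add`` is made up for illustration; ``tvm.register_func``
and ``tvm.get_global_func`` are the Python entry points into the same global
PackedFunc table):

.. code:: python

   import tvm

   # Register a Python callable into the global PackedFunc table.
   @tvm.register_func("my_add")
   def my_add(a, b):
       return a + b

   # Look it up again as a PackedFunc and call it. The same lookup is
   # available from C++, Java, and JavaScript through the minimum C API.
   f = tvm.get_global_func("my_add")
   assert f(3, 4) == 7
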
 
 
 One fun fact about PackedFunc is that we use it for both the compiler and the 
deployment stack.
@@ -141,7 +141,7 @@ One fun fact about PackedFunc is that we use it for both 
compiler and deployment
 - All TVM's compiler pass functions are exposed to frontend as PackedFunc, see 
`here`_
 - The compiled module also returns the compiled function as PackedFunc
 
-.. _here: https://github.com/apache/incubator-tvm/tree/main/src/api
+.. _here: https://github.com/apache/tvm/tree/main/src/api
 
 To keep the runtime minimal, we isolated the IR Object support from the 
deployment runtime. The resulting runtime takes around 200K - 600K depending on 
how many runtime driver modules (e.g., CUDA) get included.
 
@@ -162,7 +162,7 @@ TVM defines the compiled object as `Module`_.
 The user can get the compiled function from a Module as a PackedFunc.
 The generated compiled code can dynamically get a function from a Module at 
runtime. It caches the function handle on the first call and reuses it in 
subsequent calls. We use this to link device code and to call back into any 
PackedFunc (e.g., Python) from generated code.
 
-.. _Module: 
https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/module.h
+.. _Module: 
https://github.com/apache/tvm/blob/main/include/tvm/runtime/module.h
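
The Module flow can be sketched end to end in Python (a hedged example; the
file name ``addone.so`` and function name ``addone`` are arbitrary):

.. code:: python

   import tvm
   from tvm import te

   # Build a trivial Module containing one compiled function.
   n = te.var("n")
   A = te.placeholder((n,), name="A")
   B = te.compute(A.shape, lambda i: A[i] + 1.0, name="B")
   s = te.create_schedule(B.op)
   mod = tvm.build(s, [A, B], target="llvm", name="addone")

   # Export the Module, reload it, and fetch the compiled function
   # back as a PackedFunc.
   mod.export_library("addone.so")
   loaded = tvm.runtime.load_module("addone.so")
   addone = loaded["addone"]
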
 
 The ModuleNode is an abstract class that can be implemented by each type of 
device.
 So far we support modules for CUDA, Metal, OpenCL and loading dynamic shared 
libraries. This abstraction makes introduction
@@ -198,7 +198,7 @@ All the language object in the compiler stack is a subclass 
of ``Object``. Each
 the type of object. We choose a string instead of an int as the type key so new 
``Object`` classes can be added in a decentralized fashion without
adding the code back to the central repo. To speed up dispatching, we 
allocate an integer type_index at runtime for each type_key.
 
-.. _Object: 
https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/object.h
+.. _Object: 
https://github.com/apache/tvm/blob/main/include/tvm/runtime/object.h
 
 Since one ``Object`` is usually referenced in multiple places in the 
language, we use a shared_ptr to keep
track of references. We use the ``ObjectRef`` class to represent a reference to 
the ``Object``.
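
The same reference semantics are visible from the Python side, since every
Relay expression is an ``Object`` there as well (a small illustrative sketch):

.. code:: python

   import tvm
   from tvm import relay

   x = relay.var("x")
   y = x  # a second reference to the same underlying node
   assert isinstance(x, tvm.runtime.Object)
   assert x.same_as(y)  # identity check on the shared Object
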
@@ -279,17 +279,17 @@ Each argument in PackedFunc contains a union value 
`TVMValue`_
 and a type code. This design allows dynamically typed languages to convert 
to the corresponding type directly, and statically typed languages to
do runtime type checking during conversion.
 
-.. _TVMValue: 
https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/c_runtime_api.h#L122
+.. _TVMValue: 
https://github.com/apache/tvm/blob/main/include/tvm/runtime/c_runtime_api.h#L122
 
 The relevant files are
 
 - `packed_func.h`_ for C++ API
 - `c_runtime_api.cc`_ for C API and how to provide callback.
 
-.. _packed_func.h: 
https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/packed_func.h
-.. _c_runtime_api.cc: 
https://github.com/apache/incubator-tvm/blob/main/src/runtime/c_runtime_api.cc#L262
+.. _packed_func.h: 
https://github.com/apache/tvm/blob/main/include/tvm/runtime/packed_func.h
+.. _c_runtime_api.cc: 
https://github.com/apache/tvm/blob/main/src/runtime/c_runtime_api.cc#L262
 
 To support extension types, we use a registry system to register type-related 
information, such as the support of ``any``
in C++; see `Extension types`_ for more details.
 
-.. _Extension types: 
https://github.com/apache/incubator-tvm/tree/main/apps/extension
+.. _Extension types: https://github.com/apache/tvm/tree/main/apps/extension
diff --git a/docs/dev/virtual_machine.rst b/docs/dev/virtual_machine.rst
index 0986328..9081d50 100644
--- a/docs/dev/virtual_machine.rst
+++ b/docs/dev/virtual_machine.rst
@@ -278,11 +278,11 @@ to represent tensor, tuple/list, and closure data, 
respectively. More details
 for each of them can be found at `include/tvm/runtime/ndarray.h`_,
 `include/tvm/runtime/vm/vm.h`_, and `include/tvm/runtime/container.h`_, 
respectively.
 
-.. _include/tvm/runtime/ndarray.h: 
https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/ndarray.h
+.. _include/tvm/runtime/ndarray.h: 
https://github.com/apache/tvm/blob/main/include/tvm/runtime/ndarray.h
 
-.. _include/tvm/runtime/vm/vm.h: 
https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/vm/vm.h
+.. _include/tvm/runtime/vm/vm.h: 
https://github.com/apache/tvm/blob/main/include/tvm/runtime/vm/vm.h
 
-.. _include/tvm/runtime/container.h: 
https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/container.h
+.. _include/tvm/runtime/container.h: 
https://github.com/apache/tvm/blob/main/include/tvm/runtime/container.h
 
 Stack and State
 ~~~~~~~~~~~~~~~
@@ -326,7 +326,7 @@ The functions contain metadata about the function as well 
as its compiled byteco
 object then can be loaded and run by a ``tvm::relay::vm::VirtualMachine`` 
object. For full definitions of the
 data structures, please see `include/tvm/runtime/vm/executable.h`_ and 
`include/tvm/runtime/vm/vm.h`_.
 
-.. _include/tvm/runtime/vm/executable.h: 
https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/vm/executable.h
+.. _include/tvm/runtime/vm/executable.h: 
https://github.com/apache/tvm/blob/main/include/tvm/runtime/vm/executable.h
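
A hedged Python sketch of compiling a Relay module to an ``Executable`` and
running it on the VM (the tiny function below is only for illustration):

.. code:: python

   import numpy as np
   import tvm
   from tvm import relay
   from tvm.runtime import vm as vm_rt

   # A tiny Relay module: f(x) = x + x.
   x = relay.var("x", shape=(2, 2), dtype="float32")
   mod = tvm.IRModule.from_expr(relay.Function([x], x + x))

   # Compile to an Executable and execute it on the VirtualMachine.
   exe = relay.vm.compile(mod, target="llvm")
   vm = vm_rt.VirtualMachine(exe, tvm.cpu())
   out = vm.invoke("main", np.ones((2, 2), dtype="float32"))
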
 
 Optimizations
 ~~~~~~~~~~~~~
@@ -343,11 +343,11 @@ Optimizations marked with `TODO` are not implemented yet.
 - Tail Call Optimization (TODO)
 - Liveness Analysis (TODO)
 
-.. _src/relay/vm/lambda_lift.cc: 
https://github.com/apache/incubator-tvm/blob/main/src/relay/backend/vm/lambda_lift.cc
+.. _src/relay/vm/lambda_lift.cc: 
https://github.com/apache/tvm/blob/main/src/relay/backend/vm/lambda_lift.cc
 
-.. _src/relay/vm/inline_primitives.cc: 
https://github.com/apache/incubator-tvm/blob/main/src/relay/backend/vm/inline_primitives.cc
+.. _src/relay/vm/inline_primitives.cc: 
https://github.com/apache/tvm/blob/main/src/relay/backend/vm/inline_primitives.cc
 
-.. _src/relay/backend/vm/compiler.cc: 
https://github.com/apache/incubator-tvm/blob/main/src/relay/backend/vm/compiler.cc
+.. _src/relay/backend/vm/compiler.cc: 
https://github.com/apache/tvm/blob/main/src/relay/backend/vm/compiler.cc
 
 Serialization
 ~~~~~~~~~~~~~
@@ -386,7 +386,7 @@ load the serialized kernel binary and executable related 
binary code, which will
 instantiate a VM object. Please refer to the `test_vm_serialization.py`_ file 
for more
 examples.
 
-.. _test_vm_serialization.py: 
https://github.com/apache/incubator-tvm/blob/main/tests/python/relay/test_vm_serialization.py
+.. _test_vm_serialization.py: 
https://github.com/apache/tvm/blob/main/tests/python/relay/test_vm_serialization.py
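
Continuing the VM sketch above (reusing ``exe`` and ``tvm`` from it), saving
and reloading an ``Executable`` might look like the following hedged sketch;
`test_vm_serialization.py`_ is the authoritative reference:

.. code:: python

   from tvm.runtime import vm as vm_rt

   # Serialize: bytecode and constants as a bytearray, kernel code as a Module.
   code, lib = exe.save()

   # Deserialize and instantiate a fresh VM from the saved artifacts.
   loaded_exe = vm_rt.Executable.load_exec(code, lib)
   vm2 = vm_rt.VirtualMachine(loaded_exe, tvm.cpu())
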
 
 Unresolved Questions
 ~~~~~~~~~~~~~~~~~~~~
diff --git a/docs/install/docker.rst b/docs/install/docker.rst
index 243e438..768cad2 100644
--- a/docs/install/docker.rst
+++ b/docs/install/docker.rst
@@ -28,7 +28,7 @@ Get a tvm source distribution or clone the github repo to get 
the auxiliary scri
 
 .. code:: bash
 
-    git clone --recursive https://github.com/apache/incubator-tvm tvm
+    git clone --recursive https://github.com/apache/tvm tvm
 
 
 We can then use the following command to launch a docker image.
@@ -67,7 +67,7 @@ with ``localhost`` when pasting it into browser.
 
 Docker Source
 -------------
-Check out `The docker source 
<https://github.com/apache/incubator-tvm/tree/main/docker>`_ if you are 
interested in
+Check out `The docker source 
<https://github.com/apache/tvm/tree/main/docker>`_ if you are interested in
 building your own docker images.
 
 
diff --git a/docs/install/from_source.rst b/docs/install/from_source.rst
index f329e9f..3cf0a78 100644
--- a/docs/install/from_source.rst
+++ b/docs/install/from_source.rst
@@ -34,7 +34,7 @@ It is important to clone the submodules along, with 
``--recursive`` option.
 
 .. code:: bash
 
-    git clone --recursive https://github.com/apache/incubator-tvm tvm
+    git clone --recursive https://github.com/apache/tvm tvm
 
 For Windows users who use GitHub tools, you can open the Git shell and type 
the following command.
 
diff --git a/docs/install/nnpack.rst b/docs/install/nnpack.rst
index 10497ba..2afd95a 100644
--- a/docs/install/nnpack.rst
+++ b/docs/install/nnpack.rst
@@ -105,7 +105,7 @@ Build TVM with NNPACK support
 
 .. code:: bash
 
-   git clone --recursive https://github.com/apache/incubator-tvm tvm
+   git clone --recursive https://github.com/apache/tvm tvm
 
 - Set `set(USE_NNPACK ON)` in config.cmake.
 - Set `NNPACK_PATH` to the $(YOUR_NNPACK_INSTALL_PATH)
diff --git a/docs/langref/relay_adt.rst b/docs/langref/relay_adt.rst
index a53c751..dab2e3e 100644
--- a/docs/langref/relay_adt.rst
+++ b/docs/langref/relay_adt.rst
@@ -387,7 +387,7 @@ The following left fold flattens a list of lists (using 
concatenation):
 Note that these iteration constructs can be implemented directly in Relay's
 source language and more can easily be defined (and for more data types, like 
trees),
 rather than being constructs built into the language (e.g.,
-`"foreach" in MXNet 
<https://mxnet.incubator.apache.org/versions/master/tutorials/control_flow/ControlFlowTutorial.html>`__).
+`"foreach" in MXNet 
<https://mxnet.apache.org/versions/master/tutorials/control_flow/ControlFlowTutorial.html>`__).
 ADTs and their extensibility allow for a broad range of iterations and data 
structures to be expressed
 in Relay and supported by the type system without having to modify the 
language implementation.
 
diff --git a/docs/langref/relay_pattern.rst b/docs/langref/relay_pattern.rst
index 17282e1..8b34b76 100644
--- a/docs/langref/relay_pattern.rst
+++ b/docs/langref/relay_pattern.rst
@@ -35,7 +35,7 @@ There are quite a few properties of operators that are worth 
matching. Below we
 demonstrate how to write patterns. It is recommended to check 
`tests/python/relay/test_dataflow_pattern.py`_
 for more use cases.
 
-.. _tests/python/relay/test_dataflow_pattern.py: 
https://github.com/apache/incubator-tvm/blob/main/tests/python/relay/test_dataflow_pattern.py
+.. _tests/python/relay/test_dataflow_pattern.py: 
https://github.com/apache/tvm/blob/main/tests/python/relay/test_dataflow_pattern.py
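
For instance, a minimal pattern matching a ``conv2d`` followed by a ``relu``
could be written as below (a hedged sketch; the shapes are arbitrary):

.. code:: python

   from tvm import relay
   from tvm.relay.dataflow_pattern import is_op, wildcard

   # Pattern: nn.relu(nn.conv2d(*, *)).
   pat = is_op("nn.relu")(is_op("nn.conv2d")(wildcard(), wildcard()))

   x = relay.var("x", shape=(1, 3, 8, 8))
   w = relay.var("w", shape=(8, 3, 3, 3))
   expr = relay.nn.relu(relay.nn.conv2d(x, w))
   assert pat.match(expr)
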
 
 .. note::
 
diff --git a/docs/vta/install.rst b/docs/vta/install.rst
index bb5c1c9..2248975 100644
--- a/docs/vta/install.rst
+++ b/docs/vta/install.rst
@@ -135,7 +135,7 @@ Because the direct board-to-computer connection prevents 
the board from directly
    mkdir <mountpoint>
    sshfs xilinx@192.168.2.99:/home/xilinx <mountpoint>
    cd <mountpoint>
-   git clone --recursive https://github.com/apache/incubator-tvm tvm
+   git clone --recursive https://github.com/apache/tvm tvm
   # When finished, you can leave the mountpoint and unmount the directory
    cd ~
    sudo umount <mountpoint>
@@ -466,7 +466,7 @@ This would add quartus binary path into your ``PATH`` 
environment variable, so y
 Chisel-based Custom VTA Bitstream Compilation for DE10-Nano
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Similar to the HLS-based design, high-level hardware parameters in 
Chisel-based design are listed in the VTA configuration file `Configs.scala 
<https://github.com/apache/incubator-tvm/blob/main/3rdparty/vta-hw/hardware/chisel/src/main/scala/core/Configs.scala>`_,
 and they can be customized by the user.
+Similar to the HLS-based design, high-level hardware parameters in 
Chisel-based design are listed in the VTA configuration file `Configs.scala 
<https://github.com/apache/tvm/blob/main/3rdparty/vta-hw/hardware/chisel/src/main/scala/core/Configs.scala>`_,
 and they can be customized by the user.
 
 For Intel FPGA, bitstream generation is driven by a top-level ``Makefile`` 
under ``<tvm root>/3rdparty/vta-hw/hardware/intel``.
 
diff --git a/jvm/README.md b/jvm/README.md
index 320e769..e23c632 100644
--- a/jvm/README.md
+++ b/jvm/README.md
@@ -176,4 +176,4 @@ Server server = new Server(proxyHost, proxyPort, "key");
 server.start();
 ```
 
-You can also use `StandaloneServerProcessor` and `ConnectProxyServerProcessor` 
to build your own RPC server. Refer to [Android RPC 
Server](https://github.com/apache/incubator-tvm/blob/main/apps/android_rpc/app/src/main/java/org/apache/tvm/tvmrpc/RPCProcessor.java)
 for more details.
+You can also use `StandaloneServerProcessor` and `ConnectProxyServerProcessor` 
to build your own RPC server. Refer to [Android RPC 
Server](https://github.com/apache/tvm/blob/main/apps/android_rpc/app/src/main/java/org/apache/tvm/tvmrpc/RPCProcessor.java)
 for more details.
diff --git a/jvm/pom.xml b/jvm/pom.xml
index 886f0e6..1aeaa0e 100644
--- a/jvm/pom.xml
+++ b/jvm/pom.xml
@@ -7,7 +7,7 @@
   <artifactId>tvm4j-parent</artifactId>
   <version>0.0.1-SNAPSHOT</version>
   <name>TVM4J Package - Parent</name>
-  <url>https://github.com/apache/incubator-tvm/tree/main/jvm</url>
+  <url>https://github.com/apache/tvm/tree/main/jvm</url>
   <description>TVM4J Package</description>
   <organization>
     <name>Apache Software Foundation</name>
@@ -20,9 +20,9 @@
     </license>
   </licenses>
   <scm>
-    <connection>scm:git:g...@github.com:apache/incubator-tvm.git</connection>
-    
<developerConnection>scm:git:g...@github.com:apache/incubator-tvm.git</developerConnection>
-    <url>https://github.com/apache/incubator-tvm</url>
+    <connection>scm:git:g...@github.com:apache/tvm.git</connection>
+    
<developerConnection>scm:git:g...@github.com:apache/tvm.git</developerConnection>
+    <url>https://github.com/apache/tvm</url>
   </scm>
 
   <properties>
diff --git a/python/setup.py b/python/setup.py
index ec98e94..8af62f9 100644
--- a/python/setup.py
+++ b/python/setup.py
@@ -207,7 +207,7 @@ setup(
     package_dir={"tvm": "tvm"},
     package_data={"tvm": get_package_data_files()},
     distclass=BinaryDistribution,
-    url="https://github.com/apache/incubator-tvm";,
+    url="https://github.com/apache/tvm";,
     ext_modules=config_cython(),
     **setup_kwargs,
 )
diff --git a/python/tvm/relay/qnn/op/legalizations.py 
b/python/tvm/relay/qnn/op/legalizations.py
index 50e5a02..3f151eb 100644
--- a/python/tvm/relay/qnn/op/legalizations.py
+++ b/python/tvm/relay/qnn/op/legalizations.py
@@ -75,7 +75,7 @@ def helper_no_fast_int8_hw_legalization(attrs, inputs, types, 
relay_op):
     """Converts QNN operators into a sequence of Relay operators that are 
friendly to HW that do
     not have fast Int8 arithmetic. For example, for ARM, LLVM utilizes the 
assembly instructions
     much more efficiently if the convolution or dense operator input datatypes 
are int16 instead of
-    int8. More details are present at 
https://github.com/apache/incubator-tvm/pull/4277.
+    int8. More details are present at https://github.com/apache/tvm/pull/4277.
 
     Parameters
     ----------
diff --git a/python/tvm/topi/x86/conv2d.py b/python/tvm/topi/x86/conv2d.py
index 7b9da8a..a3b7e47 100644
--- a/python/tvm/topi/x86/conv2d.py
+++ b/python/tvm/topi/x86/conv2d.py
@@ -263,7 +263,7 @@ def schedule_conv2d_NCHWc(cfg, outs):
     return s
 
 
-# FIXME - https://github.com/apache/incubator-tvm/issues/4122
+# FIXME - https://github.com/apache/tvm/issues/4122
 # _declaration_conv_nhwc_pack expects kernel layout to be HWOI. However, the 
tests use HWIO
 # layout. Commenting until we have clarity about the nhwc_pack implementation 
from the author.
 # elif layout == 'NHWC' and kh == 1 and kw == 1 and kernel.dtype == "int8":
diff --git a/python/tvm/topi/x86/conv2d_avx_1x1.py 
b/python/tvm/topi/x86/conv2d_avx_1x1.py
index b4a966e..3e5a12b 100644
--- a/python/tvm/topi/x86/conv2d_avx_1x1.py
+++ b/python/tvm/topi/x86/conv2d_avx_1x1.py
@@ -229,7 +229,7 @@ def _schedule_conv_nhwc_pack_int8(s, cfg, data, conv_out, 
last):
    packing of weight to make the address access friendly to the int8
     intrinsic
     """
-    # FIXME - https://github.com/apache/incubator-tvm/issues/3598
+    # FIXME - https://github.com/apache/tvm/issues/3598
     # pylint: disable=unreachable
     return s
 
diff --git a/rust/tvm-graph-rt/Cargo.toml b/rust/tvm-graph-rt/Cargo.toml
index d8dfcdb..13837f6 100644
--- a/rust/tvm-graph-rt/Cargo.toml
+++ b/rust/tvm-graph-rt/Cargo.toml
@@ -20,7 +20,7 @@ name = "tvm-graph-rt"
 version = "0.1.0"
 license = "Apache-2.0"
 description = "A static graph runtime for TVM."
-repository = "https://github.com/apache/incubator-tvm";
+repository = "https://github.com/apache/tvm";
 readme = "README.md"
 keywords = ["tvm"]
 categories = ["api-bindings", "science"]
diff --git a/rust/tvm-macros/Cargo.toml b/rust/tvm-macros/Cargo.toml
index e491177..37275d6 100644
--- a/rust/tvm-macros/Cargo.toml
+++ b/rust/tvm-macros/Cargo.toml
@@ -20,7 +20,7 @@ name = "tvm-macros"
 version = "0.1.1"
 license = "Apache-2.0"
 description = "Procedural macros of the TVM crate."
-repository = "https://github.com/apache/incubator-tvm";
+repository = "https://github.com/apache/tvm";
 readme = "README.md"
 keywords = ["tvm"]
 authors = ["TVM Contributors"]
diff --git a/rust/tvm-rt/Cargo.toml b/rust/tvm-rt/Cargo.toml
index 9660943..13c0537 100644
--- a/rust/tvm-rt/Cargo.toml
+++ b/rust/tvm-rt/Cargo.toml
@@ -20,8 +20,8 @@ name = "tvm-rt"
 version = "0.1.0"
 license = "Apache-2.0"
 description = "Rust bindings for the TVM runtime API."
-repository = "https://github.com/apache/incubator-tvm";
-homepage = "https://github.com/apache/incubator-tvm";
+repository = "https://github.com/apache/tvm";
+homepage = "https://github.com/apache/tvm";
 readme = "README.md"
 keywords = ["rust", "tvm"]
 categories = ["api-bindings", "science"]
diff --git a/rust/tvm-rt/README.md b/rust/tvm-rt/README.md
index a586cd7..a99eeaa 100644
--- a/rust/tvm-rt/README.md
+++ b/rust/tvm-rt/README.md
@@ -17,7 +17,7 @@
 
 # TVM Runtime Support
 
-This crate provides an idiomatic Rust API for 
[TVM](https://github.com/apache/incubator-tvm) runtime.
+This crate provides an idiomatic Rust API for 
[TVM](https://github.com/apache/tvm) runtime.
 Currently this is tested on `1.42.0` and above.
 
 ## What Does This Crate Offer?
diff --git a/rust/tvm-rt/src/lib.rs b/rust/tvm-rt/src/lib.rs
index 84951f4..4b163ef 100644
--- a/rust/tvm-rt/src/lib.rs
+++ b/rust/tvm-rt/src/lib.rs
@@ -17,7 +17,7 @@
  * under the License.
  */
 
-//! [TVM](https://github.com/apache/incubator-tvm) is a compiler stack for 
deep learning systems.
+//! [TVM](https://github.com/apache/tvm) is a compiler stack for deep learning 
systems.
 //!
 //! This crate provides an idiomatic Rust API for TVM runtime.
 //!
diff --git a/rust/tvm-sys/src/context.rs b/rust/tvm-sys/src/context.rs
index 3747bfc..a5165fc 100644
--- a/rust/tvm-sys/src/context.rs
+++ b/rust/tvm-sys/src/context.rs
@@ -51,7 +51,7 @@ use enumn::N;
 use thiserror::Error;
 
 /// Device type represents the set of devices supported by
-/// [TVM](https://github.com/apache/incubator-tvm).
+/// [TVM](https://github.com/apache/tvm).
 ///
 /// ## Example
 ///
diff --git a/rust/tvm/Cargo.toml b/rust/tvm/Cargo.toml
index 153a195..29d2003 100644
--- a/rust/tvm/Cargo.toml
+++ b/rust/tvm/Cargo.toml
@@ -20,8 +20,8 @@ name = "tvm"
 version = "0.1.0"
 license = "Apache-2.0"
 description = "Rust frontend support for TVM"
-repository = "https://github.com/apache/incubator-tvm";
-homepage = "https://github.com/apache/incubator-tvm";
+repository = "https://github.com/apache/tvm";
+homepage = "https://github.com/apache/tvm";
 readme = "README.md"
 keywords = ["rust", "tvm"]
 categories = ["api-bindings", "science"]
diff --git a/rust/tvm/README.md b/rust/tvm/README.md
index 13aef89..26f9f1f 100644
--- a/rust/tvm/README.md
+++ b/rust/tvm/README.md
@@ -17,13 +17,13 @@
 
 # TVM Runtime Frontend Support
 
-This crate provides an idiomatic Rust API for 
[TVM](https://github.com/apache/incubator-tvm) runtime frontend. Currently this 
requires **Nightly Rust** and tested on `rustc 1.32.0-nightly`
+This crate provides an idiomatic Rust API for 
[TVM](https://github.com/apache/tvm) runtime frontend. Currently this requires 
**Nightly Rust** and is tested on `rustc 1.32.0-nightly`
 
 ## What Does This Crate Offer?
 
 Here is a major workflow
 
-1. Train your **Deep Learning** model using any major framework such as 
[PyTorch](https://pytorch.org/), [Apache 
MXNet](https://mxnet.incubator.apache.org/) or 
[TensorFlow](https://www.tensorflow.org/)
+1. Train your **Deep Learning** model using any major framework such as 
[PyTorch](https://pytorch.org/), [Apache MXNet](https://mxnet.apache.org/) or 
[TensorFlow](https://www.tensorflow.org/)
 2. Use **TVM** to build optimized model artifacts on a supported context such 
as CPU, GPU, OpenCL and specialized accelerators.
 3. Deploy your models using **Rust** :heart:
 
diff --git a/rust/tvm/src/lib.rs b/rust/tvm/src/lib.rs
index 27e7949..e86420e 100644
--- a/rust/tvm/src/lib.rs
+++ b/rust/tvm/src/lib.rs
@@ -17,7 +17,7 @@
  * under the License.
  */
 
-//! [TVM](https://github.com/apache/incubator-tvm) is a compiler stack for 
deep learning systems.
+//! [TVM](https://github.com/apache/tvm) is a compiler stack for deep learning 
systems.
 //!
 //! This crate provides an idiomatic Rust API for TVM runtime frontend.
 //!
diff --git a/src/parser/tokenizer.h b/src/parser/tokenizer.h
index a9ae64b..c6fb3e0 100644
--- a/src/parser/tokenizer.h
+++ b/src/parser/tokenizer.h
@@ -328,7 +328,7 @@ struct Tokenizer {
       }
     } else if (next == '"') {
       // TODO(@jroesch): Properly tokenize escape sequences in strings.
-      // see https://github.com/apache/incubator-tvm/issues/6153.
+      // see https://github.com/apache/tvm/issues/6153.
       Next();
       std::stringstream string_content;
       while (More() && Peek() != '"') {
diff --git a/tests/python/frontend/tflite/test_forward.py 
b/tests/python/frontend/tflite/test_forward.py
index 89ae348..b7f3b91 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -1100,7 +1100,7 @@ def test_forward_convolution():
             [1, 17, 17, 124], [1, 1, 124, 19], [1, 1], [1, 1], "SAME", "NHWC", 
quantized=True
         )
 
-        # Disable as tests are flaky - 
https://github.com/apache/incubator-tvm/issues/6064
+        # Disable as tests are flaky - 
https://github.com/apache/tvm/issues/6064
         # depthwise convolution
         # _test_tflite2_quantized_depthwise_convolution([1, 8, 8, 128], [1, 1, 
128, 1], [1, 1], [1, 1],
         #                                               'SAME', 'NHWC', 1)
diff --git a/tests/python/relay/test_op_level2.py 
b/tests/python/relay/test_op_level2.py
index a2b791d..06bd01b 100644
--- a/tests/python/relay/test_op_level2.py
+++ b/tests/python/relay/test_op_level2.py
@@ -298,7 +298,7 @@ def test_conv2d_run():
     )
 
     # CUDA is disabled for 'direct' schedule:
-    # https://github.com/apache/incubator-tvm/pull/3070#issuecomment-486597553
+    # https://github.com/apache/tvm/pull/3070#issuecomment-486597553
     # group conv2d
     dshape = (1, 32, 18, 18)
     kshape = (32, 4, 3, 3)
diff --git a/tests/python/topi/python/test_topi_conv2d_nhwc_pack_int8.py 
b/tests/python/topi/python/test_topi_conv2d_nhwc_pack_int8.py
index c547ec7..66ce6ff 100644
--- a/tests/python/topi/python/test_topi_conv2d_nhwc_pack_int8.py
+++ b/tests/python/topi/python/test_topi_conv2d_nhwc_pack_int8.py
@@ -73,7 +73,7 @@ def verify_conv2d_1x1_nhwc_pack_int8(
         check_device(device)
 
 
-# TODO(@llyfacebook): Please fix 
https://github.com/apache/incubator-tvm/issues/4122 to enable this test.
+# TODO(@llyfacebook): Please fix https://github.com/apache/tvm/issues/4122 to 
enable this test.
 @pytest.mark.skip
 def test_conv2d_nhwc():
     verify_conv2d_1x1_nhwc_pack_int8(1, 256, 32, 256, 1, 1, 0)
diff --git a/tests/python/topi/python/test_topi_vision.py 
b/tests/python/topi/python/test_topi_vision.py
index 24035ba..22c9045 100644
--- a/tests/python/topi/python/test_topi_vision.py
+++ b/tests/python/topi/python/test_topi_vision.py
@@ -124,7 +124,7 @@ def verify_get_valid_counts(dshape, score_threshold, 
id_index, score_index):
 @tvm.testing.uses_gpu
 @pytest.mark.skip(
     "Skip this test as it is intermittent."
-    "See 
https://github.com/apache/incubator-tvm/pull/4901#issuecomment-595040094";
+    "See https://github.com/apache/tvm/pull/4901#issuecomment-595040094";
 )
 def test_get_valid_counts():
     verify_get_valid_counts((1, 1000, 5), 0.5, -1, 0)
diff --git a/tests/python/unittest/test_autotvm_graph_tuner_core.py 
b/tests/python/unittest/test_autotvm_graph_tuner_core.py
index f594761..3d7d304 100644
--- a/tests/python/unittest/test_autotvm_graph_tuner_core.py
+++ b/tests/python/unittest/test_autotvm_graph_tuner_core.py
@@ -18,7 +18,7 @@
 # NOTE: We name this test file to start with test_graph_tuner
 # to make it execute after zero_rank tensor test cases. This
 # helps avoid the topi arithmetic operator overloading issue:
-# https://github.com/apache/incubator-tvm/issues/3240.
+# https://github.com/apache/tvm/issues/3240.
 # TODO: restore the file name after this issue is resolved.
 import os
 import copy
diff --git a/tests/python/unittest/test_autotvm_graph_tuner_utils.py 
b/tests/python/unittest/test_autotvm_graph_tuner_utils.py
index 9fc415c..6ab194c 100644
--- a/tests/python/unittest/test_autotvm_graph_tuner_utils.py
+++ b/tests/python/unittest/test_autotvm_graph_tuner_utils.py
@@ -18,7 +18,7 @@
 # NOTE: We name this test file to start with test_graph_tuner
 # to make it execute after zero_rank tensor test cases. This
 # helps avoid the topi arithmetic operator overloading issue:
-# https://github.com/apache/incubator-tvm/issues/3240
+# https://github.com/apache/tvm/issues/3240
 # TODO: restore the file name after this issue is resolved.
 import tvm
 from tvm import te
diff --git a/tutorials/autotvm/tune_relay_arm.py 
b/tutorials/autotvm/tune_relay_arm.py
index 1e1e98a..317af5f 100644
--- a/tutorials/autotvm/tune_relay_arm.py
+++ b/tutorials/autotvm/tune_relay_arm.py
@@ -33,7 +33,7 @@ the best knob values for all required operators. When the TVM 
compiler compiles
 these operators, it will query this log file to get the best knob values.
 
 We also released pre-tuned parameters for some arm devices. You can go to
-`ARM CPU Benchmark 
<https://github.com/apache/incubator-tvm/wiki/Benchmark#arm-cpu>`_
+`ARM CPU Benchmark <https://github.com/apache/tvm/wiki/Benchmark#arm-cpu>`_
 to see the results.
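
The compile-time side of this flow can be sketched as follows (hedged;
``tune.log`` stands for a log produced by a prior tuning run, and the tiny
module is only for illustration):

.. code:: python

   import tvm
   from tvm import autotvm, relay

   # A placeholder Relay module; any tuned workload works the same way.
   x = relay.var("x", shape=(1, 3, 224, 224))
   w = relay.var("w", shape=(16, 3, 3, 3))
   mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.conv2d(x, w)))

   # Query the tuning log for the best knob values during compilation.
   with autotvm.apply_history_best("tune.log"):
       with tvm.transform.PassContext(opt_level=3):
           lib = relay.build(mod, target="llvm")
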
 
 Note that this tutorial will not run on Windows or recent versions of macOS. To
@@ -164,7 +164,7 @@ def get_network(name, batch_size):
 #   (replace :code:`[HOST_IP]` with the IP address of your host machine)
 #
 # * For Android:
-#   Follow this `readme page 
<https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc>`_ to
+#   Follow this `readme page 
<https://github.com/apache/tvm/tree/main/apps/android_rpc>`_ to
 #   install the TVM RPC APK on the android device. Make sure you can pass the 
android rpc test.
#   Then your device is registered. During tuning, you have to 
go to the developer options
#   and enable "Keep screen awake during charging" and charge your phone to 
make it stable.
diff --git a/tutorials/autotvm/tune_relay_cuda.py 
b/tutorials/autotvm/tune_relay_cuda.py
index 33b62bb..76a30ec 100644
--- a/tutorials/autotvm/tune_relay_cuda.py
+++ b/tutorials/autotvm/tune_relay_cuda.py
@@ -31,7 +31,7 @@ the best knob values for all required operators. When the TVM 
compiler compiles
 these operators, it will query this log file to get the best knob values.
 
 We also released pre-tuned parameters for some NVIDIA GPUs. You can go to
-`NVIDIA GPU Benchmark 
<https://github.com/apache/incubator-tvm/wiki/Benchmark#nvidia-gpu>`_
+`NVIDIA GPU Benchmark 
<https://github.com/apache/tvm/wiki/Benchmark#nvidia-gpu>`_
 to see the results.
 
 Note that this tutorial will not run on Windows or recent versions of macOS. To
diff --git a/tutorials/autotvm/tune_relay_mobile_gpu.py 
b/tutorials/autotvm/tune_relay_mobile_gpu.py
index 10e201f..5e97273 100644
--- a/tutorials/autotvm/tune_relay_mobile_gpu.py
+++ b/tutorials/autotvm/tune_relay_mobile_gpu.py
@@ -31,7 +31,7 @@ the best knob values for all required operators. When the TVM 
compiler compiles
 these operators, it will query this log file to get the best knob values.
 
 We also released pre-tuned parameters for some arm devices. You can go to
-`Mobile GPU Benchmark 
<https://github.com/apache/incubator-tvm/wiki/Benchmark#mobile-gpu>`_
+`Mobile GPU Benchmark 
<https://github.com/apache/tvm/wiki/Benchmark#mobile-gpu>`_
 to see the results.
 
 Note that this tutorial will not run on Windows or recent versions of macOS. To
@@ -163,7 +163,7 @@ def get_network(name, batch_size):
 #   (replace :code:`[HOST_IP]` with the IP address of your host machine)
 #
 # * For Android:
-#   Follow this `readme page 
<https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc>`_ to
+#   Follow this `readme page 
<https://github.com/apache/tvm/tree/main/apps/android_rpc>`_ to
 #   install TVM RPC APK on the android device. Make sure you can pass the 
android RPC test.
#   Then your device is registered. During tuning, you have to 
go to the developer options
#   and enable "Keep screen awake during charging" and charge your phone to 
make it stable.
diff --git a/tutorials/dev/bring_your_own_datatypes.py 
b/tutorials/dev/bring_your_own_datatypes.py
index c85ec07..f9dc8bc 100644
--- a/tutorials/dev/bring_your_own_datatypes.py
+++ b/tutorials/dev/bring_your_own_datatypes.py
@@ -116,7 +116,7 @@ tvm.target.datatype.register("myfloat", 150)
 
 ######################################################################
 # Note that the type code, 150, is currently chosen manually by the user.
-# See ``TVMTypeCode::kCustomBegin`` in `include/tvm/runtime/c_runtime_api.h 
<https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/data_type.h>`_.
+# See ``TVMTypeCode::kCustomBegin`` in `include/tvm/runtime/c_runtime_api.h 
<https://github.com/apache/tvm/blob/main/include/tvm/runtime/data_type.h>`_.
 # Now we can generate our program again:
 
 x_myfloat = relay.cast(x, dtype="custom[myfloat]32")
@@ -176,7 +176,7 @@ tvm.target.datatype.register_op(
 # To provide for the general case, we have made a helper function, 
``create_lower_func(...)``,
# which does just this: given a dictionary, it replaces the given operation 
with a ``Call`` to the appropriate function, chosen from the dictionary based 
on the op and the bit widths.
 # It additionally removes usages of the custom datatype by storing the custom 
datatype in an opaque ``uint`` of the appropriate width; in our case, a 
``uint32_t``.
-# For more information, see `the source code 
<https://github.com/apache/incubator-tvm/blob/main/python/tvm/target/datatype.py>`_.
+# For more information, see `the source code 
<https://github.com/apache/tvm/blob/main/python/tvm/target/datatype.py>`_.
 
 # We can now re-try running the program:
 try:
diff --git a/tutorials/dev/use_pass_infra.py b/tutorials/dev/use_pass_infra.py
index b16eb93..6a33d14 100644
--- a/tutorials/dev/use_pass_infra.py
+++ b/tutorials/dev/use_pass_infra.py
@@ -142,7 +142,7 @@ print(mod)
 # packing them as a whole to execute. For example, the same passes can now be
# applied using the sequential style as follows. 
:py:class:`tvm.transform.Sequential` is
# similar to `torch.nn.sequential 
<https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential>`_
-# and `mxnet.gluon.block 
<https://mxnet.incubator.apache.org/api/python/docs/_modules/mxnet/gluon/block.html>`_.
+# and `mxnet.gluon.block 
<https://mxnet.apache.org/api/python/docs/_modules/mxnet/gluon/block.html>`_.
 # For example, `torch.nn.sequential` is used to contain a sequence of PyTorch
 # `Modules` that will be added to build a network. It focuses on the network
 # layers. Instead, the :py:class:`tvm.transform.Sequential` in our pass infra 
works on the optimizing
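
A hedged sketch of that ``Sequential`` usage (the pass list and the tiny
module are illustrative only):

.. code:: python

   import tvm
   from tvm import relay

   x = relay.var("x", shape=(4,))
   mod = tvm.IRModule.from_expr(relay.Function([x], x + x))

   # Package several Relay passes and apply them as a single unit.
   seq = tvm.transform.Sequential(
       [
           relay.transform.FoldConstant(),
           relay.transform.EliminateCommonSubexpr(),
           relay.transform.FuseOps(fuse_opt_level=2),
       ]
   )
   with tvm.transform.PassContext(opt_level=3):
       mod = seq(mod)
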
diff --git a/tutorials/frontend/deploy_model_on_android.py 
b/tutorials/frontend/deploy_model_on_android.py
index 35f989c..ff7ef44 100644
--- a/tutorials/frontend/deploy_model_on_android.py
+++ b/tutorials/frontend/deploy_model_on_android.py
@@ -47,7 +47,7 @@ from tvm.contrib.download import download_testdata
 #
 # .. code-block:: bash
 #
-#   git clone --recursive https://github.com/apache/incubator-tvm tvm
+#   git clone --recursive https://github.com/apache/tvm tvm
 #   cd tvm
 #   docker build -t tvm.demo_android -f docker/Dockerfile.demo_android ./docker
 #   docker run --pid=host -h tvm -v $PWD:/workspace \
@@ -106,7 +106,7 @@ from tvm.contrib.download import download_testdata
 # --------------------------------------
 # Now we can register our Android device to the tracker.
 #
-# Follow this `readme page 
<https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc>`_ to
+# Follow this `readme page 
<https://github.com/apache/tvm/tree/main/apps/android_rpc>`_ to
 # install TVM RPC APK on the android device.
 #
 # Here is an example of config.mk. I enabled OpenCL and Vulkan.
@@ -139,7 +139,7 @@ from tvm.contrib.download import download_testdata
 #
 # .. note::
 #
-#   At this time, don't forget to `create a standalone toolchain 
<https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc#architecture-and-android-standalone-toolchain>`_
 .
+#   At this time, don't forget to `create a standalone toolchain 
<https://github.com/apache/tvm/tree/main/apps/android_rpc#architecture-and-android-standalone-toolchain>`_
 .
 #
 #   for example
 #
diff --git a/tutorials/frontend/deploy_model_on_rasp.py 
b/tutorials/frontend/deploy_model_on_rasp.py
index 3687991..cae9d90 100644
--- a/tutorials/frontend/deploy_model_on_rasp.py
+++ b/tutorials/frontend/deploy_model_on_rasp.py
@@ -53,7 +53,7 @@ from tvm.contrib.download import download_testdata
 #
 # .. code-block:: bash
 #
-#   git clone --recursive https://github.com/apache/incubator-tvm tvm
+#   git clone --recursive https://github.com/apache/tvm tvm
 #   cd tvm
 #   mkdir build
 #   cp cmake/config.cmake build
@@ -96,7 +96,7 @@ from tvm.contrib.download import download_testdata
 # Back to the host machine, which should have a full TVM installed (with LLVM).
 #
 # We will use a pre-trained model from
-# `MXNet Gluon model zoo 
<https://mxnet.incubator.apache.org/api/python/gluon/model_zoo.html>`_.
+# `MXNet Gluon model zoo 
<https://mxnet.apache.org/api/python/gluon/model_zoo.html>`_.
 # You can find more details about this part in the tutorial 
:ref:`tutorial-from-mxnet`.
 
 from mxnet.gluon.model_zoo.vision import get_model
diff --git a/tutorials/frontend/from_mxnet.py b/tutorials/frontend/from_mxnet.py
index 3eeef87..d103d17 100644
--- a/tutorials/frontend/from_mxnet.py
+++ b/tutorials/frontend/from_mxnet.py
@@ -33,7 +33,7 @@ A quick solution is
     pip install mxnet --user
 
or please refer to the official installation guide.
-https://mxnet.incubator.apache.org/versions/master/install/index.html
+https://mxnet.apache.org/versions/master/install/index.html
 """
 # some standard imports
 import mxnet as mx
diff --git a/tutorials/get_started/cross_compilation_and_rpc.py 
b/tutorials/get_started/cross_compilation_and_rpc.py
index 69284c0..2386e7b 100644
--- a/tutorials/get_started/cross_compilation_and_rpc.py
+++ b/tutorials/get_started/cross_compilation_and_rpc.py
@@ -49,7 +49,7 @@ and the Firefly-RK3399 for an OpenCL example.
 #
 # .. code-block:: bash
 #
-#   git clone --recursive https://github.com/apache/incubator-tvm tvm
+#   git clone --recursive https://github.com/apache/tvm tvm
 #   cd tvm
 #   make runtime -j2
 #
diff --git a/web/README.md b/web/README.md
index b4d7eb1..4154300 100644
--- a/web/README.md
+++ b/web/README.md
@@ -63,11 +63,11 @@ This command will create the tvmjs library that we can use 
to interface with the
 
 Check the code snippets in
 
-- 
[tests/python/prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/main/web/tests/python/prepare_test_libs.py)
+- 
[tests/python/prepare_test_libs.py](https://github.com/apache/tvm/tree/main/web/tests/python/prepare_test_libs.py)
   shows how to create a wasm library that links with tvm runtime.
  - Note that all wasm libraries have to be created using the `--system-lib` 
option
   - emcc.create_wasm will automatically link the runtime library 
`dist/wasm/libtvm_runtime.bc`
-- 
[tests/web/test_module_load.js](https://github.com/apache/incubator-tvm/tree/main/web/tests/node/test_module_load.js)
 demonstrate
+- 
[tests/web/test_module_load.js](https://github.com/apache/tvm/tree/main/web/tests/node/test_module_load.js)
 demonstrates
  how to run the generated library through the tvmjs API.
 
 
