This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 3cc9e98506d deploying docs (apache/tvm@d4633957069120a3da37af52ead686489cbf04bd)
3cc9e98506d is described below

commit 3cc9e98506d2777bb032839dd356ab6011819674
Author: tvm-bot <[email protected]>
AuthorDate: Mon Feb 23 20:20:34 2026 +0000

    deploying docs (apache/tvm@d4633957069120a3da37af52ead686489cbf04bd)
---
 .../11c11e53c7dace51a8be968ee169ed0d/ir_module.zip | Bin 23874 -> 23874 bytes
 .../tir_transformation.zip                         | Bin 15631 -> 15631 bytes
 .../relax_creation.zip                             | Bin 22418 -> 22418 bytes
 .../relax_transformation.zip                       | Bin 11480 -> 11480 bytes
 .../optimize_llm.zip                               | Bin 54183 -> 54183 bytes
 .../e2e_opt_model.zip                              | Bin 14509 -> 14509 bytes
 .../quick_start.zip                                | Bin 16252 -> 16252 bytes
 .../export_and_load_executable.zip                 | Bin 31492 -> 31492 bytes
 .../tir_creation.zip                               | Bin 24417 -> 24417 bytes
 .../cross_compilation_and_rpc.zip                  | Bin 47233 -> 47233 bytes
 .../customize_opt.zip                              | Bin 20568 -> 20568 bytes
 .../relax/tutorials/sg_execution_times.rst.txt     |   2 +-
 .../tensor_ir/tutorials/sg_execution_times.rst.txt |   6 +++---
 .../tensor_ir/tutorials/tir_creation.rst.txt       |  20 ++++++++---------
 .../tensor_ir/tutorials/tir_transformation.rst.txt |   6 +++---
 .../get_started/tutorials/ir_module.rst.txt        |   8 +++----
 .../get_started/tutorials/quick_start.rst.txt      |   4 ++--
 .../tutorials/sg_execution_times.rst.txt           |   6 +++---
 .../tutorials/cross_compilation_and_rpc.rst.txt    |   6 +++---
 .../how_to/tutorials/customize_opt.rst.txt         |   4 ++--
 .../how_to/tutorials/e2e_opt_model.rst.txt         |   2 +-
 .../how_to/tutorials/sg_execution_times.rst.txt    |  10 ++++-----
 docs/_sources/sg_execution_times.rst.txt           |  18 ++++++++--------
 .../relax/tutorials/sg_execution_times.html        |   2 +-
 .../tensor_ir/tutorials/sg_execution_times.html    |   6 +++---
 .../tensor_ir/tutorials/tir_creation.html          |  20 ++++++++---------
 .../tensor_ir/tutorials/tir_transformation.html    |   6 +++---
 docs/get_started/tutorials/ir_module.html          |  16 +++++++-------
 docs/get_started/tutorials/quick_start.html        |  24 ++++++++++-----------
 docs/get_started/tutorials/sg_execution_times.html |   6 +++---
 .../tutorials/cross_compilation_and_rpc.html       |   6 +++---
 docs/how_to/tutorials/customize_opt.html           |   8 +++----
 docs/how_to/tutorials/e2e_opt_model.html           |   6 +++---
 .../tutorials/export_and_load_executable.html      |   8 +++----
 docs/how_to/tutorials/optimize_llm.html            |  10 ++++-----
 docs/how_to/tutorials/sg_execution_times.html      |  10 ++++-----
 docs/objects.inv                                   | Bin 18841 -> 18848 bytes
 docs/reference/api/python/relax/relax.html         |   2 +-
 docs/reference/api/python/runtime/vm.html          |   2 +-
 docs/searchindex.js                                |   2 +-
 docs/sg_execution_times.html                       |  18 ++++++++--------
 41 files changed, 122 insertions(+), 122 deletions(-)

diff --git a/docs/_downloads/11c11e53c7dace51a8be968ee169ed0d/ir_module.zip b/docs/_downloads/11c11e53c7dace51a8be968ee169ed0d/ir_module.zip
index 7d2956b2c99..5e0d3e2469b 100644
Binary files a/docs/_downloads/11c11e53c7dace51a8be968ee169ed0d/ir_module.zip and b/docs/_downloads/11c11e53c7dace51a8be968ee169ed0d/ir_module.zip differ
diff --git a/docs/_downloads/18ba0d2ee8120824175aaef66bc9c9bf/tir_transformation.zip b/docs/_downloads/18ba0d2ee8120824175aaef66bc9c9bf/tir_transformation.zip
index 144a4f88c62..fe0ab881807 100644
Binary files a/docs/_downloads/18ba0d2ee8120824175aaef66bc9c9bf/tir_transformation.zip and b/docs/_downloads/18ba0d2ee8120824175aaef66bc9c9bf/tir_transformation.zip differ
diff --git a/docs/_downloads/4753776bbe68e7c9ee4d19117973fc8b/relax_creation.zip b/docs/_downloads/4753776bbe68e7c9ee4d19117973fc8b/relax_creation.zip
index c60c17fafe4..a300c559e05 100644
Binary files a/docs/_downloads/4753776bbe68e7c9ee4d19117973fc8b/relax_creation.zip and b/docs/_downloads/4753776bbe68e7c9ee4d19117973fc8b/relax_creation.zip differ
diff --git a/docs/_downloads/7d201684dfa095a5ea48d98e9a2ef7ad/relax_transformation.zip b/docs/_downloads/7d201684dfa095a5ea48d98e9a2ef7ad/relax_transformation.zip
index 178769fcd82..701b70a2a14 100644
Binary files a/docs/_downloads/7d201684dfa095a5ea48d98e9a2ef7ad/relax_transformation.zip and b/docs/_downloads/7d201684dfa095a5ea48d98e9a2ef7ad/relax_transformation.zip differ
diff --git a/docs/_downloads/83e85f38cf16f1d926d06615fd54095c/optimize_llm.zip b/docs/_downloads/83e85f38cf16f1d926d06615fd54095c/optimize_llm.zip
index 2f16392b967..9a8faa0f7ad 100644
Binary files a/docs/_downloads/83e85f38cf16f1d926d06615fd54095c/optimize_llm.zip and b/docs/_downloads/83e85f38cf16f1d926d06615fd54095c/optimize_llm.zip differ
diff --git a/docs/_downloads/a7dd7652b2ad50f82d7b739ce3645799/e2e_opt_model.zip b/docs/_downloads/a7dd7652b2ad50f82d7b739ce3645799/e2e_opt_model.zip
index 09a73a4b3c4..4cad0da7c97 100644
Binary files a/docs/_downloads/a7dd7652b2ad50f82d7b739ce3645799/e2e_opt_model.zip and b/docs/_downloads/a7dd7652b2ad50f82d7b739ce3645799/e2e_opt_model.zip differ
diff --git a/docs/_downloads/bb7db6678496193ed0c55d3b95fa6778/quick_start.zip b/docs/_downloads/bb7db6678496193ed0c55d3b95fa6778/quick_start.zip
index e37cde5b111..4ab09060d07 100644
Binary files a/docs/_downloads/bb7db6678496193ed0c55d3b95fa6778/quick_start.zip and b/docs/_downloads/bb7db6678496193ed0c55d3b95fa6778/quick_start.zip differ
diff --git a/docs/_downloads/bc875d02d5382abc9ea5fb9eb2c1de2c/export_and_load_executable.zip b/docs/_downloads/bc875d02d5382abc9ea5fb9eb2c1de2c/export_and_load_executable.zip
index f90665abf79..027ad9dc243 100644
Binary files a/docs/_downloads/bc875d02d5382abc9ea5fb9eb2c1de2c/export_and_load_executable.zip and b/docs/_downloads/bc875d02d5382abc9ea5fb9eb2c1de2c/export_and_load_executable.zip differ
diff --git a/docs/_downloads/be26483bb70b8468499a01c55e8e866c/tir_creation.zip b/docs/_downloads/be26483bb70b8468499a01c55e8e866c/tir_creation.zip
index 44b0a2e01db..79ba6d2f405 100644
Binary files a/docs/_downloads/be26483bb70b8468499a01c55e8e866c/tir_creation.zip and b/docs/_downloads/be26483bb70b8468499a01c55e8e866c/tir_creation.zip differ
diff --git a/docs/_downloads/f69380821f417ef2210f45503d81bded/cross_compilation_and_rpc.zip b/docs/_downloads/f69380821f417ef2210f45503d81bded/cross_compilation_and_rpc.zip
index ca44616dc42..c42938a8948 100644
Binary files a/docs/_downloads/f69380821f417ef2210f45503d81bded/cross_compilation_and_rpc.zip and b/docs/_downloads/f69380821f417ef2210f45503d81bded/cross_compilation_and_rpc.zip differ
diff --git a/docs/_downloads/f69433a4a80715725df90d1386679956/customize_opt.zip b/docs/_downloads/f69433a4a80715725df90d1386679956/customize_opt.zip
index f6d71fcd5a9..6ebfa55e03d 100644
Binary files a/docs/_downloads/f69433a4a80715725df90d1386679956/customize_opt.zip and b/docs/_downloads/f69433a4a80715725df90d1386679956/customize_opt.zip differ
diff --git a/docs/_sources/deep_dive/relax/tutorials/sg_execution_times.rst.txt b/docs/_sources/deep_dive/relax/tutorials/sg_execution_times.rst.txt
index f28d27384b8..831cf48d530 100644
--- a/docs/_sources/deep_dive/relax/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/deep_dive/relax/tutorials/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@
 
 Computation times
 =================
-**00:00.175** total execution time for 2 files **from deep_dive/relax/tutorials**:
+**00:00.176** total execution time for 2 files **from deep_dive/relax/tutorials**:
 
 .. container::
 
diff --git a/docs/_sources/deep_dive/tensor_ir/tutorials/sg_execution_times.rst.txt b/docs/_sources/deep_dive/tensor_ir/tutorials/sg_execution_times.rst.txt
index 78cd1871af4..b2f54ea58c8 100644
--- a/docs/_sources/deep_dive/tensor_ir/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/deep_dive/tensor_ir/tutorials/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@
 
 Computation times
 =================
-**00:00.462** total execution time for 2 files **from deep_dive/tensor_ir/tutorials**:
+**00:00.467** total execution time for 2 files **from deep_dive/tensor_ir/tutorials**:
 
 .. container::
 
@@ -33,8 +33,8 @@ Computation times
      - Time
      - Mem (MB)
   * - :ref:`sphx_glr_deep_dive_tensor_ir_tutorials_tir_transformation.py` (``tir_transformation.py``)
-     - 00:00.292
+     - 00:00.295
      - 0.0
   * - :ref:`sphx_glr_deep_dive_tensor_ir_tutorials_tir_creation.py` (``tir_creation.py``)
-     - 00:00.170
+     - 00:00.173
      - 0.0
diff --git a/docs/_sources/deep_dive/tensor_ir/tutorials/tir_creation.rst.txt b/docs/_sources/deep_dive/tensor_ir/tutorials/tir_creation.rst.txt
index 82f196d81cd..4f7852b4b73 100644
--- a/docs/_sources/deep_dive/tensor_ir/tutorials/tir_creation.rst.txt
+++ b/docs/_sources/deep_dive/tensor_ir/tutorials/tir_creation.rst.txt
@@ -319,17 +319,17 @@ Now let's check the runtime dynamic shape inference:
 
  .. code-block:: none
 
-    [[1.0687387  1.7051318  0.53478473 1.2447947 ]
-     [0.7868451  1.0988005  0.5212298  0.7379694 ]
-     [0.59672266 1.1634411  0.8761779  1.6228483 ]
-     [0.96112955 1.6041667  1.0700647  1.8662562 ]]
-    [[31.131159 30.285185 32.26687  ... 33.98063  31.474232 32.56088 ]
-     [31.128223 32.93424  30.910475 ... 34.165596 32.894897 32.72144 ]
-     [31.077154 31.886332 32.360718 ... 35.320423 33.263638 34.036896]
+    [[1.1769985  1.2572315  1.1415789  1.0133203 ]
+     [1.2764876  1.0447661  0.66577244 0.78141856]
+     [1.0145769  1.106837   0.9101097  0.8087496 ]
+     [0.8890752  0.849807   0.7336039  0.65813607]]
+    [[30.404625 33.646057 33.519825 ... 33.448086 30.380777 30.60512 ]
+     [29.384144 32.906975 31.675583 ... 32.788273 27.780428 27.227789]
+     [31.738853 33.849815 33.60188  ... 34.209877 31.041008 28.577234]
      ...
-     [30.598007 32.030666 31.956154 ... 34.328274 31.760609 34.897697]
-     [29.185843 31.302677 29.177567 ... 33.337902 30.030025 34.534695]
-     [29.86443  31.52192  29.263405 ... 32.486103 30.365438 32.073902]]
+     [32.214645 33.77155  35.148952 ... 33.381836 30.82872  31.09151 ]
+     [34.32221  34.37794  35.519894 ... 35.138638 32.85187  30.50213 ]
+     [29.590946 30.900402 30.792732 ... 33.40585  28.857025 26.465826]]
 
 
 
diff --git a/docs/_sources/deep_dive/tensor_ir/tutorials/tir_transformation.rst.txt b/docs/_sources/deep_dive/tensor_ir/tutorials/tir_transformation.rst.txt
index 8578b6e6c8e..123fdc10530 100644
--- a/docs/_sources/deep_dive/tensor_ir/tutorials/tir_transformation.rst.txt
+++ b/docs/_sources/deep_dive/tensor_ir/tutorials/tir_transformation.rst.txt
@@ -117,7 +117,7 @@ original implementation.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       2.7287       2.7287       2.7287       2.7287       0.0000
+       2.7383       2.7383       2.7383       2.7383       0.0000
 
 
 
@@ -289,7 +289,7 @@ action involves reordering these two loops.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       0.8568       0.8568       0.8568       0.8568       0.0000
+       0.8579       0.8579       0.8579       0.8579       0.0000
 
 
 
@@ -417,7 +417,7 @@ from the reduction update via the **decompose_reduction** primitive.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       0.3380       0.3380       0.3380       0.3380       0.0000
+       0.3377       0.3377       0.3377       0.3377       0.0000
 
 
 
diff --git a/docs/_sources/get_started/tutorials/ir_module.rst.txt b/docs/_sources/get_started/tutorials/ir_module.rst.txt
index 1ac9f9659ba..44b520297ab 100644
--- a/docs/_sources/get_started/tutorials/ir_module.rst.txt
+++ b/docs/_sources/get_started/tutorials/ir_module.rst.txt
@@ -694,8 +694,8 @@ We can deploy the IRModule on CPU by specifying the target as ``llvm``.
 
  .. code-block:: none
 
-    [[-0.01371826 -0.04909292 -0.07197412 -0.06977656  0.11713963  0.02887076
-       0.08924652 -0.27186972  0.05595757 -0.11292398]]
+    [[ 0.03734957 -0.11577559  0.07155224  0.02716034  0.09010193  0.4489682
+       0.07317595  0.18780974  0.05461347 -0.00979966]]
 
 
 
@@ -761,8 +761,8 @@ Now we can compile the IRModule on GPU, the similar way as we did on CPU.
 
  .. code-block:: none
 
-    [[-0.01371826 -0.04909296 -0.07197418 -0.06977656  0.11713966  0.02887076
-       0.08924655 -0.27186978  0.05595753 -0.11292395]]
+    [[ 0.03734961 -0.11577554  0.07155231  0.02716034  0.09010193  0.44896832
+       0.07317595  0.18780974  0.05461347 -0.00979968]]
 
 
 
diff --git a/docs/_sources/get_started/tutorials/quick_start.rst.txt b/docs/_sources/get_started/tutorials/quick_start.rst.txt
index dab94c475f5..973a3141634 100644
--- a/docs/_sources/get_started/tutorials/quick_start.rst.txt
+++ b/docs/_sources/get_started/tutorials/quick_start.rst.txt
@@ -224,8 +224,8 @@ different devices.
 
  .. code-block:: none
 
-    [[24482.695 26273.605 26221.25  25371.219 25193.14  25728.459 26213.656
-      26440.39  25291.395 24768.943]]
+    [[25907.166 23492.523 25295.543 24138.213 25481.852 25200.75  25039.266
+      24356.322 24633.045 22952.268]]
 
 
 
diff --git a/docs/_sources/get_started/tutorials/sg_execution_times.rst.txt b/docs/_sources/get_started/tutorials/sg_execution_times.rst.txt
index a5f869d7bc8..f98016a5c37 100644
--- a/docs/_sources/get_started/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/get_started/tutorials/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@
 
 Computation times
 =================
-**00:06.213** total execution time for 2 files **from get_started/tutorials**:
+**00:06.109** total execution time for 2 files **from get_started/tutorials**:
 
 .. container::
 
@@ -33,8 +33,8 @@ Computation times
      - Time
      - Mem (MB)
    * - :ref:`sphx_glr_get_started_tutorials_ir_module.py` (``ir_module.py``)
-     - 00:06.041
+     - 00:05.937
      - 0.0
   * - :ref:`sphx_glr_get_started_tutorials_quick_start.py` (``quick_start.py``)
-     - 00:00.172
+     - 00:00.173
      - 0.0
diff --git a/docs/_sources/how_to/tutorials/cross_compilation_and_rpc.rst.txt b/docs/_sources/how_to/tutorials/cross_compilation_and_rpc.rst.txt
index 0e199cccabc..3711454824d 100644
--- a/docs/_sources/how_to/tutorials/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/how_to/tutorials/cross_compilation_and_rpc.rst.txt
@@ -266,7 +266,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.17e-07 secs/op
+    1.12e-07 secs/op
 
 
 
@@ -651,8 +651,8 @@ This workflow is applicable to various deployment scenarios:
     Converted PyTorch model to Relax:
       - Number of parameters: 4
     Using local target for demonstration
-    Exported library to: /tmp/tmpg7pb2i6l/model_deployed.so
-    Saved parameters to: /tmp/tmpg7pb2i6l/model_params.npz
+    Exported library to: /tmp/tmpfee9hh49/model_deployed.so
+    Saved parameters to: /tmp/tmpfee9hh49/model_params.npz
 
     RPC workflow (works for any remote device):
     ==================================================
diff --git a/docs/_sources/how_to/tutorials/customize_opt.rst.txt b/docs/_sources/how_to/tutorials/customize_opt.rst.txt
index 2d3d5d644e6..17cf3685761 100644
--- a/docs/_sources/how_to/tutorials/customize_opt.rst.txt
+++ b/docs/_sources/how_to/tutorials/customize_opt.rst.txt
@@ -425,8 +425,8 @@ We can build and deploy the optimized model to the TVM runtime.
 
  .. code-block:: none
 
-    [[25882.344 26176.549 25374.908 25411.805 23633.246 24337.842 24618.688
-      26268.145 25842.059 25533.342]]
+    [[24750.184 25984.256 24911.973 24691.074 25399.715 25066.629 24797.062
+      23657.46  25855.477 24778.65 ]]
 
 
 
diff --git a/docs/_sources/how_to/tutorials/e2e_opt_model.rst.txt b/docs/_sources/how_to/tutorials/e2e_opt_model.rst.txt
index 44f6b238797..9ba3c66f480 100644
--- a/docs/_sources/how_to/tutorials/e2e_opt_model.rst.txt
+++ b/docs/_sources/how_to/tutorials/e2e_opt_model.rst.txt
@@ -54,7 +54,7 @@ PyTorch.
  .. code-block:: none
 
    Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-       0%|          | 0.00/44.7M [00:00<?, ?B/s]      33%|███▎      | 14.9M/44.7M [00:00<00:00, 156MB/s]     100%|██████████| 44.7M/44.7M [00:00<00:00, 250MB/s]
+       0%|          | 0.00/44.7M [00:00<?, ?B/s]      58%|█████▊    | 26.1M/44.7M [00:00<00:00, 273MB/s]     100%|██████████| 44.7M/44.7M [00:00<00:00, 289MB/s]
 
 
 
diff --git a/docs/_sources/how_to/tutorials/sg_execution_times.rst.txt b/docs/_sources/how_to/tutorials/sg_execution_times.rst.txt
index 0b261405782..492c98e8fe0 100644
--- a/docs/_sources/how_to/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tutorials/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@
 
 Computation times
 =================
-**00:34.061** total execution time for 5 files **from how_to/tutorials**:
+**00:34.449** total execution time for 5 files **from how_to/tutorials**:
 
 .. container::
 
@@ -33,16 +33,16 @@ Computation times
      - Time
      - Mem (MB)
    * - :ref:`sphx_glr_how_to_tutorials_optimize_llm.py` (``optimize_llm.py``)
-     - 00:32.321
+     - 00:32.768
      - 0.0
    * - :ref:`sphx_glr_how_to_tutorials_customize_opt.py` (``customize_opt.py``)
-     - 00:00.670
+     - 00:00.653
      - 0.0
   * - :ref:`sphx_glr_how_to_tutorials_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``)
-     - 00:00.573
+     - 00:00.580
      - 0.0
    * - :ref:`sphx_glr_how_to_tutorials_e2e_opt_model.py` (``e2e_opt_model.py``)
-     - 00:00.495
+     - 00:00.445
      - 0.0
   * - :ref:`sphx_glr_how_to_tutorials_export_and_load_executable.py` (``export_and_load_executable.py``)
      - 00:00.002
diff --git a/docs/_sources/sg_execution_times.rst.txt b/docs/_sources/sg_execution_times.rst.txt
index db1c846ea37..49e32d7d007 100644
--- a/docs/_sources/sg_execution_times.rst.txt
+++ b/docs/_sources/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@
 
 Computation times
 =================
-**00:40.911** total execution time for 11 files **from all galleries**:
+**00:41.202** total execution time for 11 files **from all galleries**:
 
 .. container::
 
@@ -33,28 +33,28 @@ Computation times
      - Time
      - Mem (MB)
   * - :ref:`sphx_glr_how_to_tutorials_optimize_llm.py` (``../how_to/tutorials/optimize_llm.py``)
-     - 00:32.321
+     - 00:32.768
     - 0.0
   * - :ref:`sphx_glr_get_started_tutorials_ir_module.py` (``../get_started/tutorials/ir_module.py``)
-     - 00:06.041
+     - 00:05.937
     - 0.0
   * - :ref:`sphx_glr_how_to_tutorials_customize_opt.py` (``../how_to/tutorials/customize_opt.py``)
-     - 00:00.670
+     - 00:00.653
     - 0.0
   * - :ref:`sphx_glr_how_to_tutorials_cross_compilation_and_rpc.py` (``../how_to/tutorials/cross_compilation_and_rpc.py``)
-     - 00:00.573
+     - 00:00.580
     - 0.0
   * - :ref:`sphx_glr_how_to_tutorials_e2e_opt_model.py` (``../how_to/tutorials/e2e_opt_model.py``)
-     - 00:00.495
+     - 00:00.445
     - 0.0
   * - :ref:`sphx_glr_deep_dive_tensor_ir_tutorials_tir_transformation.py` (``../deep_dive/tensor_ir/tutorials/tir_transformation.py``)
-     - 00:00.292
+     - 00:00.295
     - 0.0
   * - :ref:`sphx_glr_get_started_tutorials_quick_start.py` (``../get_started/tutorials/quick_start.py``)
-     - 00:00.172
+     - 00:00.173
     - 0.0
   * - :ref:`sphx_glr_deep_dive_tensor_ir_tutorials_tir_creation.py` (``../deep_dive/tensor_ir/tutorials/tir_creation.py``)
-     - 00:00.170
+     - 00:00.173
     - 0.0
   * - :ref:`sphx_glr_deep_dive_relax_tutorials_relax_creation.py` (``../deep_dive/relax/tutorials/relax_creation.py``)
     - 00:00.109
diff --git a/docs/deep_dive/relax/tutorials/sg_execution_times.html b/docs/deep_dive/relax/tutorials/sg_execution_times.html
index 58fa03ea91c..5ed58b70cf5 100644
--- a/docs/deep_dive/relax/tutorials/sg_execution_times.html
+++ b/docs/deep_dive/relax/tutorials/sg_execution_times.html
@@ -294,7 +294,7 @@
             
   <section id="computation-times">
 <span id="sphx-glr-deep-dive-relax-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Link to this heading"></a></h1>
-<p><strong>00:00.175</strong> total execution time for 2 files <strong>from deep_dive/relax/tutorials</strong>:</p>
+<p><strong>00:00.176</strong> total execution time for 2 files <strong>from deep_dive/relax/tutorials</strong>:</p>
 <div class="docutils container">
 <style scoped>
 <link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.3.0/css/bootstrap.min.css" rel="stylesheet" />
diff --git a/docs/deep_dive/tensor_ir/tutorials/sg_execution_times.html b/docs/deep_dive/tensor_ir/tutorials/sg_execution_times.html
index 3a29c2ecec5..eec72e8c8ef 100644
--- a/docs/deep_dive/tensor_ir/tutorials/sg_execution_times.html
+++ b/docs/deep_dive/tensor_ir/tutorials/sg_execution_times.html
@@ -294,7 +294,7 @@
             
   <section id="computation-times">
 <span id="sphx-glr-deep-dive-tensor-ir-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Link to this heading"></a></h1>
-<p><strong>00:00.462</strong> total execution time for 2 files <strong>from deep_dive/tensor_ir/tutorials</strong>:</p>
+<p><strong>00:00.467</strong> total execution time for 2 files <strong>from deep_dive/tensor_ir/tutorials</strong>:</p>
 <div class="docutils container">
 <style scoped>
 <link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.3.0/css/bootstrap.min.css" rel="stylesheet" />
@@ -316,11 +316,11 @@ $(document).ready( function () {
 </thead>
 <tbody>
 <tr class="row-even"><td><p><a class="reference internal" href="tir_transformation.html#sphx-glr-deep-dive-tensor-ir-tutorials-tir-transformation-py"><span class="std std-ref">Transformation</span></a> (<code class="docutils literal notranslate"><span class="pre">tir_transformation.py</span></code>)</p></td>
-<td><p>00:00.292</p></td>
+<td><p>00:00.295</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tir_creation.html#sphx-glr-deep-dive-tensor-ir-tutorials-tir-creation-py"><span class="std std-ref">TensorIR Creation</span></a> (<code class="docutils literal notranslate"><span class="pre">tir_creation.py</span></code>)</p></td>
-<td><p>00:00.170</p></td>
+<td><p>00:00.173</p></td>
 <td><p>0.0</p></td>
 </tr>
 </tbody>
diff --git a/docs/deep_dive/tensor_ir/tutorials/tir_creation.html b/docs/deep_dive/tensor_ir/tutorials/tir_creation.html
index 65cf819a7f6..df43b1efa8b 100644
--- a/docs/deep_dive/tensor_ir/tutorials/tir_creation.html
+++ b/docs/deep_dive/tensor_ir/tutorials/tir_creation.html
@@ -490,17 +490,17 @@ be used to ascertain the shape and data type of a TensorIR.</p>
 <span class="nb">print</span><span class="p">(</span><span class="n">evaluate_dynamic_shape</span><span class="p">(</span><span class="n">dyn_shape_lib</span><span class="p">,</span> <span class="n">m</span><span class="o">=</span><span class="mi">64</span><span class="p">,</span> <span class="n">n</span><span class="o">=</span><span class="mi">64</span><span class="p">,</span> <a href="../../../reference/api/python/tir/tir.html#tvm.tir.IterVar" title="tvm.tir.IterVar" class="sphx-glr-ba [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[[1.0687387  1.7051318  0.53478473 1.2447947 ]
- [0.7868451  1.0988005  0.5212298  0.7379694 ]
- [0.59672266 1.1634411  0.8761779  1.6228483 ]
- [0.96112955 1.6041667  1.0700647  1.8662562 ]]
-[[31.131159 30.285185 32.26687  ... 33.98063  31.474232 32.56088 ]
- [31.128223 32.93424  30.910475 ... 34.165596 32.894897 32.72144 ]
- [31.077154 31.886332 32.360718 ... 35.320423 33.263638 34.036896]
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[[1.1769985  1.2572315  1.1415789  1.0133203 ]
+ [1.2764876  1.0447661  0.66577244 0.78141856]
+ [1.0145769  1.106837   0.9101097  0.8087496 ]
+ [0.8890752  0.849807   0.7336039  0.65813607]]
+[[30.404625 33.646057 33.519825 ... 33.448086 30.380777 30.60512 ]
+ [29.384144 32.906975 31.675583 ... 32.788273 27.780428 27.227789]
+ [31.738853 33.849815 33.60188  ... 34.209877 31.041008 28.577234]
  ...
- [30.598007 32.030666 31.956154 ... 34.328274 31.760609 34.897697]
- [29.185843 31.302677 29.177567 ... 33.337902 30.030025 34.534695]
- [29.86443  31.52192  29.263405 ... 32.486103 30.365438 32.073902]]
+ [32.214645 33.77155  35.148952 ... 33.381836 30.82872  31.09151 ]
+ [34.32221  34.37794  35.519894 ... 35.138638 32.85187  30.50213 ]
+ [29.590946 30.900402 30.792732 ... 33.40585  28.857025 26.465826]]
 </pre></div>
 </div>
 </section>
diff --git a/docs/deep_dive/tensor_ir/tutorials/tir_transformation.html b/docs/deep_dive/tensor_ir/tutorials/tir_transformation.html
index 3f483932788..ccc16741810 100644
--- a/docs/deep_dive/tensor_ir/tutorials/tir_transformation.html
+++ b/docs/deep_dive/tensor_ir/tutorials/tir_transformation.html
@@ -368,7 +368,7 @@ original implementation.</p>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   2.7287       2.7287       2.7287       2.7287       0.0000
+   2.7383       2.7383       2.7383       2.7383       0.0000
 </pre></div>
 </div>
 <section id="initialization-schedule">
@@ -464,7 +464,7 @@ class Module:
 
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   0.8568       0.8568       0.8568       0.8568       0.0000
+   0.8579       0.8579       0.8579       0.8579       0.0000
 </pre></div>
 </div>
 </section>
@@ -558,7 +558,7 @@ class Module:
 
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   0.3380       0.3380       0.3380       0.3380       0.0000
+   0.3377       0.3377       0.3377       0.3377       0.0000
 </pre></div>
 </div>
 </section>
diff --git a/docs/get_started/tutorials/ir_module.html b/docs/get_started/tutorials/ir_module.html
index 0fd68ce1065..df9e69750e9 100644
--- a/docs/get_started/tutorials/ir_module.html
+++ b/docs/get_started/tutorials/ir_module.html
@@ -806,16 +806,16 @@ backends.</p>
 <p>We can deploy the IRModule on CPU by specifying the target as <code class="docutils literal notranslate"><span class="pre">llvm</span></code>.</p>
 <div class="highlight-Python notranslate"><div class="highlight"><pre><span></span><a href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" title="tvm.relax.VMExecutable" class="sphx-glr-backref-module-tvm-relax sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">exec</span></a> <span class="o">=</span> <a href="../../reference/api/python/driver.html#tvm.compile" title="tvm.compile" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-func [...]
 <span class="n">dev</span> <span class="o">=</span> <span class="n">tvm</span><span class="o">.</span><span class="n">cpu</span><span class="p">()</span>
-<a href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" title="tvm.runtime.vm.VirtualMachine" class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">vm</span></a> <span class="o">=</span> <a href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" title="tvm.runtime.vm.VirtualMachine" class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class"><span class=" [...]
+<span class="n">vm</span> <span class="o">=</span> <a href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">relax</span><span class="o">.</span><span class="n">VirtualMachine</span></a><span class="p">(</span><a href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" title="tvm.relax.VMExecutable" class [...]

 <span class="n">raw_data</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">784</span><span class="p">)</span><span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
 <span class="n">data</span> <span class="o">=</span> <span class="n">tvm</span><span class="o">.</span><span class="n">runtime</span><span class="o">.</span><span class="n">tensor</span><span class="p">(</span><span class="n">raw_data</span><span class="p">,</span> <span class="n">dev</span><span class="p">)</span>
-<span class="n">cpu_out</span> <span class="o">=</span> <a href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" title="tvm.runtime.vm.VirtualMachine" class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">vm</span></a><span class="p">[</span><span class="s2">&quot;main&quot;</span><span class="p">](</span><span class="n">data</span><span class="p">,</span> <span class="o">*</span><a href="https:// [...]
+<span class="n">cpu_out</span> <span class="o">=</span> <span class="n">vm</span><span class="p">[</span><span class="s2">&quot;main&quot;</span><span class="p">](</span><span class="n">data</span><span class="p">,</span> <span class="o">*</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params_from_torch</span></a><span class="p">[</ [...]
 <span class="nb">print</span><span class="p">(</span><span class="n">cpu_out</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[[-0.01371826 -0.04909292 -0.07197412 -0.06977656  0.11713963  0.02887076
-   0.08924652 -0.27186972  0.05595757 -0.11292398]]
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[[ 0.03734957 -0.11577559  0.07155224  0.02716034  0.09010193  0.4489682
+   0.07317595  0.18780974  0.05461347 -0.00979966]]
 </pre></div>
 </div>
 </section>
@@ -838,19 +838,19 @@ the details of <code class="docutils literal notranslate"><span class="pre">DLig
 <p>Now we can compile the IRModule on GPU, the similar way as we did on CPU.</p>
 <div class="highlight-Python notranslate"><div class="highlight"><pre><span></span><a href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" title="tvm.relax.VMExecutable" class="sphx-glr-backref-module-tvm-relax sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">exec</span></a> <span class="o">=</span> <a href="../../reference/api/python/driver.html#tvm.compile" title="tvm.compile" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-func [...]
 <span class="n">dev</span> <span class="o">=</span> <span class="n">tvm</span><span class="o">.</span><span class="n">device</span><span class="p">(</span><span class="s2">&quot;cuda&quot;</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
-<a href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" title="tvm.runtime.vm.VirtualMachine" class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">vm</span></a> <span class="o">=</span> <a href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" title="tvm.runtime.vm.VirtualMachine" class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class"><span class=" [...]
+<span class="n">vm</span> <span class="o">=</span> <a href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">relax</span><span class="o">.</span><span class="n">VirtualMachine</span></a><span class="p">(</span><a href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" title="tvm.relax.VMExecutable" class [...]
 <span class="c1"># Need to allocate data and params on GPU device</span>
 <span class="n">data</span> <span class="o">=</span> <span class="n">tvm</span><span class="o">.</span><span class="n">runtime</span><span class="o">.</span><span class="n">tensor</span><span class="p">(</span><span class="n">raw_data</span><span class="p">,</span> <span class="n">dev</span><span class="p">)</span>
 <a href="https://docs.python.org/3/library/stdtypes.html#list" title="builtins.list" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">gpu_params</span></a> <span class="o">=</span> <span class="p">[</span><span class="n">tvm</span><span class="o">.</span><span
class="n">runtime</span><span class="o">.</span><span 
class="n">tensor</span><span class="p">(</span><span class="n">p</span><span 
class="p">,</span> <span class="n"> [...]
-<span class="n">gpu_out</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span class="s2">&quot;main&quot;</span><span 
class="p">](</span><span class="n">data</span><span class="p">,</span> <span 
class="o">*</span><a href="https:// [...]
+<span class="n">gpu_out</span> <span class="o">=</span> <span 
class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;main&quot;</span><span class="p">](</span><span 
class="n">data</span><span class="p">,</span> <span class="o">*</span><a 
href="https://docs.python.org/3/library/stdtypes.html#list"; 
title="builtins.list" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">gpu_params</span></a><span class="p">)</span><s [...]
 <span class="nb">print</span><span class="p">(</span><span 
class="n">gpu_out</span><span class="p">)</span>
 
 <span class="c1"># Check the correctness of the results</span>
 <span class="k">assert</span> <span class="n">np</span><span 
class="o">.</span><span class="n">allclose</span><span class="p">(</span><span 
class="n">cpu_out</span><span class="p">,</span> <span 
class="n">gpu_out</span><span class="p">,</span> <span 
class="n">atol</span><span class="o">=</span><span class="mf">1e-3</span><span 
class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>[[-0.01371826 -0.04909296 -0.07197418 
-0.06977656  0.11713966  0.02887076
-   0.08924655 -0.27186978  0.05595753 -0.11292395]]
+<div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>[[ 0.03734961 -0.11577554  0.07155231  
0.02716034  0.09010193  0.44896832
+   0.07317595  0.18780974  0.05461347 -0.00979968]]
 </pre></div>
 </div>
 </section>
diff --git a/docs/get_started/tutorials/quick_start.html 
b/docs/get_started/tutorials/quick_start.html
index 84d972bc3f2..289a0e33173 100644
--- a/docs/get_started/tutorials/quick_start.html
+++ b/docs/get_started/tutorials/quick_start.html
@@ -449,16 +449,16 @@ different devices.</p>
 <a href="../../reference/api/python/target.html#tvm.target.Target" 
title="tvm.target.Target" class="sphx-glr-backref-module-tvm-target 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">target</span></a> <span class="o">=</span> <a 
href="../../reference/api/python/target.html#tvm.target.Target" 
title="tvm.target.Target" class="sphx-glr-backref-module-tvm-target 
sphx-glr-backref-type-py-class"><span class="n">tvm</span><span 
class="o">.</span><span class="n">target< [...]
 <a href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">ex</span></a> <span class="o">=</span> <a 
href="../../reference/api/python/driver.html#tvm.compile" title="tvm.compile" 
class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span 
class="n">tvm</span><span class="o">.</span><span class="n">compile</span [...]
 <span class="n">device</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">cpu</span><span 
class="p">()</span>
-<a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a> <span 
class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm 
sphx-glr-backref-type-py-class"><span class=" [...]
+<span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" 
title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">relax</span><span class="o">.</span><span 
class="n">VirtualMachine</span></a><span class="p">(</span><a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" class [...]
 <span class="n">data</span> <span class="o">=</span> <span 
class="n">np</span><span class="o">.</span><span class="n">random</span><span 
class="o">.</span><span class="n">rand</span><span class="p">(</span><span 
class="mi">1</span><span class="p">,</span> <span class="mi">784</span><span 
class="p">)</span><span class="o">.</span><span class="n">astype</span><span 
class="p">(</span><span class="s2">&quot;float32&quot;</span><span 
class="p">)</span>
 <span class="n">tvm_data</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">runtime</span><span 
class="o">.</span><span class="n">tensor</span><span class="p">(</span><span 
class="n">data</span><span class="p">,</span> <span 
class="n">device</span><span class="o">=</span><span 
class="n">device</span><span class="p">)</span>
 <a href="https://docs.python.org/3/library/stdtypes.html#list"; 
title="builtins.list" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">params</span></a> <span class="o">=</span> <span 
class="p">[</span><span class="n">np</span><span class="o">.</span><span 
class="n">random</span><span class="o">.</span><span class="n">rand</span><span 
class="p">(</span><span class="o">*</span><span class="n">param</span><span 
class="o">.</sp [...]
 <a href="https://docs.python.org/3/library/stdtypes.html#list"; 
title="builtins.list" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">params</span></a> <span class="o">=</span> <span 
class="p">[</span><span class="n">tvm</span><span class="o">.</span><span 
class="n">runtime</span><span class="o">.</span><span 
class="n">tensor</span><span class="p">(</span><span 
class="n">param</span><span class="p">,</span> <span class="n"> [...]
-<span class="nb">print</span><span class="p">(</span><a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span class="s2">&quot;forward&quot;</span><span 
class="p">](</span><span class="n">tvm_data</span><span class="p">,</span> 
<span class="o">*</span><a href="http [...]
+<span class="nb">print</span><span class="p">(</span><span 
class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;forward&quot;</span><span class="p">](</span><span 
class="n">tvm_data</span><span class="p">,</span> <span class="o">*</span><a 
href="https://docs.python.org/3/library/stdtypes.html#list"; 
title="builtins.list" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">params</span></a><span class="p">)</span><s [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>[[24482.695 26273.605 26221.25  25371.219 
25193.14  25728.459 26213.656
-  26440.39  25291.395 24768.943]]
+<div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>[[25907.166 23492.523 25295.543 24138.213 
25481.852 25200.75  25039.266
+  24356.322 24633.045 22952.268]]
 </pre></div>
 </div>
 <p>Our goal is to bring machine learning to the application with any language 
of interest,
@@ -466,8 +466,8 @@ with the minimum runtime support.</p>
 <ul>
 <li><p>Each function in IRModule becomes a runnable function in the runtime. 
For example in LLM
 cases, we can call <code class="docutils literal notranslate"><span 
class="pre">prefill</span></code> and <code class="docutils literal 
notranslate"><span class="pre">decode</span></code> functions directly.</p>
-<div class="highlight-Python notranslate"><div 
class="highlight"><pre><span></span><span class="n">prefill_logits</span> <span 
class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span class="s2">&quot;prefill&quot;</span><span 
class="p">](</span> [...]
-<span class="n">decoded_logits</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span class="s2">&quot;decode&quot;</span><span 
class="p">](</span><span class="n">inputs</span><span class="p">,</span> <span 
class="n">weight</span>< [...]
+<div class="highlight-Python notranslate"><div 
class="highlight"><pre><span></span><span class="n">prefill_logits</span> <span 
class="o">=</span> <span class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;prefill&quot;</span><span class="p">](</span><span 
class="n">inputs</span><span class="p">,</span> <span 
class="n">weight</span><span class="p">,</span> <span 
class="n">kv_cache</span><span class="p">)</span>
+<span class="n">decoded_logits</span> <span class="o">=</span> <span 
class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;decode&quot;</span><span class="p">](</span><span 
class="n">inputs</span><span class="p">,</span> <span 
class="n">weight</span><span class="p">,</span> <span 
class="n">kv_cache</span><span class="p">)</span>
 </pre></div>
 </div>
 </li>
@@ -482,15 +482,15 @@ copy exchange with existing ecosystem (DLPack exchange 
with PyTorch)</p>
 </li>
 <li><p>TVM runtime works in non-python environments, so it works on settings 
such as mobile</p>
 <div class="highlight-C++ notranslate"><div 
class="highlight"><pre><span></span><span class="c1">// C++ snippet</span>
-<span class="n">runtime</span><span class="o">::</span><span 
class="n">Module</span><span class="w"> </span><a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span class="w"> 
</span><span class="o">=</span><span class="w"> </span><a 
href="../../reference/api/python/relax/relax.html#tvm.r [...]
-<a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">.</span><span class="n">GetFunction</span><span 
class="p">(</span><span class="s">&quot;init&quot;</span><span 
class="p">)(...);</span>
-<span class="n">Tensor</span><span class="w"> </span><span 
class="n">out</span><span class="w"> </span><span class="o">=</span><span 
class="w"> </span><a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">.</span><span class="n">GetFunction</span><span 
class="p">(</span><span [...]
+<span class="n">runtime</span><span class="o">::</span><span 
class="n">Module</span><span class="w"> </span><span class="n">vm</span><span 
class="w"> </span><span class="o">=</span><span class="w"> </span><a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">ex</span></a><span class="p">.</span><span class="n">GetFunction [...]
+<span class="n">vm</span><span class="p">.</span><span 
class="n">GetFunction</span><span class="p">(</span><span 
class="s">&quot;init&quot;</span><span class="p">)(...);</span>
+<span class="n">Tensor</span><span class="w"> </span><span 
class="n">out</span><span class="w"> </span><span class="o">=</span><span 
class="w"> </span><span class="n">vm</span><span class="p">.</span><span 
class="n">GetFunction</span><span class="p">(</span><span 
class="s">&quot;prefill&quot;</span><span class="p">)(</span><span 
class="n">data</span><span class="p">,</span><span class="w"> </span><span 
class="n">weight</span><span class="p">,</span><span class="w"> </span><span 
class="n" [...]
 </pre></div>
 </div>
 <div class="highlight-Java notranslate"><div 
class="highlight"><pre><span></span><span class="c1">// Java snippet</span>
-<span class="n">Module</span><span class="w"> </span><a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span class="w"> 
</span><span class="o">=</span><span class="w"> </span><a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" class [...]
-<a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">.</span><span class="na">getFunction</span><span 
class="p">(</span><span class="s">&quot;init&quot;</span><span 
class="p">).</span><span class="na">pushArg</span><span 
class="p">(...).</span><span class="na">invoke</span>< [...]
-<span class="n">Tensor</span><span class="w"> </span><span 
class="n">out</span><span class="w"> </span><span class="o">=</span><span 
class="w"> </span><a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">.</span><span class="na">getFunction</span><span 
class="p">(</span><spa [...]
+<span class="n">Module</span><span class="w"> </span><span 
class="n">vm</span><span class="w"> </span><span class="o">=</span><span 
class="w"> </span><a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">ex</span></a><span class="p">.</span><span 
class="na">getFunction</span><span class="p">(</span><span class="s">&quot;l 
[...]
+<span class="n">vm</span><span class="p">.</span><span 
class="na">getFunction</span><span class="p">(</span><span 
class="s">&quot;init&quot;</span><span class="p">).</span><span 
class="na">pushArg</span><span class="p">(...).</span><span 
class="na">invoke</span><span class="p">;</span>
+<span class="n">Tensor</span><span class="w"> </span><span 
class="n">out</span><span class="w"> </span><span class="o">=</span><span 
class="w"> </span><span class="n">vm</span><span class="p">.</span><span 
class="na">getFunction</span><span class="p">(</span><span 
class="s">&quot;prefill&quot;</span><span class="p">).</span><span 
class="na">pushArg</span><span class="p">(</span><span 
class="n">data</span><span class="p">).</span><span 
class="na">pushArg</span><span class="p">(</span><spa [...]
 </pre></div>
 </div>
 </li>
diff --git a/docs/get_started/tutorials/sg_execution_times.html 
b/docs/get_started/tutorials/sg_execution_times.html
index 10128d66770..dbe6f4a53b2 100644
--- a/docs/get_started/tutorials/sg_execution_times.html
+++ b/docs/get_started/tutorials/sg_execution_times.html
@@ -294,7 +294,7 @@
             
   <section id="computation-times">
 <span 
id="sphx-glr-get-started-tutorials-sg-execution-times"></span><h1>Computation 
times<a class="headerlink" href="#computation-times" title="Link to this 
heading"></a></h1>
-<p><strong>00:06.213</strong> total execution time for 2 files <strong>from 
get_started/tutorials</strong>:</p>
+<p><strong>00:06.109</strong> total execution time for 2 files <strong>from 
get_started/tutorials</strong>:</p>
 <div class="docutils container">
 <style scoped>
 <link 
href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.3.0/css/bootstrap.min.css";
 rel="stylesheet" />
@@ -316,11 +316,11 @@ $(document).ready( function () {
 </thead>
 <tbody>
 <tr class="row-even"><td><p><a class="reference internal" 
href="ir_module.html#sphx-glr-get-started-tutorials-ir-module-py"><span 
class="std std-ref">IRModule</span></a> (<code class="docutils literal 
notranslate"><span class="pre">ir_module.py</span></code>)</p></td>
-<td><p>00:06.041</p></td>
+<td><p>00:05.937</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" 
href="quick_start.html#sphx-glr-get-started-tutorials-quick-start-py"><span 
class="std std-ref">Quick Start</span></a> (<code class="docutils literal 
notranslate"><span class="pre">quick_start.py</span></code>)</p></td>
-<td><p>00:00.172</p></td>
+<td><p>00:00.173</p></td>
 <td><p>0.0</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tutorials/cross_compilation_and_rpc.html 
b/docs/how_to/tutorials/cross_compilation_and_rpc.html
index 2b456edf517..52e05a1d930 100644
--- a/docs/how_to/tutorials/cross_compilation_and_rpc.html
+++ b/docs/how_to/tutorials/cross_compilation_and_rpc.html
@@ -472,7 +472,7 @@ device and returns the measured cost. Network overhead is 
excluded.</p>
 <span class="nb">print</span><span class="p">(</span><span 
class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span 
class="n">cost</span><span class="si">:</span><span class="s2">g</span><span 
class="si">}</span><span class="s2"> secs/op&quot;</span><span 
class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>1.17e-07 secs/op
+<div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>1.12e-07 secs/op
 </pre></div>
 </div>
 </section>
@@ -823,8 +823,8 @@ for ONNX models. Simply replace <code class="docutils 
literal notranslate"><span
 Converted PyTorch model to Relax:
   - Number of parameters: 4
 Using local target for demonstration
-Exported library to: /tmp/tmpg7pb2i6l/model_deployed.so
-Saved parameters to: /tmp/tmpg7pb2i6l/model_params.npz
+Exported library to: /tmp/tmpfee9hh49/model_deployed.so
+Saved parameters to: /tmp/tmpfee9hh49/model_params.npz
 
 RPC workflow (works for any remote device):
 ==================================================
diff --git a/docs/how_to/tutorials/customize_opt.html 
b/docs/how_to/tutorials/customize_opt.html
index 6fdfaede422..17c57145e0f 100644
--- a/docs/how_to/tutorials/customize_opt.html
+++ b/docs/how_to/tutorials/customize_opt.html
@@ -609,16 +609,16 @@ pushing the performance to the limit. The current 
optimization may not be the be
 <p>We can build and deploy the optimized model to the TVM runtime.</p>
 <div class="highlight-Python notranslate"><div 
class="highlight"><pre><span></span><a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">ex</span></a> <span class="o">=</span> <a 
href="../../reference/api/python/driver.html#tvm.compile" title="tvm.compile" 
class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-functi [...]
 <span class="n">dev</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">device</span><span 
class="p">(</span><span class="s2">&quot;cuda&quot;</span><span 
class="p">,</span> <span class="mi">0</span><span class="p">)</span>
-<a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a> <span 
class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm 
sphx-glr-backref-type-py-class"><span class=" [...]
+<span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" 
title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">relax</span><span class="o">.</span><span 
class="n">VirtualMachine</span></a><span class="p">(</span><a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" class [...]
 <span class="c1"># Need to allocate data and params on GPU device</span>
 <span class="n">data</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">runtime</span><span 
class="o">.</span><span class="n">tensor</span><span class="p">(</span><span 
class="n">np</span><span class="o">.</span><span class="n">random</span><span 
class="o">.</span><span class="n">rand</span><span class="p">(</span><span 
class="o">*</span><a 
href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" class="sphx-glr-ba [...]
 <a href="https://docs.python.org/3/library/stdtypes.html#list"; 
title="builtins.list" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">gpu_params</span></a> <span class="o">=</span> <span 
class="p">[</span><span class="n">tvm</span><span class="o">.</span><span 
class="n">runtime</span><span class="o">.</span><span 
class="n">tensor</span><span class="p">(</span><span class="n">np</span><span 
class="o">.</span><span class="n"> [...]
-<span class="n">gpu_out</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span class="s2">&quot;forward&quot;</span><span 
class="p">](</span><span class="n">data</span><span class="p">,</span> <span 
class="o">*</span><a href="https [...]
+<span class="n">gpu_out</span> <span class="o">=</span> <span 
class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;forward&quot;</span><span class="p">](</span><span 
class="n">data</span><span class="p">,</span> <span class="o">*</span><a 
href="https://docs.python.org/3/library/stdtypes.html#list"; 
title="builtins.list" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">gpu_params</span></a><span class="p">)</span [...]
 <span class="nb">print</span><span class="p">(</span><span 
class="n">gpu_out</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>[[25882.344 26176.549 25374.908 25411.805 
23633.246 24337.842 24618.688
-  26268.145 25842.059 25533.342]]
+<div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>[[24750.184 25984.256 24911.973 24691.074 
25399.715 25066.629 24797.062
+  23657.46  25855.477 24778.65 ]]
 </pre></div>
 </div>
 </section>
diff --git a/docs/how_to/tutorials/e2e_opt_model.html 
b/docs/how_to/tutorials/e2e_opt_model.html
index 02092bf67ac..b4072ba6e69 100644
--- a/docs/how_to/tutorials/e2e_opt_model.html
+++ b/docs/how_to/tutorials/e2e_opt_model.html
@@ -329,8 +329,8 @@ PyTorch.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div 
class="highlight"><pre><span></span>Downloading: 
&quot;https://download.pytorch.org/models/resnet18-f37072fd.pth&quot; to 
/workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
 
   0%|          | 0.00/44.7M [00:00&lt;?, ?B/s]
- 33%|███▎      | 14.9M/44.7M [00:00&lt;00:00, 156MB/s]
-100%|██████████| 44.7M/44.7M [00:00&lt;00:00, 250MB/s]
+ 58%|█████▊    | 26.1M/44.7M [00:00&lt;00:00, 273MB/s]
+100%|██████████| 44.7M/44.7M [00:00&lt;00:00, 289MB/s]
 </pre></div>
 </div>
 </section>
@@ -431,7 +431,7 @@ We skip this step in the CI environment.</p>
         <span class="n">mod</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">tir</span><span 
class="o">.</span><span class="n">transform</span><span class="o">.</span><span 
class="n">DefaultGPUSchedule</span><span class="p">()(</span><span 
class="n">mod</span><span class="p">)</span>
     <span class="n">ex</span> <span class="o">=</span> <a 
href="../../reference/api/python/driver.html#tvm.compile" title="tvm.compile" 
class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span 
class="n">tvm</span><span class="o">.</span><span 
class="n">compile</span></a><span class="p">(</span><span 
class="n">mod</span><span class="p">,</span> <a 
href="../../reference/api/python/target.html#tvm.target.Target" 
title="tvm.target.Target" class="sphx-glr-backref-module-tvm [...]
     <span class="n">dev</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">device</span><span 
class="p">(</span><span class="s2">&quot;cuda&quot;</span><span 
class="p">,</span> <span class="mi">0</span><span class="p">)</span>
-    <span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm 
sphx-glr-backref-type-py-class"><span class="n">relax</span><span 
class="o">.</span><span class="n">VirtualMachine</span></a><span 
class="p">(</span><span class="n">ex</span><span class="p">,</span> <span 
class="n">dev</span><span class="p">)</span>
+    <span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" 
title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">relax</span><span class="o">.</span><span 
class="n">VirtualMachine</span></a><span class="p">(</span><span 
class="n">ex</span><span class="p">,</span> <span class="n">dev</span><span 
class="p">)</span>
     <span class="c1"># Need to allocate data and params on GPU device</span>
     <span class="n">gpu_data</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">runtime</span><span 
class="o">.</span><span class="n">tensor</span><span class="p">(</span><span 
class="n">np</span><span class="o">.</span><span class="n">random</span><span 
class="o">.</span><span class="n">rand</span><span class="p">(</span><span 
class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span 
class="p">,</span> <span class="mi">224< [...]
     <span class="n">gpu_params</span> <span class="o">=</span> <span 
class="p">[</span><span class="n">tvm</span><span class="o">.</span><span 
class="n">runtime</span><span class="o">.</span><span 
class="n">tensor</span><span class="p">(</span><span class="n">p</span><span 
class="p">,</span> <span class="n">dev</span><span class="p">)</span> <span 
class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span 
class="n">params</span><span class="p">[</span><span class="s2" [...]
diff --git a/docs/how_to/tutorials/export_and_load_executable.html 
b/docs/how_to/tutorials/export_and_load_executable.html
index 04274e32f0b..4731455cd73 100644
--- a/docs/how_to/tutorials/export_and_load_executable.html
+++ b/docs/how_to/tutorials/export_and_load_executable.html
@@ -441,7 +441,7 @@ runtime module directly.</p>
 <div class="highlight-Python notranslate"><div 
class="highlight"><pre><span></span><span class="k">if</span> <a 
href="https://docs.python.org/3/library/functions.html#bool"; 
title="builtins.bool" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">RUN_EXAMPLE</span></a><span class="p">:</span>
     <span class="n">loaded_rt_mod</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">runtime</span><span 
class="o">.</span><span class="n">load_module</span><span 
class="p">(</span><span class="nb">str</span><span class="p">(</span><span 
class="n">library_path</span><span class="p">))</span>
     <span class="n">dev</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">cpu</span><span 
class="p">(</span><span class="mi">0</span><span class="p">)</span>
-    <span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm 
sphx-glr-backref-type-py-class"><span class="n">relax</span><span 
class="o">.</span><span class="n">VirtualMachine</span></a><span 
class="p">(</span><span class="n">loaded_rt_mod</span><span class="p">,</span> 
<span class="n">dev</span><span class="p">)</span>
+    <span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" 
title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">relax</span><span class="o">.</span><span 
class="n">VirtualMachine</span></a><span class="p">(</span><span 
class="n">loaded_rt_mod</span><span class="p">,</span> <span 
class="n">dev</span><span class="p">)</span>
 
     <span class="c1"># Prepare input data</span>
     <span class="n">input_tensor</span> <span class="o">=</span> <span 
class="n">torch</span><span class="o">.</span><span class="n">randn</span><span 
class="p">(</span><span class="mi">1</span><span class="p">,</span> <span 
class="mi">1</span><span class="p">,</span> <span class="mi">28</span><span 
class="p">,</span> <span class="mi">28</span><span class="p">,</span> <span 
class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span 
class="o">.</span><span class="n">f [...]
@@ -522,7 +522,7 @@ of how to reload and run the model. Save this as <code 
class="docutils literal n
 
 <span class="c1"># Step 2: Create Virtual Machine</span>
 <span class="n">device</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">cpu</span><span 
class="p">(</span><span class="mi">0</span><span class="p">)</span>
-<span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm 
sphx-glr-backref-type-py-class"><span class="n">relax</span><span 
class="o">.</span><span class="n">VirtualMachine</span></a><span 
class="p">(</span><span class="n">lib</span><span class="p">,</span> <span 
class="n">device</span><span class="p">)</span>
+<span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" 
title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">relax</span><span class="o">.</span><span 
class="n">VirtualMachine</span></a><span class="p">(</span><span 
class="n">lib</span><span class="p">,</span> <span class="n">device</span><span 
class="p">)</span>
 
 <span class="c1"># Step 3: Load parameters from the .npz file</span>
 <span class="n">params_npz</span> <span class="o">=</span> <span 
class="n">np</span><span class="o">.</span><span class="n">load</span><span 
class="p">(</span><span 
class="s2">&quot;relax_export_artifacts/model_params.npz&quot;</span><span 
class="p">)</span>
@@ -557,7 +557,7 @@ To run on GPU instead of CPU, make the following 
changes:</p>
 </li>
 <li><p><strong>Use GPU device in the script</strong>:</p>
 <div class="highlight-python notranslate"><div 
class="highlight"><pre><span></span><span class="n">device</span> <span 
class="o">=</span> <span class="n">tvm</span><span class="o">.</span><span 
class="n">cuda</span><span class="p">(</span><span class="mi">0</span><span 
class="p">)</span>  <span class="c1"># Use CUDA device instead of CPU</span>
-<span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm 
sphx-glr-backref-type-py-class"><span class="n">relax</span><span 
class="o">.</span><span class="n">VirtualMachine</span></a><span 
class="p">(</span><span class="n">lib</span><span class="p">,</span> <span 
class="n">device</span><span class="p">)</span>
+<span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" 
title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">relax</span><span class="o">.</span><span 
class="n">VirtualMachine</span></a><span class="p">(</span><span 
class="n">lib</span><span class="p">,</span> <span class="n">device</span><span 
class="p">)</span>
 
 <span class="c1"># Load parameters to GPU</span>
 <span class="n">params</span> <span class="o">=</span> <span 
class="p">[</span><span class="n">tvm</span><span class="o">.</span><span 
class="n">runtime</span><span class="o">.</span><span 
class="n">tensor</span><span class="p">(</span><span 
class="n">params_npz</span><span class="p">[</span><span 
class="sa">f</span><span class="s2">&quot;p_</span><span 
class="si">{</span><span class="n">i</span><span class="si">}</span><span 
class="s2">&quot;</span><span class="p">],</span> <span class= [...]
@@ -622,7 +622,7 @@ for a comprehensive guide on:</p>
 
 <span class="c1"># Step 4: Load and run on remote device</span>
 <span class="n">lib</span> <span class="o">=</span> <span 
class="n">remote</span><span class="o">.</span><span 
class="n">load_module</span><span class="p">(</span><span 
class="s2">&quot;mlp_arm.so&quot;</span><span class="p">)</span>
-<span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm 
sphx-glr-backref-type-py-class"><span class="n">relax</span><span 
class="o">.</span><span class="n">VirtualMachine</span></a><span 
class="p">(</span><span class="n">lib</span><span class="p">,</span> <span 
class="n">remote</span><span class="o">.</span><span class="n">cpu</ [...]
+<span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" 
title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">relax</span><span class="o">.</span><span 
class="n">VirtualMachine</span></a><span class="p">(</span><span 
class="n">lib</span><span class="p">,</span> <span class="n">remote</span><span 
class="o">.</span><span cla [...]
 <span class="c1"># ... prepare input and params, then run inference</span>
 </pre></div>
 </div>
diff --git a/docs/how_to/tutorials/optimize_llm.html 
b/docs/how_to/tutorials/optimize_llm.html
index 502f457cd50..10b137374a3 100644
--- a/docs/how_to/tutorials/optimize_llm.html
+++ b/docs/how_to/tutorials/optimize_llm.html
@@ -726,7 +726,7 @@ is designed specifically for the LLMs.</p>
 
 <span class="k">with</span> <a 
href="../../reference/api/python/target.html#tvm.target.Target" 
title="tvm.target.Target" class="sphx-glr-backref-module-tvm-target 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">target</span></a><span class="p">:</span>
     <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">ex</span></a> <span class="o">=</span> <a 
href="../../reference/api/python/driver.html#tvm.compile" title="tvm.compile" 
class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span 
class="n">tvm</span><span class="o">.</span><span class="n">compile</ [...]
-    <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a> <span 
class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm 
sphx-glr-backref-type-py-class"><span cla [...]
+    <span class="n">vm</span> <span class="o">=</span> <a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VirtualMachine" 
title="tvm.relax.VirtualMachine" class="sphx-glr-backref-module-tvm-relax 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">relax</span><span class="o">.</span><span 
class="n">VirtualMachine</span></a><span class="p">(</span><a 
href="../../reference/api/python/relax/relax.html#tvm.relax.VMExecutable" 
title="tvm.relax.VMExecutable" c [...]
 </pre></div>
 </div>
 </section>
@@ -824,7 +824,7 @@ the model documentation for the correct tokenization and 
prompt format.</p>
 key and value tensors for the attention layer. Apache TVM provides a 
PagedKVCache to store the
 key and value tensors. We create the PagedKVCache with the specified 
parameters.</p>
 <div class="highlight-Python notranslate"><div 
class="highlight"><pre><span></span><span class="k">if</span> <span 
class="ow">not</span> <a 
href="https://docs.python.org/3/library/functions.html#bool"; 
title="builtins.bool" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span 
class="n">IS_IN_CI</span></a><span class="p">:</span>
-    <span class="n">kv_cache</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span 
class="s2">&quot;create_tir_paged_kv_cache&quot;</span><span class="p">](</span>
+    <span class="n">kv_cache</span> <span class="o">=</span> <span 
class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;create_tir_paged_kv_cache&quot;</span><span class="p">](</span>
         <a href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class"><span class="n">ShapeTuple</span></a><span 
class="p">([</span><span class="mi">1</span><span class="p">]),</span>  <span 
class="c1"># max_batch_size=1</span>
         <a href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class"><span class="n">ShapeTuple</span></a><span 
class="p">([</span><span class="mi">2048</span><span class="p">]),</span>  
<span class="c1"># max_total_seq_len=2048</span>
         <a href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class"><span class="n">ShapeTuple</span></a><span 
class="p">([</span><span class="mi">2048</span><span class="p">]),</span>  
<span class="c1"># prefill_chunk_size=2048</span>
@@ -841,7 +841,7 @@ compiled in the Relax IRModule to embed the tokens into the 
hidden states.</p>
 
 
 <span class="k">def</span><span class="w"> </span><span 
class="nf">embed</span><span class="p">(</span><span 
class="n">tokens</span><span class="p">,</span> <span 
class="n">params</span><span class="p">):</span>
-    <span class="n">_embed</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span class="s2">&quot;embed&quot;</span><span 
class="p">](</span><span class="n">tokens</span><span class="p">,</span> <span 
class="n">params</span><span  [...]
+    <span class="n">_embed</span> <span class="o">=</span> <span 
class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;embed&quot;</span><span class="p">](</span><span 
class="n">tokens</span><span class="p">,</span> <span 
class="n">params</span><span class="p">)</span>
     <span class="c1"># Reshape hidden from [seq_len, hidden_size] to [1, 
seq_len, hidden_size]</span>
     <span class="n">_embed</span> <span class="o">=</span> <span 
class="n">nd_view_func</span><span class="p">(</span><span 
class="n">_embed</span><span class="p">,</span> <a 
href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class"><span class="n">ShapeTuple</span></a><span 
class="p">([</span><span class="mi">1</span><span class="p">,</span> <span 
class="n">_embed</span><span class="o">.</s [...]
     <span class="k">return</span> <span class="n">_embed</span>
@@ -864,7 +864,7 @@ and <cite>end_forward_func</cite> to end the forward 
pass.</p>
     <span class="n">add_sequence_func</span><span class="p">(</span><span 
class="n">kv_cache</span><span class="p">,</span> <span 
class="n">seq_id</span><span class="p">)</span>
     <span class="n">hidden_states</span> <span class="o">=</span> <span 
class="n">embed</span><span class="p">(</span><span 
class="n">tokens</span><span class="p">,</span> <span 
class="n">params</span><span class="p">)</span>
     <span class="n">begin_forward_func</span><span class="p">(</span><span 
class="n">kv_cache</span><span class="p">,</span> <a 
href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class"><span class="n">ShapeTuple</span></a><span 
class="p">([</span><span class="n">seq_id</span><span class="p">]),</span> <a 
href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" cla [...]
-    <span class="n">logits</span><span class="p">,</span> <span 
class="n">kv_cache</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span class="s2">&quot;prefill&quot;</span><span 
class="p">](</span><span class="n">hidden_states</ [...]
+    <span class="n">logits</span><span class="p">,</span> <span 
class="n">kv_cache</span> <span class="o">=</span> <span 
class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;prefill&quot;</span><span class="p">](</span><span 
class="n">hidden_states</span><span class="p">,</span> <span 
class="n">kv_cache</span><span class="p">,</span> <span 
class="n">params</span><span class="p">)</span>
     <span class="n">end_forward_func</span><span class="p">(</span><span 
class="n">kv_cache</span><span class="p">)</span>
 </pre></div>
 </div>
@@ -896,7 +896,7 @@ IRModule to generate the token.</p>
         <span class="n">tokens</span> <span class="o">=</span> <span 
class="n">tvm</span><span class="o">.</span><span class="n">runtime</span><span 
class="o">.</span><span class="n">tensor</span><span class="p">(</span><span 
class="n">np</span><span class="o">.</span><span class="n">array</span><span 
class="p">([</span><span class="n">last_token</span><span 
class="p">])</span><span class="o">.</span><span class="n">astype</span><span 
class="p">(</span><span class="s2">&quot;int32&quot;< [...]
         <span class="n">hidden_states</span> <span class="o">=</span> <span 
class="n">embed</span><span class="p">(</span><span 
class="n">tokens</span><span class="p">,</span> <span 
class="n">params</span><span class="p">)</span>
         <span class="n">begin_forward_func</span><span class="p">(</span><span 
class="n">kv_cache</span><span class="p">,</span> <a 
href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" class="sphx-glr-backref-module-builtins 
sphx-glr-backref-type-py-class"><span class="n">ShapeTuple</span></a><span 
class="p">([</span><span class="n">seq_id</span><span class="p">]),</span> <a 
href="https://docs.python.org/3/library/stdtypes.html#tuple"; 
title="builtins.tuple" [...]
-        <span class="n">logits</span><span class="p">,</span> <span 
class="n">kv_cache</span> <span class="o">=</span> <a 
href="../../reference/api/python/runtime/vm.html#tvm.runtime.vm.VirtualMachine" 
title="tvm.runtime.vm.VirtualMachine" 
class="sphx-glr-backref-module-tvm-runtime-vm sphx-glr-backref-type-py-class 
sphx-glr-backref-instance"><span class="n">vm</span></a><span 
class="p">[</span><span class="s2">&quot;decode&quot;</span><span 
class="p">](</span><span class="n">hidden_state [...]
+        <span class="n">logits</span><span class="p">,</span> <span 
class="n">kv_cache</span> <span class="o">=</span> <span 
class="n">vm</span><span class="p">[</span><span 
class="s2">&quot;decode&quot;</span><span class="p">](</span><span 
class="n">hidden_states</span><span class="p">,</span> <span 
class="n">kv_cache</span><span class="p">,</span> <span 
class="n">params</span><span class="p">)</span>
 
         <span class="n">end_forward_func</span><span class="p">(</span><span 
class="n">kv_cache</span><span class="p">)</span>
         <span class="n">last_token</span> <span class="o">=</span> <span 
class="n">sample_token</span><span class="p">(</span><span 
class="n">logits</span><span class="p">)</span>
diff --git a/docs/how_to/tutorials/sg_execution_times.html 
b/docs/how_to/tutorials/sg_execution_times.html
index c350bf0519e..d259df89146 100644
--- a/docs/how_to/tutorials/sg_execution_times.html
+++ b/docs/how_to/tutorials/sg_execution_times.html
@@ -294,7 +294,7 @@
             
   <section id="computation-times">
 <span id="sphx-glr-how-to-tutorials-sg-execution-times"></span><h1>Computation 
times<a class="headerlink" href="#computation-times" title="Link to this 
heading"></a></h1>
-<p><strong>00:34.061</strong> total execution time for 5 files <strong>from 
how_to/tutorials</strong>:</p>
+<p><strong>00:34.449</strong> total execution time for 5 files <strong>from 
how_to/tutorials</strong>:</p>
 <div class="docutils container">
 <style scoped>
 <link 
href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.3.0/css/bootstrap.min.css";
 rel="stylesheet" />
@@ -316,19 +316,19 @@ $(document).ready( function () {
 </thead>
 <tbody>
 <tr class="row-even"><td><p><a class="reference internal" 
href="optimize_llm.html#sphx-glr-how-to-tutorials-optimize-llm-py"><span 
class="std std-ref">Optimize Large Language Model</span></a> (<code 
class="docutils literal notranslate"><span 
class="pre">optimize_llm.py</span></code>)</p></td>
-<td><p>00:32.321</p></td>
+<td><p>00:32.768</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" 
href="customize_opt.html#sphx-glr-how-to-tutorials-customize-opt-py"><span 
class="std std-ref">Customize Optimization</span></a> (<code class="docutils 
literal notranslate"><span class="pre">customize_opt.py</span></code>)</p></td>
-<td><p>00:00.670</p></td>
+<td><p>00:00.653</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" 
href="cross_compilation_and_rpc.html#sphx-glr-how-to-tutorials-cross-compilation-and-rpc-py"><span
 class="std std-ref">Cross Compilation and RPC</span></a> (<code 
class="docutils literal notranslate"><span 
class="pre">cross_compilation_and_rpc.py</span></code>)</p></td>
-<td><p>00:00.573</p></td>
+<td><p>00:00.580</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" 
href="e2e_opt_model.html#sphx-glr-how-to-tutorials-e2e-opt-model-py"><span 
class="std std-ref">End-to-End Optimize Model</span></a> (<code class="docutils 
literal notranslate"><span class="pre">e2e_opt_model.py</span></code>)</p></td>
-<td><p>00:00.495</p></td>
+<td><p>00:00.445</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" 
href="export_and_load_executable.html#sphx-glr-how-to-tutorials-export-and-load-executable-py"><span
 class="std std-ref">Export and Load Relax Executables</span></a> (<code 
class="docutils literal notranslate"><span 
class="pre">export_and_load_executable.py</span></code>)</p></td>
diff --git a/docs/objects.inv b/docs/objects.inv
index 353b65ebe02..190ef8fa33c 100644
Binary files a/docs/objects.inv and b/docs/objects.inv differ
diff --git a/docs/reference/api/python/relax/relax.html 
b/docs/reference/api/python/relax/relax.html
index 7169a4dd8d4..340175e62a6 100644
--- a/docs/reference/api/python/relax/relax.html
+++ b/docs/reference/api/python/relax/relax.html
@@ -2625,7 +2625,7 @@ Python functions stored in the IRModule’s pyfuncs 
attribute.</p>
 
 <dl class="py function">
 <dt class="sig sig-object py" id="tvm.relax.build">
-<span class="sig-prename descclassname"><span 
class="pre">tvm.relax.</span></span><span class="sig-name descname"><span 
class="pre">build</span></span><span class="sig-paren">(</span><em 
class="sig-param"><span class="n"><span class="pre">mod</span></span><span 
class="p"><span class="pre">:</span></span><span class="w"> </span><span 
class="n"><a class="reference internal" href="../ir.html#tvm.ir.IRModule" 
title="tvm.ir.module.IRModule"><span 
class="pre">IRModule</span></a></span></em>, < [...]
+<span class="sig-prename descclassname"><span 
class="pre">tvm.relax.</span></span><span class="sig-name descname"><span 
class="pre">build</span></span><span class="sig-paren">(</span><em 
class="sig-param"><span class="n"><span class="pre">mod</span></span><span 
class="p"><span class="pre">:</span></span><span class="w"> </span><span 
class="n"><a class="reference internal" href="../ir.html#tvm.ir.IRModule" 
title="tvm.ir.module.IRModule"><span 
class="pre">IRModule</span></a></span></em>, < [...]
 <dd><p>Build an IRModule to VM executable.</p>
 <dl class="field-list simple">
 <dt class="field-odd">Parameters<span class="colon">:</span></dt>
diff --git a/docs/reference/api/python/runtime/vm.html 
b/docs/reference/api/python/runtime/vm.html
index 5ded5663406..28ba2d21ac3 100644
--- a/docs/reference/api/python/runtime/vm.html
+++ b/docs/reference/api/python/runtime/vm.html
@@ -490,7 +490,7 @@ more details.</p>
 <div class="admonition seealso">
 <p class="admonition-title">See also</p>
 <dl class="simple">
-<dt><a class="reference internal" 
href="#tvm.runtime.vm.VMInstrumentReturnKind" 
title="tvm.runtime.vm.VMInstrumentReturnKind"><code class="xref py py-obj 
docutils literal notranslate"><span 
class="pre">VMInstrumentReturnKind</span></code></a></dt><dd><p>the possible 
return values in VM.</p>
+<dt><a class="reference internal" 
href="../relax/relax.html#tvm.relax.VMInstrumentReturnKind" 
title="tvm.runtime.vm.VMInstrumentReturnKind"><code class="xref py py-obj 
docutils literal notranslate"><span 
class="pre">VMInstrumentReturnKind</span></code></a></dt><dd><p>the possible 
return values in VM.</p>
 </dd>
 </dl>
 </div>
diff --git a/docs/searchindex.js b/docs/searchindex.js
index bd569056872..602038aabcf 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Cross Compile TVM Runtime": [[40, 
"cross-compile-tvm-runtime"]], "1. The lack of numpy on device machine caused 
the RPC server can\u2019t be launched.": [[40, 
"the-lack-of-numpy-on-device-machine-caused-the-rpc-server-can-t-be-launched"]],
 "2. Pack and Deploy to Device Machine": [[40, 
"pack-and-deploy-to-device-machine"]], "2. The lack of cloudpickle on device 
machine caused the RPC server can\u2019t be launched.": [[40, 
"the-lack-of-cloudpickle-on-devi [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Cross Compile TVM Runtime": [[40, 
"cross-compile-tvm-runtime"]], "1. The lack of numpy on device machine caused 
the RPC server can\u2019t be launched.": [[40, 
"the-lack-of-numpy-on-device-machine-caused-the-rpc-server-can-t-be-launched"]],
 "2. Pack and Deploy to Device Machine": [[40, 
"pack-and-deploy-to-device-machine"]], "2. The lack of cloudpickle on device 
machine caused the RPC server can\u2019t be launched.": [[40, 
"the-lack-of-cloudpickle-on-devi [...]
\ No newline at end of file
diff --git a/docs/sg_execution_times.html b/docs/sg_execution_times.html
index 4cbdafb04fd..7353ec1b63e 100644
--- a/docs/sg_execution_times.html
+++ b/docs/sg_execution_times.html
@@ -294,7 +294,7 @@
             
   <section id="computation-times">
 <span id="sphx-glr-sg-execution-times"></span><h1>Computation times<a 
class="headerlink" href="#computation-times" title="Link to this 
heading"></a></h1>
-<p><strong>00:40.911</strong> total execution time for 11 files <strong>from 
all galleries</strong>:</p>
+<p><strong>00:41.202</strong> total execution time for 11 files <strong>from 
all galleries</strong>:</p>
 <div class="docutils container">
 <style scoped>
 <link 
href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.3.0/css/bootstrap.min.css";
 rel="stylesheet" />
@@ -316,35 +316,35 @@ $(document).ready( function () {
 </thead>
 <tbody>
 <tr class="row-even"><td><p><a class="reference internal" 
href="how_to/tutorials/optimize_llm.html#sphx-glr-how-to-tutorials-optimize-llm-py"><span
 class="std std-ref">Optimize Large Language Model</span></a> (<code 
class="docutils literal notranslate"><span 
class="pre">../how_to/tutorials/optimize_llm.py</span></code>)</p></td>
-<td><p>00:32.321</p></td>
+<td><p>00:32.768</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" 
href="get_started/tutorials/ir_module.html#sphx-glr-get-started-tutorials-ir-module-py"><span
 class="std std-ref">IRModule</span></a> (<code class="docutils literal 
notranslate"><span 
class="pre">../get_started/tutorials/ir_module.py</span></code>)</p></td>
-<td><p>00:06.041</p></td>
+<td><p>00:05.937</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" 
href="how_to/tutorials/customize_opt.html#sphx-glr-how-to-tutorials-customize-opt-py"><span
 class="std std-ref">Customize Optimization</span></a> (<code class="docutils 
literal notranslate"><span 
class="pre">../how_to/tutorials/customize_opt.py</span></code>)</p></td>
-<td><p>00:00.670</p></td>
+<td><p>00:00.653</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" 
href="how_to/tutorials/cross_compilation_and_rpc.html#sphx-glr-how-to-tutorials-cross-compilation-and-rpc-py"><span
 class="std std-ref">Cross Compilation and RPC</span></a> (<code 
class="docutils literal notranslate"><span 
class="pre">../how_to/tutorials/cross_compilation_and_rpc.py</span></code>)</p></td>
-<td><p>00:00.573</p></td>
+<td><p>00:00.580</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" 
href="how_to/tutorials/e2e_opt_model.html#sphx-glr-how-to-tutorials-e2e-opt-model-py"><span
 class="std std-ref">End-to-End Optimize Model</span></a> (<code 
class="docutils literal notranslate"><span 
class="pre">../how_to/tutorials/e2e_opt_model.py</span></code>)</p></td>
-<td><p>00:00.495</p></td>
+<td><p>00:00.445</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" 
href="deep_dive/tensor_ir/tutorials/tir_transformation.html#sphx-glr-deep-dive-tensor-ir-tutorials-tir-transformation-py"><span
 class="std std-ref">Transformation</span></a> (<code class="docutils literal 
notranslate"><span 
class="pre">../deep_dive/tensor_ir/tutorials/tir_transformation.py</span></code>)</p></td>
-<td><p>00:00.292</p></td>
+<td><p>00:00.295</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" 
href="get_started/tutorials/quick_start.html#sphx-glr-get-started-tutorials-quick-start-py"><span
 class="std std-ref">Quick Start</span></a> (<code class="docutils literal 
notranslate"><span 
class="pre">../get_started/tutorials/quick_start.py</span></code>)</p></td>
-<td><p>00:00.172</p></td>
+<td><p>00:00.173</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" 
href="deep_dive/tensor_ir/tutorials/tir_creation.html#sphx-glr-deep-dive-tensor-ir-tutorials-tir-creation-py"><span
 class="std std-ref">TensorIR Creation</span></a> (<code class="docutils 
literal notranslate"><span 
class="pre">../deep_dive/tensor_ir/tutorials/tir_creation.py</span></code>)</p></td>
-<td><p>00:00.170</p></td>
+<td><p>00:00.173</p></td>
 <td><p>0.0</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" 
href="deep_dive/relax/tutorials/relax_creation.html#sphx-glr-deep-dive-relax-tutorials-relax-creation-py"><span
 class="std std-ref">Relax Creation</span></a> (<code class="docutils literal 
notranslate"><span 
class="pre">../deep_dive/relax/tutorials/relax_creation.py</span></code>)</p></td>

