hogepodge commented on a change in pull request #7612:
URL: https://github.com/apache/tvm/pull/7612#discussion_r590892335
##########
File path: tutorials/get_started/tensor_expr_get_started.py
##########
@@ -302,18 +371,437 @@
fadd_cl(a, b, c)
tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
-######################################################################
-# Summary
-# -------
-# This tutorial provides a walk through of TVM workflow using
-# a vector add example. The general workflow is
+################################################################################
+# .. note:: Code Specialization
+#
+# As you may have noticed, the declarations of A, B and C all take the same
+# shape argument, n. TVM will take advantage of this to pass only a single
+# shape argument to the kernel, as you will find in the printed device code.
+# This is one form of specialization.
+#
+# On the host side, TVM automatically generates code that checks the
+# constraints on the parameters, so passing arrays with different shapes
+# into fadd will raise an error.
+#
+# We can do more specialization. For example, we can write :code:`n =
+# tvm.runtime.convert(1024)` instead of :code:`n = te.var("n")` in the
+# computation declaration. The generated function will then only accept
+# vectors of length 1024.
+
+################################################################################
+# .. note:: TE Scheduling Primitives
+#
+# TVM includes a number of different scheduling primitives:
+#
+# - split: splits a specified axis into two axes by the defined factor.
+# - tile: splits a computation across two axes by the defined factors.
+# - fuse: fuses two consecutive axes of one computation.
+# - reorder: reorders the axes of a computation into a defined order.
+# - bind: binds a computation to a specific thread; useful in GPU
+#   programming.
+# - compute_at: by default, TVM will compute tensors at the root.
+#   compute_at specifies
Review comment:
I think it means at the root of the computation graph?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]