This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 3f77cc5 Build at Tue Nov 3 12:04:29 EST 2020
3f77cc5 is described below
commit 3f77cc56e40e531f529410f1d8b605e2a04dbc23
Author: tqchen <[email protected]>
AuthorDate: Tue Nov 3 12:04:29 2020 -0500
Build at Tue Nov 3 12:04:29 EST 2020
---
2019/05/30/pytorch-frontend.html | 4 ++--
2020/07/15/how-to-bring-your-own-codegen-to-tvm.html | 2 +-
atom.xml | 8 ++++----
feed.xml | 8 ++++----
rss.xml | 10 +++++-----
vta.html | 2 +-
6 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/2019/05/30/pytorch-frontend.html b/2019/05/30/pytorch-frontend.html
index 4f1ba30..c8e2ada 100644
--- a/2019/05/30/pytorch-frontend.html
+++ b/2019/05/30/pytorch-frontend.html
@@ -179,7 +179,7 @@ torch_tvm.enable()
<p>When <code class="language-plaintext highlighter-rouge">torch_tvm</code> is
enabled, subgraphs of PyTorch IR that can be converted to Relay <code
class="language-plaintext highlighter-rouge">Expr</code>s will be marked as
Relay-compatible. Since PyTorch IR does not always contain shape information,
none of the subgraphs can be compiled in a useful way before invocation.</p>
-<p>During user invocation, the PyTorch JIT runtime will determine input shape
information and compile the previously marked subgraphs with the new Relay C++
<a
href="https://github.com/pytorch/tvm/blob/master/torch_tvm/compiler.cpp#L226-L246">build
system</a>. The compilation is cached based on input shapes for subsequent
runs. More details can be found in the <a
href="https://github.com/pytorch/tvm/blob/master/README.md">README</a>.</p>
+<p>During user invocation, the PyTorch JIT runtime will determine input shape
information and compile the previously marked subgraphs with the new Relay C++
<a
href="https://github.com/pytorch/tvm/blob/main/torch_tvm/compiler.cpp#L226-L246">build
system</a>. The compilation is cached based on input shapes for subsequent
runs. More details can be found in the <a
href="https://github.com/pytorch/tvm/blob/main/README.md">README</a>.</p>
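The shape-keyed caching described above can be sketched in a few lines of Python. The names here (`CompileCache`, `compile_subgraph`) are illustrative only, not the actual torch_tvm internals:

```python
# Sketch of compilation caching keyed by input shapes, as described above.
# CompileCache and compile_subgraph are hypothetical names, not torch_tvm APIs.

class CompileCache:
    """Cache compiled artifacts keyed by the input shapes seen at call time."""

    def __init__(self, compile_fn):
        self._compile_fn = compile_fn
        self._cache = {}

    def get(self, subgraph_id, input_shapes):
        # Shapes must be hashable, so normalize them to a tuple of tuples.
        key = (subgraph_id, tuple(tuple(s) for s in input_shapes))
        if key not in self._cache:
            # First invocation with these shapes: compile once and remember.
            self._cache[key] = self._compile_fn(subgraph_id, input_shapes)
        return self._cache[key]


def compile_subgraph(subgraph_id, input_shapes):
    # Stand-in for the real Relay build; returns a dummy "compiled" record.
    return {"id": subgraph_id, "shapes": input_shapes}
```

A second call with the same shapes returns the cached artifact; a new shape triggers a fresh compile, matching the behavior described in the paragraph above.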
<p><code class="language-plaintext highlighter-rouge">torch_tvm</code> has a
continuous benchmark system set up, which monitors the performance of
ResNet18 on CPU.
Out of the box, TVM provides over twice the performance of the default
PyTorch JIT backend for various ResNet models.
@@ -227,7 +227,7 @@ with torch.no_grad():
print("Took {}s to run {} iters".format(tvm_time, iters))
</code></pre></div></div>
-<p>Much of this code comes from <a
href="https://github.com/pytorch/tvm/blob/master/test/benchmarks.py">benchmarks.py</a>.
Note that tuned parameters for AVX2 LLVM compilation is in the <code
class="language-plaintext highlighter-rouge">test/</code> folder of the
repo.</p>
+<p>Much of this code comes from <a
href="https://github.com/pytorch/tvm/blob/main/test/benchmarks.py">benchmarks.py</a>.
Note that tuned parameters for AVX2 LLVM compilation are in the <code
class="language-plaintext highlighter-rouge">test/</code> folder of the
repo.</p>
<p>If you are more comfortable using Relay directly, it is possible to simply
extract the expression directly from a
PyTorch function either via (implicit) tracing or TorchScript:</p>
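As a rough illustration of what (implicit) tracing does, the toy tracer below runs a function on symbolic inputs and records the operations applied to them, rebuilding them as an expression. This is a teaching sketch with made-up names, not TVM's or PyTorch's actual tracing machinery:

```python
# Toy tracer: running a plain Python function on Tracer inputs records the
# operations performed, yielding a symbolic expression string. This is the
# same basic idea tracing uses to extract a Relay Expr from a PyTorch
# function. All names here are illustrative, not real TVM/PyTorch APIs.

class Tracer:
    def __init__(self, expr):
        self.expr = expr

    def __add__(self, other):
        rhs = other.expr if isinstance(other, Tracer) else repr(other)
        return Tracer(f"add({self.expr}, {rhs})")

    def __mul__(self, other):
        rhs = other.expr if isinstance(other, Tracer) else repr(other)
        return Tracer(f"mul({self.expr}, {rhs})")


def trace(fn, *arg_names):
    # Run fn once on symbolic inputs and return the recorded expression.
    args = [Tracer(name) for name in arg_names]
    return fn(*args).expr


def model(x, y):
    return x * y + x

# trace(model, "x", "y") -> "add(mul(x, y), x)"
```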
diff --git a/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
b/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
index 155ea18..c88e9db 100644
--- a/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
+++ b/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
@@ -268,7 +268,7 @@ Figure 4: After Graph Partitioning.
<p>In the above example, we specify a list of operators that can be supported
by DNNL codegen.</p>
<h3 id="rules-for-graph-patterns">Rules for graph patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D
+ add + ReLU) to be a single instruction or an API. In this case, you can
specify a mapping from a graph pattern to your instruction/API. For the case of
the DNNL, its Conv2D API already includes bias addition and it allows the next
ReLU to be attached, so we can call DNNL as the following code snippet (the
complete implementation can be found <a
href="[https://github.com/apache/incubator-tvm/blob/master/src/r [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D
+ add + ReLU) into a single instruction or API call. In this case, you can
specify a mapping from a graph pattern to your instruction/API. In the case of
DNNL, its Conv2D API already includes bias addition and allows the next
ReLU to be attached, so we can invoke DNNL as in the following code snippet (the
complete implementation can be found <a
href="[https://github.com/apache/incubator-tvm/blob/main/src/run [...]
<div class="language-c highlighter-rouge"><div class="highlight"><pre
class="highlight"><code><span class="n">DNNLConv2d</span><span
class="p">(</span><span class="k">const</span> <span class="n">bool</span>
<span class="n">has_bias</span> <span class="o">=</span> <span
class="nb">false</span><span class="p">,</span> <span class="k">const</span>
<span class="n">bool</span> <span class="n">has_relu</span> <span
class="o">=</span> <span class="nb">false</span><span class="p">)</span> <span
[...]
<span class="c1">// ... skip ...</span>
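The pattern rule above can be illustrated with a deliberately simplified sketch: scan an operator sequence for a conv2d → add → relu run and collapse it into a single fused call. The names (`fuse_conv2d_add_relu`, `dnnl.fused_conv2d_bias_relu`) are hypothetical; TVM's real implementation uses its pattern machinery rather than a linear scan over op names:

```python
# Sketch of mapping a graph pattern (conv2d + add + relu) to a single fused
# call, as the DNNL Conv2D API supports. Purely illustrative: the fused op
# name and the linear-scan matching are not TVM's actual mechanism.

FUSABLE = ("conv2d", "add", "relu")

def fuse_conv2d_add_relu(ops):
    """Replace every conv2d -> add -> relu run in `ops` with one fused op."""
    fused, i = [], 0
    while i < len(ops):
        if tuple(ops[i:i + 3]) == FUSABLE:
            # The window matches the pattern: emit one fused instruction.
            fused.append("dnnl.fused_conv2d_bias_relu")
            i += 3
        else:
            fused.append(ops[i])
            i += 1
    return fused

# fuse_conv2d_add_relu(["conv2d", "add", "relu", "pool"])
# -> ["dnnl.fused_conv2d_bias_relu", "pool"]
```

A partial match (e.g., conv2d followed directly by relu, with no bias add) is left untouched, mirroring how a pattern rule only fires on the exact fusable sequence.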
diff --git a/atom.xml b/atom.xml
index 3b07147..0a39c0e 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
<title>TVM</title>
<link href="https://tvm.apache.org" rel="self"/>
<link href="https://tvm.apache.org"/>
- <updated>2020-11-03T09:01:59-05:00</updated>
+ <updated>2020-11-03T12:04:27-05:00</updated>
<id>https://tvm.apache.org</id>
<author>
<name></name>
@@ -426,7 +426,7 @@ Figure 4: After Graph Partitioning.
<p>In the above example, we specify a list of operators that can be
supported by DNNL codegen.</p>
<h3 id="rules-for-graph-patterns">Rules for graph
patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g.,
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you
can specify a mapping from a graph pattern to your instruction/API. For the
case of the DNNL, its Conv2D API already includes bias addition and it allows
the next ReLU to be attached, so we can call DNNL as the following code snippet
(the complete implementation can be found <a
href="[https://github.com/apache/incubator-tvm/blo [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g.,
Conv2D + add + ReLU) into a single instruction or API call. In this case, you
can specify a mapping from a graph pattern to your instruction/API. In the
case of DNNL, its Conv2D API already includes bias addition and allows
the next ReLU to be attached, so we can invoke DNNL as in the following code snippet
(the complete implementation can be found <a
href="[https://github.com/apache/incubator-tvm/blo [...]
<div class="language-c highlighter-rouge"><div
class="highlight"><pre
class="highlight"><code><span
class="n">DNNLConv2d</span><span
class="p">(</span><span
class="k">const</span> <span
class="n">bool</span> <span
class="n">has_bias</span> <span
class="o">=</span> <span class="nb">false<
[...]
<span class="c1">// ... skip ...</span>
@@ -1735,7 +1735,7 @@ torch_tvm.enable()
<p>When <code class="language-plaintext
highlighter-rouge">torch_tvm</code> is enabled, subgraphs of
PyTorch IR that can be converted to Relay <code
class="language-plaintext highlighter-rouge">Expr</code>s
will be marked as Relay-compatible. Since PyTorch IR does not always contain
shape information, none of the subgraphs can be compiled in a useful way before
invocation.</p>
-<p>During user invocation, the PyTorch JIT runtime will determine input
shape information and compile the previously marked subgraphs with the new
Relay C++ <a
href="https://github.com/pytorch/tvm/blob/master/torch_tvm/compiler.cpp#L226-L246">build
system</a>. The compilation is cached based on input shapes for
subsequent runs. More details can be found in the <a
href="https://github.com/pytorch/tvm/blob/master/README.md">README</a>.</p>
+<p>During user invocation, the PyTorch JIT runtime will determine input
shape information and compile the previously marked subgraphs with the new
Relay C++ <a
href="https://github.com/pytorch/tvm/blob/main/torch_tvm/compiler.cpp#L226-L246">build
system</a>. The compilation is cached based on input shapes for
subsequent runs. More details can be found in the <a
href="https://github.com/pytorch/tvm/blob/main/README.md">README</a>.</p>
<p><code class="language-plaintext
highlighter-rouge">torch_tvm</code> has a continuous benchmark
system set up, which monitors the performance of ResNet18 on CPU.
Out of the box, TVM provides over twice the performance of the default
PyTorch JIT backend for various ResNet models.
@@ -1783,7 +1783,7 @@ with torch.no_grad():
print("Took {}s to run {} iters".format(tvm_time, iters))
</code></pre></div></div>
-<p>Much of this code comes from <a
href="https://github.com/pytorch/tvm/blob/master/test/benchmarks.py">benchmarks.py</a>.
Note that tuned parameters for AVX2 LLVM compilation is in the <code
class="language-plaintext highlighter-rouge">test/</code>
folder of the repo.</p>
+<p>Much of this code comes from <a
href="https://github.com/pytorch/tvm/blob/main/test/benchmarks.py">benchmarks.py</a>.
Note that tuned parameters for AVX2 LLVM compilation are in the <code
class="language-plaintext highlighter-rouge">test/</code>
folder of the repo.</p>
<p>If you are more comfortable using Relay directly, it is possible to
simply extract the expression directly from a
PyTorch function either via (implicit) tracing or TorchScript:</p>
diff --git a/feed.xml b/feed.xml
index 2406202..c4d1373 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="utf-8"?><feed
xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/"
version="4.1.1">Jekyll</generator><link href="/feed.xml" rel="self"
type="application/atom+xml" /><link href="/" rel="alternate" type="text/html"
/><updated>2020-11-03T09:01:59-05:00</updated><id>/feed.xml</id><title
type="html">TVM</title><author><name>{"name"=>nil}</name></author><entry><title
type="html">Bring Your Own Datatypes: Enabling Custom Datatype [...]
+<?xml version="1.0" encoding="utf-8"?><feed
xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/"
version="4.1.1">Jekyll</generator><link href="/feed.xml" rel="self"
type="application/atom+xml" /><link href="/" rel="alternate" type="text/html"
/><updated>2020-11-03T12:04:27-05:00</updated><id>/feed.xml</id><title
type="html">TVM</title><author><name>{"name"=>nil}</name></author><entry><title
type="html">Bring Your Own Datatypes: Enabling Custom Datatype [...]
<h2 id="introduction">Introduction</h2>
@@ -398,7 +398,7 @@ Figure 4: After Graph Partitioning.
<p>In the above example, we specify a list of operators that can be
supported by DNNL codegen.</p>
<h3 id="rules-for-graph-patterns">Rules for graph
patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g.,
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you
can specify a mapping from a graph pattern to your instruction/API. For the
case of the DNNL, its Conv2D API already includes bias addition and it allows
the next ReLU to be attached, so we can call DNNL as the following code snippet
(the complete implementation can be found <a
href="[https://github.com/apache/incubator-tvm/blo [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g.,
Conv2D + add + ReLU) into a single instruction or API call. In this case, you
can specify a mapping from a graph pattern to your instruction/API. In the
case of DNNL, its Conv2D API already includes bias addition and allows
the next ReLU to be attached, so we can invoke DNNL as in the following code snippet
(the complete implementation can be found <a
href="[https://github.com/apache/incubator-tvm/blo [...]
<div class="language-c highlighter-rouge"><div
class="highlight"><pre
class="highlight"><code><span
class="n">DNNLConv2d</span><span
class="p">(</span><span
class="k">const</span> <span
class="n">bool</span> <span
class="n">has_bias</span> <span
class="o">=</span> <span class="nb">false<
[...]
<span class="c1">// ... skip ...</span>
@@ -1668,7 +1668,7 @@ torch_tvm.enable()
<p>When <code class="language-plaintext
highlighter-rouge">torch_tvm</code> is enabled, subgraphs of
PyTorch IR that can be converted to Relay <code
class="language-plaintext highlighter-rouge">Expr</code>s
will be marked as Relay-compatible. Since PyTorch IR does not always contain
shape information, none of the subgraphs can be compiled in a useful way before
invocation.</p>
-<p>During user invocation, the PyTorch JIT runtime will determine input
shape information and compile the previously marked subgraphs with the new
Relay C++ <a
href="https://github.com/pytorch/tvm/blob/master/torch_tvm/compiler.cpp#L226-L246">build
system</a>. The compilation is cached based on input shapes for
subsequent runs. More details can be found in the <a
href="https://github.com/pytorch/tvm/blob/master/README.md">README</a>.</p>
+<p>During user invocation, the PyTorch JIT runtime will determine input
shape information and compile the previously marked subgraphs with the new
Relay C++ <a
href="https://github.com/pytorch/tvm/blob/main/torch_tvm/compiler.cpp#L226-L246">build
system</a>. The compilation is cached based on input shapes for
subsequent runs. More details can be found in the <a
href="https://github.com/pytorch/tvm/blob/main/README.md">README</a>.</p>
<p><code class="language-plaintext
highlighter-rouge">torch_tvm</code> has a continuous benchmark
system set up, which monitors the performance of ResNet18 on CPU.
Out of the box, TVM provides over twice the performance of the default
PyTorch JIT backend for various ResNet models.
@@ -1716,7 +1716,7 @@ with torch.no_grad():
print("Took {}s to run {} iters".format(tvm_time, iters))
</code></pre></div></div>
-<p>Much of this code comes from <a
href="https://github.com/pytorch/tvm/blob/master/test/benchmarks.py">benchmarks.py</a>.
Note that tuned parameters for AVX2 LLVM compilation is in the <code
class="language-plaintext highlighter-rouge">test/</code>
folder of the repo.</p>
+<p>Much of this code comes from <a
href="https://github.com/pytorch/tvm/blob/main/test/benchmarks.py">benchmarks.py</a>.
Note that tuned parameters for AVX2 LLVM compilation are in the <code
class="language-plaintext highlighter-rouge">test/</code>
folder of the repo.</p>
<p>If you are more comfortable using Relay directly, it is possible to
simply extract the expression directly from a
PyTorch function either via (implicit) tracing or TorchScript:</p>
diff --git a/rss.xml b/rss.xml
index cc2324e..6b7d361 100644
--- a/rss.xml
+++ b/rss.xml
@@ -5,8 +5,8 @@
<description>TVM - </description>
<link>https://tvm.apache.org</link>
<atom:link href="https://tvm.apache.org" rel="self"
type="application/rss+xml" />
- <lastBuildDate>Tue, 03 Nov 2020 09:01:59 -0500</lastBuildDate>
- <pubDate>Tue, 03 Nov 2020 09:01:59 -0500</pubDate>
+ <lastBuildDate>Tue, 03 Nov 2020 12:04:27 -0500</lastBuildDate>
+ <pubDate>Tue, 03 Nov 2020 12:04:27 -0500</pubDate>
<ttl>60</ttl>
@@ -421,7 +421,7 @@ Figure 4: After Graph Partitioning.
<p>In the above example, we specify a list of operators that can be
supported by DNNL codegen.</p>
<h3 id="rules-for-graph-patterns">Rules for graph
patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g.,
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you
can specify a mapping from a graph pattern to your instruction/API. For the
case of the DNNL, its Conv2D API already includes bias addition and it allows
the next ReLU to be attached, so we can call DNNL as the following code snippet
(the complete implementation can be found <a
href="[https://github.com/apache/incubator-tvm/blo [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g.,
Conv2D + add + ReLU) into a single instruction or API call. In this case, you
can specify a mapping from a graph pattern to your instruction/API. In the
case of DNNL, its Conv2D API already includes bias addition and allows
the next ReLU to be attached, so we can invoke DNNL as in the following code snippet
(the complete implementation can be found <a
href="[https://github.com/apache/incubator-tvm/blo [...]
<div class="language-c highlighter-rouge"><div
class="highlight"><pre
class="highlight"><code><span
class="n">DNNLConv2d</span><span
class="p">(</span><span
class="k">const</span> <span
class="n">bool</span> <span
class="n">has_bias</span> <span
class="o">=</span> <span class="nb">false<
[...]
<span class="c1">// ... skip ...</span>
@@ -1730,7 +1730,7 @@ torch_tvm.enable()
<p>When <code class="language-plaintext
highlighter-rouge">torch_tvm</code> is enabled, subgraphs of
PyTorch IR that can be converted to Relay <code
class="language-plaintext highlighter-rouge">Expr</code>s
will be marked as Relay-compatible. Since PyTorch IR does not always contain
shape information, none of the subgraphs can be compiled in a useful way before
invocation.</p>
-<p>During user invocation, the PyTorch JIT runtime will determine input
shape information and compile the previously marked subgraphs with the new
Relay C++ <a
href="https://github.com/pytorch/tvm/blob/master/torch_tvm/compiler.cpp#L226-L246">build
system</a>. The compilation is cached based on input shapes for
subsequent runs. More details can be found in the <a
href="https://github.com/pytorch/tvm/blob/master/README.md">README</a>.</p>
+<p>During user invocation, the PyTorch JIT runtime will determine input
shape information and compile the previously marked subgraphs with the new
Relay C++ <a
href="https://github.com/pytorch/tvm/blob/main/torch_tvm/compiler.cpp#L226-L246">build
system</a>. The compilation is cached based on input shapes for
subsequent runs. More details can be found in the <a
href="https://github.com/pytorch/tvm/blob/main/README.md">README</a>.</p>
<p><code class="language-plaintext
highlighter-rouge">torch_tvm</code> has a continuous benchmark
system set up, which monitors the performance of ResNet18 on CPU.
Out of the box, TVM provides over twice the performance of the default
PyTorch JIT backend for various ResNet models.
@@ -1778,7 +1778,7 @@ with torch.no_grad():
print("Took {}s to run {} iters".format(tvm_time, iters))
</code></pre></div></div>
-<p>Much of this code comes from <a
href="https://github.com/pytorch/tvm/blob/master/test/benchmarks.py">benchmarks.py</a>.
Note that tuned parameters for AVX2 LLVM compilation is in the <code
class="language-plaintext highlighter-rouge">test/</code>
folder of the repo.</p>
+<p>Much of this code comes from <a
href="https://github.com/pytorch/tvm/blob/main/test/benchmarks.py">benchmarks.py</a>.
Note that tuned parameters for AVX2 LLVM compilation are in the <code
class="language-plaintext highlighter-rouge">test/</code>
folder of the repo.</p>
<p>If you are more comfortable using Relay directly, it is possible to
simply extract the expression directly from a
PyTorch function either via (implicit) tracing or TorchScript:</p>
diff --git a/vta.html b/vta.html
index 8766699..fd6d6f4 100644
--- a/vta.html
+++ b/vta.html
@@ -149,7 +149,7 @@ The current release includes a behavioral hardware
simulator, as well as the inf
By extending the TVM stack with a customizable, open-source deep learning
hardware accelerator design, we expose a transparent end-to-end deep
learning stack from the high-level deep learning framework down to the actual
hardware design and implementation.
This forms a truly end-to-end, software-to-hardware open-source stack for
deep learning systems.</p>
-<p style="text-align: center"><img
src="https://raw.githubusercontent.com/uwsampl/web-data/master/vta/blogpost/vta_stack.png"
alt="image" width="50%" /></p>
+<p style="text-align: center"><img
src="https://raw.githubusercontent.com/uwsampl/web-data/main/vta/blogpost/vta_stack.png"
alt="image" width="50%" /></p>
<p>The VTA and TVM stack together constitute a blueprint for an end-to-end,
accelerator-centric deep learning system that can:</p>