This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 3f77cc5  Build at Tue Nov  3 12:04:29 EST 2020
3f77cc5 is described below

commit 3f77cc56e40e531f529410f1d8b605e2a04dbc23
Author: tqchen <[email protected]>
AuthorDate: Tue Nov 3 12:04:29 2020 -0500

    Build at Tue Nov  3 12:04:29 EST 2020
---
 2019/05/30/pytorch-frontend.html                     |  4 ++--
 2020/07/15/how-to-bring-your-own-codegen-to-tvm.html |  2 +-
 atom.xml                                             |  8 ++++----
 feed.xml                                             |  8 ++++----
 rss.xml                                              | 10 +++++-----
 vta.html                                             |  2 +-
 6 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/2019/05/30/pytorch-frontend.html b/2019/05/30/pytorch-frontend.html
index 4f1ba30..c8e2ada 100644
--- a/2019/05/30/pytorch-frontend.html
+++ b/2019/05/30/pytorch-frontend.html
@@ -179,7 +179,7 @@ torch_tvm.enable()
 
 <p>When <code class="language-plaintext highlighter-rouge">torch_tvm</code> is 
enabled, subgraphs of PyTorch IR that can be converted to Relay <code 
class="language-plaintext highlighter-rouge">Expr</code>s will be marked as 
Relay-compatible.  Since PyTorch IR does not always contain shape information, 
none of the subgraphs can be compiled in a useful way before invocation.</p>
 
-<p>During user invocation, the PyTorch JIT runtime will determine input shape 
information and compile the previously marked subgraphs with the new Relay C++ 
<a 
href="https://github.com/pytorch/tvm/blob/master/torch_tvm/compiler.cpp#L226-L246">build
 system</a>.  The compilation is cached based on input shapes for subsequent 
runs.  More details can be found in the <a 
href="https://github.com/pytorch/tvm/blob/master/README.md">README</a>.</p>
+<p>During user invocation, the PyTorch JIT runtime will determine input shape 
information and compile the previously marked subgraphs with the new Relay C++ 
<a 
href="https://github.com/pytorch/tvm/blob/main/torch_tvm/compiler.cpp#L226-L246">build
 system</a>.  The compilation is cached based on input shapes for subsequent 
runs.  More details can be found in the <a 
href="https://github.com/pytorch/tvm/blob/main/README.md">README</a>.</p>
 
 <p><code class="language-plaintext highlighter-rouge">torch_tvm</code> has a 
continuous benchmark system set up, which is monitoring the performance of 
ResNet18 on CPU.
 Out of the box TVM provides over two times the performance of the default 
PyTorch JIT backend for various ResNet models.
@@ -227,7 +227,7 @@ with torch.no_grad():
     print("Took {}s to run {} iters".format(tvm_time, iters))
 </code></pre></div></div>
 
-<p>Much of this code comes from <a 
href="https://github.com/pytorch/tvm/blob/master/test/benchmarks.py">benchmarks.py</a>.
  Note that tuned parameters for AVX2 LLVM compilation is in the <code 
class="language-plaintext highlighter-rouge">test/</code> folder of the 
repo.</p>
+<p>Much of this code comes from <a 
href="https://github.com/pytorch/tvm/blob/main/test/benchmarks.py">benchmarks.py</a>.
  Note that tuned parameters for AVX2 LLVM compilation is in the <code 
class="language-plaintext highlighter-rouge">test/</code> folder of the 
repo.</p>
 
 <p>If you are more comfortable using Relay directly, it is possible to simply 
extract the expression directly from a
 PyTorch function either via (implicit) tracing or TorchScript:</p>
diff --git a/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html 
b/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
index 155ea18..c88e9db 100644
--- a/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
+++ b/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
@@ -268,7 +268,7 @@ Figure 4: After Graph Partitioning.
 <p>In the above example, we specify a list of operators that can be supported 
by DNNL codegen.</p>
 
 <h3 id="rules-for-graph-patterns">Rules for graph patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D 
+ add + ReLU) to be a single instruction or an API. In this case, you can 
specify a mapping from a graph pattern to your instruction/API. For the case of 
the DNNL, its Conv2D API already includes bias addition and it allows the next 
ReLU to be attached, so we can call DNNL as the following code snippet (the 
complete implementation can be found <a 
href="[https://github.com/apache/incubator-tvm/blob/master/src/r [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D 
+ add + ReLU) to be a single instruction or an API. In this case, you can 
specify a mapping from a graph pattern to your instruction/API. For the case of 
the DNNL, its Conv2D API already includes bias addition and it allows the next 
ReLU to be attached, so we can call DNNL as the following code snippet (the 
complete implementation can be found <a 
href="[https://github.com/apache/incubator-tvm/blob/main/src/run [...]
 
 <div class="language-c highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code><span class="n">DNNLConv2d</span><span 
class="p">(</span><span class="k">const</span> <span class="n">bool</span> 
<span class="n">has_bias</span> <span class="o">=</span> <span 
class="nb">false</span><span class="p">,</span> <span class="k">const</span> 
<span class="n">bool</span> <span class="n">has_relu</span> <span 
class="o">=</span> <span class="nb">false</span><span class="p">)</span> <span 
[...]
   <span class="c1">// ... skip ...</span>
diff --git a/atom.xml b/atom.xml
index 3b07147..0a39c0e 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
  <title>TVM</title>
 <link href="https://tvm.apache.org" rel="self"/>
  <link href="https://tvm.apache.org"/>
- <updated>2020-11-03T09:01:59-05:00</updated>
+ <updated>2020-11-03T12:04:27-05:00</updated>
  <id>https://tvm.apache.org</id>
  <author>
    <name></name>
@@ -426,7 +426,7 @@ Figure 4: After Graph Partitioning.
 &lt;p&gt;In the above example, we specify a list of operators that can be 
supported by DNNL codegen.&lt;/p&gt;
 
 &lt;h3 id=&quot;rules-for-graph-patterns&quot;&gt;Rules for graph 
patterns&lt;/h3&gt;
-&lt;p&gt;Your accelerator or compiler may have optimized some patterns (e.g., 
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you 
can specify a mapping from a graph pattern to your instruction/API. For the 
case of the DNNL, its Conv2D API already includes bias addition and it allows 
the next ReLU to be attached, so we can call DNNL as the following code snippet 
(the complete implementation can be found &lt;a 
href=&quot;[https://github.com/apache/incubator-tvm/blo [...]
+&lt;p&gt;Your accelerator or compiler may have optimized some patterns (e.g., 
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you 
can specify a mapping from a graph pattern to your instruction/API. For the 
case of the DNNL, its Conv2D API already includes bias addition and it allows 
the next ReLU to be attached, so we can call DNNL as the following code snippet 
(the complete implementation can be found &lt;a 
href=&quot;[https://github.com/apache/incubator-tvm/blo [...]
 
 &lt;div class=&quot;language-c highlighter-rouge&quot;&gt;&lt;div 
class=&quot;highlight&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span 
class=&quot;n&quot;&gt;DNNLConv2d&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;const&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;bool&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;has_bias&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;false&lt 
[...]
   &lt;span class=&quot;c1&quot;&gt;// ... skip ...&lt;/span&gt;
@@ -1735,7 +1735,7 @@ torch_tvm.enable()
 
 &lt;p&gt;When &lt;code class=&quot;language-plaintext 
highlighter-rouge&quot;&gt;torch_tvm&lt;/code&gt; is enabled, subgraphs of 
PyTorch IR that can be converted to Relay &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;Expr&lt;/code&gt;s 
will be marked as Relay-compatible.  Since PyTorch IR does not always contain 
shape information, none of the subgraphs can be compiled in a useful way before 
invocation.&lt;/p&gt;
 
-&lt;p&gt;During user invocation, the PyTorch JIT runtime will determine input 
shape information and compile the previously marked subgraphs with the new 
Relay C++ &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/torch_tvm/compiler.cpp#L226-L246&quot;&gt;build
 system&lt;/a&gt;.  The compilation is cached based on input shapes for 
subsequent runs.  More details can be found in the &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/README.md&quot;&gt;README&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;During user invocation, the PyTorch JIT runtime will determine input 
shape information and compile the previously marked subgraphs with the new 
Relay C++ &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/torch_tvm/compiler.cpp#L226-L246&quot;&gt;build
 system&lt;/a&gt;.  The compilation is cached based on input shapes for 
subsequent runs.  More details can be found in the &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/README.md&quot;&gt;README&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;&lt;code class=&quot;language-plaintext 
highlighter-rouge&quot;&gt;torch_tvm&lt;/code&gt; has a continuous benchmark 
system set up, which is monitoring the performance of ResNet18 on CPU.
 Out of the box TVM provides over two times the performance of the default 
PyTorch JIT backend for various ResNet models.
@@ -1783,7 +1783,7 @@ with torch.no_grad():
     print(&quot;Took {}s to run {} iters&quot;.format(tvm_time, iters))
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
 
-&lt;p&gt;Much of this code comes from &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/test/benchmarks.py&quot;&gt;benchmarks.py&lt;/a&gt;.
  Note that tuned parameters for AVX2 LLVM compilation is in the &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;test/&lt;/code&gt; 
folder of the repo.&lt;/p&gt;
+&lt;p&gt;Much of this code comes from &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/test/benchmarks.py&quot;&gt;benchmarks.py&lt;/a&gt;.
  Note that tuned parameters for AVX2 LLVM compilation is in the &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;test/&lt;/code&gt; 
folder of the repo.&lt;/p&gt;
 
 &lt;p&gt;If you are more comfortable using Relay directly, it is possible to 
simply extract the expression directly from a
 PyTorch function either via (implicit) tracing or TorchScript:&lt;/p&gt;
diff --git a/feed.xml b/feed.xml
index 2406202..c4d1373 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="utf-8"?><feed 
xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" 
version="4.1.1">Jekyll</generator><link href="/feed.xml" rel="self" 
type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" 
/><updated>2020-11-03T09:01:59-05:00</updated><id>/feed.xml</id><title 
type="html">TVM</title><author><name>{&quot;name&quot;=&gt;nil}</name></author><entry><title
 type="html">Bring Your Own Datatypes: Enabling Custom Datatype [...]
+<?xml version="1.0" encoding="utf-8"?><feed 
xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" 
version="4.1.1">Jekyll</generator><link href="/feed.xml" rel="self" 
type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" 
/><updated>2020-11-03T12:04:27-05:00</updated><id>/feed.xml</id><title 
type="html">TVM</title><author><name>{&quot;name&quot;=&gt;nil}</name></author><entry><title
 type="html">Bring Your Own Datatypes: Enabling Custom Datatype [...]
 
 &lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;
 
@@ -398,7 +398,7 @@ Figure 4: After Graph Partitioning.
 &lt;p&gt;In the above example, we specify a list of operators that can be 
supported by DNNL codegen.&lt;/p&gt;
 
 &lt;h3 id=&quot;rules-for-graph-patterns&quot;&gt;Rules for graph 
patterns&lt;/h3&gt;
-&lt;p&gt;Your accelerator or compiler may have optimized some patterns (e.g., 
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you 
can specify a mapping from a graph pattern to your instruction/API. For the 
case of the DNNL, its Conv2D API already includes bias addition and it allows 
the next ReLU to be attached, so we can call DNNL as the following code snippet 
(the complete implementation can be found &lt;a 
href=&quot;[https://github.com/apache/incubator-tvm/blo [...]
+&lt;p&gt;Your accelerator or compiler may have optimized some patterns (e.g., 
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you 
can specify a mapping from a graph pattern to your instruction/API. For the 
case of the DNNL, its Conv2D API already includes bias addition and it allows 
the next ReLU to be attached, so we can call DNNL as the following code snippet 
(the complete implementation can be found &lt;a 
href=&quot;[https://github.com/apache/incubator-tvm/blo [...]
 
 &lt;div class=&quot;language-c highlighter-rouge&quot;&gt;&lt;div 
class=&quot;highlight&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span 
class=&quot;n&quot;&gt;DNNLConv2d&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;const&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;bool&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;has_bias&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;false&lt 
[...]
   &lt;span class=&quot;c1&quot;&gt;// ... skip ...&lt;/span&gt;
@@ -1668,7 +1668,7 @@ torch_tvm.enable()
 
 &lt;p&gt;When &lt;code class=&quot;language-plaintext 
highlighter-rouge&quot;&gt;torch_tvm&lt;/code&gt; is enabled, subgraphs of 
PyTorch IR that can be converted to Relay &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;Expr&lt;/code&gt;s 
will be marked as Relay-compatible.  Since PyTorch IR does not always contain 
shape information, none of the subgraphs can be compiled in a useful way before 
invocation.&lt;/p&gt;
 
-&lt;p&gt;During user invocation, the PyTorch JIT runtime will determine input 
shape information and compile the previously marked subgraphs with the new 
Relay C++ &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/torch_tvm/compiler.cpp#L226-L246&quot;&gt;build
 system&lt;/a&gt;.  The compilation is cached based on input shapes for 
subsequent runs.  More details can be found in the &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/README.md&quot;&gt;README&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;During user invocation, the PyTorch JIT runtime will determine input 
shape information and compile the previously marked subgraphs with the new 
Relay C++ &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/torch_tvm/compiler.cpp#L226-L246&quot;&gt;build
 system&lt;/a&gt;.  The compilation is cached based on input shapes for 
subsequent runs.  More details can be found in the &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/README.md&quot;&gt;README&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;&lt;code class=&quot;language-plaintext 
highlighter-rouge&quot;&gt;torch_tvm&lt;/code&gt; has a continuous benchmark 
system set up, which is monitoring the performance of ResNet18 on CPU.
 Out of the box TVM provides over two times the performance of the default 
PyTorch JIT backend for various ResNet models.
@@ -1716,7 +1716,7 @@ with torch.no_grad():
     print(&quot;Took {}s to run {} iters&quot;.format(tvm_time, iters))
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
 
-&lt;p&gt;Much of this code comes from &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/test/benchmarks.py&quot;&gt;benchmarks.py&lt;/a&gt;.
  Note that tuned parameters for AVX2 LLVM compilation is in the &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;test/&lt;/code&gt; 
folder of the repo.&lt;/p&gt;
+&lt;p&gt;Much of this code comes from &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/test/benchmarks.py&quot;&gt;benchmarks.py&lt;/a&gt;.
  Note that tuned parameters for AVX2 LLVM compilation is in the &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;test/&lt;/code&gt; 
folder of the repo.&lt;/p&gt;
 
 &lt;p&gt;If you are more comfortable using Relay directly, it is possible to 
simply extract the expression directly from a
 PyTorch function either via (implicit) tracing or TorchScript:&lt;/p&gt;
diff --git a/rss.xml b/rss.xml
index cc2324e..6b7d361 100644
--- a/rss.xml
+++ b/rss.xml
@@ -5,8 +5,8 @@
         <description>TVM - </description>
         <link>https://tvm.apache.org</link>
        <atom:link href="https://tvm.apache.org" rel="self" 
type="application/rss+xml" />
-        <lastBuildDate>Tue, 03 Nov 2020 09:01:59 -0500</lastBuildDate>
-        <pubDate>Tue, 03 Nov 2020 09:01:59 -0500</pubDate>
+        <lastBuildDate>Tue, 03 Nov 2020 12:04:27 -0500</lastBuildDate>
+        <pubDate>Tue, 03 Nov 2020 12:04:27 -0500</pubDate>
         <ttl>60</ttl>
 
 
@@ -421,7 +421,7 @@ Figure 4: After Graph Partitioning.
 &lt;p&gt;In the above example, we specify a list of operators that can be 
supported by DNNL codegen.&lt;/p&gt;
 
 &lt;h3 id=&quot;rules-for-graph-patterns&quot;&gt;Rules for graph 
patterns&lt;/h3&gt;
-&lt;p&gt;Your accelerator or compiler may have optimized some patterns (e.g., 
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you 
can specify a mapping from a graph pattern to your instruction/API. For the 
case of the DNNL, its Conv2D API already includes bias addition and it allows 
the next ReLU to be attached, so we can call DNNL as the following code snippet 
(the complete implementation can be found &lt;a 
href=&quot;[https://github.com/apache/incubator-tvm/blo [...]
+&lt;p&gt;Your accelerator or compiler may have optimized some patterns (e.g., 
Conv2D + add + ReLU) to be a single instruction or an API. In this case, you 
can specify a mapping from a graph pattern to your instruction/API. For the 
case of the DNNL, its Conv2D API already includes bias addition and it allows 
the next ReLU to be attached, so we can call DNNL as the following code snippet 
(the complete implementation can be found &lt;a 
href=&quot;[https://github.com/apache/incubator-tvm/blo [...]
 
 &lt;div class=&quot;language-c highlighter-rouge&quot;&gt;&lt;div 
class=&quot;highlight&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span 
class=&quot;n&quot;&gt;DNNLConv2d&lt;/span&gt;&lt;span 
class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span 
class=&quot;k&quot;&gt;const&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;bool&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;has_bias&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;false&lt 
[...]
   &lt;span class=&quot;c1&quot;&gt;// ... skip ...&lt;/span&gt;
@@ -1730,7 +1730,7 @@ torch_tvm.enable()
 
 &lt;p&gt;When &lt;code class=&quot;language-plaintext 
highlighter-rouge&quot;&gt;torch_tvm&lt;/code&gt; is enabled, subgraphs of 
PyTorch IR that can be converted to Relay &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;Expr&lt;/code&gt;s 
will be marked as Relay-compatible.  Since PyTorch IR does not always contain 
shape information, none of the subgraphs can be compiled in a useful way before 
invocation.&lt;/p&gt;
 
-&lt;p&gt;During user invocation, the PyTorch JIT runtime will determine input 
shape information and compile the previously marked subgraphs with the new 
Relay C++ &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/torch_tvm/compiler.cpp#L226-L246&quot;&gt;build
 system&lt;/a&gt;.  The compilation is cached based on input shapes for 
subsequent runs.  More details can be found in the &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/README.md&quot;&gt;README&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;During user invocation, the PyTorch JIT runtime will determine input 
shape information and compile the previously marked subgraphs with the new 
Relay C++ &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/torch_tvm/compiler.cpp#L226-L246&quot;&gt;build
 system&lt;/a&gt;.  The compilation is cached based on input shapes for 
subsequent runs.  More details can be found in the &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/README.md&quot;&gt;README&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;&lt;code class=&quot;language-plaintext 
highlighter-rouge&quot;&gt;torch_tvm&lt;/code&gt; has a continuous benchmark 
system set up, which is monitoring the performance of ResNet18 on CPU.
 Out of the box TVM provides over two times the performance of the default 
PyTorch JIT backend for various ResNet models.
@@ -1778,7 +1778,7 @@ with torch.no_grad():
     print(&quot;Took {}s to run {} iters&quot;.format(tvm_time, iters))
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
 
-&lt;p&gt;Much of this code comes from &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/master/test/benchmarks.py&quot;&gt;benchmarks.py&lt;/a&gt;.
  Note that tuned parameters for AVX2 LLVM compilation is in the &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;test/&lt;/code&gt; 
folder of the repo.&lt;/p&gt;
+&lt;p&gt;Much of this code comes from &lt;a 
href=&quot;https://github.com/pytorch/tvm/blob/main/test/benchmarks.py&quot;&gt;benchmarks.py&lt;/a&gt;.
  Note that tuned parameters for AVX2 LLVM compilation is in the &lt;code 
class=&quot;language-plaintext highlighter-rouge&quot;&gt;test/&lt;/code&gt; 
folder of the repo.&lt;/p&gt;
 
 &lt;p&gt;If you are more comfortable using Relay directly, it is possible to 
simply extract the expression directly from a
 PyTorch function either via (implicit) tracing or TorchScript:&lt;/p&gt;
diff --git a/vta.html b/vta.html
index 8766699..fd6d6f4 100644
--- a/vta.html
+++ b/vta.html
@@ -149,7 +149,7 @@ The current release includes a behavioral hardware 
simulator, as well as the inf
 By extending the TVM stack with a customizable, and open source deep learning 
hardware accelerator design, we are exposing a transparent end-to-end deep 
learning stack from the high-level deep learning framework, down to the actual 
hardware design and implementation.
 This forms a truly end-to-end, from software-to-hardware open source stack for 
deep learning systems.</p>
 
-<p style="text-align: center"><img 
src="https://raw.githubusercontent.com/uwsampl/web-data/master/vta/blogpost/vta_stack.png"
 alt="image" width="50%" /></p>
+<p style="text-align: center"><img 
src="https://raw.githubusercontent.com/uwsampl/web-data/main/vta/blogpost/vta_stack.png"
 alt="image" width="50%" /></p>
 
 <p>The VTA and TVM stack together constitute a blueprint for end-to-end, 
accelerator-centric deep learning system that can:</p>
 
