This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 41536bf Build at Thu Mar 18 20:31:02 EDT 2021
41536bf is described below
commit 41536bf318ffc0ae2a38edfd7f70d9918861666e
Author: tqchen <[email protected]>
AuthorDate: Thu Mar 18 20:31:02 2021 -0400
Build at Thu Mar 18 20:31:02 EDT 2021
---
2020/07/15/how-to-bring-your-own-codegen-to-tvm.html | 2 +-
atom.xml | 4 ++--
feed.xml | 4 ++--
rss.xml | 6 +++---
4 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html b/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
index a2066ec..1e16544 100644
--- a/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
+++ b/2020/07/15/how-to-bring-your-own-codegen-to-tvm.html
@@ -268,7 +268,7 @@ Figure 4: After Graph Partitioning.
<p>In the above example, we specify a list of operators that can be supported
by DNNL codegen.</p>
<h3 id="rules-for-graph-patterns">Rules for graph patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D + add + ReLU) to be a single instruction or an API. In this case, you can specify a mapping from a graph pattern to your instruction/API. For the case of the DNNL, its Conv2D API already includes bias addition and it allows the next ReLU to be attached, so we can call DNNL as the following code snippet (the complete implementation can be found <a href="[https://github.com/apache/incubator-tvm/blob/main/src/run [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D + add + ReLU) to be a single instruction or an API. In this case, you can specify a mapping from a graph pattern to your instruction/API. For the case of the DNNL, its Conv2D API already includes bias addition and it allows the next ReLU to be attached, so we can call DNNL as the following code snippet (the complete implementation can be found <a href="https://github.com/apache/incubator-tvm/blob/main/src/runt [...]
<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">DNNLConv2d</span><span class="p">(</span><span class="k">const</span> <span class="n">bool</span> <span class="n">has_bias</span> <span class="o">=</span> <span class="nb">false</span><span class="p">,</span> <span class="k">const</span> <span class="n">bool</span> <span class="n">has_relu</span> <span class="o">=</span> <span class="nb">false</span><span class="p">)</span> <span [...]
<span class="c1">// ... skip ...</span>
diff --git a/atom.xml b/atom.xml
index 0fe4541..18ff94c 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
<title>TVM</title>
<link href="https://tvm.apache.org" rel="self"/>
<link href="https://tvm.apache.org"/>
- <updated>2021-03-11T09:40:41-05:00</updated>
+ <updated>2021-03-18T20:30:44-04:00</updated>
<id>https://tvm.apache.org</id>
<author>
<name></name>
@@ -556,7 +556,7 @@ Figure 4: After Graph Partitioning.
<p>In the above example, we specify a list of operators that can be
supported by DNNL codegen.</p>
<h3 id="rules-for-graph-patterns">Rules for graph
patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D + add + ReLU) to be a single instruction or an API. In this case, you can specify a mapping from a graph pattern to your instruction/API. For the case of the DNNL, its Conv2D API already includes bias addition and it allows the next ReLU to be attached, so we can call DNNL as the following code snippet (the complete implementation can be found <a href="[https://github.com/apache/incubator-tvm/blo [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D + add + ReLU) to be a single instruction or an API. In this case, you can specify a mapping from a graph pattern to your instruction/API. For the case of the DNNL, its Conv2D API already includes bias addition and it allows the next ReLU to be attached, so we can call DNNL as the following code snippet (the complete implementation can be found <a href="https://github.com/apache/incubator-tvm/blob [...]
<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">DNNLConv2d</span><span class="p">(</span><span class="k">const</span> <span class="n">bool</span> <span class="n">has_bias</span> <span class="o">=</span> <span class="nb">false< [...]
<span class="c1">// ... skip ...</span>
diff --git a/feed.xml b/feed.xml
index b3c5bbf..18099f5 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.1.1">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2021-03-11T09:40:41-05:00</updated><id>/feed.xml</id><title type="html">TVM</title><author><name>{"name"=>nil}</name></author><entry><title type="html">Introducing TVM Auto-scheduler (a.k.a. Ansor)</tit [...]
+<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.1.1">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2021-03-18T20:30:44-04:00</updated><id>/feed.xml</id><title type="html">TVM</title><author><name>{"name"=>nil}</name></author><entry><title type="html">Introducing TVM Auto-scheduler (a.k.a. Ansor)</tit [...]
model size, operator diversity, and hardware heterogeneity.
From a computational perspective, deep neural networks are just layers and
layers of tensor computations.
These tensor computations, such as matmul and conv2d, can be easily described
by mathematical expressions.
@@ -518,7 +518,7 @@ Figure 4: After Graph Partitioning.
<p>In the above example, we specify a list of operators that can be
supported by DNNL codegen.</p>
<h3 id="rules-for-graph-patterns">Rules for graph
patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D + add + ReLU) to be a single instruction or an API. In this case, you can specify a mapping from a graph pattern to your instruction/API. For the case of the DNNL, its Conv2D API already includes bias addition and it allows the next ReLU to be attached, so we can call DNNL as the following code snippet (the complete implementation can be found <a href="[https://github.com/apache/incubator-tvm/blo [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D + add + ReLU) to be a single instruction or an API. In this case, you can specify a mapping from a graph pattern to your instruction/API. For the case of the DNNL, its Conv2D API already includes bias addition and it allows the next ReLU to be attached, so we can call DNNL as the following code snippet (the complete implementation can be found <a href="https://github.com/apache/incubator-tvm/blob [...]
<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">DNNLConv2d</span><span class="p">(</span><span class="k">const</span> <span class="n">bool</span> <span class="n">has_bias</span> <span class="o">=</span> <span class="nb">false< [...]
<span class="c1">// ... skip ...</span>
diff --git a/rss.xml b/rss.xml
index 2f56e76..127d6cc 100644
--- a/rss.xml
+++ b/rss.xml
@@ -5,8 +5,8 @@
<description>TVM - </description>
<link>https://tvm.apache.org</link>
<atom:link href="https://tvm.apache.org" rel="self"
type="application/rss+xml" />
- <lastBuildDate>Thu, 11 Mar 2021 09:40:41 -0500</lastBuildDate>
- <pubDate>Thu, 11 Mar 2021 09:40:41 -0500</pubDate>
+ <lastBuildDate>Thu, 18 Mar 2021 20:30:44 -0400</lastBuildDate>
+ <pubDate>Thu, 18 Mar 2021 20:30:44 -0400</pubDate>
<ttl>60</ttl>
@@ -551,7 +551,7 @@ Figure 4: After Graph Partitioning.
<p>In the above example, we specify a list of operators that can be
supported by DNNL codegen.</p>
<h3 id="rules-for-graph-patterns">Rules for graph
patterns</h3>
-<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D + add + ReLU) to be a single instruction or an API. In this case, you can specify a mapping from a graph pattern to your instruction/API. For the case of the DNNL, its Conv2D API already includes bias addition and it allows the next ReLU to be attached, so we can call DNNL as the following code snippet (the complete implementation can be found <a href="[https://github.com/apache/incubator-tvm/blo [...]
+<p>Your accelerator or compiler may have optimized some patterns (e.g., Conv2D + add + ReLU) to be a single instruction or an API. In this case, you can specify a mapping from a graph pattern to your instruction/API. For the case of the DNNL, its Conv2D API already includes bias addition and it allows the next ReLU to be attached, so we can call DNNL as the following code snippet (the complete implementation can be found <a href="https://github.com/apache/incubator-tvm/blob [...]
<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">DNNLConv2d</span><span class="p">(</span><span class="k">const</span> <span class="n">bool</span> <span class="n">has_bias</span> <span class="o">=</span> <span class="nb">false< [...]
<span class="c1">// ... skip ...</span>
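The diffed blog paragraph above describes mapping a Conv2D + add + ReLU graph pattern to a single DNNL call whose `has_bias`/`has_relu` flags control the fused bias addition and ReLU. As a rough, hypothetical sketch of that fusion idea only: `FusedConv2d1x1`, its signature, and the 1x1-kernel simplification (which reduces the convolution to a per-element scale) are all illustrative, not the actual DNNL codegen API.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical fused Conv2D + bias + ReLU entry point, mirroring the
// DNNLConv2d(const bool has_bias = false, const bool has_relu = false)
// shape quoted in the diff. A 1x1 kernel with one channel degenerates to
// scaling each input element by a single weight.
std::vector<float> FusedConv2d1x1(const std::vector<float>& input,
                                  float weight, float bias,
                                  bool has_bias = false,
                                  bool has_relu = false) {
  std::vector<float> out(input.size());
  for (std::size_t i = 0; i < input.size(); ++i) {
    float v = input[i] * weight;          // the "Conv2D" step (1x1 case)
    if (has_bias) v += bias;              // fused bias addition
    if (has_relu) v = std::max(v, 0.0f);  // fused ReLU activation
    out[i] = v;
  }
  return out;
}
```

The point of the pattern rule is that the partitioner hands the whole Conv2D + add + ReLU subgraph to one such call instead of emitting three separate operators.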