This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 89e5aa6 Build at Wed Mar 4 10:58:39 CST 2020
89e5aa6 is described below
commit 89e5aa65b9a5d26c7f74bc25a2854bbd6bd892fd
Author: tqchen <[email protected]>
AuthorDate: Wed Mar 4 10:58:39 2020 -0600
Build at Wed Mar 4 10:58:39 CST 2020
---
2017/08/17/tvm-release-announcement.html | 2 +-
...s-with-TVM-A-Depthwise-Convolution-Example.html | 4 +--
2017/10/06/nnvm-compiler-announcement.html | 2 +-
...s-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html | 2 +-
2017/11/08/android-rpc-introduction.html | 2 +-
2018/01/16/opt-mali-gpu.html | 2 +-
2018/03/12/webgl.html | 2 +-
2018/03/23/nmt-transformer-optimize.html | 2 +-
2018/07/12/vta-release-announcement.html | 2 +-
2018/08/10/DLPack-Bridge.html | 2 +-
2018/10/03/auto-opt-all.html | 2 +-
2018/10/09/ml-in-tees.html | 2 +-
2018/12/18/lowprecision-conv.html | 2 +-
2019/01/19/Golang.html | 2 +-
2019/03/18/tvm-apache-announcement.html | 2 +-
2019/04/29/opt-cuda-quantized.html | 2 +-
2019/05/30/pytorch-frontend.html | 2 +-
atom.xml | 38 ++++++++++----------
rss.xml | 40 +++++++++++-----------
19 files changed, 57 insertions(+), 57 deletions(-)
diff --git a/2017/08/17/tvm-release-announcement.html b/2017/08/17/tvm-release-announcement.html
index d802f2a..5ed205a 100644
--- a/2017/08/17/tvm-release-announcement.html
+++ b/2017/08/17/tvm-release-announcement.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>TVM: An End to End IR Stack for Deploying Deep Learning Workloads on Hardware Platforms </h1>
<p class="post-meta">
- <time datetime="2017-08-17T12:00:00-07:00" itemprop="datePublished">
+ <time datetime="2017-08-17T14:00:00-05:00" itemprop="datePublished">
Aug 17, 2017
</time>
diff --git a/2017/08/22/Optimize-Deep-Learning-GPU-Operators-with-TVM-A-Depthwise-Convolution-Example.html b/2017/08/22/Optimize-Deep-Learning-GPU-Operators-with-TVM-A-Depthwise-Convolution-Example.html
index 0900101..367287b 100644
--- a/2017/08/22/Optimize-Deep-Learning-GPU-Operators-with-TVM-A-Depthwise-Convolution-Example.html
+++ b/2017/08/22/Optimize-Deep-Learning-GPU-Operators-with-TVM-A-Depthwise-Convolution-Example.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Optimize Deep Learning GPU Operators with TVM: A Depthwise Convolution Example </h1>
<p class="post-meta">
- <time datetime="2017-08-22T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2017-08-22T00:00:00-05:00" itemprop="datePublished">
Aug 22, 2017
</time>
@@ -706,7 +706,7 @@ Below is the result with Input = [1, 256, 96, 96], Filter = [256, 1, 3, 3], stri
<h2 id="show-me-the-code">Show me the code</h2>
<ul>
- <li>Declare: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/convolution.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/convolution.py</a></li>
+ <li>Declare: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/depthwise_conv2d.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/depthwise_conv2d.py</a></li>
<li>Schedule: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/cuda/depthwise_conv2d.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/cuda/depthwise_conv2d.py</a></li>
<li>Test: <a href="https://github.com/dmlc/tvm/blob/master/topi/recipe/conv/depthwise_conv2d_test.py">https://github.com/dmlc/tvm/blob/master/topi/recipe/conv/depthwise_conv2d_test.py</a></li>
</ul>
diff --git a/2017/10/06/nnvm-compiler-announcement.html b/2017/10/06/nnvm-compiler-announcement.html
index b2678c9..93ee782 100644
--- a/2017/10/06/nnvm-compiler-announcement.html
+++ b/2017/10/06/nnvm-compiler-announcement.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>NNVM Compiler: Open Compiler for AI Frameworks </h1>
<p class="post-meta">
- <time datetime="2017-10-06T08:30:00-07:00" itemprop="datePublished">
+ <time datetime="2017-10-06T10:30:00-05:00" itemprop="datePublished">
Oct 6, 2017
</time>
diff --git a/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html b/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html
index 0164602..b9edd7d 100644
--- a/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html
+++ b/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Bringing AMDGPUs to TVM Stack and NNVM Compiler with ROCm </h1>
<p class="post-meta">
- <time datetime="2017-10-30T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2017-10-30T00:00:00-05:00" itemprop="datePublished">
Oct 30, 2017
</time>
diff --git a/2017/11/08/android-rpc-introduction.html b/2017/11/08/android-rpc-introduction.html
index 4b81368..7ffc0cf 100644
--- a/2017/11/08/android-rpc-introduction.html
+++ b/2017/11/08/android-rpc-introduction.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Remote Profile and Test Deep Learning Cross Compilation on Mobile Phones with TVM RPC </h1>
<p class="post-meta">
- <time datetime="2017-11-08T00:00:00-08:00" itemprop="datePublished">
+ <time datetime="2017-11-08T00:00:00-06:00" itemprop="datePublished">
Nov 8, 2017
</time>
diff --git a/2018/01/16/opt-mali-gpu.html b/2018/01/16/opt-mali-gpu.html
index 2f80309..44d0f57 100644
--- a/2018/01/16/opt-mali-gpu.html
+++ b/2018/01/16/opt-mali-gpu.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Optimizing Mobile Deep Learning on ARM GPU with TVM </h1>
<p class="post-meta">
- <time datetime="2018-01-16T00:00:00-08:00" itemprop="datePublished">
+ <time datetime="2018-01-16T00:00:00-06:00" itemprop="datePublished">
Jan 16, 2018
</time>
diff --git a/2018/03/12/webgl.html b/2018/03/12/webgl.html
index 9facfe5..cf3ac31 100644
--- a/2018/03/12/webgl.html
+++ b/2018/03/12/webgl.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Compiling Deep Learning Models to WebGL with TVM </h1>
<p class="post-meta">
- <time datetime="2018-03-12T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2018-03-12T00:00:00-05:00" itemprop="datePublished">
Mar 12, 2018
</time>
diff --git a/2018/03/23/nmt-transformer-optimize.html b/2018/03/23/nmt-transformer-optimize.html
index 441d947..12b9509 100644
--- a/2018/03/23/nmt-transformer-optimize.html
+++ b/2018/03/23/nmt-transformer-optimize.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Bringing TVM into TensorFlow for Optimizing Neural Machine Translation on GPU </h1>
<p class="post-meta">
- <time datetime="2018-03-23T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2018-03-23T00:00:00-05:00" itemprop="datePublished">
Mar 23, 2018
</time>
diff --git a/2018/07/12/vta-release-announcement.html b/2018/07/12/vta-release-announcement.html
index f91a995..0d49756 100644
--- a/2018/07/12/vta-release-announcement.html
+++ b/2018/07/12/vta-release-announcement.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>VTA: An Open, Customizable Deep Learning Acceleration Stack </h1>
<p class="post-meta">
- <time datetime="2018-07-12T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2018-07-12T00:00:00-05:00" itemprop="datePublished">
Jul 12, 2018
</time>
diff --git a/2018/08/10/DLPack-Bridge.html b/2018/08/10/DLPack-Bridge.html
index 8a9b36f..0a7c9e5 100644
--- a/2018/08/10/DLPack-Bridge.html
+++ b/2018/08/10/DLPack-Bridge.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Building a Cross-Framework Deep Learning Compiler via DLPack </h1>
<p class="post-meta">
- <time datetime="2018-08-10T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2018-08-10T00:00:00-05:00" itemprop="datePublished">
Aug 10, 2018
</time>
diff --git a/2018/10/03/auto-opt-all.html b/2018/10/03/auto-opt-all.html
index 6207719..a1c5108 100644
--- a/2018/10/03/auto-opt-all.html
+++ b/2018/10/03/auto-opt-all.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Automatic Kernel Optimization for Deep Learning on All Hardware Platforms </h1>
<p class="post-meta">
- <time datetime="2018-10-03T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2018-10-03T00:00:00-05:00" itemprop="datePublished">
Oct 3, 2018
</time>
diff --git a/2018/10/09/ml-in-tees.html b/2018/10/09/ml-in-tees.html
index de9ce35..7b2d63d 100644
--- a/2018/10/09/ml-in-tees.html
+++ b/2018/10/09/ml-in-tees.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Efficient Privacy-Preserving ML Using TVM </h1>
<p class="post-meta">
- <time datetime="2018-10-09T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2018-10-09T00:00:00-05:00" itemprop="datePublished">
Oct 9, 2018
</time>
diff --git a/2018/12/18/lowprecision-conv.html b/2018/12/18/lowprecision-conv.html
index 8081ff1..e2f8300 100644
--- a/2018/12/18/lowprecision-conv.html
+++ b/2018/12/18/lowprecision-conv.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Automating Generation of Low Precision Deep Learning Operators </h1>
<p class="post-meta">
- <time datetime="2018-12-18T00:00:00-08:00" itemprop="datePublished">
+ <time datetime="2018-12-18T00:00:00-06:00" itemprop="datePublished">
Dec 18, 2018
</time>
diff --git a/2019/01/19/Golang.html b/2019/01/19/Golang.html
index c042aeb..61905d1 100644
--- a/2019/01/19/Golang.html
+++ b/2019/01/19/Golang.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>TVM Golang Runtime for Deep Learning Deployment </h1>
<p class="post-meta">
- <time datetime="2019-01-19T00:00:00-08:00" itemprop="datePublished">
+ <time datetime="2019-01-19T00:00:00-06:00" itemprop="datePublished">
Jan 19, 2019
</time>
diff --git a/2019/03/18/tvm-apache-announcement.html b/2019/03/18/tvm-apache-announcement.html
index c4b97c1..66bcf0a 100644
--- a/2019/03/18/tvm-apache-announcement.html
+++ b/2019/03/18/tvm-apache-announcement.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>TVM Deep Learning Compiler Joins Apache Software Foundation </h1>
<p class="post-meta">
- <time datetime="2019-03-18T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2019-03-18T00:00:00-05:00" itemprop="datePublished">
Mar 18, 2019
</time>
diff --git a/2019/04/29/opt-cuda-quantized.html b/2019/04/29/opt-cuda-quantized.html
index 41e67c6..83b4846 100644
--- a/2019/04/29/opt-cuda-quantized.html
+++ b/2019/04/29/opt-cuda-quantized.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Automating Optimization of Quantized Deep Learning Models on CUDA </h1>
<p class="post-meta">
- <time datetime="2019-04-29T09:00:00-07:00" itemprop="datePublished">
+ <time datetime="2019-04-29T11:00:00-05:00" itemprop="datePublished">
Apr 29, 2019
</time>
diff --git a/2019/05/30/pytorch-frontend.html b/2019/05/30/pytorch-frontend.html
index 0b0df54..a022294 100644
--- a/2019/05/30/pytorch-frontend.html
+++ b/2019/05/30/pytorch-frontend.html
@@ -141,7 +141,7 @@
<div class="span14">
<h1>Integrating TVM into PyTorch </h1>
<p class="post-meta">
- <time datetime="2019-05-30T00:00:00-07:00" itemprop="datePublished">
+ <time datetime="2019-05-30T00:00:00-05:00" itemprop="datePublished">
May 30, 2019
</time>
diff --git a/atom.xml b/atom.xml
index 443fad1..3207c6d 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
<title>TVM</title>
<link href="https://tvm.apache.org" rel="self"/>
<link href="https://tvm.apache.org"/>
- <updated>2020-02-26T20:06:51-08:00</updated>
+ <updated>2020-03-04T10:58:38-06:00</updated>
<id>https://tvm.apache.org</id>
<author>
<name></name>
@@ -15,7 +15,7 @@
<entry>
<title>Integrating TVM into PyTorch</title>
<link href="https://tvm.apache.org/2019/05/30/pytorch-frontend"/>
- <updated>2019-05-30T00:00:00-07:00</updated>
+ <updated>2019-05-30T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2019/05/30/pytorch-frontend</id>
<content type="html"><p>As TVM continuously demonstrates improvements
to the efficiency of deep learning execution,
it has become clear that PyTorch stands to benefit from directly leveraging
the compiler stack.
@@ -117,7 +117,7 @@ relay_graph = torch_tvm.to_relay(mul, inputs)
<entry>
<title>Automating Optimization of Quantized Deep Learning Models on CUDA</title>
<link href="https://tvm.apache.org/2019/04/29/opt-cuda-quantized"/>
- <updated>2019-04-29T09:00:00-07:00</updated>
+ <updated>2019-04-29T11:00:00-05:00</updated>
<id>https://tvm.apache.org/2019/04/29/opt-cuda-quantized</id>
<content type="html"><p>Deep learning has been successfully applied
to a variety of tasks.
On real-time scenarios such as inference on autonomous vehicles, the inference
speed of the model is critical.
@@ -261,7 +261,7 @@ We show that automatic optimization in TVM makes it easy and flexible to support
<entry>
<title>TVM Deep Learning Compiler Joins Apache Software Foundation</title>
<link href="https://tvm.apache.org/2019/03/18/tvm-apache-announcement"/>
- <updated>2019-03-18T00:00:00-07:00</updated>
+ <updated>2019-03-18T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2019/03/18/tvm-apache-announcement</id>
<content type="html"><p>There is an increasing need to bring machine
learning to a wide diversity of hardware devices. Current frameworks rely on
vendor-specific operator libraries and optimize for a narrow range of
server-class GPUs. Deploying workloads to new platforms – such as mobile
phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) – requires
significant manual effort.</p>
@@ -284,7 +284,7 @@ We show that automatic optimization in TVM makes it easy and flexible to support
<entry>
<title>TVM Golang Runtime for Deep Learning Deployment</title>
<link href="https://tvm.apache.org/2019/01/19/Golang"/>
- <updated>2019-01-19T00:00:00-08:00</updated>
+ <updated>2019-01-19T00:00:00-06:00</updated>
<id>https://tvm.apache.org/2019/01/19/Golang</id>
<content type="html"><h2
id="introduction">Introduction</h2>
@@ -454,7 +454,7 @@ closure as TVM packed function and invoke the same across programming language b
<entry>
<title>Automating Generation of Low Precision Deep Learning Operators</title>
<link href="https://tvm.apache.org/2018/12/18/lowprecision-conv"/>
- <updated>2018-12-18T00:00:00-08:00</updated>
+ <updated>2018-12-18T00:00:00-06:00</updated>
<id>https://tvm.apache.org/2018/12/18/lowprecision-conv</id>
<content type="html"><p>As deep learning models grow larger and more
complex, deploying them on low powered phone and IoT
devices becomes challenging because of their limited compute and energy
budgets. A recent trend
@@ -615,7 +615,7 @@ Note: x86 doesn’t support a vectorized popcount for this microarchitecture, so
<entry>
<title>Efficient Privacy-Preserving ML Using TVM</title>
<link href="https://tvm.apache.org/2018/10/09/ml-in-tees"/>
- <updated>2018-10-09T00:00:00-07:00</updated>
+ <updated>2018-10-09T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2018/10/09/ml-in-tees</id>
<content type="html"><p>This post describes Myelin, a framework for
privacy-preserving machine learning in trusted hardware enclaves, and how TVM
makes Myelin fast.
The key idea is that TVM, unlike other popular ML frameworks, compiles models
into lightweight, optimized, and dependency-free libraries which can fit into
resource constrained enclaves.</p>
@@ -731,7 +731,7 @@ His research interest is in the general domain of ML on shared private data, but
<entry>
<title>Automatic Kernel Optimization for Deep Learning on All Hardware Platforms</title>
<link href="https://tvm.apache.org/2018/10/03/auto-opt-all"/>
- <updated>2018-10-03T00:00:00-07:00</updated>
+ <updated>2018-10-03T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2018/10/03/auto-opt-all</id>
<content type="html"><p>Optimizing the performance of deep neural
network on a diverse range of hardware platforms is still a hard
problem for AI developers. In terms of system support, we are facing a
many-to-many problem here:
@@ -1125,7 +1125,7 @@ for inference deployment. TVM just provides such a solution.</p>
<entry>
<title>Building a Cross-Framework Deep Learning Compiler via DLPack</title>
<link href="https://tvm.apache.org/2018/08/10/DLPack-Bridge"/>
- <updated>2018-08-10T00:00:00-07:00</updated>
+ <updated>2018-08-10T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2018/08/10/DLPack-Bridge</id>
<content type="html"><p>Deep learning frameworks such as Tensorflow,
PyTorch, and ApacheMxNet provide a
powerful toolbox for quickly prototyping and deploying deep learning models.
@@ -1264,7 +1264,7 @@ support, and can be used to implement convenient converters, such as
<entry>
<title>VTA: An Open, Customizable Deep Learning Acceleration Stack </title>
<link href="https://tvm.apache.org/2018/07/12/vta-release-announcement"/>
- <updated>2018-07-12T00:00:00-07:00</updated>
+ <updated>2018-07-12T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2018/07/12/vta-release-announcement</id>
<content type="html"><p style="text-align: center">Thierry
Moreau(VTA architect), Tianqi Chen(TVM stack), Ziheng Jiang†(graph
compilation), Luis Vega(cloud deployment)</p>
<p style="text-align: center">Advisors: Luis Ceze, Carlos
Guestrin, Arvind Krishnamurthy</p>
@@ -1406,7 +1406,7 @@ This kind of high-level visibility is essential to system designers who want to
<entry>
<title>Bringing TVM into TensorFlow for Optimizing Neural Machine Translation on GPU</title>
<link href="https://tvm.apache.org/2018/03/23/nmt-transformer-optimize"/>
- <updated>2018-03-23T00:00:00-07:00</updated>
+ <updated>2018-03-23T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2018/03/23/nmt-transformer-optimize</id>
<content type="html"><h2 id="author">Author</h2>
@@ -1672,7 +1672,7 @@ C = tvm.compute(
<entry>
<title>Compiling Deep Learning Models to WebGL with TVM</title>
<link href="https://tvm.apache.org/2018/03/12/webgl"/>
- <updated>2018-03-12T00:00:00-07:00</updated>
+ <updated>2018-03-12T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2018/03/12/webgl</id>
<content type="html"><p>Now TVM comes with a brand-new OpenGL/WebGL
backend!
This blog post explains what it is, and what you can achieve with it.</p>
@@ -1788,7 +1788,7 @@ optimizations into the TVM stack.</p>
<entry>
<title>Optimizing Mobile Deep Learning on ARM GPU with TVM</title>
<link href="https://tvm.apache.org/2018/01/16/opt-mali-gpu"/>
- <updated>2018-01-16T00:00:00-08:00</updated>
+ <updated>2018-01-16T00:00:00-06:00</updated>
<id>https://tvm.apache.org/2018/01/16/opt-mali-gpu</id>
<content type="html"><p>With the great success of deep learning, the
demand for
deploying deep neural networks to mobile devices is growing rapidly.
@@ -2362,7 +2362,7 @@ advice and <a href="https://github.com/yzhliu">Yizhi Liu</a&g
<entry>
<title>Remote Profile and Test Deep Learning Cross Compilation on Mobile Phones with TVM RPC</title>
<link href="https://tvm.apache.org/2017/11/08/android-rpc-introduction"/>
- <updated>2017-11-08T00:00:00-08:00</updated>
+ <updated>2017-11-08T00:00:00-06:00</updated>
<id>https://tvm.apache.org/2017/11/08/android-rpc-introduction</id>
<content type="html"><p>TVM stack is an end to end compilation stack
to deploy deep learning workloads to all hardware backends.
Thanks to the NNVM compiler support of TVM stack, we can now directly compile
descriptions from deep learning frameworks and compile them to bare metal code.
@@ -2590,7 +2590,7 @@ make jvminstall
<entry>
<title>Bringing AMDGPUs to TVM Stack and NNVM Compiler with ROCm</title>
<link href="https://tvm.apache.org/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm"/>
- <updated>2017-10-30T00:00:00-07:00</updated>
+ <updated>2017-10-30T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm</id>
<content type="html"><p style="text-align: center">Aditya
Atluri, Advanced Micro Devices, Inc.</p>
<p style="text-align: center">Masahiro Masuda, Ziosoft,
Inc.</p>
@@ -2816,7 +2816,7 @@ BB0_6:
<entry>
<title>NNVM Compiler: Open Compiler for AI Frameworks</title>
<link href="https://tvm.apache.org/2017/10/06/nnvm-compiler-announcement"/>
- <updated>2017-10-06T08:30:00-07:00</updated>
+ <updated>2017-10-06T10:30:00-05:00</updated>
<id>https://tvm.apache.org/2017/10/06/nnvm-compiler-announcement</id>
<content type="html"><p style="text-align: center">Paul G.
Allen School of Computer Science &amp; Engineering, University of
Washington</p>
<p style="text-align: center">Amazon Web Service AI
team</p>
@@ -2899,7 +2899,7 @@ We also learns from Halide when implementing the lowering pipeline in TVM.</l
<entry>
<title>Optimize Deep Learning GPU Operators with TVM: A Depthwise Convolution Example</title>
<link href="https://tvm.apache.org/2017/08/22/Optimize-Deep-Learning-GPU-Operators-with-TVM-A-Depthwise-Convolution-Example"/>
- <updated>2017-08-22T00:00:00-07:00</updated>
+ <updated>2017-08-22T00:00:00-05:00</updated>
<id>https://tvm.apache.org/2017/08/22/Optimize-Deep-Learning-GPU-Operators-with-TVM-A-Depthwise-Convolution-Example</id>
<content type="html"><p>Efficient deep learning operators are at the
core of deep learning systems.
Usually these operators are hard to optimize and require great efforts of HPC
experts.
@@ -3454,7 +3454,7 @@ Below is the result with Input = [1, 256, 96, 96], Filter = [256, 1, 3, 3], stri
<h2 id="show-me-the-code">Show me the code</h2>
<ul>
- <li>Declare: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/convolution.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/convolution.py</a></li>
+ <li>Declare: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/depthwise_conv2d.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/depthwise_conv2d.py</a></li>
<li>Schedule: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/cuda/depthwise_conv2d.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/cuda/depthwise_conv2d.py</a></li>
<li>Test: <a href="https://github.com/dmlc/tvm/blob/master/topi/recipe/conv/depthwise_conv2d_test.py">https://github.com/dmlc/tvm/blob/master/topi/recipe/conv/depthwise_conv2d_test.py</a></li>
</ul>
@@ -3478,7 +3478,7 @@ He is experiencing a gap year after obtaining a bachelor’s degree in electrica
<entry>
<title>TVM: An End to End IR Stack for Deploying Deep Learning Workloads on Hardware Platforms</title>
<link href="https://tvm.apache.org/2017/08/17/tvm-release-announcement"/>
- <updated>2017-08-17T12:00:00-07:00</updated>
+ <updated>2017-08-17T14:00:00-05:00</updated>
<id>https://tvm.apache.org/2017/08/17/tvm-release-announcement</id>
<content type="html"><p style="text-align: center">Tianqi
Chen(project lead), Thierry Moreau(hardware stack), Ziheng Jiang†(graph
compilation), Haichen Shen(gpu optimization)</p>
<p style="text-align: center">Advisors: Luis Ceze, Carlos
Guestrin, Arvind Krishnamurthy</p>
diff --git a/rss.xml b/rss.xml
index fe22371..f620099 100644
--- a/rss.xml
+++ b/rss.xml
@@ -5,8 +5,8 @@
<description>TVM - </description>
<link>https://tvm.apache.org</link>
<atom:link href="https://tvm.apache.org" rel="self" type="application/rss+xml" />
- <lastBuildDate>Wed, 26 Feb 2020 20:06:51 -0800</lastBuildDate>
- <pubDate>Wed, 26 Feb 2020 20:06:51 -0800</pubDate>
+ <lastBuildDate>Wed, 04 Mar 2020 10:58:38 -0600</lastBuildDate>
+ <pubDate>Wed, 04 Mar 2020 10:58:38 -0600</pubDate>
<ttl>60</ttl>
@@ -109,7 +109,7 @@ relay_graph = torch_tvm.to_relay(mul, inputs)
</description>
<link>https://tvm.apache.org/2019/05/30/pytorch-frontend</link>
<guid>https://tvm.apache.org/2019/05/30/pytorch-frontend</guid>
- <pubDate>Thu, 30 May 2019 00:00:00 -0700</pubDate>
+ <pubDate>Thu, 30 May 2019 00:00:00 -0500</pubDate>
</item>
<item>
@@ -253,7 +253,7 @@ We show that automatic optimization in TVM makes it easy and flexible to support
</description>
<link>https://tvm.apache.org/2019/04/29/opt-cuda-quantized</link>
<guid>https://tvm.apache.org/2019/04/29/opt-cuda-quantized</guid>
- <pubDate>Mon, 29 Apr 2019 09:00:00 -0700</pubDate>
+ <pubDate>Mon, 29 Apr 2019 11:00:00 -0500</pubDate>
</item>
<item>
@@ -276,7 +276,7 @@ We show that automatic optimization in TVM makes it easy and flexible to support
</description>
<link>https://tvm.apache.org/2019/03/18/tvm-apache-announcement</link>
<guid>https://tvm.apache.org/2019/03/18/tvm-apache-announcement</guid>
- <pubDate>Mon, 18 Mar 2019 00:00:00 -0700</pubDate>
+ <pubDate>Mon, 18 Mar 2019 00:00:00 -0500</pubDate>
</item>
<item>
@@ -446,7 +446,7 @@ closure as TVM packed function and invoke the same across programming language b
</description>
<link>https://tvm.apache.org/2019/01/19/Golang</link>
<guid>https://tvm.apache.org/2019/01/19/Golang</guid>
- <pubDate>Sat, 19 Jan 2019 00:00:00 -0800</pubDate>
+ <pubDate>Sat, 19 Jan 2019 00:00:00 -0600</pubDate>
</item>
<item>
@@ -607,7 +607,7 @@ Note: x86 doesn’t support a vectorized popcount for this microarchitecture, so
</description>
<link>https://tvm.apache.org/2018/12/18/lowprecision-conv</link>
<guid>https://tvm.apache.org/2018/12/18/lowprecision-conv</guid>
- <pubDate>Tue, 18 Dec 2018 00:00:00 -0800</pubDate>
+ <pubDate>Tue, 18 Dec 2018 00:00:00 -0600</pubDate>
</item>
<item>
@@ -723,7 +723,7 @@ His research interest is in the general domain of ML on shared private data, but
</description>
<link>https://tvm.apache.org/2018/10/09/ml-in-tees</link>
<guid>https://tvm.apache.org/2018/10/09/ml-in-tees</guid>
- <pubDate>Tue, 09 Oct 2018 00:00:00 -0700</pubDate>
+ <pubDate>Tue, 09 Oct 2018 00:00:00 -0500</pubDate>
</item>
<item>
@@ -1117,7 +1117,7 @@ for inference deployment. TVM just provides such a solution.</p>
</description>
<link>https://tvm.apache.org/2018/10/03/auto-opt-all</link>
<guid>https://tvm.apache.org/2018/10/03/auto-opt-all</guid>
- <pubDate>Wed, 03 Oct 2018 00:00:00 -0700</pubDate>
+ <pubDate>Wed, 03 Oct 2018 00:00:00 -0500</pubDate>
</item>
<item>
@@ -1256,7 +1256,7 @@ support, and can be used to implement convenient converters, such as
</description>
<link>https://tvm.apache.org/2018/08/10/DLPack-Bridge</link>
<guid>https://tvm.apache.org/2018/08/10/DLPack-Bridge</guid>
- <pubDate>Fri, 10 Aug 2018 00:00:00 -0700</pubDate>
+ <pubDate>Fri, 10 Aug 2018 00:00:00 -0500</pubDate>
</item>
<item>
@@ -1398,7 +1398,7 @@ This kind of high-level visibility is essential to system designers who want to
</description>
<link>https://tvm.apache.org/2018/07/12/vta-release-announcement</link>
<guid>https://tvm.apache.org/2018/07/12/vta-release-announcement</guid>
- <pubDate>Thu, 12 Jul 2018 00:00:00 -0700</pubDate>
+ <pubDate>Thu, 12 Jul 2018 00:00:00 -0500</pubDate>
</item>
<item>
@@ -1664,7 +1664,7 @@ C = tvm.compute(
</description>
<link>https://tvm.apache.org/2018/03/23/nmt-transformer-optimize</link>
<guid>https://tvm.apache.org/2018/03/23/nmt-transformer-optimize</guid>
- <pubDate>Fri, 23 Mar 2018 00:00:00 -0700</pubDate>
+ <pubDate>Fri, 23 Mar 2018 00:00:00 -0500</pubDate>
</item>
<item>
@@ -1780,7 +1780,7 @@ optimizations into the TVM stack.</p>
</description>
<link>https://tvm.apache.org/2018/03/12/webgl</link>
<guid>https://tvm.apache.org/2018/03/12/webgl</guid>
- <pubDate>Mon, 12 Mar 2018 00:00:00 -0700</pubDate>
+ <pubDate>Mon, 12 Mar 2018 00:00:00 -0500</pubDate>
</item>
<item>
@@ -2354,7 +2354,7 @@ advice and <a href="https://github.com/yzhliu">Yizhi Liu</a&g
</description>
<link>https://tvm.apache.org/2018/01/16/opt-mali-gpu</link>
<guid>https://tvm.apache.org/2018/01/16/opt-mali-gpu</guid>
- <pubDate>Tue, 16 Jan 2018 00:00:00 -0800</pubDate>
+ <pubDate>Tue, 16 Jan 2018 00:00:00 -0600</pubDate>
</item>
<item>
@@ -2582,7 +2582,7 @@ make jvminstall
</description>
<link>https://tvm.apache.org/2017/11/08/android-rpc-introduction</link>
<guid>https://tvm.apache.org/2017/11/08/android-rpc-introduction</guid>
- <pubDate>Wed, 08 Nov 2017 00:00:00 -0800</pubDate>
+ <pubDate>Wed, 08 Nov 2017 00:00:00 -0600</pubDate>
</item>
<item>
@@ -2808,7 +2808,7 @@ BB0_6:
</description>
<link>https://tvm.apache.org/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm</link>
<guid>https://tvm.apache.org/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm</guid>
- <pubDate>Mon, 30 Oct 2017 00:00:00 -0700</pubDate>
+ <pubDate>Mon, 30 Oct 2017 00:00:00 -0500</pubDate>
</item>
<item>
@@ -2891,7 +2891,7 @@ We also learns from Halide when implementing the lowering pipeline in TVM.</l
</description>
<link>https://tvm.apache.org/2017/10/06/nnvm-compiler-announcement</link>
<guid>https://tvm.apache.org/2017/10/06/nnvm-compiler-announcement</guid>
- <pubDate>Fri, 06 Oct 2017 08:30:00 -0700</pubDate>
+ <pubDate>Fri, 06 Oct 2017 10:30:00 -0500</pubDate>
</item>
<item>
@@ -3449,7 +3449,7 @@ Below is the result with Input = [1, 256, 96, 96], Filter = [256, 1, 3, 3], stri
<h2 id="show-me-the-code">Show me the code</h2>
<ul>
- <li>Declare: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/convolution.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/convolution.py</a></li>
+ <li>Declare: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/depthwise_conv2d.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/depthwise_conv2d.py</a></li>
<li>Schedule: <a href="https://github.com/dmlc/tvm/blob/master/topi/python/topi/cuda/depthwise_conv2d.py">https://github.com/dmlc/tvm/blob/master/topi/python/topi/cuda/depthwise_conv2d.py</a></li>
<li>Test: <a href="https://github.com/dmlc/tvm/blob/master/topi/recipe/conv/depthwise_conv2d_test.py">https://github.com/dmlc/tvm/blob/master/topi/recipe/conv/depthwise_conv2d_test.py</a></li>
</ul>
@@ -3470,7 +3470,7 @@ He is experiencing a gap year after obtaining a bachelor’s degree in electrica
</description>
<link>https://tvm.apache.org/2017/08/22/Optimize-Deep-Learning-GPU-Operators-with-TVM-A-Depthwise-Convolution-Example</link>
<guid>https://tvm.apache.org/2017/08/22/Optimize-Deep-Learning-GPU-Operators-with-TVM-A-Depthwise-Convolution-Example</guid>
- <pubDate>Tue, 22 Aug 2017 00:00:00 -0700</pubDate>
+ <pubDate>Tue, 22 Aug 2017 00:00:00 -0500</pubDate>
</item>
<item>
@@ -3598,7 +3598,7 @@ that adopts the standard, such as MXNet, PyTorch, Caffe2 and tiny-dnn.</li>
</description>
<link>https://tvm.apache.org/2017/08/17/tvm-release-announcement</link>
<guid>https://tvm.apache.org/2017/08/17/tvm-release-announcement</guid>
- <pubDate>Thu, 17 Aug 2017 12:00:00 -0700</pubDate>
+ <pubDate>Thu, 17 Aug 2017 14:00:00 -0500</pubDate>
</item>