This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new ecf65d0  Publish triggered by CI
ecf65d0 is described below

commit ecf65d0bca438dc3eae10fafa5011ff53032a78b
Author: mxnet-ci <mxnet-ci>
AuthorDate: Thu Jul 2 18:41:10 2020 +0000

    Publish triggered by CI
---
 api/python/docs/tutorials/packages/gluon/image/info_gan.html |  4 ++--
 .../packages/gluon/training/normalization/index.html         | 12 ++++++------
 api/python/docs/tutorials/performance/backend/profiler.html  |  8 ++++----
 date.txt                                                     |  1 -
 feed.xml                                                     |  2 +-
 5 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/api/python/docs/tutorials/packages/gluon/image/info_gan.html 
b/api/python/docs/tutorials/packages/gluon/image/info_gan.html
index f8fe089..fd806fb 100644
--- a/api/python/docs/tutorials/packages/gluon/image/info_gan.html
+++ b/api/python/docs/tutorials/packages/gluon/image/info_gan.html
@@ -908,9 +908,9 @@ notebook uses the DCGAN example from the <a 
class="reference external" href="htt
 </pre></div>
 </div>
 <p>There are two differences between InfoGAN and DCGAN: the extra latent code 
and the Q network to estimate the code. The latent code is part of the 
Generator input and it contains multiple variables (continuous, categorical) 
that can represent different distributions. In order to make sure that the 
Generator uses the latent code, mutual information is introduced into the GAN 
loss term. Mutual information measures how much knowing Y reveals about X, and 
vice versa. It is defined as:</p>
-<p><img alt="gif" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/info_gan/loss.gif";
 /></p>
+<p><img alt="gif" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/info_gan/entropy.gif";
 /></p>
 <p>The InfoGAN loss is:</p>
-<p><img alt="gif" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/info_gan/loss.gif";
 /></p>
+<p><img alt="image1" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/info_gan/loss.gif";
 /></p>
 <p>where <code class="docutils literal notranslate"><span 
class="pre">V(D,G)</span></code> is the GAN loss and the mutual information 
<code class="docutils literal notranslate"><span class="pre">I(c,</span> <span 
class="pre">G(z,</span> <span class="pre">c))</span></code> goes in as 
regularization. The goal is to reach high mutual information, in order to learn 
meaningful codes for the data.</p>
 <p>Define the loss functions. <code class="docutils literal notranslate"><span 
class="pre">SoftmaxCrossEntropyLoss</span></code> for the categorical code, 
<code class="docutils literal notranslate"><span 
class="pre">L2Loss</span></code> for the continious code and <code 
class="docutils literal notranslate"><span 
class="pre">SigmoidBinaryCrossEntropyLoss</span></code> for the normal GAN 
loss.</p>
 <div class="highlight-python notranslate"><div 
class="highlight"><pre><span></span><span class="n">loss1</span> <span 
class="o">=</span> <span class="n">gluon</span><span class="o">.</span><span 
class="n">loss</span><span class="o">.</span><span 
class="n">SigmoidBinaryCrossEntropyLoss</span><span class="p">()</span>
diff --git 
a/api/python/docs/tutorials/packages/gluon/training/normalization/index.html 
b/api/python/docs/tutorials/packages/gluon/training/normalization/index.html
index 9fe193f..33532e1 100644
--- a/api/python/docs/tutorials/packages/gluon/training/normalization/index.html
+++ b/api/python/docs/tutorials/packages/gluon/training/normalization/index.html
@@ -772,8 +772,8 @@ shifting certain values towards a distribution with a mean 
of 0 (i.e. zero-cent
 </tr>
 </thead>
 <tbody>
-<tr class="row-even"><td><p><img alt="image0" 
src="tutorials/packages/gluon/training/normalization/imgs/NCHW_IN.png" 
/></p></td>
-<td><p><img alt="image1" 
src="tutorials/packages/gluon/training/normalization/imgs/NTC_IN.png" 
/></p></td>
+<tr class="row-even"><td><p><img alt="image1" 
src="tutorials/packages/gluon/training/normalization/imgs/NCHW_IN.png" 
/></p></td>
+<td><p><img alt="image2" 
src="tutorials/packages/gluon/training/normalization/imgs/NTC_IN.png" 
/></p></td>
 </tr>
 <tr class="row-odd"><td><p>(e.g. batch of images) using the default of <code 
class="docutils literal notranslate"><span 
class="pre">axis=1</span></code></p></td>
 <td><p>(e.g. batch of sequences) overriding the default with <code 
class="docutils literal notranslate"><span class="pre">axis=2</span></code> (or 
<code class="docutils literal notranslate"><span 
class="pre">axis=-1</span></code>)</p></td>
@@ -880,8 +880,8 @@ to adjust to shifts in the input distribution. Using the 
same batch another 100
 </tr>
 </thead>
 <tbody>
-<tr class="row-even"><td><p><img alt="image0" 
src="tutorials/packages/gluon/training/normalization/imgs/NCHW_IN.png" 
/></p></td>
-<td><p><img alt="image1" 
src="tutorials/packages/gluon/training/normalization/imgs/NTC_IN.png" 
/></p></td>
+<tr class="row-even"><td><p><img alt="image1" 
src="tutorials/packages/gluon/training/normalization/imgs/NCHW_IN.png" 
/></p></td>
+<td><p><img alt="image2" 
src="tutorials/packages/gluon/training/normalization/imgs/NTC_IN.png" 
/></p></td>
 </tr>
 <tr class="row-odd"><td><p>(e.g. batch of images) overriding the default with 
<code class="docutils literal notranslate"><span 
class="pre">axis=1</span></code></p></td>
 <td><p>(e.g. batch of sequences) using the default of <code class="docutils 
literal notranslate"><span class="pre">axis=-1</span></code></p></td>
@@ -922,8 +922,8 @@ to adjust to shifts in the input distribution. Using the 
same batch another 100
 </tr>
 </thead>
 <tbody>
-<tr class="row-even"><td><p><img alt="image0" 
src="tutorials/packages/gluon/training/normalization/imgs/NCHW_IN.png" 
/></p></td>
-<td><p><img alt="image1" 
src="tutorials/packages/gluon/training/normalization/imgs/NTC_IN.png" 
/></p></td>
+<tr class="row-even"><td><p><img alt="image1" 
src="tutorials/packages/gluon/training/normalization/imgs/NCHW_IN.png" 
/></p></td>
+<td><p><img alt="image2" 
src="tutorials/packages/gluon/training/normalization/imgs/NTC_IN.png" 
/></p></td>
 </tr>
 <tr class="row-odd"><td><p>(e.g. batch of images) using the default <code 
class="docutils literal notranslate"><span 
class="pre">axis=1</span></code></p></td>
 <td><p>(e.g. batch of sequences) overriding the default with <code 
class="docutils literal notranslate"><span class="pre">axis=2</span></code> (or 
<code class="docutils literal notranslate"><span 
class="pre">axis=-1</span></code> equivalently)</p></td>
diff --git a/api/python/docs/tutorials/performance/backend/profiler.html 
b/api/python/docs/tutorials/performance/backend/profiler.html
index d76a077..bf772ab 100644
--- a/api/python/docs/tutorials/performance/backend/profiler.html
+++ b/api/python/docs/tutorials/performance/backend/profiler.html
@@ -886,7 +886,7 @@ profiling jointly with vendor tools is recommended.</p>
 <p><code class="docutils literal notranslate"><span 
class="pre">dump()</span></code> creates a <code class="docutils literal 
notranslate"><span class="pre">json</span></code> file which can be viewed 
using a trace consumer like <code class="docutils literal notranslate"><span 
class="pre">chrome://tracing</span></code> in the Chrome browser. Here is a 
snapshot that shows the output of the profiling we did above. Note that setting 
the <code class="docutils literal notranslate"><span class= [...]
 <p><img alt="Tracing Screenshot" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_output_chrome.png";
 /></p>
 <p>Let’s zoom in to check the time taken by operators.</p>
-<p><img alt="Operator profiling" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_winograd.png";
 /></p>
+<p><img alt="Operator profiling" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_nvprof.png";
 /></p>
 <p>The above picture visualizes the sequence in which the operators were 
executed and the time taken by each operator.</p>
 </div>
 </div>
@@ -993,11 +993,11 @@ scripts running MXNet. And you can use these in 
conjunction with the MXNet Profi
 <p><code class="docutils literal notranslate"><span 
class="pre">==11588==</span> <span class="pre">Generated</span> <span 
class="pre">result</span> <span class="pre">file:</span> <span 
class="pre">/home/user/Development/incubator-mxnet/ci/my_profile.nvvp</span></code></p>
 <p>We specified an output file called <code class="docutils literal 
notranslate"><span class="pre">my_profile.nvvp</span></code> and this will be 
annotated with NVTX ranges (for MXNet operations) that will be displayed 
alongside the standard NVProf timeline. This can be very useful when you’re 
trying to find patterns between operators run by MXNet, and their associated 
CUDA kernel calls.</p>
 <p>You can open this file in Visual Profiler to visualize the results.</p>
-<p><img alt="Operator profiling" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_winograd.png";
 /></p>
+<p><img alt="Operator profiling" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_nvprof.png";
 /></p>
 <p>At the top of the plot we have CPU tasks such as driver operations, memory 
copy calls, MXNet engine operator invocations, and imperative MXNet API calls. 
Below we see the kernels active on the GPU during the same time period.</p>
-<p><img alt="Operator profiling" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_winograd.png";
 /></p>
+<p><img alt="image1" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_nvprof_zoomed.png";
 /></p>
 <p>Zooming in on a backwards convolution operator we can see that it is in 
fact made up of a number of different GPU kernel calls, including a cuDNN 
Winograd convolution call and a fast Fourier transform call.</p>
-<p><img alt="Operator profiling" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_winograd.png";
 /></p>
+<p><img alt="image2" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_winograd.png";
 /></p>
 <p>Selecting any of these kernel calls (the winograd convolution call shown 
here) will get you some interesting GPU performance information such as 
occupancy rates (vs theoretical), shared memory usage and execution 
duration.</p>
 <p>Nsight Compute is available in the CUDA 10 toolkit, but can be used to profile 
code running on CUDA 9. You don’t get a timeline view, but you get many low-level 
statistics about each individual kernel executed and can compare multiple runs 
(i.e. create a baseline).</p>
 <p><img alt="Nsight Compute" 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profile_nsight_compute.png";
 /></p>
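The profiler hunk above notes that <code>dump()</code> writes a JSON file that <code>chrome://tracing</code> can display. For orientation, here is a stdlib-only sketch of the Chrome trace-event format such a file follows; the operator names and timings below are invented, not real profiler output:

```python
import json

# Each event: name, phase ("X" = complete event), start time and duration
# in microseconds, plus process/thread ids that become the trace tracks.
events = [
    {"name": "Convolution", "ph": "X", "ts": 0,    "dur": 1200, "pid": 0, "tid": 0},
    {"name": "Activation",  "ph": "X", "ts": 1200, "dur": 300,  "pid": 0, "tid": 0},
]
trace = {"traceEvents": events, "displayTimeUnit": "ms"}

with open("profile_output.json", "w") as f:
    json.dump(trace, f)
```

Loading such a file in `chrome://tracing` (or Perfetto) draws one bar per event on a per-thread timeline, which is exactly the view shown in the screenshots above.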
diff --git a/date.txt b/date.txt
deleted file mode 100644
index 04e0f7d..0000000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Thu Jul  2 06:44:32 UTC 2020
diff --git a/feed.xml b/feed.xml
index 487888c..a453885 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8"?><feed 
xmlns="http://www.w3.org/2005/Atom"; ><generator uri="https://jekyllrb.com/"; 
version="4.0.0">Jekyll</generator><link 
href="https://mxnet.apache.org/feed.xml"; rel="self" type="application/atom+xml" 
/><link href="https://mxnet.apache.org/"; rel="alternate" type="text/html" 
/><updated>2020-07-02T06:32:45+00:00</updated><id>https://mxnet.apache.org/feed.xml</id><title
 type="html">Apache MXNet</title><subtitle>A flexible and efficient library for 
deep [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8"?><feed 
xmlns="http://www.w3.org/2005/Atom"; ><generator uri="https://jekyllrb.com/"; 
version="4.0.0">Jekyll</generator><link 
href="https://mxnet.apache.org/feed.xml"; rel="self" type="application/atom+xml" 
/><link href="https://mxnet.apache.org/"; rel="alternate" type="text/html" 
/><updated>2020-07-02T18:30:41+00:00</updated><id>https://mxnet.apache.org/feed.xml</id><title
 type="html">Apache MXNet</title><subtitle>A flexible and efficient library for 
deep [...]
\ No newline at end of file
