This is an automated email from the ASF dual-hosted git repository.

lxn2 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new d777c9c  Fix broken links
d777c9c is described below

commit d777c9c8cb629d0e93ba61c7fbbaf73a5f5fc6ec
Author: Wang <wa...@9801a7a9c287.ant.amazon.com>
AuthorDate: Tue Aug 15 11:06:05 2017 -0700

    Fix broken links
---
 get_started/windows_setup.html                 |  2 +-
 model_zoo/index.html                           | 10 +++++-----
 versions/master/get_started/windows_setup.html |  2 +-
 versions/master/model_zoo/index.html           | 10 +++++-----
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/get_started/windows_setup.html b/get_started/windows_setup.html
index 7645a4e..ff5d687 100644
--- a/get_started/windows_setup.html
+++ b/get_started/windows_setup.html
@@ -259,7 +259,7 @@ This produces a library called <code class="docutils 
literal"><span class="pre">
 <p>To build and install MXNet yourself, you need to install the following 
dependencies:</p>
 <ol class="simple">
 <li>If <a class="reference external" 
href="https://www.visualstudio.com/downloads/";>Microsoft Visual Studio 2013</a> 
is not already installed, download and install it. You can download and install 
the free community edition.</li>
-<li>Install <a class="reference external" 
href="https://www.microsoft.com/en-us/download/details.aspx?id=41151";>Visual 
C++ Compiler Nov 2013 CTP</a>.</li>
+<li>Install <a class="reference external" 
href="http://landinghub.visualstudio.com/visual-cpp-build-tools";>Visual C++ 
Compiler</a>.</li>
 <li>Back up all of the files in the <code class="docutils literal"><span 
class="pre">C:\Program</span> <span class="pre">Files</span> <span 
class="pre">(x86)\Microsoft</span> <span class="pre">Visual</span> <span 
class="pre">Studio</span> <span class="pre">12.0\VC</span></code> folder to a 
different location.</li>
 <li>Copy all of the files in the <code class="docutils literal"><span 
class="pre">C:\Program</span> <span class="pre">Files</span> <span 
class="pre">(x86)\Microsoft</span> <span class="pre">Visual</span> <span 
class="pre">C++</span> <span class="pre">Compiler</span> <span 
class="pre">Nov</span> <span class="pre">2013</span> <span 
class="pre">CTP</span></code> folder (or the folder where you extracted the zip 
archive) to the <code class="docutils literal"><span 
class="pre">C:\Program</spa [...]
 <li>Download and install <a class="reference external" 
href="http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download";>OpenCV</a>.</li>
diff --git a/model_zoo/index.html b/model_zoo/index.html
index 5b72005..7b69d56 100644
--- a/model_zoo/index.html
+++ b/model_zoo/index.html
@@ -269,7 +269,7 @@ ongoing project to collect complete models, with python 
scripts, pre-trained wei
 <li><a class="reference external" 
href="http://places2.csail.mit.edu/download.html";>Places2</a>: There are 1.6 
million train images from 365 scene categories in the Places365-Standard, which 
are used to train the Places365 CNNs. There are 50 images per category in the 
validation set and 900 images per category in the testing set. Compared to the 
train set of Places365-Standard, the train set of Places365-Challenge has 6.2 
million extra images, leading to totally 8 million train images fo [...]
 <li><a class="reference external" 
href="https://aws.amazon.com/public-datasets/multimedia-commons/";>Multimedia 
Commons</a>: YFCC100M (99.2 million images and 0.8 million videos from Flickr) 
and supplemental material (pre-extracted features, additional annotations).</li>
 </ul>
-<p>For instructions on using these models, see <a class="reference external" 
href="https://mxnet.incubator.apache.org/tutorials/python/predict_imagenet.html">the
 Python tutorial on using pre-trained ImageNet models</a>.</p>
+<p>For instructions on using these models, see <a class="reference external" 
href="https://mxnet.incubator.apache.org/tutorials/python/predict_image.html">the
 Python tutorial on using pre-trained ImageNet models</a>.</p>
 <table border="1" class="docutils">
 <colgroup>
 <col width="20%"></col>
@@ -364,12 +364,12 @@ ongoing project to collect complete models, with python 
scripts, pre-trained wei
 </div>
 <div class="section" id="recurrent-neural-networks-rnns-including-lstms">
 <span id="recurrent-neural-networks-rnns-including-lstms"></span><h2>Recurrent 
Neural Networks (RNNs) including LSTMs<a class="headerlink" 
href="#recurrent-neural-networks-rnns-including-lstms" title="Permalink to this 
headline">¶</a></h2>
-<p>MXNet supports many types of recurrent neural networks (RNNs), including 
Long Short-Term Memory (<a class="reference external" 
href="http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf">LSTM</a>)
+<p>MXNet supports many types of recurrent neural networks (RNNs), including 
Long Short-Term Memory (<a class="reference external" 
href="http://www.bioinf.jku.at/publications/older/2604.pdf">LSTM</a>)
 and Gated Recurrent Unit (GRU) networks. Some available datasets include:</p>
 <ul class="simple">
-<li><a class="reference external" 
href="https://www.cis.upenn.edu/~treebank/";>Penn Treebank (PTB)</a>: Text 
corpus with ~1 million words. Vocabulary is limited to 10,000 words. The task 
is predicting downstream words/characters.</li>
+<li><a class="reference external" 
href="https://catalog.ldc.upenn.edu/LDC95T7";>Penn Treebank (PTB)</a>: Text 
corpus with ~1 million words. Vocabulary is limited to 10,000 words. The task 
is predicting downstream words/characters.</li>
 <li><a class="reference external" 
href="http://cs.stanford.edu/people/karpathy/char-rnn/";>Shakespeare</a>: 
Complete text from Shakespeare’s works.</li>
-<li><a class="reference external" 
href="https://s3.amazonaws.com/text-datasets";>IMDB reviews</a>: 25,000 movie 
reviews, labeled as positive or negative</li>
+<li><a class="reference external" 
href="https://getsatisfaction.com/imdb/topics/imdb-data-now-available-in-amazon-s3";>IMDB
 reviews</a>: 25,000 movie reviews, labeled as positive or negative</li>
 <li><a class="reference external" 
href="https://research.facebook.com/researchers/1543934539189348";>Facebook 
bAbI</a>: As a set of 20 question &amp; answer tasks, each with 1,000 training 
examples.</li>
 <li><a class="reference external" href="http://mscoco.org/";>Flickr8k, 
COCO</a>: Images with associated caption (sentences). Flickr8k consists of 
8,092 images captioned by AmazonTurkers with ~40,000 captions. COCO has 328,000 
images, each with 5 captions. The COCO images also come with labeled objects 
using segmentation algorithms.</li>
 </ul>
@@ -393,7 +393,7 @@ and Gated Recurrent Units (GRU) networks. Some available 
datasets include:</p>
 <tr class="row-even"><td>LSTM - Image Captioning</td>
 <td>Flickr8k, MS COCO</td>
 <td> </td>
-<td><a class="reference external" 
href="https://arxiv.org/pdf/%201411.4555v2.pdf";>Vinyals et al.., 2015</a></td>
+<td><a class="reference external" 
href="https://arxiv.org/pdf/1411.4555.pdf";>Vinyals et al.., 2015</a></td>
 <td>@...</td>
 </tr>
 <tr class="row-odd"><td>LSTM - Q&amp;A System</td>
diff --git a/versions/master/get_started/windows_setup.html 
b/versions/master/get_started/windows_setup.html
index f2f7c3e..bc91c0d 100644
--- a/versions/master/get_started/windows_setup.html
+++ b/versions/master/get_started/windows_setup.html
@@ -257,7 +257,7 @@ This produces a library called <code class="docutils 
literal"><span class="pre">
 <p>To build and install MXNet yourself, you need to install the following 
dependencies:</p>
 <ol class="simple">
 <li>If <a class="reference external" 
href="https://www.visualstudio.com/downloads/";>Microsoft Visual Studio 2013</a> 
is not already installed, download and install it. You can download and install 
the free community edition.</li>
-<li>Install <a class="reference external" 
href="https://www.microsoft.com/en-us/download/details.aspx?id=41151";>Visual 
C++ Compiler Nov 2013 CTP</a>.</li>
+<li>Install <a class="reference external" 
href="http://landinghub.visualstudio.com/visual-cpp-build-tools";>Visual C++ 
Compiler Nov 2013 CTP</a>.</li>
 <li>Back up all of the files in the <code class="docutils literal"><span 
class="pre">C:\Program</span> <span class="pre">Files</span> <span 
class="pre">(x86)\Microsoft</span> <span class="pre">Visual</span> <span 
class="pre">Studio</span> <span class="pre">12.0\VC</span></code> folder to a 
different location.</li>
 <li>Copy all of the files in the <code class="docutils literal"><span 
class="pre">C:\Program</span> <span class="pre">Files</span> <span 
class="pre">(x86)\Microsoft</span> <span class="pre">Visual</span> <span 
class="pre">C++</span> <span class="pre">Compiler</span> <span 
class="pre">Nov</span> <span class="pre">2013</span> <span 
class="pre">CTP</span></code> folder (or the folder where you extracted the zip 
archive) to the <code class="docutils literal"><span 
class="pre">C:\Program</spa [...]
 <li>Download and install <a class="reference external" 
href="http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download";>OpenCV</a>.</li>
diff --git a/versions/master/model_zoo/index.html 
b/versions/master/model_zoo/index.html
index 69eca73..3f8ca32 100644
--- a/versions/master/model_zoo/index.html
+++ b/versions/master/model_zoo/index.html
@@ -267,7 +267,7 @@ ongoing project to collect complete models, with python 
scripts, pre-trained wei
 <li><a class="reference external" 
href="http://places2.csail.mit.edu/download.html";>Places2</a>: There are 1.6 
million train images from 365 scene categories in the Places365-Standard, which 
are used to train the Places365 CNNs. There are 50 images per category in the 
validation set and 900 images per category in the testing set. Compared to the 
train set of Places365-Standard, the train set of Places365-Challenge has 6.2 
million extra images, leading to totally 8 million train images fo [...]
 <li><a class="reference external" 
href="https://aws.amazon.com/public-datasets/multimedia-commons/";>Multimedia 
Commons</a>: YFCC100M (99.2 million images and 0.8 million videos from Flickr) 
and supplemental material (pre-extracted features, additional annotations).</li>
 </ul>
-<p>For instructions on using these models, see <a class="reference external" 
href="https://mxnet.incubator.apache.org/versions/master/tutorials/python/predict_imagenet.html">the
 Python tutorial on using pre-trained ImageNet models</a>.</p>
+<p>For instructions on using these models, see <a class="reference external" 
href="https://mxnet.incubator.apache.org/tutorials/python/predict_image.html">the
 Python tutorial on using pre-trained ImageNet models</a>.</p>
 <table border="1" class="docutils">
 <colgroup>
 <col width="20%"/>
@@ -362,12 +362,12 @@ ongoing project to collect complete models, with python 
scripts, pre-trained wei
 </div>
 <div class="section" id="recurrent-neural-networks-rnns-including-lstms">
 <span id="recurrent-neural-networks-rnns-including-lstms"></span><h2>Recurrent 
Neural Networks (RNNs) including LSTMs<a class="headerlink" 
href="#recurrent-neural-networks-rnns-including-lstms" title="Permalink to this 
headline">¶</a></h2>
-<p>MXNet supports many types of recurrent neural networks (RNNs), including 
Long Short-Term Memory (<a class="reference external" 
href="http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf">LSTM</a>)
+<p>MXNet supports many types of recurrent neural networks (RNNs), including 
Long Short-Term Memory (<a class="reference external" 
href="http://www.bioinf.jku.at/publications/older/2604.pdf">LSTM</a>)
 and Gated Recurrent Unit (GRU) networks. Some available datasets include:</p>
 <ul class="simple">
-<li><a class="reference external" 
href="https://www.cis.upenn.edu/~treebank/";>Penn Treebank (PTB)</a>: Text 
corpus with ~1 million words. Vocabulary is limited to 10,000 words. The task 
is predicting downstream words/characters.</li>
+<li><a class="reference external" 
href="https://catalog.ldc.upenn.edu/LDC95T7";>Penn Treebank (PTB)</a>: Text 
corpus with ~1 million words. Vocabulary is limited to 10,000 words. The task 
is predicting downstream words/characters.</li>
 <li><a class="reference external" 
href="http://cs.stanford.edu/people/karpathy/char-rnn/";>Shakespeare</a>: 
Complete text from Shakespeare’s works.</li>
-<li><a class="reference external" 
href="https://s3.amazonaws.com/text-datasets";>IMDB reviews</a>: 25,000 movie 
reviews, labeled as positive or negative</li>
+<li><a class="reference external" 
href="https://getsatisfaction.com/imdb/topics/imdb-data-now-available-in-amazon-s3";>IMDB
 reviews</a>: 25,000 movie reviews, labeled as positive or negative</li>
 <li><a class="reference external" 
href="https://research.facebook.com/researchers/1543934539189348";>Facebook 
bAbI</a>: As a set of 20 question &amp; answer tasks, each with 1,000 training 
examples.</li>
 <li><a class="reference external" href="http://mscoco.org/";>Flickr8k, 
COCO</a>: Images with associated caption (sentences). Flickr8k consists of 
8,092 images captioned by AmazonTurkers with ~40,000 captions. COCO has 328,000 
images, each with 5 captions. The COCO images also come with labeled objects 
using segmentation algorithms.</li>
 </ul>
@@ -391,7 +391,7 @@ and Gated Recurrent Units (GRU) networks. Some available 
datasets include:</p>
 <tr class="row-even"><td>LSTM - Image Captioning</td>
 <td>Flickr8k, MS COCO</td>
 <td> </td>
-<td><a class="reference external" 
href="https://arxiv.org/pdf/%201411.4555v2.pdf";>Vinyals et al.., 2015</a></td>
+<td><a class="reference external" 
href="https://arxiv.org/pdf/1411.4555.pdf";>Vinyals et al.., 2015</a></td>
 <td>@...</td>
 </tr>
 <tr class="row-odd"><td>LSTM - Q&amp;A System</td>

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" <comm...@mxnet.apache.org>'].
