Author: buildbot
Date: Wed Sep 2 12:26:55 2015
New Revision: 964029
Log:
Staging update by buildbot for singa
Modified:
websites/staging/singa/trunk/content/ (props changed)
websites/staging/singa/trunk/content/docs/cnn.html
websites/staging/singa/trunk/content/docs/data.html
websites/staging/singa/trunk/content/docs/layer.html
websites/staging/singa/trunk/content/docs/mlp.html
websites/staging/singa/trunk/content/docs/rbm.html
websites/staging/singa/trunk/content/docs/rnn.html
Propchange: websites/staging/singa/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Wed Sep 2 12:26:55 2015
@@ -1 +1 @@
-1700761
+1700786
Modified: websites/staging/singa/trunk/content/docs/cnn.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/cnn.html (original)
+++ websites/staging/singa/trunk/content/docs/cnn.html Wed Sep 2 12:26:55 2015
@@ -475,7 +475,7 @@ E0817 06:58:12.518497 33849 trainer.cc:3
<p>Next we follow the guides on the <a class="externalLink"
href="http://singa.incubator.apache.org/docs/neural-net">neural net page</a>
and the <a class="externalLink"
href="http://singa.incubator.apache.org/docs/layer">layer page</a> to write the
neural net configuration.</p>
<div style="text-align: center">
-<img src="http://singa.incubator.apache.org/assets/image/cnn-example.png"
style="width: 200px" alt="" /> <br />
+<img src="http://singa.incubator.apache.org/images/cnn-example.png"
style="width: 200px" alt="" /> <br />
<b>Figure 1 - Net structure of the CNN example.</b></img>
</div>
Modified: websites/staging/singa/trunk/content/docs/data.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/data.html (original)
+++ websites/staging/singa/trunk/content/docs/data.html Wed Sep 2 12:26:55 2015
@@ -450,7 +450,7 @@ extend Record {
</pre></div></div>
<p>Please refer to the <a class="externalLink"
href="https://developers.google.com/protocol-buffers/docs/reference/cpp-generated?hl=en#extension">Tutorial</a>
for details on extending protocol messages.</p>
<p>The extended <tt>Record</tt> will be parsed by a parser layer to extract
features (e.g., label or pixel values). Users need to write their own <a
class="externalLink"
href="http://singa.incubator.apache.org/docs/layer#parser-layers">parser
layers</a> to parse the extended <tt>Record</tt>.</p>
-<p>{% comment %} <i>Note</i></p>
+<p><i>Note</i></p>
<p>There is an alternative way to define the proto extension. With this
approach, be careful about the scope of the fields and how to access them,
which differ from the above.</p>
<div class="source">
@@ -462,8 +462,7 @@ extend Record {
optional string userVAR2 = 2; // unique field id
...
}
-</pre></div></div>
-<p>{% endcomment %}</p></div>
+</pre></div></div></div>
<div class="section">
<h3><a name="DataShard_creation"></a>DataShard creation</h3>
<p>Users write code to convert their data into <tt>Record</tt>s and insert
them into shards following the steps below.</p>
Modified: websites/staging/singa/trunk/content/docs/layer.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/layer.html (original)
+++ websites/staging/singa/trunk/content/docs/layer.html Wed Sep 2 12:26:55 2015
@@ -551,8 +551,7 @@ rgbimage_conf {
mirror: bool # mirror the image by setting image[i,j]=image[i,len-j]
meanfile: "Image_Mean_File_Path"
}
-</pre></div></div>
-<p>{% comment %}</p></div></div>
+</pre></div></div></div></div>
<div class="section">
<h4><a name="PrefetchLayer"></a>PrefetchLayer</h4>
<p><a class="externalLink"
href="http://singa.incubator.apache.org/api/classsinga_1_1PrefetchLayer.html">PrefetchLayer</a>
embeds data layers and parser layers to prefetch data. It launches a thread
that calls the embedded data and parser layers to load and extract features,
so that the I/O task and the computation task can run simultaneously. One
example PrefetchLayer configuration is,</p>
@@ -580,8 +579,7 @@ rgbimage_conf {
exclude:kTest
}
</pre></div></div>
-<p>The layers on top of the PrefetchLayer should use the name of the embedded
layers as their source layers. For example, the “rgb” and
“label” should be configured to the <tt>srclayers</tt> of other
layers.</p>
-<p>{% endcomment %}</p></div>
+<p>The layers on top of the PrefetchLayer should use the names of the embedded
layers as their source layers. For example, the “rgb” and
“label” layers should be configured as the <tt>srclayers</tt> of other
layers.</p></div>
<div class="section">
<h4><a name="Neuron_Layers"></a>Neuron Layers</h4>
<p>Neuron layers conduct feature transformations.</p>
@@ -665,7 +663,7 @@ lrn_conf {
beta: float // exponential number
}
</pre></div></div>
-<p><tt>local_size</tt> specifies the quantity of the adjoining channels which
will be summed up. {% comment %} For <tt>WITHIN_CHANNEL</tt>, it means the
side length of the space region which will be summed up. {% endcomment
%}</p></div></div>
+<p><tt>local_size</tt> specifies the number of adjacent channels to sum over.
For <tt>WITHIN_CHANNEL</tt>, it is the side length of the square spatial region
to sum over.</p></div></div>
<div class="section">
<h4><a name="Loss_Layers"></a>Loss Layers</h4>
<p>Loss layers measure the objective training loss.</p>
Modified: websites/staging/singa/trunk/content/docs/mlp.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/mlp.html (original)
+++ websites/staging/singa/trunk/content/docs/mlp.html Wed Sep 2 12:26:55 2015
@@ -478,7 +478,7 @@ E0817 07:19:57.327133 34073 trainer.cc:3
<h3><a name="Neural_net"></a>Neural net</h3>
<div style="text-align: center">
-<img src="http://singa.incubator.apache.org/assets/image/mlp-example.png"
style="width: 230px" alt="" />
+<img src="http://singa.incubator.apache.org/images/mlp-example.png"
style="width: 230px" alt="" />
<br /><b>Figure 1 - Net structure of the MLP example. </b></img>
</div>
<p>Figure 1 shows the structure of the simple MLP model, which is constructed
following <a class="externalLink"
href="http://arxiv.org/abs/1003.0358">Ciresan’s paper</a>. The dashed
circle contains two layers which represent one feature transformation stage.
There are 6 such stages in total. The sizes of the <a class="externalLink"
href="http://singa.incubator.apache.org/docs/layer#innerproductlayer">InnerProductLayer</a>s
in these circles decrease from
2500->2000->1500->1000->500->10.</p>
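<p>One of these stages could be configured roughly as below. This is a sketch
assuming the <tt>kInnerProduct</tt> and <tt>kSTanh</tt> layer types and the
<tt>innerproduct_conf</tt> field from the layer page; the layer and parameter
names are illustrative, not copied from the example&#8217;s job
configuration.</p>
<div class="source">
<div class="source"><pre>layer {
  name: "fc1"
  type: kInnerProduct
  srclayers: "mnist"
  innerproduct_conf {
    num_output: 2500  // first stage; later stages use 2000, 1500, 1000, 500, 10
  }
  param { name: "w1" }
  param { name: "b1" }
}
layer {
  name: "tanh1"       // feature transformation of the stage
  type: kSTanh
  srclayers: "fc1"
}
</pre></div></div>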
Modified: websites/staging/singa/trunk/content/docs/rbm.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/rbm.html (original)
+++ websites/staging/singa/trunk/content/docs/rbm.html Wed Sep 2 12:26:55 2015
@@ -456,7 +456,7 @@ $ ./bin/singa-run.sh -conf examples/rbm/
<h2><a name="Training_details"></a>Training details</h2>
<div class="section">
<h3><a name="RBM0"></a>RBM0</h3>
-<p><img src="http://singa.incubator.apache.org/assets/image/RBM0_new.PNG"
align="center" width="200px" alt="" /> <span><b>Figure 1 - RBM0.</b></span></p>
+<p><img src="http://singa.incubator.apache.org/images/RBM0_new.PNG"
align="center" width="200px" alt="" /> <span><b>Figure 1 - RBM0.</b></span></p>
<p>The neural net structure for training RBM0 is shown in Figure 1. The data
layer and parser layer provide features for training RBM0. The visible layer
of RBM0 (connected with the parser layer) accepts the image features (784
dimensions). The hidden layer is set to have 1000 neurons (units). These two
layers are configured as,</p>
<div class="source">
@@ -530,7 +530,7 @@ updater{
<p>Then SINGA will <a class="externalLink"
href="http://singa.incubator.apache.org/docs/checkpoint">checkpoint the
parameters</a> into <i>SINGA_ROOT/rbm0/</i>.</p></div>
<div class="section">
<h3><a name="RBM1"></a>RBM1</h3>
-<p><img src="http://singa.incubator.apache.org/assets/image/RBM1_new.PNG"
align="center" width="200px" alt="" /> <span><b>Figure 2 - RBM1.</b></span></p>
+<p><img src="http://singa.incubator.apache.org/images/RBM1_new.PNG"
align="center" width="200px" alt="" /> <span><b>Figure 2 - RBM1.</b></span></p>
<p>Figure 2 shows the net structure for training RBM1. The visible units of
RBM1 accept the output from the Sigmoid1 layer. The Inner1 layer is an
<tt>InnerProductLayer</tt> whose parameters are set to the <tt>w0</tt> and
<tt>b1</tt> learned from RBM0. The neural net configuration is (with the data
layer and parser layer omitted),</p>
<div class="source">
@@ -611,11 +611,11 @@ cluster{
<p>The workspace is changed for checkpointing w1, b2 and b3 into
<i>SINGA_ROOT/rbm1/</i>.</p></div>
<div class="section">
<h3><a name="RBM2"></a>RBM2</h3>
-<p><img src="http://singa.incubator.apache.org/assets/image/RBM2_new.PNG"
align="center" width="200px" alt="" /> <span><b>Figure 3 - RBM2.</b></span></p>
+<p><img src="http://singa.incubator.apache.org/images/RBM2_new.PNG"
align="center" width="200px" alt="" /> <span><b>Figure 3 - RBM2.</b></span></p>
<p>Figure 3 shows the net structure for training RBM2. In this model, a layer
with 250 units is added as the hidden layer of RBM2. The visible units of RBM2
accept the output from the Sigmoid2 layer. The parameters of Inner1 and Inner2
are set to <tt>w0, b1, w1, b2</tt>, which can be loaded from the checkpoint
file of RBM1, i.e., “SINGA_ROOT/rbm1/”.</p></div>
<div class="section">
<h3><a name="RBM3"></a>RBM3</h3>
-<p><img src="http://singa.incubator.apache.org/assets/image/RBM3_new.PNG"
align="center" width="200px" alt="" /> <span><b>Figure 4 - RBM3.</b></span></p>
+<p><img src="http://singa.incubator.apache.org/images/RBM3_new.PNG"
align="center" width="200px" alt="" /> <span><b>Figure 4 - RBM3.</b></span></p>
<p>Figure 4 shows the net structure of training RBM3. It is similar to Figure
3, but according to <a class="externalLink"
href="http://www.cs.toronto.edu/~hinton/science.pdf">Hinton’s science
paper</a>, the hidden units of the top RBM (RBM3) have stochastic real-valued
states drawn from a unit variance Gaussian whose mean is determined by the
input from the RBM’s logistic visible units. So we add a
<tt>gaussian</tt> field in the RBMHid layer to control the sampling
distribution (Gaussian or Bernoulli). In addition, this RBM has a much smaller
learning rate (0.001). The neural net and updater configuration for RBM3 is
(with the data layer and parser layer omitted),</p>
<div class="source">
Modified: websites/staging/singa/trunk/content/docs/rnn.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/rnn.html (original)
+++ websites/staging/singa/trunk/content/docs/rnn.html Wed Sep 2 12:26:55 2015
@@ -445,7 +445,7 @@ $ ./bin/singa-run.sh -conf SINGA_ROOT/ex
</pre></div></div></div>
<div class="section">
<h2><a name="Implementations"></a>Implementations</h2>
-<p><img src="http://singa.incubator.apache.org/assets/image/rnn-refine.png"
align="center" width="300px" alt="" /> <span><b>Figure 1 - Net structure of the
RNN model.</b></span></p>
+<p><img src="http://singa.incubator.apache.org/images/rnn-refine.png"
align="center" width="300px" alt="" /> <span><b>Figure 1 - Net structure of the
RNN model.</b></span></p>
<p>The neural net structure is shown in Figure 1. Word records are loaded by
<tt>RnnlmDataLayer</tt> from <tt>WordShard</tt>. <tt>RnnlmWordparserLayer</tt>
parses word records to get word indexes (in the vocabulary). For every
iteration, <tt>window_size</tt> words are processed.
<tt>RnnlmWordinputLayer</tt> looks up a word embedding matrix to extract
feature vectors for words in the window. These features are transformed by the
<tt>RnnlmInnerproductLayer</tt> and <tt>RnnlmSigmoidLayer</tt>.
<tt>RnnlmSigmoidLayer</tt> is a recurrent layer that forwards features from
previous words to next words. Finally, <tt>RnnlmComputationLayer</tt> computes
the perplexity loss with word class information from
<tt>RnnlmClassparserLayer</tt>. The word class is a cluster ID; words are
clustered based on their frequency in the dataset, so that words of similar
frequency fall into the same cluster. Clustering improves the efficiency of the
final prediction process.</p>
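<p>This chain is wired through the <tt>srclayers</tt> field in the net
configuration. The sketch below only illustrates the connections; the layer
names and the elided fields are placeholders, not the actual job configuration
of the rnnlm example.</p>
<div class="source">
<div class="source"><pre>layer { name: "data" ... }                               # RnnlmDataLayer
layer { name: "wordparser" srclayers: "data" ... }       # word indexes
layer { name: "classparser" srclayers: "data" ... }      # word classes
layer { name: "wordinput" srclayers: "wordparser" ... }  # embedding lookup
layer { name: "inner" srclayers: "wordinput" ... }       # RnnlmInnerproductLayer
layer { name: "sigmoid" srclayers: "inner" ... }         # recurrent layer
layer { name: "computation" srclayers: "sigmoid"
        srclayers: "classparser" ... }                   # perplexity loss
</pre></div></div>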
<div class="section">
<h3><a name="Data_preparation"></a>Data preparation</h3>