Author: buildbot
Date: Mon Aug 17 14:37:02 2015
New Revision: 962133

Log:
Staging update by buildbot for singa

Modified:
    websites/staging/singa/trunk/content/   (props changed)
    websites/staging/singa/trunk/content/docs/mlp.html

Propchange: websites/staging/singa/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Mon Aug 17 14:37:02 2015
@@ -1 +1 @@
-1696294
+1696296

Modified: websites/staging/singa/trunk/content/docs/mlp.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/mlp.html (original)
+++ websites/staging/singa/trunk/content/docs/mlp.html Mon Aug 17 14:37:02 2015
@@ -433,28 +433,59 @@
 
 <ul>
   
-<li>Run the command <tt>make download</tt> and <tt>make create</tt> in the 
folder <tt>example/mnist/</tt> to download mnist dataset and prepare for the 
training and testing datashard. If you got the error no Makefile detected, 
rename Makefile.example to Makefile.</li>
+<li>First go to the <tt>examples/mnist/</tt> folder to prepare the dataset. 
The folder ships with a template makefile named Makefile.example. Run 
<tt>cp Makefile.example Makefile</tt> to create the makefile, then run 
<tt>make download</tt> and <tt>make create</tt> in the same folder to download 
the MNIST dataset and build the training and testing data shards (the commands 
are collected in the snippet below).</li>
 </ul></div>
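+
+<p>For convenience, the preparation commands from the item above are collected 
in the snippet below. It assumes you start from the root folder of SINGA and 
that the dataset folder is <tt>examples/mnist/</tt>:</p>
+
+<div class="source">
+<div class="source"><pre class="prettyprint"># run from the root folder of SINGA (adjust paths if your layout differs)
+cd examples/mnist/
+cp Makefile.example Makefile   # create the makefile from the shipped template
+make download                  # download the MNIST dataset
+make create                    # build the training and testing data shards
+cd ../..                       # return to the root folder of SINGA
+</pre></div></div>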
 <div class="section">
 <h3><a name="Set_model_and_cluster_configuration."></a>Set model and cluster 
configuration.</h3>
 
 <ul>
   
-<li>If you just want to use the training model provided in this example, you 
can just use model.conf file in current directory.  In this example, we define 
a neurualnet that contains 5 hidden layer. fc+tanh is the hidden layer(fc is 
for the inner product part, and tanh is for the non-linear activation 
function), and the final softmax layer is represented as fc+loss (inner product 
and softmax). For each layer, we define its name, input layer(s), basic 
configurations (e.g. number of nodes, parameter initialization settings).</li>
-</ul></div>
+<li>If you just want to use the training model provided in this example, you 
can simply use the job.conf file in the current directory. Fig. 1 gives an 
example of the MLP structure. In this example, we define a neural net that 
contains 5 hidden layers. Each hidden layer is fc+tanh (fc is the inner product 
part, and tanh is the non-linear activation function), and the final softmax 
layer is represented as fc+loss (inner product and softmax). For each layer, we 
define its name, its input layer(s), and basic configurations (e.g. number of 
nodes, parameter initialization settings); a sketch of such a layer entry is 
given below, after Fig. 1. If you want to learn more about how it is 
configured, you can go to <a class="externalLink" 
href="http://singa.incubator.apache.org/docs/model-config.html">Model 
Configuration</a> for details.</li>
+</ul>
+
+<div style="text-align: center">
+<img src="../images/mlp_example.png" style="width: 280px" alt="MLP example" /> 
<br />Fig. 1: MLP example
+</div></div>
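+
+<p>To make the layer description above concrete, here is a rough sketch of 
what one hidden layer entry in job.conf could look like. The field names and 
values below are assumptions made for illustration and may not match the real 
configuration syntax exactly; please check <tt>examples/mnist/job.conf</tt> and 
the Model Configuration page for the actual definitions.</p>
+
+<div class="source">
+<div class="source"><pre class="prettyprint"># illustrative sketch only; field names are assumptions, see examples/mnist/job.conf
+layer {
+  name: "fc1"              # layer name
+  type: kInnerProduct      # the fc (inner product) part of fc+tanh
+  srclayers: "mnist"       # input layer(s)
+  innerproduct_conf {
+    num_output: 2500       # number of nodes in this hidden layer
+  }
+  param {
+    name: "w1"             # parameter initialization settings
+    init {
+      type: kUniform
+      low: -0.05
+      high: 0.05
+    }
+  }
+}
+layer {
+  name: "tanh1"            # the tanh (non-linear activation) part of fc+tanh
+  type: kSTanh
+  srclayers: "fc1"
+}
+</pre></div></div>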
 <div class="section">
 <h3><a name="Run_SINGA"></a>Run SINGA</h3>
 
 <ul>
   
-<li>Run the command <tt>./bin/singa-run.sh -workspace=examples/mnist</tt> in 
the root folder of SINGA</li>
-</ul></div>
+<li>
+<p>All scripts of SINGA should be run from the root folder of SINGA. First you 
need to start the zookeeper service if it is not already running; the command 
is <tt>./bin/zk-service start</tt>. Then run 
<tt>./bin/singa-run.sh -conf examples/mnist/job.conf</tt> to start a SINGA job 
using examples/mnist/job.conf as the job configuration (the two commands are 
repeated in a short snippet after this list). After the job starts, you should 
see output like the following:</p>
+  
+<div class="source">
+<div class="source"><pre class="prettyprint">xxx@yyy:zzz/incubator-singa$ 
./bin/singa-run.sh -conf examples/mnist/job.conf
+Unique JOB_ID is 1
+Record job information to /tmp/singa-log/job-info/job-1-20150817-055231
+Executing : ./singa -conf /xxx/incubator-singa/examples/mnist/job.conf 
-singa_conf /xxx/incubator-singa/conf/singa.conf -singa_job 1
+E0817 07:15:09.211885 34073 cluster.cc:51] proc #0 -&gt; 192.168.5.128:49152 
(pid = 34073)
+E0817 07:15:14.972231 34114 server.cc:36] Server (group = 0, id = 0) start
+E0817 07:15:14.972520 34115 worker.cc:134] Worker (group = 0, id = 0) start
+E0817 07:15:24.462602 34073 trainer.cc:373] Test step-0, loss : 2.341021, 
accuracy : 0.109100
+E0817 07:15:47.341076 34073 trainer.cc:373] Train step-0, loss : 2.357269, 
accuracy : 0.099000
+E0817 07:16:07.173364 34073 trainer.cc:373] Train step-10, loss : 2.222740, 
accuracy : 0.201800
+E0817 07:16:26.714855 34073 trainer.cc:373] Train step-20, loss : 2.091030, 
accuracy : 0.327200
+E0817 07:16:46.590946 34073 trainer.cc:373] Train step-30, loss : 1.969412, 
accuracy : 0.442100
+E0817 07:17:06.207080 34073 trainer.cc:373] Train step-40, loss : 1.865466, 
accuracy : 0.514800
+E0817 07:17:25.890033 34073 trainer.cc:373] Train step-50, loss : 1.773849, 
accuracy : 0.569100
+E0817 07:17:51.208935 34073 trainer.cc:373] Test step-60, loss : 1.613709, 
accuracy : 0.662100
+E0817 07:17:53.176766 34073 trainer.cc:373] Train step-60, loss : 1.659150, 
accuracy : 0.652600
+E0817 07:18:12.783370 34073 trainer.cc:373] Train step-70, loss : 1.574024, 
accuracy : 0.666000
+E0817 07:18:32.904942 34073 trainer.cc:373] Train step-80, loss : 1.529380, 
accuracy : 0.670500
+E0817 07:18:52.608111 34073 trainer.cc:373] Train step-90, loss : 1.443911, 
accuracy : 0.703500
+E0817 07:19:12.168465 34073 trainer.cc:373] Train step-100, loss : 1.387759, 
accuracy : 0.721000
+E0817 07:19:31.855865 34073 trainer.cc:373] Train step-110, loss : 1.335246, 
accuracy : 0.736500
+E0817 07:19:57.327133 34073 trainer.cc:373] Test step-120, loss : 1.216652, 
accuracy : 0.769900
+</pre></div></div></li>
+</ul>
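+<p>For quick reference, the two commands from the item above are repeated 
below; both are run from the root folder of SINGA:</p>
+
+<div class="source">
+<div class="source"><pre class="prettyprint"># run from the root folder of SINGA
+./bin/zk-service start                             # start zookeeper if it is not already running
+./bin/singa-run.sh -conf examples/mnist/job.conf   # start the training job
+</pre></div></div>
+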
+<p>After training for a number of steps (depending on the setting), or when 
the job has finished, SINGA checkpoints the current parameters. Next time, you 
can continue training (or use the model in your application) by loading the 
checkpoint. Please refer to <a class="externalLink" 
href="http://singa.incubator.apache.org/docs/checkpoint.html">Checkpoint</a> 
for details on how to use checkpoints.</p></div>
 <div class="section">
 <h3><a name="Build_your_own_model"></a>Build your own model</h3>
 
 <ul>
   
-<li>If you want to specify you own model, then you need to decribe it in the 
model.conf file. It should contain the neurualnet structure, training 
algorithm(backforward or contrastive divergence etc.), SGD update 
algorithm(e.g. Adagrad), number of training/test steps and training/test 
frequency, and display features and etc. SINGA will read model.conf as a Google 
protobuf class <a class="externalLink" 
href="https://github.com/apache/incubator-singa/blob/master/src/proto/model.proto";>ModelProto</a>.
 You can also refer to the <a class="externalLink" 
href="http://singa.incubator.apache.org/docs/programming-model.html";>programming
 model</a> to get details.</li>
+<li>If you want to specify your own model, you need to describe it in the 
job.conf file. It should contain the neural net structure, the training 
algorithm (back-propagation, contrastive divergence, etc.), the SGD update 
algorithm (e.g. AdaGrad), the number of training/test steps, the training/test 
frequency, display settings, and so on; a rough outline of such a file is 
sketched below. SINGA reads job.conf as a Google protobuf class 
<a href="../src/proto/job.proto">JobProto</a>. You can also refer to the 
<a class="externalLink" 
href="http://singa.incubator.apache.org/docs/programmer-guide.html">Programmer 
Guide</a> for details.</li>
 </ul></div></div>
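+
+<p>As a rough illustration of the shape of such a file, the outline below 
lists the kinds of fields mentioned in the item above. The field names are 
assumptions made for illustration and may not match the actual job.proto; 
consult <a href="../src/proto/job.proto">job.proto</a> and 
<tt>examples/mnist/job.conf</tt> for the real definitions.</p>
+
+<div class="source">
+<div class="source"><pre class="prettyprint"># illustrative outline only; field names are assumptions, see src/proto/job.proto
+name: "my-mlp"
+train_steps: 1000        # number of training steps
+test_steps: 10           # number of steps per test phase
+test_freq: 60            # run a test phase every 60 training steps
+disp_freq: 10            # display training metrics every 10 steps
+train_one_batch {
+  alg: kBP               # training algorithm, e.g. back-propagation
+}
+updater {
+  type: kAdaGrad         # SGD update algorithm
+}
+neuralnet {
+  layer { ... }          # neural net structure, one block per layer
+}
+</pre></div></div>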
                   </div>
             </div>

