Author: jinyang
Date: Mon Aug 17 14:36:44 2015
New Revision: 1696296

URL: http://svn.apache.org/r1696296
Log:
CMS commit to singa by jinyang

Modified:
    incubator/singa/site/trunk/content/markdown/docs/mlp.md

Modified: incubator/singa/site/trunk/content/markdown/docs/mlp.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/mlp.md?rev=1696296&r1=1696295&r2=1696296&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/mlp.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/mlp.md Mon Aug 17 14:36:44 2015
@@ -19,23 +19,48 @@ Notice:    Licensed to the Apache Softwa
 This example will show you how to use SINGA to train an MLP model using the MNIST dataset.
 
 ### Prepare for the data
-* Run the command `make download` and `make create` in the folder `example/mnist/` to download mnist dataset and prepare for the training and testing datashard. If you got the error no Makefile detected, rename Makefile.example to Makefile.
+* First, go to the `examples/mnist/` folder to prepare the dataset. The folder contains an example makefile named Makefile.example; run `cp Makefile.example Makefile` to create the makefile. Then run `make download` and `make create` in the same folder to download the MNIST dataset and generate the training and testing DataShards. The full sequence is consolidated in the snippet below.
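+
+Assuming the default source layout (the `examples/mnist/` path used throughout this page), the whole preparation step looks like:
+
+        cd examples/mnist
+        cp Makefile.example Makefile   # create the makefile from the shipped template
+        make download                  # download the raw MNIST data
+        make create                    # convert it into training and testing DataShards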
 
 ### Set model and cluster configuration.
-* If you just want to use the training model provided in this example, you can just use model.conf file in current directory.
- In this example, we define a neurualnet that contains 5 hidden layer.
-fc+tanh is the hidden layer(fc is for the inner product part, and tanh is for the non-linear activation function),
-and the final softmax layer is represented as fc+loss (inner product and softmax).
-For each layer, we define its name, input layer(s), basic configurations (e.g. number of nodes, parameter initialization settings).
+* If you just want to use the training model provided in this example, you can simply use the job.conf file in the current directory. Fig. 1 gives an example of the MLP structure. In this example, we define a neural net that contains 5 hidden layers. Each hidden layer is fc+tanh (fc is the inner-product part, and tanh is the non-linear activation function), and the final softmax layer is represented as fc+loss (inner product plus softmax). For each layer, we define its name, input layer(s), and basic configurations (e.g., number of nodes, parameter initialization settings); a sketch of one layer's declaration follows Fig. 1. To learn more about how the model is configured, see [Model Configuration](http://singa.incubator.apache.org/docs/model-config.html).
+
+<div style="text-align: center">
+<img src="../images/mlp_example.png" style="width: 280px"/><br/>
+Fig. 1: MLP example
+</div>
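+
+As a rough illustration of the layer definitions mentioned above, one fc+tanh hidden layer might be declared in job.conf along the following lines. This is a hypothetical sketch: the layer names, field names, and values are illustrative and may differ between SINGA versions, so consult the Model Configuration page linked above for the authoritative format.
+
+        layer {
+          name: "fc1"            # inner-product (fc) part of the hidden layer
+          type: kInnerProduct
+          srclayers: "data"      # name of this layer's input layer
+          innerproduct_conf {
+            num_output: 2500     # number of nodes in this layer
+          }
+          param {                # weight matrix and its initialization settings
+            name: "w1"
+            init_method: kUniform
+          }
+        }
+        layer {
+          name: "tanh1"          # non-linear activation applied to fc1's output
+          type: kTanh
+          srclayers: "fc1"
+        }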
 
 ### Run SINGA
-* Run the command `./bin/singa-run.sh -workspace=examples/mnist`
-in the root folder of SINGA
 
+* All SINGA scripts should be run from the root folder of SINGA.
+First, start the zookeeper service if it is not already running; the command is `./bin/zk-service start`.
+Then run `./bin/singa-run.sh -conf examples/mnist/job.conf` to start a SINGA job that uses examples/mnist/job.conf as the job configuration.
+Once the job has started, you should see output like the following:
+
+        xxx@yyy:zzz/incubator-singa$ ./bin/singa-run.sh -conf examples/mnist/job.conf
+        Unique JOB_ID is 1
+        Record job information to /tmp/singa-log/job-info/job-1-20150817-055231
+        Executing : ./singa -conf /xxx/incubator-singa/examples/mnist/job.conf -singa_conf /xxx/incubator-singa/conf/singa.conf -singa_job 1
+        E0817 07:15:09.211885 34073 cluster.cc:51] proc #0 -> 192.168.5.128:49152 (pid = 34073)
+        E0817 07:15:14.972231 34114 server.cc:36] Server (group = 0, id = 0) start
+        E0817 07:15:14.972520 34115 worker.cc:134] Worker (group = 0, id = 0) start
+        E0817 07:15:24.462602 34073 trainer.cc:373] Test step-0, loss : 2.341021, accuracy : 0.109100
+        E0817 07:15:47.341076 34073 trainer.cc:373] Train step-0, loss : 2.357269, accuracy : 0.099000
+        E0817 07:16:07.173364 34073 trainer.cc:373] Train step-10, loss : 2.222740, accuracy : 0.201800
+        E0817 07:16:26.714855 34073 trainer.cc:373] Train step-20, loss : 2.091030, accuracy : 0.327200
+        E0817 07:16:46.590946 34073 trainer.cc:373] Train step-30, loss : 1.969412, accuracy : 0.442100
+        E0817 07:17:06.207080 34073 trainer.cc:373] Train step-40, loss : 1.865466, accuracy : 0.514800
+        E0817 07:17:25.890033 34073 trainer.cc:373] Train step-50, loss : 1.773849, accuracy : 0.569100
+        E0817 07:17:51.208935 34073 trainer.cc:373] Test step-60, loss : 1.613709, accuracy : 0.662100
+        E0817 07:17:53.176766 34073 trainer.cc:373] Train step-60, loss : 1.659150, accuracy : 0.652600
+        E0817 07:18:12.783370 34073 trainer.cc:373] Train step-70, loss : 1.574024, accuracy : 0.666000
+        E0817 07:18:32.904942 34073 trainer.cc:373] Train step-80, loss : 1.529380, accuracy : 0.670500
+        E0817 07:18:52.608111 34073 trainer.cc:373] Train step-90, loss : 1.443911, accuracy : 0.703500
+        E0817 07:19:12.168465 34073 trainer.cc:373] Train step-100, loss : 1.387759, accuracy : 0.721000
+        E0817 07:19:31.855865 34073 trainer.cc:373] Train step-110, loss : 1.335246, accuracy : 0.736500
+        E0817 07:19:57.327133 34073 trainer.cc:373] Test step-120, loss : 1.216652, accuracy : 0.769900
+
+After a certain number of training steps (depending on the settings), or when the job finishes, SINGA checkpoints the current parameters. The next time you train the model (or use it in your application), you can start by loading the checkpoint; an illustrative snippet follows. Please refer to [Checkpoint](http://singa.incubator.apache.org/docs/checkpoint.html) for details on using checkpoints.
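+
+As a hypothetical illustration only (the exact field name and checkpoint file layout may differ between SINGA versions; the Checkpoint page above is authoritative), resuming from a saved checkpoint might amount to pointing job.conf at the checkpoint file:
+
+        # illustrative: load previously saved parameters before training or serving
+        checkpoint_path: "examples/mnist/checkpoint/step100-worker0.bin"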
 
 ### Build your own model
-* If you want to specify you own model, then you need to decribe it in the model.conf file.
-It should contain the neurualnet structure, training algorithm(backforward or contrastive divergence etc.),
-SGD update algorithm(e.g. Adagrad), number of training/test steps and training/test frequency,
-and display features and etc. SINGA will read model.conf as a Google protobuf class [ModelProto](https://github.com/apache/incubator-singa/blob/master/src/proto/model.proto).
-You can also refer to the [programming model](http://singa.incubator.apache.org/docs/programming-model.html) to get details.
+* If you want to specify your own model, you need to describe it in the job.conf file. It should contain the neural net structure, the training algorithm (back-propagation, contrastive divergence, etc.), the SGD update algorithm (e.g., AdaGrad), the number of training/test steps, the training/test frequency, display settings, and so on; the overall shape is sketched below. SINGA reads job.conf as the Google protobuf class [JobProto](../src/proto/job.proto). You can also refer to the [Programmer Guide](http://singa.incubator.apache.org/docs/programmer-guide.html) for details.
+
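+For orientation, here is a heavily abridged, hypothetical sketch of the top-level shape such a job.conf might take; the authoritative field names live in JobProto, so treat this as an illustration rather than a reference:
+
+        name: "mlp"            # job name
+        train_steps: 1000      # number of training steps
+        test_steps: 10         # number of steps per test phase
+        test_freq: 60          # run a test phase every 60 training steps
+        disp_freq: 10          # display progress every 10 training steps
+        train_one_batch {
+          alg: kBP             # training algorithm: back-propagation
+        }
+        updater {
+          type: kAdaGrad       # SGD update algorithm, e.g. AdaGrad
+        }
+        neuralnet {
+          # layer definitions, as sketched under Fig. 1 above
+        }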

