Added: 
incubator/singa/site/trunk/content/markdown/v0.2.0/zh/programming-guide.md
URL: 
http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/v0.2.0/zh/programming-guide.md?rev=1738695&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/v0.2.0/zh/programming-guide.md 
(added)
+++ incubator/singa/site/trunk/content/markdown/v0.2.0/zh/programming-guide.md 
Tue Apr 12 06:22:20 2016
@@ -0,0 +1,67 @@
+# Programming Guide
+
+---
+
+To submit a training job, users need to provide the configuration of the four
+components shown in Figure 1:
+
+  * [NeuralNet](neural-net.html): describes the neural net structure, including the configuration of every layer and the connections between layers;
+  * [TrainOneBatch](train-one-batch.html): this algorithm is customized for different model categories;
+  * [Updater](updater.html): defines the protocol for updating parameters at the server side;
+  * [Cluster Topology](distributed-training.html): specifies the distributed topology of workers and servers.
+
+The *Basic user guide* introduces how to submit a training job using built-in
+layers, while the *Advanced user guide* describes how to write the user's own
+main function and register user-implemented components. In addition, advanced
+and basic users [process](data.html) the training dataset in the same way.
+
+<img src="../../images/overview.png" align="center" width="400px"/>
+<span><strong>Figure 1 - SINGA overview</strong></span>
+
+
+
+## Basic user guide
+
+Users can submit a training job using the main function provided by SINGA. In
+this case, a job configuration file written according to
+[JobProto](../api/classsinga_1_1JobProto.html) must be provided on the command line,
+
+    ./bin/singa-run.sh -conf <path to job conf> [-resume]
+
+`-resume` means continuing the training from the last [checkpoint](checkpoint.html).
+The [MLP](mlp.html) and [CNN](cnn.html) examples submit training jobs using
+built-in layers. Please read the corresponding pages for their job
+configuration files; those pages explain the details of each component's
+configuration.
+
+## Advanced user guide
+
+If the model includes some user-defined components, e.g., an
+[Updater](updater.html), users have to write their own main function to
+register these components, similar to the main function of a Hadoop job.
+Generally, the main function should
+
+  * initialize SINGA, e.g., set up the logging;
+  * register the user-defined components;
+  * create the job configuration and pass it to the SINGA driver.
+
+An example main function is shown below,
+
+    #include "singa.h"
+    #include "user.h"  // header for user code
+
+    int main(int argc, char** argv) {
+      singa::Driver driver;
+      driver.Init(argc, argv);
+      bool resume;
+      // parse resume option from argv.
+
+      // register user defined layers
+      driver.RegisterLayer<FooLayer>(kFooLayer);
+      // register user defined updater
+      driver.RegisterUpdater<FooUpdater>(kFooUpdater);
+      ...
+      auto jobConf = driver.job_conf();
+      //  update jobConf
+
+      driver.Train(resume, jobConf);
+      return 0;
+    }
+
+The `Init` method of the Driver class loads the job configuration file provided
+by the user through the command line argument (`-conf <job conf>`), which
+should contain at least the cluster topology, and it returns `jobConf` to the
+user, who can then update or add configurations for the neural net, the
+Updater, etc. If subclasses of Layer, Updater, Worker or Param are defined,
+users should register them through the driver. Finally, the job configuration
+is submitted to the driver, which starts the training.
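+
+For instance, a hedged sketch of updating `jobConf` before calling `Train` is shown below; the field names (`name`, `train_steps`, `neuralnet`) follow the job.conf examples in this documentation and should be treated as assumptions rather than a complete API reference.
+
+    // continuing the main() above, after auto jobConf = driver.job_conf();
+    jobConf.set_name("foo-model");      // job name, as in job.conf's `name` field
+    jobConf.set_train_steps(1000);      // number of training iterations (assumed field)
+    // ... further neural net / updater configuration, e.g. via
+    // jobConf.mutable_neuralnet() ...
+    driver.Train(resume, jobConf);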
+
+In the future, we will provide helper tools similar to
+[keras](https://github.com/fchollet/keras) to make the job configuration easier.
+
+Users need to compile and link their own code (e.g., the layer implementations
+and the main function) against the SINGA library (*.libs/libsinga.so*) to
+generate an executable file, e.g., named *mysinga*. To launch the program, pass
+the paths of the *mysinga* executable and the job configuration file to
+*./bin/singa-run.sh*,
+
+    ./bin/singa-run.sh -conf <path to job conf> -exec <path to mysinga> [other arguments]
+
+The [RNN application](rnn.html) provides a complete example of implementing a
+main function to train a specific RNN model.

Added: incubator/singa/site/trunk/content/markdown/v0.2.0/zh/rnn.md
URL: 
http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/v0.2.0/zh/rnn.md?rev=1738695&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/v0.2.0/zh/rnn.md (added)
+++ incubator/singa/site/trunk/content/markdown/v0.2.0/zh/rnn.md Tue Apr 12 
06:22:20 2016
@@ -0,0 +1,420 @@
+# Recurrent Neural Networks for Language Modelling
+
+---
+
+Recurrent Neural Networks (RNN) are widely used for modelling sequential data,
+such as music and sentences.  In this example, we use SINGA to train a
+[RNN 
model](http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf)
+proposed by Tomas Mikolov for [language 
modeling](https://en.wikipedia.org/wiki/Language_model).
+The training objective (loss) is
+to minimize the [perplexity per 
word](https://en.wikipedia.org/wiki/Perplexity), which
+is equivalent to maximizing the probability of predicting the next word given
+the current word in a sentence.
+
+Unlike the [CNN](cnn.html), [MLP](mlp.html)
+and [RBM](rbm.html) examples, which use built-in
+[layers](layer.html) and [records](data.html),
+none of the layers in this example are built-in. Hence users can learn from
+this example how to implement their own layers and data records.
+
+## Running instructions
+
+In *SINGA_ROOT/examples/rnnlm/*, scripts are provided to run the training job.
+First, the data is prepared by
+
+    $ cp Makefile.example Makefile
+    $ make download
+    $ make create
+
+Second, to compile the source code under *examples/rnnlm/*, run
+
+    $ make rnnlm
+
+An executable file *rnnlm.bin* will be generated.
+
+Third, the training is started by passing *rnnlm.bin* and the job configuration
+to *singa-run.sh*,
+
+    # at SINGA_ROOT/
+    # export LD_LIBRARY_PATH=.libs:$LD_LIBRARY_PATH
+    $ ./bin/singa-run.sh -exec examples/rnnlm/rnnlm.bin -conf examples/rnnlm/job.conf
+
+## Implementations
+
+<img src="../images/rnnlm.png" align="center" width="400px"/>
+<span><strong>Figure 1 - Net structure of the RNN model.</strong></span>
+
+The neural net structure is shown in Figure 1.  Word records are loaded by
+`DataLayer`. For every iteration, at most `max_window` word records are
+processed. If a sentence-ending character is read, the `DataLayer` stops
+loading immediately. `EmbeddingLayer` looks up a word embedding matrix to
+extract feature vectors for the words loaded by the `DataLayer`. These features
+are transformed by the `HiddenLayer`, which propagates the features from left
+to right. The output feature for the word at position k is influenced by the
+words from position 0 to k-1. Finally, `LossLayer` computes the cross-entropy
+loss (see below) by predicting the next word of each word.
+The cross-entropy loss is computed as
+
+`$$L(w_t)=-\log P(w_{t+1}|w_t)$$`
+
+Given `$w_t$`, the above equation requires computing over all words in the
+vocabulary, which is time-consuming.
+[RNNLM 
Toolkit](https://f25ea9ccb7d3346ce6891573d543960492b92c30.googledrive.com/host/0ByxdPXuxLPS5RFM5dVNvWVhTd0U/rnnlm-0.4b.tgz)
+accelerates the computation as
+
+`$$P(w_{t+1}|w_t) = P(C_{w_{t+1}}|w_t) * P(w_{t+1}|C_{w_{t+1}})$$`
+
+Words from the vocabulary are partitioned into a user-defined number of
+classes. The first term on the right-hand side predicts the class of the next
+word; the second term predicts the next word given its class. Both the number
+of classes and the number of words in one class are much smaller than the
+vocabulary size, so the probabilities can be calculated much faster.
+
+The perplexity per word is computed by,
+
+`$$PPL = 10^{- \mathrm{avg}_t \log_{10} P(w_{t+1}|w_t)}$$`
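+
+As an illustration of this formula only (not part of the SINGA code base), the PPL value could be computed from the per-word log-probabilities with a small self-contained helper like the one below.
+
+    #include <cmath>
+    #include <vector>
+
+    // Given log10 P(w_{t+1}|w_t) for every predicted word, return the
+    // per-word perplexity defined above.
+    double Perplexity(const std::vector<double>& log10_probs) {
+      double sum = 0.0;
+      for (double lp : log10_probs) sum += lp;
+      return std::pow(10.0, -sum / log10_probs.size());
+    }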
+
+### Data preparation
+
+We use a small dataset provided by the [RNNLM 
Toolkit](https://f25ea9ccb7d3346ce6891573d543960492b92c30.googledrive.com/host/0ByxdPXuxLPS5RFM5dVNvWVhTd0U/rnnlm-0.4b.tgz).
+It has 10,000 training sentences, with 71,350 words in total and 3,720 unique
+words.
+The subsequent steps follow the instructions in
+[Data Preparation](data.html) to convert the
+raw data into records and insert them into data stores.
+
+#### Download source data
+
+    # in SINGA_ROOT/examples/rnnlm/
+    cp Makefile.example Makefile
+    make download
+
+#### Define record format
+
+We define the word record as follows,
+
+    # in SINGA_ROOT/examples/rnnlm/rnnlm.proto
+    message WordRecord {
+      optional string word = 1;
+      optional int32 word_index = 2;
+      optional int32 class_index = 3;
+      optional int32 class_start = 4;
+      optional int32 class_end = 5;
+    }
+
+It includes the word string and its index in the vocabulary.
+Words in the vocabulary are sorted based on their frequency in the training
+dataset. The sorted list is cut into 100 sublists such that each sublist holds
+1/100 of the total word frequency. Each sublist is called a class.
+Hence each word has a `class_index` (in [0, 100)). The `class_start` is the
+index of the first word in the same class as `word`. The `class_end` is the
+index of the first word in the next class.
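+
+A rough, self-contained sketch of how such class boundaries could be assigned (for illustration only; the actual logic lives in the RNNLM Toolkit code reused by *create_store.cc*) is,
+
+    #include <vector>
+
+    struct VocabEntry { int freq; int class_index; };
+
+    // vocab is sorted by frequency; start a new class whenever the cumulative
+    // frequency crosses the next 1/nclass boundary of the total frequency.
+    void AssignClasses(std::vector<VocabEntry>* vocab, long total_freq,
+                       int nclass = 100) {
+      long cumulative = 0;
+      int cls = 0;
+      for (auto& e : *vocab) {
+        cumulative += e.freq;
+        e.class_index = cls;
+        if (cls < nclass - 1 && cumulative >= (cls + 1) * total_freq / nclass)
+          ++cls;
+      }
+    }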
+
+#### Create data stores
+
+We use code from the RNNLM Toolkit to read words and sort them into classes.
+The main function in *create_store.cc* first creates the word classes based on
+the training dataset. Second, it calls the following function to create a data
+store for each of the training, validation and test datasets.
+
+    int create_data(const char *input_file, const char *output_file);
+
+`input_file` is the path to the training/validation/test text file from the
+RNNLM Toolkit, and `output_file` is the path of the output store file.
+This function starts with
+
+    singa::io::KVFile store;
+    store.Open(output_file, singa::io::kCreate);
+
+Then it reads the words one by one. For each word it creates a `WordRecord` 
instance,
+and inserts it into the store,
+
+    int wcnt = 0; // word count
+    WordRecord  wordRecord;
+    while(1) {
+      readWord(wordstr, fin);
+      if (feof(fin)) break;
+      ...// fill in the wordRecord;
+      string val;
+      wordRecord.SerializeToString(&val);
+      int length = snprintf(key, BUFFER_LEN, "%05d", wcnt++);
+      store.Write(string(key, length), val);
+    }
+
+Compilation and running commands are provided in the *Makefile.example*.
+After executing
+
+    make create
+
+*train_data.bin*, *test_data.bin* and *valid_data.bin* will be created.
+
+
+### Layer implementation
+
+Four user-defined layers are implemented for this application.
+Following the guide for implementing
+[new Layer subclasses](layer.html#implementing-a-new-layer-subclass),
+we extend the [LayerProto](../api/classsinga_1_1LayerProto.html)
+to include the configuration messages of the user-defined layers as shown below
+(three of them have specific configurations),
+
+
+    import "job.proto";     // Layer message for SINGA is defined
+
+    //For implementation of RNNLM application
+    extend singa.LayerProto {
+      optional EmbeddingProto embedding_conf = 101;
+      optional LossProto loss_conf = 102;
+      optional DataProto data_conf = 103;
+    }
+
+In the subsequent sections, we describe the implementation of each layer,
+including its configuration message.
+
+#### RNNLayer
+
+This is the base layer of all the other layers in this application. It is defined
+as follows,
+
+    class RNNLayer : virtual public Layer {
+    public:
+      inline int window() { return window_; }
+    protected:
+      int window_;
+    };
+
+For this application, two iterations may process different numbers of words,
+because sentences have different lengths.
+The `DataLayer` decides the effective window size. Every other layer asks its
+source layer for the effective window size and resets its `window_` field in
+the `ComputeFeature` function.
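+
+For example, a subclass would typically do something like the following at the beginning of its `ComputeFeature` (a sketch; it assumes the source layer is also an `RNNLayer`),
+
+    window_ = dynamic_cast<RNNLayer*>(srclayers[0])->window();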
+
+#### DataLayer
+
+DataLayer is for loading Records.
+
+    class DataLayer : public RNNLayer, singa::InputLayer {
+     public:
+      void Setup(const LayerProto& proto, const vector<Layer*>& srclayers) override;
+      void ComputeFeature(int flag, const vector<Layer*>& srclayers) override;
+      int max_window() const {
+        return max_window_;
+      }
+     private:
+      int max_window_;
+      singa::io::Store* store_;
+    };
+
+The `Setup` function reads the user-configured max window size.
+
+    max_window_ = proto.GetExtension(input_conf).max_window();
+
+The `ComputeFeature` function loads at most `max_window` records. It stops
+early if a sentence-ending character is encountered.
+
+    ...// shift the last record to the first
+    window_ = max_window_;
+    for (int i = 1; i <= max_window_; i++) {
+      // load record; break if it is the ending character
+    }
+
+The configuration of `DataLayer` is like
+
+    name: "data"
+    user_type: "kData"
+    [data_conf] {
+      path: "examples/rnnlm/train_data.bin"
+      max_window: 10
+    }
+
+#### EmbeddingLayer
+
+This layer gets records from `DataLayer`. For each record, the word index is
+parsed and used to get the corresponding word feature vector from the embedding
+matrix.
+
+The class is declared as follows,
+
+    class EmbeddingLayer : public RNNLayer {
+      ...
+      const std::vector<Param*> GetParams() const override {
+        std::vector<Param*> params{embed_};
+        return params;
+      }
+     private:
+      int word_dim_, vocab_size_;
+      Param* embed_;
+    }
+
+The `embed_` field is a matrix whose values are the parameters to be learned.
+The matrix size is `vocab_size_` x `word_dim_`.
+
+The `Setup` function reads the configuration for `word_dim_` and `vocab_size_`.
+Then it allocates the feature Blob for `max_window` words and sets up `embed_`.
+
+    int max_window = srclayers[0]->data(this).shape()[0];
+    word_dim_ = proto.GetExtension(embedding_conf).word_dim();
+    data_.Reshape(vector<int>{max_window, word_dim_});
+    ...
+    embed_->Setup(vector<int>{vocab_size_, word_dim_});
+
+The `ComputeFeature` function simply copies the feature vector from the 
`embed_`
+matrix into the feature Blob.
+
+    // reset effective window size
+    window_ = datalayer->window();
+    auto records = datalayer->records();
+    ...
+    for (int t = 0; t < window_; t++) {
+      int idx  <- word index
+      Copy(words[t], embed[idx]);
+    }
+
+The `ComputeGradient` function copies back the gradients to the `embed_` 
matrix.
+
+The configuration for `EmbeddingLayer` is like,
+
+    user_type: "kEmbedding"
+    [embedding_conf] {
+      word_dim: 15
+      vocab_size: 3720
+    }
+    srclayers: "data"
+    param {
+      name: "w1"
+      init {
+        type: kUniform
+        low:-0.3
+        high:0.3
+      }
+    }
+
+#### HiddenLayer
+
+This layer unrolls the recurrent connections for at most `max_window` times.
+The feature at position k is computed based on the feature from the embedding
+layer at position k and the feature at position k-1 of this layer. The formula is
+
+`$$f[k]=\sigma (f[k-1]*W+src[k])$$`
+
+where `$W$` is a matrix with `word_dim_` x `word_dim_` parameters.
+
+If you want to implement a recurrent neural network following our design, this
+layer is the most important one to refer to.
+
+    class HiddenLayer : public RNNLayer {
+      ...
+      const std::vector<Param*> GetParams() const override {
+        std::vector<Param*> params{weight_};
+        return params;
+      }
+    private:
+      Param* weight_;
+    };
+
+The `Setup` function sets up the weight matrix as
+
+    weight_->Setup(std::vector<int>{word_dim, word_dim});
+
+The `ComputeFeature` function gets the effective window size (`window_`) from
+its source layer, i.e., the embedding layer. Then it propagates the features
+from position 0 to position `window_ - 1`, as sketched below.
+
+    void HiddenLayer::ComputeFeature() {
+      for (int t = 0; t < window_; t++) {
+        if (t == 0)
+          Copy(data[t], src[t]);
+        else
+          data[t] = sigmoid(data[t-1] * W + src[t]);
+      }
+    }
+
+The `ComputeGradient` function computes the gradient of the loss w.r.t. W and
+the source layer.
+Particularly, for each position k, since data[k] contributes to data[k+1] and
+to the feature at position k in its destination layer (the loss layer), grad[k]
+should contain gradients from two parts. The destination layer has already
+computed the gradient from the loss layer into grad[k]; in the
+`ComputeGradient` function, we need to add the gradient from position k+1.
+
+    void HiddenLayer::ComputeGradient(){
+      ...
+      for (int k = window_ - 1; k >= 0; k--) {
+        if (k < window_ - 1) {
+          grad[k] += dot(grad[k + 1], weight.T()); // add gradient from position k+1
+        }
+        grad[k] = ... // compute dL/dy[k], where y[k] = data[k-1]*W + src[k]
+      }
+      gweight = dot(data.Slice(0, window_ - 1).T(), grad.Slice(1, window_));
+      Copy(gsrc, grad);
+    }
+
+After the loop, we get the gradient of the loss w.r.t. y[k], which is used to
+compute the gradients of W and src[k].
+
+#### LossLayer
+
+This layer computes the cross-entropy loss and `$\log_{10}P(w_{t+1}|w_t)$`
+(which users can average over all words to get the PPL value).
+
+There are two configuration fields to be specified by users.
+
+    message LossProto {
+      optional int32 nclass = 1;
+      optional int32 vocab_size = 2;
+    }
+
+There are two weight matrices to be learned
+
+    class LossLayer : public RNNLayer {
+      ...
+     private:
+      Param* word_weight_, *class_weight_;
+    }
+
+The `ComputeFeature` function computes the two probabilities as follows,
+
+`$$P(C_{w_{t+1}}|w_t) = Softmax(w_t * class\_weight)$$`
+`$$P(w_{t+1}|C_{w_{t+1}}) = Softmax(w_t * word\_weight[class\_start:class\_end])$$`
+
+Here `$w_t$` is the feature from the hidden layer for the t-th word, and its
+ground-truth next word is `$w_{t+1}$`. The first equation computes the
+probability distribution over all classes for the next word. The second
+equation computes the probability distribution over the words in the
+ground-truth class of the next word.
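+
+Restricting the second softmax to the ground-truth class is the key to the speed-up. A self-contained illustration of a softmax over a sub-range of logits (plain C++, not the SINGA implementation) is,
+
+    #include <algorithm>
+    #include <cmath>
+    #include <vector>
+
+    // Softmax over logits[start, end); used conceptually for both the class
+    // distribution (all class logits) and the word distribution restricted
+    // to [class_start, class_end).
+    std::vector<double> SoftmaxRange(const std::vector<double>& logits,
+                                     int start, int end) {
+      double max_v = logits[start];
+      for (int i = start; i < end; ++i) max_v = std::max(max_v, logits[i]);
+      std::vector<double> prob(end - start);
+      double sum = 0.0;
+      for (int i = start; i < end; ++i) {
+        prob[i - start] = std::exp(logits[i] - max_v);
+        sum += prob[i - start];
+      }
+      for (double& p : prob) p /= sum;
+      return prob;
+    }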
+
+The `ComputeGradient` function computes the gradients of the source layer
+(i.e., the hidden layer) and of the two weight matrices.
+
+### Updater Configuration
+
+We employ the kFixedStep method for changing the learning rate; the
+configuration is shown below. The learning rate is decayed once the performance
+on the validation dataset stops improving.
+
+    updater{
+      type: kSGD
+      learning_rate {
+        type: kFixedStep
+        fixedstep_conf:{
+          step:0
+          step:48810
+          step:56945
+          step:65080
+          step:73215
+          step_lr:0.1
+          step_lr:0.05
+          step_lr:0.025
+          step_lr:0.0125
+          step_lr:0.00625
+        }
+      }
+    }
+
+### TrainOneBatch() Function
+
+We use the BP (back-propagation) algorithm to train the RNN model here. The
+corresponding configuration is shown below.
+
+    # In job.conf file
+    train_one_batch {
+      alg: kBackPropagation
+    }
+
+### Cluster Configuration
+
+The default cluster configuration can be used, i.e., a single worker and a
+single server in a single process.
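+
+For reference, an explicit configuration equivalent to this default would look roughly like the snippet below; the field names follow the cluster configuration used elsewhere in the documentation and should be treated as an assumption rather than text copied from this example's job.conf.
+
+    # in job.conf (optional; these are the default values)
+    cluster {
+      nworker_groups: 1
+      nserver_groups: 1
+      nworkers_per_group: 1
+      nservers_per_group: 1
+    }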

Added: incubator/singa/site/trunk/content/markdown/v0.2.0/zh/train-one-batch.md
URL: 
http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/v0.2.0/zh/train-one-batch.md?rev=1738695&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/v0.2.0/zh/train-one-batch.md 
(added)
+++ incubator/singa/site/trunk/content/markdown/v0.2.0/zh/train-one-batch.md 
Tue Apr 12 06:22:20 2016
@@ -0,0 +1,179 @@
+# Train-One-Batch
+
+---
+
+For each SGD iteration, every worker calls the `TrainOneBatch` function to
+compute gradients of parameters associated with local layers (i.e., layers
+dispatched to it).  SINGA has implemented two algorithms for the
+`TrainOneBatch` function. Users select the corresponding algorithm for
+their model in the configuration.
+
+## Basic user guide
+
+### Back-propagation
+
+The [BP algorithm](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) is used
+for computing gradients of feed-forward models, e.g., [CNN](cnn.html)
+and [MLP](mlp.html), as well as [RNN](rnn.html) models in SINGA.
+
+
+    # in job.conf
+    alg: kBP
+
+To use the BP algorithm for the `TrainOneBatch` function, users simply
+configure the `alg` field with `kBP`. If a neural net contains user-defined
+layers, these layers must be implemented properly to be consistent with the
+implementation of the BP algorithm in SINGA (see below).
+
+
+### Contrastive Divergence
+
+The [CD algorithm](http://www.cs.toronto.edu/~fritz/absps/nccd.pdf) is used for
+computing gradients of energy models like RBM.
+
+    # job.conf
+    alg: kCD
+    cd_conf {
+      cd_k: 2
+    }
+
+To use the CD algorithm for the `TrainOneBatch` function, users just configure
+the `alg` field to `kCD`. Users can also configure the number of Gibbs sampling
+steps in the CD algorithm through the `cd_k` field. By default, it is set to 1.
+
+
+
+## Advanced user guide
+
+### Implementation of BP
+
+The BP algorithm is implemented in SINGA following the below pseudo code,
+
+    BPTrainOnebatch(step, net) {
+      // forward propagate
+      foreach layer in net.local_layers() {
+        if IsBridgeDstLayer(layer)
+          recv data from the src layer (i.e., BridgeSrcLayer)
+        foreach param in layer.params()
+          Collect(param) // recv response from servers for last update
+
+        layer.ComputeFeature(kForward)
+
+        if IsBridgeSrcLayer(layer)
+          send layer.data_ to dst layer
+      }
+      // backward propagate
+      foreach layer in reverse(net.local_layers) {
+        if IsBridgeSrcLayer(layer)
+          recv gradient from the dst layer (i.e., BridgeDstLayer)
+          recv response from servers for last update
+
+        layer.ComputeGradient()
+        foreach param in layer.params()
+          Update(step, param) // send param.grad_ to servers
+
+        if IsBridgeDstLayer(layer)
+          send layer.grad_ to src layer
+      }
+    }
+
+
+It forwards features through all local layers (locality can be checked by the
+layer partition ID and the worker ID) and propagates gradients backward in the
+reverse order.
+[BridgeSrcLayer](layer.html#bridgesrclayer--bridgedstlayer)
+(resp. `BridgeDstLayer`) is blocked until the feature (resp.
+gradient) from the source (resp. destination) layer arrives. Parameter gradients
+are sent to servers via the `Update` function. Updated parameters are collected
+via the `Collect` function, which blocks until the parameter has been updated.
+[Param](param.html) objects have versions, which can be used to
+check whether a `Param` object has been updated or not.
+
+Since RNN models are unrolled into feed-forward models, users need to implement
+the forward propagation in the recurrent layer's `ComputeFeature` function,
+and the backward propagation in the recurrent layer's `ComputeGradient`
+function. As a result, the whole `TrainOneBatch` runs the
+[back-propagation through time (BPTT)](https://en.wikipedia.org/wiki/Backpropagation_through_time) algorithm.
+
+### Implementation of CD
+
+The CD algorithm is implemented in SINGA following the below pseudo code,
+
+    CDTrainOneBatch(step, net) {
+      # positive phase
+      foreach layer in net.local_layers()
+        if IsBridgeDstLayer(layer)
+          recv positive phase data from the src layer (i.e., BridgeSrcLayer)
+        foreach param in layer.params()
+          Collect(param)  // recv response from servers for last update
+        layer.ComputeFeature(kPositive)
+        if IsBridgeSrcLayer(layer)
+          send positive phase data to dst layer
+
+      # negative phase
+      foreach gibbs in [0...layer_proto_.cd_k]
+        foreach layer in net.local_layers()
+          if IsBridgeDstLayer(layer)
+            recv negative phase data from the src layer (i.e., BridgeSrcLayer)
+          layer.ComputeFeature(kNegative)
+          if IsBridgeSrcLayer(layer)
+            send negative phase data to dst layer
+
+      foreach layer in net.local_layers()
+        layer.ComputeGradient()
+        foreach param in layer.params
+          Update(param)
+    }
+
+Parameter gradients are computed after the positive phase and negative phase.
+
+### Implementing a new algorithm
+
+SINGA implements BP and CD by creating two subclasses of
+the [Worker](../api/classsinga_1_1Worker.html) class:
+[BPWorker](../api/classsinga_1_1BPWorker.html)'s `TrainOneBatch` function 
implements the BP
+algorithm; [CDWorker](../api/classsinga_1_1CDWorker.html)'s `TrainOneBatch` 
function implements the CD
+algorithm. To implement a new algorithm for the `TrainOneBatch` function, users
+need to create a new subclass of the `Worker`, e.g.,
+
+    class FooWorker : public Worker {
+      void TrainOneBatch(int step, shared_ptr<NeuralNet> net, Metric* perf) override;
+      void TestOneBatch(int step, Phase phase, shared_ptr<NeuralNet> net, Metric* perf) override;
+    };
+
+The `FooWorker` must implement the above two functions for training one
+mini-batch and testing one mini-batch. The `perf` argument is for collecting
+training or testing performance, e.g., the objective loss or accuracy. It is
+passed to the `ComputeFeature` function of each layer.
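+
+For example, a purely illustrative `TrainOneBatch` that runs one forward and one backward pass over the local layers could follow the same pseudo code style as the BP algorithm above (this is a sketch, not the implementation of any built-in worker),
+
+    FooTrainOneBatch(step, net) {
+      foreach layer in net.local_layers() {
+        foreach param in layer.params()
+          Collect(param)        // wait for updated params from servers
+        layer.ComputeFeature(kForward)
+      }
+      foreach layer in reverse(net.local_layers()) {
+        layer.ComputeGradient()
+        foreach param in layer.params()
+          Update(step, param)   // send gradients to servers
+      }
+    }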
+
+Users can define configuration fields for `FooWorker`, e.g.,
+
+    # in user.proto
+    message FooWorkerProto {
+      optional int32 b = 1;
+    }
+
+    extend JobProto {
+      optional FooWorkerProto foo_conf = 101;
+    }
+
+    # in job.proto
+    message JobProto {
+      ...
+      extensions 101 to max;
+    }
+
+This is similar to [adding configuration fields for a new layer](layer.html#implementing-a-new-layer-subclass).
+
+To use `FooWorker`, users need to register it in the 
[main.cc](programming-guide.html)
+and configure the `alg` and `foo_conf` fields,
+
+    # in main.cc
+    const int kFoo = 3; // worker ID, must be different from those of CDWorker and BPWorker
+    driver.RegisterWorker<FooWorker>(kFoo);
+
+    # in job.conf
+    ...
+    alg: 3
+    [foo_conf] {
+      b: 4
+    }

Added: incubator/singa/site/trunk/content/markdown/v0.2.0/zh/updater.md
URL: 
http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/v0.2.0/zh/updater.md?rev=1738695&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/v0.2.0/zh/updater.md (added)
+++ incubator/singa/site/trunk/content/markdown/v0.2.0/zh/updater.md Tue Apr 12 
06:22:20 2016
@@ -0,0 +1,284 @@
+# Updater
+
+---
+
+Every server in SINGA has an [Updater](../api/classsinga_1_1Updater.html)
+instance that updates parameters based on gradients.
+In this page, the *Basic user guide* describes the configuration of an updater.
+The *Advanced user guide* presents details on how to implement a new updater
+and a new learning-rate changing method.
+
+## Basic user guide
+
+There are many different parameter updating protocols (i.e., subclasses of
+`Updater`). They share some configuration fields like
+
+* `type`, an integer for identifying an updater;
+* `learning_rate`, the configuration for the
+[LRGenerator](../api/classsinga_1_1LRGenerator.html) which controls the learning rate;
+* `weight_decay`, the coefficient for
+[L2 regularization](http://deeplearning.net/tutorial/gettingstarted.html#regularization);
+* [momentum](http://ufldl.stanford.edu/tutorial/supervised/OptimizationStochasticGradientDescent/).
+
+If you are not familiar with the above terms, you can get their meanings in
+[this page provided by 
Karpathy](http://cs231n.github.io/neural-networks-3/#update).
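+
+For reference, the vanilla SGD update that these fields control can be sketched as follows (a standard textbook formulation for orientation, not copied from the SINGA code),
+
+    # v is the momentum history kept per parameter
+    v     = momentum * v - lr * (gradient + weight_decay * param)
+    param = param + v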
+
+### Configuration of built-in updater classes
+
+#### Updater
+The base `Updater` implements the [vanilla SGD 
algorithm](http://cs231n.github.io/neural-networks-3/#sgd).
+Its configuration type is `kSGD`.
+Users need to configure at least the `learning_rate` field.
+`momentum` and `weight_decay` are optional fields.
+
+    updater{
+      type: kSGD
+      momentum: float
+      weight_decay: float
+      learning_rate {
+        ...
+      }
+    }
+
+#### AdaGradUpdater
+
+It inherits the base `Updater` to implement the
+[AdaGrad](http://www.magicbroom.info/Papers/DuchiHaSi10.pdf) algorithm.
+Its type is `kAdaGrad`.
+`AdaGradUpdater` is configured similarly to `Updater`, except
+that `momentum` is not used.
+
+#### NesterovUpdater
+
+It inherits the base `Updater` to implement the
+[Nesterov](http://arxiv.org/pdf/1212.0901v2.pdf) (section 3.5) updating 
protocol.
+Its type is `kNesterov`.
+`learning_rate` and `momentum` must be configured. `weight_decay` is an
+optional configuration field.
+
+#### RMSPropUpdater
+
+It inherits the base `Updater` to implement the
+[RMSProp algorithm](http://cs231n.github.io/neural-networks-3/#sgd) proposed by
+[Hinton](http://www.cs.toronto.edu/%7Etijmen/csc321/slides/lecture_slides_lec6.pdf) (slide 29).
+Its type is `kRMSProp`.
+
+    updater {
+      type: kRMSProp
+      rmsprop_conf {
+       rho: float # [0,1]
+      }
+    }
+
+
+### Configuration of learning rate
+
+The `learning_rate` field is configured as,
+
+    learning_rate {
+      type: ChangeMethod
+      base_lr: float  # base/initial learning rate
+      ... # fields to a specific changing method
+    }
+
+The common fields include `type` and `base_lr`. SINGA provides the following
+`ChangeMethod`s.
+
+#### kFixed
+
+The `base_lr` is used for all steps.
+
+#### kLinear
+
+The updater should be configured like
+
+    learning_rate {
+      base_lr:  float
+      linear_conf {
+        freq: int
+        final_lr: float
+      }
+    }
+
+Linear interpolation is used to change the learning rate,
+
+    lr = (1 - step / freq) * base_lr + (step / freq) * final_lr
+
+#### kExponential
+
+The updater should be configured like
+
+    learning_rate {
+      base_lr: float
+      exponential_conf {
+        freq: int
+      }
+    }
+
+The learning rate for `step` is
+
+    lr = base_lr / 2^(step / freq)
+
+#### kInverseT
+
+The updater should be configured like
+
+    learning_rate {
+      base_lr: float
+      inverset_conf {
+        final_lr: float
+      }
+    }
+
+The learning rate for `step` is
+
+    lr = base_lr / (1 + step / final_lr)
+
+#### kInverse
+
+The updater should be configured like
+
+    learning_rate {
+      base_lr: float
+      inverse_conf {
+        gamma: float
+        pow: float
+      }
+    }
+
+
+The learning rate for `step` is
+
+    lr = base_lr * (1 + gamma * step)^(-pow)
+
+
+#### kStep
+
+The updater should be configured like
+
+    learning_rate {
+      base_lr : float
+      step_conf {
+        change_freq: int
+        gamma: float
+      }
+    }
+
+
+The learning rate for `step` is
+
+    lr = base_lr * gamma^ (step / change_freq)
+
+#### kFixedStep
+
+The updater should be configured like
+
+    learning_rate {
+      fixedstep_conf {
+        step: int
+        step_lr: float
+
+        step: int
+        step_lr: float
+
+        ...
+      }
+    }
+
+Denote the i-th tuple as (step[i], step_lr[i]); the learning rate for
+`step` is then,
+
+    step_lr[k]
+
+where step[k] is the largest configured step that does not exceed `step`.
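+
+A minimal self-contained sketch of this lookup (plain C++ for illustration, not the SINGA `LRGenerator` implementation) is,
+
+    #include <vector>
+
+    // steps must be sorted in increasing order, typically starting at 0;
+    // returns step_lr[k] where step[k] is the largest entry <= step.
+    float FixedStepLR(const std::vector<int>& steps,
+                      const std::vector<float>& step_lrs, int step) {
+      int k = 0;
+      while (k + 1 < static_cast<int>(steps.size()) && steps[k + 1] <= step)
+        ++k;
+      return step_lrs[k];
+    }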
+
+
+## Advanced user guide
+
+### Implementing a new Updater subclass
+
+The base Updater class has one virtual function,
+
+    class Updater{
+     public:
+      virtual void Update(int step, Param* param, float grad_scale = 1.0f) = 0;
+
+     protected:
+      UpdaterProto proto_;
+      LRGenerator lr_gen_;
+    };
+
+It updates the values of the `param` based on its gradients. The `step`
+argument is for deciding the learning rate, which may change over time (i.e.,
+with `step`). `grad_scale` scales the original gradient values. This function
+is called by a server once it has received all gradients for the same `Param`
+object.
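+
+To make these arguments concrete, a self-contained sketch of a plain SGD-style update over a raw array is shown below; SINGA's `Param` and `Blob` types are deliberately replaced by plain pointers, so treat this as an illustration rather than the built-in implementation.
+
+    // value/grad stand in for Param's data and gradient buffers of length len.
+    void SgdUpdate(float lr, float weight_decay, float grad_scale,
+                   float* value, const float* grad, int len) {
+      for (int i = 0; i < len; ++i)
+        value[i] -= lr * (grad[i] * grad_scale + weight_decay * value[i]);
+    }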
+
+To implement a new Updater subclass, users must override the `Update` function.
+
+    class FooUpdater : public Updater {
+      void Update(int step, Param* param, float grad_scale = 1.0f) override;
+    };
+
+Configuration of this new updater can be declared similar to that of a new
+layer,
+
+    # in user.proto
+    message FooUpdaterProto {
+      optional int32 c = 1;
+    }
+
+    extend UpdaterProto {
+      optional FooUpdaterProto fooupdater_conf = 101;
+    }
+
+The new updater should be registered in the
+[main function](programming-guide.html)
+
+    driver.RegisterUpdater<FooUpdater>("FooUpdater");
+
+Users can then configure the job as
+
+    # in job.conf
+    updater {
+      user_type: "FooUpdater"  # must use user_type with the same string 
identifier as the one used for registration
+      fooupdater_conf {
+        c : 20;
+      }
+    }
+
+### Implementing a new LRGenerator subclass
+
+The base `LRGenerator` class declares one virtual function,
+
+    virtual float Get(int step);
+
+To implement a subclass, e.g., `FooLRGen`, users should declare it like
+
+    class FooLRGen : public LRGenerator {
+     public:
+      float Get(int step) override;
+    };
+
+Configuration of `FooLRGen` can be defined using a protocol message,
+
+    # in user.proto
+    message FooLRProto {
+     ...
+    }
+
+    extend LRGenProto {
+      optional FooLRProto foolr_conf = 101;
+    }
+
+The configuration is then like,
+
+    learning_rate {
+      user_type : "FooLR" # must use user_type with the same string identifier 
as the one used for registration
+      base_lr: float
+      foolr_conf {
+        ...
+      }
+    }
+
+Users have to register this subclass in the main function,
+
+      driver.RegisterLRGenerator<FooLRGen, std::string>("FooLR")

Modified: incubator/singa/site/trunk/content/site.xml
URL: 
http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/site.xml?rev=1738695&r1=1738694&r2=1738695&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/site.xml (original)
+++ incubator/singa/site/trunk/content/site.xml Tue Apr 12 06:22:20 2016
@@ -65,12 +65,13 @@
     </head>
 
     <menu name="Documentaion">
-      <item name = "v0.2.0" href="docs/index.html" collapse="true" >
+      <item name = "v0.3.0" href="docs/index.html" collapse="true" >
         <item name="English" href="docs/index.html"/>
         <item name="中文" href="docs/zh/index.html"/>
         <item name="日本語" href="docs/jp/index.html"/>
         <item name="한국어" href="docs/kr/index.html"/>
       </item>
+      <item name = "v0.2.0" href="v0.2.0/index.html"/>
       <item name = "v0.1.0" href="v0.1.0/index.html"/>
     </menu>
 

