Author: jinyang
Date: Sat Sep 26 12:04:23 2015
New Revision: 1705429

URL: http://svn.apache.org/viewvc?rev=1705429&view=rev
Log:
fixed incorrect links and replaced global (absolute) links with relative ones

Modified:
    incubator/singa/site/trunk/content/markdown/docs/architecture.md
    incubator/singa/site/trunk/content/markdown/docs/checkpoint.md
    incubator/singa/site/trunk/content/markdown/docs/cnn.md
    incubator/singa/site/trunk/content/markdown/docs/communication.md
    incubator/singa/site/trunk/content/markdown/docs/data.md
    incubator/singa/site/trunk/content/markdown/docs/distributed-training.md
    incubator/singa/site/trunk/content/markdown/docs/frameworks.md
    incubator/singa/site/trunk/content/markdown/docs/layer.md
    incubator/singa/site/trunk/content/markdown/docs/mlp.md
    incubator/singa/site/trunk/content/markdown/docs/model-config.md
    incubator/singa/site/trunk/content/markdown/docs/neural-net.md
    incubator/singa/site/trunk/content/markdown/docs/overview.md
    incubator/singa/site/trunk/content/markdown/docs/param.md
    incubator/singa/site/trunk/content/markdown/docs/programming-guide.md
    incubator/singa/site/trunk/content/markdown/docs/quick-start.md
    incubator/singa/site/trunk/content/markdown/docs/train-one-batch.md
    incubator/singa/site/trunk/content/markdown/docs/updater.md

Modified: incubator/singa/site/trunk/content/markdown/docs/architecture.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/architecture.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/architecture.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/architecture.md Sat Sep 26 12:04:23 2015
@@ -4,11 +4,11 @@
 
 ## Logical Architecture
 
-<img src="http://singa.incubator.apache.org/assets/image/logical.png"; 
style="width: 550px"/>
+<img src="../images/logical.png" style="width: 550px"/>
 <p><strong> Fig.1 - Logical system architecture</strong></p>
 
 SINGA has a flexible architecture to support different distributed
-[training frameworks](http://singa.incubator.apache.org/docs/frameworks.html) (both synchronous and asynchronous).
+[training frameworks](frameworks.html) (both synchronous and asynchronous).
 The logical system architecture is shown in Fig.1.
 The architecture consists of multiple server groups and worker groups:
 
@@ -40,7 +40,7 @@ within a group:
 
 ## Implementation
 In SINGA, servers and workers are execution units running in separate threads.
-They communicate through [messages](http://singa.incubator.apache.org/docs/communication.html).
+They communicate through [messages](communication.html).
 Every process runs the main thread as a stub that aggregates local messages
 and forwards them to corresponding (remote) receivers.
 
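For reference, the worker/server group topology described in this file is set in the `cluster` section of the job configuration. A minimal sketch (field names follow the cluster options mentioned in quick-start.md below; the values are hypothetical):

    cluster {
      nworker_groups: 2      # multiple groups enable asynchronous training
      nworkers_per_group: 2  # workers in a group partition the model/data
      nserver_groups: 1
      nservers_per_group: 1
    }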

Modified: incubator/singa/site/trunk/content/markdown/docs/checkpoint.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/checkpoint.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/checkpoint.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/checkpoint.md Sat Sep 26 12:04:23 2015
@@ -11,7 +11,7 @@ configured frequency. By checkpointing m
 
   2. use them to initialize a similar model. For example, the
     parameters from training an RBM model can be used to initialize
-    a [deep auto-encoder](http://singa.incubator.apache.org/docs/rbm) model.
+    a [deep auto-encoder](rbm.html) model.
 
 ## Configuration
 

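For reference, checkpointing is driven by the job configuration. A minimal sketch (only `resume` is confirmed by the model-config.md hunk later in this commit; `checkpoint_freq` is an assumed field name):

    # hypothetical job.conf fragment
    checkpoint_freq: 1000  # snapshot parameters every 1000 training steps
    resume: false          # set to true to continue from the last checkpoint
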
Modified: incubator/singa/site/trunk/content/markdown/docs/cnn.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/cnn.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/cnn.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/cnn.md Sat Sep 26 12:04:23 2015
@@ -45,7 +45,7 @@ You should see output like
     E0817 06:58:12.518497 33849 trainer.cc:373] Train step-240, loss : 2.295912, accuracy : 0.185417
 
 After training some steps (depending on the setting) or when the job is
-finished, SINGA will [checkpoint](http://singa.incubator.apache.org/docs/checkpoint) the model parameters.
+finished, SINGA will [checkpoint](checkpoint.html) the model parameters.
 
 ## Details
 

Modified: incubator/singa/site/trunk/content/markdown/docs/communication.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/communication.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/communication.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/communication.md Sat Sep 26 12:04:23 2015
@@ -22,7 +22,7 @@ example architecture.
 <p><strong> Fig.1 - Example physical architecture and network connection</strong></p>
 
 Fig.1 shows an example physical architecture and its network connection.
-[Section-partition server side ParamShard](http://singa.incubator.apache.org/docs/architecture.html}) has a detailed description of the
+[Section-partition server side ParamShard](architecture.html) has a detailed description of the
 architecture. Each process consists of one main thread running the stub and multiple
 background threads running the worker and server tasks. The stub of the main
 thread forwards messages among threads. The worker and

Modified: incubator/singa/site/trunk/content/markdown/docs/data.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/data.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/data.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/data.md Sat Sep 26 12:04:23 2015
@@ -1,25 +1,25 @@
 # Data Preparation
 
 To submit a training job, users need to convert raw data (e.g., images, text
-documents) into SINGA recognizable [Record](api/classsinga_1_1Record.html)s.
-SINGA uses [data layers](http://singa.incubator.apache.org/docs/layer#data-layers)
+documents) into SINGA recognizable [Record](../api/classsinga_1_1Record.html)s.
+SINGA uses [data layers](layer.html#data-layers)
 to load these records into memory and uses
-[parser layers](http://singa.incubator.apache.org/docs/layer#parser-layers) to parse features (e.g.,
+[parser layers](layer.html#parser-layers) to parse features (e.g.,
 image pixels and labels) from these `Record`s. `Record`s could be
 stored in a file, a database, or HDFS, as
 long as there is a corresponding
-[DataLayer](http://singa.incubator.apache.org/api/classsinga_1_1DataLayer.html).
+[DataLayer](../api/classsinga_1_1DataLayer.html).
 
 ## DataShard
 
-SINGA comes with a light-weight database named [DataShard](http://singa.incubator.apache.org/api/classsinga_1_1DataShard.html).
+SINGA comes with a light-weight database named [DataShard](../api/classsinga_1_1DataShard.html).
 It provides operations for inserting `Record`s
 and reading `Record`s in sequential order.
 `Record`s are flushed once the maximum cache size is reached. It
 loads `Record`s in batch and returns them to users one by one through the
-[Next](http://singa.incubator.apache.org/api/classsinga_1_1DataShard.html) function.
+[Next](../api/classsinga_1_1DataShard.html) function.
 The disk folder in which the `Record`s are stored is called a (data) shard. The
-[ShardDataLayer](http://singa.incubator.apache.org/api/classsinga_1_1ShardDataLayer.html) is a built-in
+[ShardDataLayer](../api/classsinga_1_1ShardDataLayer.html) is a built-in
 layer for loading `Record`s from `DataShard`.
 
 To create data shards for their own data, users can follow the subsequent sections.
@@ -27,7 +27,7 @@ To create data shards for users' own dat
 ###  User record definition
 
 Users define their own record for storing their data. E.g., the built-in
-[SingleLabelImageRecord](http://singa.incubator.apache.org/api/classsinga_1_1SingleLabelImageRecord.html)
+[SingleLabelImageRecord](../api/classsinga_1_1SingleLabelImageRecord.html)
 has an int field for image label, and a pixel array for image RGB values.
 The code below shows an example of defining a new record `UserRecord`, and
 extending the base `Record` to include `UserRecord`.
@@ -53,7 +53,7 @@ for extension of protocol messages.
 
 The extended `Record` will be parsed by a parser layer to extract features
 (e.g., label or pixel values). Users need to write
-their own [parser layers](http://singa.incubator.apache.org/docs/layer#parser-layers) to parse the
+their own [parser layers](layer.html#parser-layers) to parse the
 extended `Record`.
 
 
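For reference, the extension mechanism described in this file can be sketched as below (the `UserRecord` message and field numbers are hypothetical, mirroring the built-in SingleLabelImageRecord):

    package singa;

    message UserRecord {
      repeated int32 feature = 1;  // feature values of one instance
      optional int32 label = 2;    // label of the instance
    }

    extend Record {
      optional UserRecord user_record = 101;  // extension field number must be unique
    }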

Modified: incubator/singa/site/trunk/content/markdown/docs/distributed-training.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/distributed-training.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/distributed-training.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/distributed-training.md Sat Sep 26 12:04:23 2015
@@ -5,8 +5,8 @@ huge amount of training data.
 
 Here we introduce distributed SINGA in the following aspects:
 
-* [System Architecture](http://singa.incubator.apache.org/docs/architecture)
+* [System Architecture](architecture.html)
 
-* [Training Frameworks](http://singa.incubator.apache.org/docs/frameworks)
+* [Training Frameworks](frameworks.html)
 
-* [System Communication](http://singa.incubator.apache.org/docs/communication)
+* [System Communication](communication.html)

Modified: incubator/singa/site/trunk/content/markdown/docs/frameworks.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/frameworks.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/frameworks.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/frameworks.md Sat Sep 26 12:04:23 2015
@@ -41,7 +41,7 @@ both **synchronous** and **asynchronous*
 Here we illustrate how to configure
 popular distributed training frameworks in SINGA.
 
-<img src="http://singa.incubator.apache.org/assets/image/frameworks.png"; 
style="width: 800px"/>
+<img src="../images/frameworks.png" style="width: 800px"/>
 <p><strong> Fig.1 - Training frameworks in SINGA</strong></p>
 
 #### Sandblaster

Modified: incubator/singa/site/trunk/content/markdown/docs/layer.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/layer.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/layer.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/layer.md Sat Sep 26 12:04:23 2015
@@ -81,7 +81,7 @@ into memory.
 
 ##### DataLayer
 
-DataLayer loads training/testing data as [Record](http://singa.incubator.apache.org/docs/data)s, which
+DataLayer loads training/testing data as [Record](data.html)s, which
 are parsed by parser layers.
 
 ##### ShardDataLayer

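For reference, a ShardDataLayer is configured inside the net section of job.conf. A minimal sketch (the layer type and conf field names are assumed from the SINGA examples of this era; the path and batch size are hypothetical):

    layer {
      name: "data"
      type: kShardData
      sharddata_conf {
        path: "examples/mnist/mnist_train_shard"  # folder holding the DataShard
        batchsize: 64
      }
    }
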
Modified: incubator/singa/site/trunk/content/markdown/docs/mlp.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/mlp.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/mlp.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/mlp.md Sat Sep 26 12:04:23 2015
@@ -189,7 +189,7 @@ The learning rate shrinks by 0.997 every
 ### TrainOneBatch algorithm
 
 The MLP model is a feed-forward model, hence
-[Back-propagation algorithm]({{ BASE_PATH}}/docs/train-one-batch#back-propagation)
+[Back-propagation algorithm](train-one-batch.html#back-propagation)
 is selected.
 
     train_one_batch {
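      # (sketch of the elided body; kBP is the back-propagation algorithm
      # selector described in train-one-batch.md)
      alg: kBP
    }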

Modified: incubator/singa/site/trunk/content/markdown/docs/model-config.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/model-config.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/model-config.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/model-config.md Sat Sep 26 12:04:23 2015
@@ -2,7 +2,7 @@
 
 SINGA uses the stochastic gradient descent (SGD) algorithm to train parameters
 of deep learning models.  For each SGD iteration, there is a
-[Worker](docs/architecture.html) computing
+[Worker](architecture.html) computing
 gradients of parameters from the NeuralNet and an [Updater](updater.html) updating parameter
 values based on gradients. Hence the model configuration mainly consists of these
 three parts. We will introduce the NeuralNet, Worker and Updater in the
@@ -302,5 +302,5 @@ listed:
     // checkpoint path
     optional bool resume = 36 [default = false];
 
-The pages of [checkpoint and restore](checkpoint.html), [validation and test]() have more details
+The page on [checkpoint and restore](checkpoint.html) has more details
 on related fields.

Modified: incubator/singa/site/trunk/content/markdown/docs/neural-net.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/neural-net.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/neural-net.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/neural-net.md Sat Sep 26 12:04:23 2015
@@ -4,11 +4,11 @@
 
 `NeuralNet` in SINGA represents an instance of user's neural net model. As the
 neural net typically consists of a set of layers, `NeuralNet` comprises
-a set of unidirectionally connected [Layer](http://singa.incubator.apache.org/docs/layer)s.
+a set of unidirectionally connected [Layer](layer.html)s.
 This page describes how to convert a user's neural net into
 the configuration of `NeuralNet`.
 
-<img src="http://singa.incubator.apache.org/images/model-category.png"; 
align="center" width="200px"/>
+<img src="../images/model-category.png" align="center" width="200px"/>
 <span><strong>Figure 1 - Categorization of popular deep learning 
models.</strong></span>
 
 ## Net structure configuration
@@ -21,7 +21,7 @@ category.
 ### Feed-forward models
 
 <div align = "left">
-<img src="http://singa.incubator.apache.org/images/mlp-net.png"; align="center" 
width="200px"/>
+<img src="../images/mlp-net.png" align="center" width="200px"/>
 <span><strong>Figure 2 - Net structure of a MLP model.</strong></span>
 </div>
 
@@ -59,7 +59,7 @@ configuration for the MLP model shown in
 
 ### Energy models
 
-<img src="http://singa.incubator.apache.org/images/rbm-rnn.png"; align="center" 
width="500px"/>
+<img src="../images/rbm-rnn.png" align="center" width="500px"/>
 <span><strong>Figure 3 - Convert connections in RBM and RNN.</strong></span>
 
 
@@ -68,7 +68,7 @@ etc., their connections are undirected (
 `NeuralNet`, users can simply replace each connection with two directed
 connections, as shown in Figure 3a. In other words, for each pair of connected layers, their source
 layer field should include each other's name.
-The full [RBM example](http://singa.incubator.apache.org/docs/rbm) has
+The full [RBM example](rbm.html) has
 detailed neural net configuration for an RBM model, which looks like
 
     net {
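      # (sketch of the elided body; layer names are hypothetical -- note that
      # each layer lists the other as its source, emulating the undirected edge)
      layer {
        name: "vis"
        srclayers: "hid"
      }
      layer {
        name: "hid"
        srclayers: "vis"
      }
    }
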
@@ -97,7 +97,7 @@ For recurrent neural networks (RNN), use
 by unrolling the recurrent layer.  For example, in Figure 3b, the original
 layer is unrolled into a new layer with 4 internal layers. In this way, the
 model is like a normal feed-forward model, thus can be configured similarly.
-The [RNN example](http://singa.incubator.apache.org/docs/rnn}) has a full neural net
+The [RNN example](rnn.html) has a full neural net
 configuration for a RNN model.
 
 
@@ -125,7 +125,7 @@ over multiple workers.
 
 ### Batch and feature dimension
 
-<img src="http://singa.incubator.apache.org/images/partition_fc.png"; 
align="center" width="400px"/>
+<img src="../images/partition_fc.png" align="center" width="400px"/>
 <span><strong>Figure 4 - Partitioning of a fully connected 
layer.</strong></span>
 
 
@@ -135,11 +135,11 @@ dimension 0 (also called batch dimension
 For instance, if the mini-batch size is 256 and the layer is partitioned into 2
 sub-layers, each sub-layer would have 128 feature vectors in its feature blob.
 Partitioning on this dimension has no effect on the parameters, as every
-[Param](http://singa.incubator.apache.org/docs/param) object is replicated in the sub-layers. Partitioning on dimension
+[Param](param.html) object is replicated in the sub-layers. Partitioning on dimension
 1 (also called feature dimension) slices the feature matrix by columns. For
 example, suppose the original feature vector has 50 units, after partitioning
 into 2 sub-layers, each sub-layer would have 25 units. This partitioning may
-result in [Param](http://singa.incubator.apache.org/docs/param) object being split, as shown in
+result in [Param](param.html) objects being split, as shown in
 Figure 4. Both the bias vector and weight matrix are
 partitioned into two sub-layers.
 
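For reference, the partitioning choice is made per layer in the configuration. A minimal sketch (the `partition_dim` field name is an assumption based on the layer configuration of this era):

    layer {
      name: "fc1"
      type: kInnerProduct
      partition_dim: 1  # 0 = batch dimension, 1 = feature dimension
    }
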
@@ -222,10 +222,8 @@ Parameters can be shared in two cases,
 If the shared `Param` instances reside in the same process (possibly in different
 threads), they use the same chunk of memory space for their values. But they
 would have different memory spaces for their gradients. In fact, their
-gradients will be averaged by the [stub]() or [server]().
+gradients will be averaged by the stub or server.
 
-
-{% comment %}
 ## Advanced user guide
 
 ### Creation
@@ -241,7 +239,7 @@ layers except the data layer, loss layer
 function takes in the full net configuration including layers for training,
 validation and test.  It removes layers for phases other than the specified
 phase based on the `exclude` field in
-[layer configuration](http://singa.incubator.apache.org/docs/layer):
+[layer configuration](layer.html):
 
     layer {
       ...

Modified: incubator/singa/site/trunk/content/markdown/docs/overview.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/overview.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/overview.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/overview.md Sat Sep 26 12:04:23 2015
@@ -47,7 +47,7 @@ popular deep learning models can be expr
 
 ## System overview
 
-<img src="http://singa.incubator.apache.org/images/sgd.png"; align="center" 
width="400px"/>
+<img src="../images/sgd.png" align="center" width="400px"/>
 <span><strong>Figure 1 - SGD flow.</strong></span>
 
 Training a deep learning model is to find the optimal parameters involved in
@@ -60,7 +60,7 @@ closed form solution. Typically, people
 initializes the parameters and then iteratively updates them to reduce the loss
 as shown in Figure 1.
 
-<img src="http://singa.incubator.apache.org/images/overview.png"; 
align="center" width="400px"/>
+<img src="../images/overview.png" align="center" width="400px"/>
 <span><strong>Figure 2 - SINGA overview.</strong></span>
 
 SGD is used in SINGA to train
@@ -79,13 +79,13 @@ iteration.
 
 To submit a job in SINGA (i.e., training a deep learning model),
 users pass the job configuration to SINGA driver in the
-[main function](http://singa.incubator.apache.org/docs/programming-guide). The job configuration
+[main function](programming-guide.html). The job configuration
 specifies the four major components in Figure 2,
 
-  * a [NeuralNet](http://singa.incubator.apache.org/docs/neural-net) describing the neural net structure with the detailed layer setting and their connections;
-  * a [TrainOneBatch](http://singa.incubator.apache.org/docs/train-one-batch) algorithm which is tailored for different model categories;
-  * an [Updater](http://singa.incubator.apache.org/docs/updater) defining the protocol for updating parameters at the server side;
-  * a [Cluster Topology](http://singa.incubator.apache.org/docs/distributed-training) specifying the distributed architecture of workers and servers.
+  * a [NeuralNet](neural-net.html) describing the neural net structure with the detailed layer setting and their connections;
+  * a [TrainOneBatch](train-one-batch.html) algorithm which is tailored for different model categories;
+  * an [Updater](updater.html) defining the protocol for updating parameters at the server side;
+  * a [Cluster Topology](distributed-training.html) specifying the distributed architecture of workers and servers.
 
 This process is like the job submission in Hadoop, where users configure their
 jobs in the main function to set the mapper, reducer, etc.
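
Put together, a job configuration skeleton covering the four components might look like the following sketch (the section names follow the four pages linked above; the values are hypothetical):

    neuralnet {
      layer { name: "data" type: kShardData }
      # ... remaining layers
    }
    train_one_batch { alg: kBP }
    updater { type: kSGD }
    cluster { nworker_groups: 1 nserver_groups: 1 }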

Modified: incubator/singa/site/trunk/content/markdown/docs/param.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/param.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/param.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/param.md Sat Sep 26 12:04:23 2015
@@ -23,11 +23,11 @@ The configuration of a Param object is i
       }
     }
 
-The [SGD algorithm](http://singa.incubator.apache.org/docs/overview) starts with initializing all
+The [SGD algorithm](overview.html) starts with initializing all
 parameters according to the user-specified initialization method (the `init` field).
 For the above example,
 all parameters in `Param` "p1" will be initialized to constant value 1. The
-configuration fields of a Param object is defined in [ParamProto](http://singa.incubator.apache.org/api/classsinga_1_1ParamProto.html):
+configuration fields of a Param object are defined in [ParamProto](../api/classsinga_1_1ParamProto.html):
 
   * name, an identifier string. It is an optional field. If not provided, SINGA
   will generate one based on layer name and its order in the layer.
@@ -35,9 +35,9 @@ configuration fields of a Param object i
  * share_from, name of another `Param` object, from which this `Param` will share
   configurations and values.
   * lr_scale, float value to be multiplied with the learning rate when
-  [updating the parameters](http://singa.incubator.apache.org/docs/updater)
+  [updating the parameters](updater.html)
   * wd_scale, float value to be multiplied with the weight decay when
-  [updating the parameters](http://singa.incubator.apache.org/docs/updater)
+  [updating the parameters](updater.html)
 
 There are some other fields that are specific to initialization methods.
 
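For instance, a Gaussian initialization could be configured as in the sketch below (the `mean`/`std` field names are assumptions tied to the kGaussian init type):

    param {
      name: "w1"
      init {
        type: kGaussian
        mean: 0.0
        std: 0.01
      }
    }
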
@@ -140,7 +140,7 @@ Users can access the configuration field
     int x = proto_.GetExtension(fooparam_conf).x();
 
 To use the new initialization method, users need to register it in the
-[main function](http://singa.incubator.apache.org/docs/programming-guide).
+[main function](programming-guide.html).
 
     driver.RegisterParamGenerator<FooParamGen>("FooParam")  # must be consistent with the user_type in configuration
 
@@ -161,7 +161,7 @@ Each Param object has a local version an
 Blob). These two versions are used for synchronization. If multiple Param
 objects share the same values, they would have the same `data_` field.
 Consequently, their global version is the same. The global version is updated
-by [the stub thread](http://singa.incubator.apache.org/docs/communication). The local version is
+by [the stub thread](communication.html). The local version is
 updated in `Worker::Update` function which assigns the global version to the
 local version. The `Worker::Collect` function is blocked until the global
 version is larger than the local version, i.e., when `data_` is updated. In
@@ -180,7 +180,7 @@ Each Param object has a `grad_` field fo
 this Blob although they may share `data_`, because each layer containing a
 Param object would contribute gradients. E.g., in RNN, the recurrent layers
 share parameter values, and the gradients used for updating are averaged from all
-these recurrent layers. In SINGA, the [stub thread] will aggregate local
+these recurrent layers. In SINGA, the stub thread will aggregate local
 gradients for the same Param object. The server will do a global aggregation
 of gradients for the same Param object.
 

Modified: incubator/singa/site/trunk/content/markdown/docs/programming-guide.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/programming-guide.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/programming-guide.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/programming-guide.md Sat Sep 26 12:04:23 2015
@@ -4,18 +4,18 @@
 To submit a training job, users must provide the configuration of the
 four components shown in Figure 1:
 
-  * a [NeuralNet](http://singa.incubator.apache.org/docs/neural-net) describing the neural net structure with the detailed layer setting and their connections;
-  * a [TrainOneBatch](http://singa.incubator.apache.org/docs/train-one-batch) algorithm which is tailored for different model categories;
-  * an [Updater](http://singa.incubator.apache.org/docs/updater) defining the protocol for updating parameters at the server side;
-  * a [Cluster Topology](http://singa.incubator.apache.org/docs/distributed-training) specifying the distributed architecture of workers and servers.
+  * a [NeuralNet](neural-net.html) describing the neural net structure with the detailed layer setting and their connections;
+  * a [TrainOneBatch](train-one-batch.html) algorithm which is tailored for different model categories;
+  * an [Updater](updater.html) defining the protocol for updating parameters at the server side;
+  * a [Cluster Topology](distributed-training.html) specifying the distributed architecture of workers and servers.
 
 The *Basic user guide* section describes how to submit a training job using
 built-in components; while the *Advanced user guide* section presents details
 on writing user's own main function to register components implemented by
 themselves. In addition, the training data must be prepared, which has the same
-[process](http://singa.incubator.apache.org/docs/data) for both advanced users and basic users.
+[process](data.html) for both advanced users and basic users.
 
-<img src="http://singa.incubator.apache.org/assets/image/overview.png"; 
align="center" width="400px"/>
+<img src="../images/overview.png" align="center" width="400px"/>
 <span><strong>Figure 1 - SINGA overview.</strong></span>
 
 
@@ -24,13 +24,13 @@ themselves. In addition, the training da
 
 Users can use the default main function provided by SINGA to submit the training
 job. For this case, a job configuration file written as a Google protocol
-buffer message for the [JobProto](http://singa.incubator.apache.org/api/classsinga_1_1JobProto.html) must be provided in the command line,
+buffer message for the [JobProto](../api/classsinga_1_1JobProto.html) must be provided on the command line,
 
     ./bin/singa-run.sh -conf <path to job conf> [-resume]
 
 `-resume` is for continuing the training from the last
-[checkpoint](http://singa.incubator.apache.org/docs/checkpoint).
-The [MLP](http://singa.incubator.apache.org/docs/mlp) and [CNN](http://singa.incubator.apache.org/docs/cnn)
+[checkpoint](checkpoint.html).
+The [MLP](mlp.html) and [CNN](cnn.html)
 examples use built-in components. Please read the corresponding pages for their
 job configuration files. The subsequent pages will illustrate the details on
 each component of the configuration.
@@ -38,7 +38,7 @@ each component of the configuration.
 ## Advanced user guide
 
 If a user's model contains some user-defined components, e.g.,
-[Updater](http://singa.incubator.apache.org/docs/updater), he has to write a main function to
+[Updater](updater.html), he has to write a main function to
 register these components. It is similar to Hadoop's main function. Generally,
 the main function should
 
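The steps elided by the hunk above boil down to a short main.cc. A sketch (the Driver::Init/Train calls are assumptions patterned on SINGA examples of this era; only the Register* calls appear verbatim in this commit):

    #include "singa.h"

    int main(int argc, char **argv) {
      singa::Driver driver;
      driver.Init(argc, argv);  // parses the job and singa configuration
      driver.RegisterUpdater<FooUpdater>("FooUpdater");  // user-defined component
      singa::JobProto jobConf = driver.job_conf();
      driver.Train(false, jobConf);  // false: start fresh instead of resuming
      return 0;
    }
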
@@ -90,5 +90,5 @@ path of the *mysinga* and base job confi
 
     ./bin/singa-run.sh -conf <path to job conf> -exec <path to mysinga> [other arguments]
 
-The [RNN application](http://singa.incubator.apache.org/docs/rnn) provides a full example of
+The [RNN application](rnn.html) provides a full example of
 implementing the main function for training a specific RNN model.

Modified: incubator/singa/site/trunk/content/markdown/docs/quick-start.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/quick-start.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/quick-start.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/quick-start.md Sat Sep 26 12:04:23 2015
@@ -89,13 +89,6 @@ Jobs can be killed by,
 Logs and job information are available in */tmp/singa-log* folder, which can be
 changed to other folders by setting `log-dir` in *conf/singa.conf*.
 
-{% comment %}
-One worker group trains against one partition of the training dataset. If
-*nworker_groups* is set to 1, then there is no data partitioning. One worker
-runs over a partition of the model. If *nworkers_per_group* is set to 1, then
-there is no model partitioning. More details on the cluster configuration are
-described in the [System Architecture]() page.
-{% endcomment %}
 
 #### Asynchronous parallel training
 
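The semantics of the comment removed above still apply here: asynchronous (Downpour-style) training uses multiple worker groups. A minimal sketch of such a cluster section (values hypothetical):

    cluster {
      nworker_groups: 2      # more than one group trains asynchronously
      nworkers_per_group: 1  # no model partitioning within a group
      nservers_per_group: 1
    }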

Modified: incubator/singa/site/trunk/content/markdown/docs/train-one-batch.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/train-one-batch.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/train-one-batch.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/train-one-batch.md Sat Sep 26 12:04:23 2015
@@ -11,8 +11,8 @@ their model in the configuration.
 ### Back-propagation
 
 [BP algorithm](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) is used for
-computing gradients of feed-forward models, e.g., [CNN](http://singa.incubator.apache.org/docs/cnn)
-and [MLP](http://singa.incubator.apache.org/docs/mlp), and [RNN](http://singa.incubator.apache.org/docs/rnn) models in SINGA.
+computing gradients of feed-forward models, e.g., [CNN](cnn.html)
+and [MLP](mlp.html), and [RNN](rnn.html) models in SINGA.
 
 
     # in job.conf
@@ -78,12 +78,12 @@ The BP algorithm is implemented in SINGA
 
 It forwards features through all local layers (can be checked by layer
 partition ID and worker ID) and backwards gradients in the reverse order.
-[BridgeSrcLayer](http://singa.incubator.apache.org/docs/layer/#bridgesrclayer--bridgedstlayer)
+[BridgeSrcLayer](layer.html#bridgesrclayer--bridgedstlayer)
 (resp. `BridgeDstLayer`) will be blocked until the feature (resp.
 gradient) from the source (resp. destination) layer comes. Parameter gradients
 are sent to servers via `Update` function. Updated parameters are collected via
 `Collect` function, which will be blocked until the parameter is updated.
-[Param](http://singa.incubator.apache.org/docs/param) objects have versions, which can be used to
+[Param](param.html) objects have versions, which can be used to
 check whether the `Param` objects have been updated or not.
 
 Since RNN models are unrolled into feed-forward models, users need to implement
@@ -127,9 +127,9 @@ Parameter gradients are computed after t
 ### Implementing a new algorithm
 
 SINGA implements BP and CD by creating two subclasses of
-the [Worker](api/classsinga_1_1Worker.html) class:
-[BPWorker](api/classsinga_1_1BPWorker.html)'s `TrainOneBatch` function implements the BP
-algorithm; [CDWorker](api/classsinga_1_1CDWorker.html)'s `TrainOneBatch` function implements the CD
+the [Worker](../api/classsinga_1_1Worker.html) class:
+[BPWorker](../api/classsinga_1_1BPWorker.html)'s `TrainOneBatch` function implements the BP
+algorithm; [CDWorker](../api/classsinga_1_1CDWorker.html)'s `TrainOneBatch` function implements the CD
 algorithm. To implement a new algorithm for the `TrainOneBatch` function, users
 need to create a new subclass of the `Worker`, e.g.,
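
A sketch of such a subclass (the exact `TrainOneBatch` signature is an assumption; only the inheritance relationship is stated by this page):

    // assumes the singa headers are on the include path
    class FooWorker : public singa::Worker {
      // implement the new algorithm over the layers of the neural net here
      void TrainOneBatch(int step, singa::NeuralNet* net) override;
    };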
 
@@ -160,9 +160,9 @@ Users can define some fields for users t
       extensions 101 to max;
     }
 
-It is similar as [adding configuration fields for a new layer](http://singa.incubator.apache.org/docs/layer/#implementing-a-new-layer-subclass).
+It is similar to [adding configuration fields for a new layer](layer.html#implementing-a-new-layer-subclass).
 
-To use `FooWorker`, users need to register it in the [main.cc](http://singa.incubator.apache.org/docs/programming-guide)
+To use `FooWorker`, users need to register it in the [main.cc](programming-guide.html)
 and configure the `alg` and `foo_conf` fields,
 
     # in main.cc

Modified: incubator/singa/site/trunk/content/markdown/docs/updater.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/updater.md?rev=1705429&r1=1705428&r2=1705429&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/updater.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/updater.md Sat Sep 26 12:04:23 2015
@@ -2,7 +2,7 @@
 
 ---
 
-Every server in SINGA has an [Updater](api/classsinga_1_1Updater.html)
+Every server in SINGA has an [Updater](../api/classsinga_1_1Updater.html)
 instance that updates parameters based on gradients.
 In this page, the *Basic user guide* describes the configuration of an updater.
 The *Advanced user guide* presents details on how to implement a new updater and a new
@@ -15,7 +15,7 @@ There are many different parameter updat
 
 * `type`, an integer for identifying an updater;
 * `learning_rate`, configuration for the
-[LRGenerator](http://singa.incubator.apache.org/api/classsinga_1_1LRGenerator.html) which controls the learning rate.
+[LRGenerator](../api/classsinga_1_1LRGenerator.html) which controls the learning rate.
 * `weight_decay`, the coefficient for [L2 regularization](http://deeplearning.net/tutorial/gettingstarted.html#regularization).
 * [momentum](http://ufldl.stanford.edu/tutorial/supervised/OptimizationStochasticGradientDescent/).
 
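Assembled, an updater section might look like the sketch below (the `type`, `learning_rate`, `weight_decay` and `momentum` names come from the bullets above; kSGD, kFixed and base_lr are assumptions):

    updater {
      type: kSGD
      momentum: 0.9
      weight_decay: 0.0005
      learning_rate {
        type: kFixed
        base_lr: 0.01
      }
    }
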
@@ -231,7 +231,7 @@ layer,
     }
 
 The new updater should be registered in the
-[main function](http://singa.incubator.apache.org/docs/programming-guide)
+[main function](programming-guide.html)
 
     driver.RegisterUpdater<FooUpdater>("FooUpdater");
 

