http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/dev-rest-api.md
----------------------------------------------------------------------
diff --git a/docs/dev-rest-api.md b/docs/dev-rest-api.md
deleted file mode 100644
index 161a09f..0000000
--- a/docs/dev-rest-api.md
+++ /dev/null
@@ -1,1103 +0,0 @@
----
-layout: global
-title: Gearpump RESTful API reference
----
-
-## Authentication
-
-All REST API calls require authentication by default. If you don't want 
authentication, you can disable it.
-
-### How to disable Authentication
-To disable authentication, set 
`gearpump-ui.gearpump.ui-security.authentication-enabled = false`
-in `gear.conf`. Please check [UI 
Authentication](deployment-ui-authentication.html) for details.
-
-### How to authenticate if authentication is enabled
-
-#### For User-Password based authentication
-
-If authentication is enabled, you need to log in before calling the REST API.
-
-```
-curl -X POST --data username=admin --data password=admin --cookie-jar outputAuthenticationCookie.txt http://127.0.0.1:8090/login
-```
-
-This logs in with the default user "admin:admin" and stores the 
authentication cookie in the file outputAuthenticationCookie.txt.
-
-All subsequent REST API calls need to carry this authentication cookie. 
For example:
-
-```
-curl --cookie outputAuthenticationCookie.txt http://127.0.0.1:8090/api/v1.0/master
-```
-
-For more information, please check [UI 
Authentication](deployment-ui-authentication.html).
-
-#### For OAuth2 based authentication
-
-For OAuth2 based authentication, you need to have an access token in 
place.
-
-Different OAuth2 service providers have different ways of returning an access token.
-
-**For Google**, you can refer to [OAuth 
Doc](https://developers.google.com/identity/protocols/OAuth2).
-
-**For CloudFoundry UAA**, you can use the uaac command to get the access token.
-
-```
-$ uaac target http://login.gearpump.gotapaas.eu/
-$ uaac token get <user_email_address>
-
-### Find access token
-$ uaac context
-
-[0]*[http://login.gearpump.gotapaas.eu]
-
-  [0]*[<user_email_address>]
-      user_id: 34e33a79-42c6-479b-a8c1-8c471ff027fb
-      client_id: cf
-      token_type: bearer
-      access_token: eyJhbGciOiJSUzI1NiJ9.eyJqdGkiOiI
-      expires_in: 599
-      scope: password.write openid cloud_controller.write cloud_controller.read
-      jti: 74ea49e4-1001-4757-9f8d-a66e52a27557
-```
-
-For more information on uaac, please check the [UAAC 
guide](https://docs.cloudfoundry.org/adminguide/uaa-user-management.html).
-
-Now that we have the access token, let's log in to the Gearpump UI server with 
this access token:
-
-```
-## Please replace cloudfoundryuaa with the actual OAuth2 service name you have configured in gear.conf
-curl -X POST --data accesstoken=eyJhbGciOiJSUzI1NiJ9.eyJqdGkiOiI --cookie-jar outputAuthenticationCookie.txt http://127.0.0.1:8090/login/oauth2/cloudfoundryuaa/accesstoken
-```
-
-This logs in with the user `user_email_address` and stores the 
authentication cookie in the file outputAuthenticationCookie.txt.
-
-All subsequent REST API calls need to carry this authentication cookie. 
For example:
-
-```
-curl --cookie outputAuthenticationCookie.txt http://127.0.0.1:8090/api/v1.0/master
-```
-
-**NOTE:** You can configure the default permission level for OAuth2 users. For 
more information,
-please check [UI Authentication](deployment-ui-authentication.html).
-
-## Query version
-
-### GET version
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/version
-```
-
-Sample Response:
-
-```
-0.7.1-SNAPSHOT
-```
-
-## Master Service
-
-### GET api/v1.0/master
-Get information of masters
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master
-```
-
-Sample Response:
-
-```
-{
-  "masterDescription": {
-    "leader":{"host":"[email protected]","port":3000},
-    "cluster":[{"host":"127.0.0.1","port":3000}],
-    "aliveFor": "642941",
-    "logFile": "/Users/foobar/gearpump/logs",
-    "jarStore": "jarstore/",
-    "masterStatus": "synced",
-    "homeDirectory": "/Users/foobar/gearpump"
-  }
-}
-```
-
-### GET api/v1.0/master/applist
-Query information of all applications
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master/applist
-```
-
-Sample Response:
-
-```
-{
-  "appMasters": [
-    {
-      "status": "active",
-      "appId": 1,
-      "appName": "wordCount",
-      "appMasterPath": 
"akka.tcp://[email protected]:52212/user/daemon/appdaemon1/$c",
-      "workerPath": "akka.tcp://[email protected]:3000/user/Worker0",
-      "submissionTime": "1450758114766",
-      "startTime": "1450758117294",
-      "user": "lisa"
-    }
-  ]
-}
-```
-
-### GET api/v1.0/master/workerlist
-Query information of all workers
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master/workerlist
-```
-
-Sample Response:
-
-```
-[
-  {
-    "workerId": "1",
-    "state": "active",
-    "actorPath": "akka.tcp://[email protected]:3000/user/Worker0",
-    "aliveFor": "431565",
-    "logFile": "logs/",
-    "executors": [
-      {
-        "appId": 1,
-        "executorId": -1,
-        "slots": 1
-      },
-      {
-        "appId": 1,
-        "executorId": 0,
-        "slots": 1
-      }
-    ],
-    "totalSlots": 1000,
-    "availableSlots": 998,
-    "homeDirectory": "/usr/lisa/gearpump/",
-    "jvmName": "11788@lisa"
-  },
-  {
-    "workerId": "0",
-    "state": "active",
-    "actorPath": "akka.tcp://[email protected]:3000/user/Worker1",
-    "aliveFor": "431546",
-    "logFile": "logs/",
-    "executors": [
-      {
-        "appId": 1,
-        "executorId": 1,
-        "slots": 1
-      }
-    ],
-    "totalSlots": 1000,
-    "availableSlots": 999,
-    "homeDirectory": "/usr/lisa/gearpump/",
-    "jvmName": "11788@lisa"
-  }
-]
-```
-
-### GET api/v1.0/master/config
-Get the configuration of all masters
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master/config
-```
-
-Sample Response:
-
-```
-{
-  "extensions": [
-    "akka.contrib.datareplication.DataReplication$"
-  ]
-  "akka": {
-    "loglevel": "INFO"
-    "log-dead-letters": "off"
-    "log-dead-letters-during-shutdown": "off"
-    "actor": {
-      ## Master forms a akka cluster
-      "provider": "akka.cluster.ClusterActorRefProvider"
-    }
-    "cluster": {
-      "roles": ["master"]
-      "auto-down-unreachable-after": "15s"
-    }
-    "remote": {
-      "log-remote-lifecycle-events": "off"
-    }
-  }
-}
-```
-
-### GET api/v1.0/master/metrics/&lt;query_path&gt;?readLatest=&lt;true|false&gt;
-Get the master node metrics.
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master/metrics/master?readLatest=true
-```
-
-Sample Response:
-
-```
-{
-    "path": "master",
-    "metrics": [{
-        "time": "1450758725070",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "master:memory.heap.used", "value": "59764272"}
-    }, {
-        "time": "1450758725070",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "master:thread.daemon.count", "value": "18"}
-    }, {
-        "time": "1450758725070",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "master:memory.total.committed",
-            "value": "210239488"
-        }
-    }, {
-        "time": "1450758725070",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "master:memory.heap.max", "value": "880017408"}
-    }, {
-        "time": "1450758725070",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "master:memory.total.max", "value": "997457920"}
-    }, {
-        "time": "1450758725070",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "master:memory.heap.committed",
-            "value": "179830784"
-        }
-    }, {
-        "time": "1450758725070",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "master:memory.total.used", "value": "89117352"}
-    }, {
-        "time": "1450758725070",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "master:thread.count", "value": "28"}
-    }]
-}
-```
-
-### POST api/v1.0/master/submitapp
-Submit a streaming job jar to the Gearpump cluster. It functions like the command line
-```
-gear app -jar xx.jar -conf yy.conf -executors 1 <command line arguments>
-```
-
-Required MIME type: "multipart/form-data"
-
-Required post form fields:
-
-1. field name "jar", job jar file.
-
-Optional post form fields:
-
-1. "configfile", configuration file, in UTF8 format.
-2. "configstring", text body of configuration file, in UTF8 format.
-3. "executorcount", the number of JVM processes to start across the cluster for 
this application job.
-4. "args", command line arguments for this job jar.
-
-Example HTML:
-
-```html
-<form id="submitapp" action="http://127.0.0.1:8090/api/v1.0/master/submitapp"
-method="POST" enctype="multipart/form-data">
- 
-Job Jar (*.jar) [Required]:  <br/>
-<input type="file" name="jar"/> <br/> <br/>
- 
-Config file (*.conf) [Optional]:  <br/>
-<input type="file" name="configfile"/> <br/>  <br/>
- 
-Config String, Config File in string format. [Optional]: <br/>
-<input type="text" name="configstring" value="a.b.c.d=1"/> <br/><br/>
- 
-Executor count (integer, how many process to start for this streaming job) 
[Optional]: <br/>
-<input type="text" name="executorcount" value="1"/> <br/><br/>
- 
-Application arguments (String) [Optional]: <br/>
-<input type="text" name="args" value=""/> <br/><br/>
- 
-<input type="submit" value="Submit"/>
- 
-</form>
-```
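-
-The same submission can also be sketched on the command line with curl's multipart 
support; the jar name wordcount.jar and the field values below are placeholders:
-
-```bash
-curl -X POST [--cookie outputAuthenticationCookie.txt] \
-  -F "jar=@wordcount.jar" \
-  -F "executorcount=1" \
-  http://127.0.0.1:8090/api/v1.0/master/submitapp
-```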
-
-### POST api/v1.0/master/submitstormapp
-Submit a Storm jar to the Gearpump cluster. It functions like the command line
-```
-storm app -jar xx.jar -conf yy.yaml <command line arguments>
-```
-
-Required MIME type: "multipart/form-data"
-
-Required post form fields:
-
-1. field name "jar", job jar file.
-
-Optional post form fields:
-
-1. "configfile", .yaml configuration file, in UTF8 format.
-2. "args", command line arguments for this job jar.
-
-Example HTML:
-
-```html
-<form id="submitstormapp" action="http://127.0.0.1:8090/api/v1.0/master/submitstormapp"
-method="POST" enctype="multipart/form-data">
- 
-Job Jar (*.jar) [Required]:  <br/>
-<input type="file" name="jar"/> <br/> <br/>
- 
-Config file (*.yaml) [Optional]:  <br/>
-<input type="file" name="configfile"/> <br/>  <br/>
-
-Application arguments (String) [Optional]: <br/>
-<input type="text" name="args" value=""/> <br/><br/>
- 
-<input type="submit" value="Submit"/>
- 
-</form>
-```
-
-## Worker service
-
-### GET api/v1.0/worker/&lt;workerId&gt;
-Query worker information.
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/worker/0
-```
-
-Sample Response:
-
-```
-{
-  "workerId": "0",
-  "state": "active",
-  "actorPath": "akka.tcp://[email protected]:3000/user/Worker1",
-  "aliveFor": "831069",
-  "logFile": "logs/",
-  "executors": [
-    {
-      "appId": 1,
-      "executorId": 1,
-      "slots": 1
-    }
-  ],
-  "totalSlots": 1000,
-  "availableSlots": 999,
-  "homeDirectory": "/usr/lisa/gearpump/",
-  "jvmName": "11788@lisa"
-}
-```
-
-### GET api/v1.0/worker/&lt;workerId&gt;/config
-Query worker config
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/worker/0/config
-```
-
-Sample Response:
-
-```
-{
-  "extensions": [
-    "akka.contrib.datareplication.DataReplication$"
-  ]
-  "akka": {
-    "loglevel": "INFO"
-    "log-dead-letters": "off"
-    "log-dead-letters-during-shutdown": "off"
-    "actor": {
-      ## Master forms a akka cluster
-      "provider": "akka.cluster.ClusterActorRefProvider"
-    }
-    "cluster": {
-      "roles": ["master"]
-      "auto-down-unreachable-after": "15s"
-    }
-    "remote": {
-      "log-remote-lifecycle-events": "off"
-    }
-  }
-}
-```
-
-### GET api/v1.0/worker/&lt;workerId&gt;/metrics/&lt;query_path&gt;?readLatest=&lt;true|false&gt;
-Get the worker node metrics.
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/worker/0/metrics/worker?readLatest=true
-```
-
-Sample Response:
-
-```
-{
-    "path": "worker",
-    "metrics": [{
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.total.used",
-            "value": "152931440"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker1:thread.daemon.count", "value": "18"}
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.heap.used",
-            "value": "123139640"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.total.max",
-            "value": "997457920"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.heap.committed",
-            "value": "179830784"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker0:thread.count", "value": "28"}
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker0:memory.heap.max", "value": "880017408"}
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker1:memory.heap.max", "value": "880017408"}
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.total.committed",
-            "value": "210239488"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.total.used",
-            "value": "152931440"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker1:thread.count", "value": "28"}
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.total.max",
-            "value": "997457920"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.heap.committed",
-            "value": "179830784"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.total.committed",
-            "value": "210239488"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker0:thread.daemon.count", "value": "18"}
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.heap.used",
-            "value": "123139640"
-        }
-    }]
-}
-```
-
-## Supervisor Service
-
-The Supervisor service allows users to add or remove worker machines.
-
-### POST api/v1.0/supervisor/status
-Query whether the supervisor service is enabled. If the supervisor service is 
disabled, you are not allowed to use APIs like addworker/removeworker.
-
-Example:
-
-```bash
-curl -X POST [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/supervisor/status
-```
-
-Sample Response:
-
-```
-{"enabled":true}
-```
-
-### GET api/v1.0/supervisor
-Get the supervisor path
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/supervisor
-```
-
-Sample Response:
-
-```
-{path: "supervisor actor path"}
-```
-
-### POST api/v1.0/supervisor/addworker/&lt;worker-count&gt;
-Add worker-count new workers to the cluster. It will use a low-level resource 
scheduler such as
-YARN to start new containers and then boot the Gearpump worker processes.
-
-Example:
-
-```bash
-curl -X POST [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/supervisor/addworker/2
-
-```
-
-Sample Response:
-
-```
-{success: true}
-```
-
-### POST api/v1.0/supervisor/removeworker/&lt;worker-id&gt;
-Remove a single worker instance by specifying its worker Id.
-
-**NOTE:** Use with caution!
-
-**NOTE:** All executor JVMs under this worker JVM will also be destroyed. This 
will trigger failover for all
-applications that have executors started under this worker.
-
-Example:
-
-```bash
-curl -X POST [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/supervisor/removeworker/3
-
-```
-
-Sample Response:
-
-```
-{success: true}
-```
-
-## Application service
-
-### GET api/v1.0/appmaster/&lt;appId&gt;?detail=&lt;true|false&gt;
-Query information of a specific application with Id appId
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1?detail=true
-```
-
-Sample Response:
-
-```
-{
-  "appId": 1,
-  "appName": "wordCount",
-  "processors": [
-    [
-      0,
-      {
-        "id": 0,
-        "taskClass": "org.apache.gearpump.streaming.examples.wordcount.Split",
-        "parallelism": 1,
-        "description": "",
-        "taskConf": {
-          "_config": {}
-        },
-        "life": {
-          "birth": "0",
-          "death": "9223372036854775807"
-        },
-        "executors": [
-          1
-        ],
-        "taskCount": [
-          [
-            1,
-            {
-              "count": 1
-            }
-          ]
-        ]
-      }
-    ],
-    [
-      1,
-      {
-        "id": 1,
-        "taskClass": "org.apache.gearpump.streaming.examples.wordcount.Sum",
-        "parallelism": 1,
-        "description": "",
-        "taskConf": {
-          "_config": {}
-        },
-        "life": {
-          "birth": "0",
-          "death": "9223372036854775807"
-        },
-        "executors": [
-          0
-        ],
-        "taskCount": [
-          [
-            0,
-            {
-              "count": 1
-            }
-          ]
-        ]
-      }
-    ]
-  ],
-  "processorLevels": [
-    [
-      0,
-      0
-    ],
-    [
-      1,
-      1
-    ]
-  ],
-  "dag": {
-    "vertexList": [
-      0,
-      1
-    ],
-    "edgeList": [
-      [
-        0,
-        "org.apache.gearpump.partitioner.HashPartitioner",
-        1
-      ]
-    ]
-  },
-  "actorPath": 
"akka.tcp://[email protected]:52212/user/daemon/appdaemon1/$c/appmaster",
-  "clock": "1450759382430",
-  "executors": [
-    {
-      "executorId": 0,
-      "executor": 
"akka.tcp://[email protected]:52240/remote/akka.tcp/[email protected]:52212/user/daemon/appdaemon1/$c/appmaster/executors/0#-1554950276",
-      "workerId": "1",
-      "status": "active"
-    },
-    {
-      "executorId": 1,
-      "executor": 
"akka.tcp://[email protected]:52241/remote/akka.tcp/[email protected]:52212/user/daemon/appdaemon1/$c/appmaster/executors/1#928082134",
-      "workerId": "0",
-      "status": "active"
-    },
-    {
-      "executorId": -1,
-      "executor": "akka://app1-executor-1/user/daemon/appdaemon1/$c/appmaster",
-      "workerId": "1",
-      "status": "active"
-    }
-  ],
-  "startTime": "1450758117306",
-  "uptime": "1268472",
-  "user": "lisa",
-  "homeDirectory": "/usr/lisa/gearpump/",
-  "logFile": "logs/",
-  "historyMetricsConfig": {
-    "retainHistoryDataHours": 72,
-    "retainHistoryDataIntervalMs": 3600000,
-    "retainRecentDataSeconds": 300,
-    "retainRecentDataIntervalMs": 15000
-  }
-}
-```
-
-### DELETE api/v1.0/appmaster/&lt;appId&gt;
-Shut down the application with Id appId.
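-
-Example (a sketch following the pattern of the other endpoints, assuming application Id 1):
-
-```bash
-curl -X DELETE [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1
-```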
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/stallingtasks
-Query the list of unhealthy tasks of a specific application with Id appId
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/2/stallingtasks
-```
-
-Sample Response:
-
-```
-{
-  "tasks": [
-    {
-      "processorId": 0,
-      "index": 0
-    }
-  ]
-}
-```
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/config
-Query the configuration of a specific application appId
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/config
-```
-
-Sample Response:
-
-```
-{
-    "gearpump" : {
-        "appmaster" : {
-            "extraClasspath" : "",
-            "vmargs" : "-server -Xms512M -Xmx1024M -Xss1M 
-XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC 
-XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -XX:NewRatio=3"
-        },
-        "cluster" : {
-            "masters" : [
-                "127.0.0.1:3000"
-            ]
-        },
-        "executor" : {
-            "extraClasspath" : "",
-            "vmargs" : "-server -Xms512M -Xmx1024M -Xss1M 
-XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC 
-XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -XX:NewRatio=3"
-        },
-        "jarstore" : {
-            "rootpath" : "jarstore/"
-        },
-        "log" : {
-            "application" : {
-                "dir" : "logs"
-            },
-            "daemon" : {
-                "dir" : "logs"
-            }
-        },
-        "metrics" : {
-            "enabled" : true,
-            "graphite" : {
-                "host" : "127.0.0.1",
-                "port" : 2003
-            },
-            "logfile" : {},
-            "report-interval-ms" : 15000,
-            "reporter" : "akka",
-            "retainHistoryData" : {
-                "hours" : 72,
-                "intervalMs" : 3600000
-            },
-            "retainRecentData" : {
-                "intervalMs" : 15000,
-                "seconds" : 300
-            },
-            "sample-rate" : 10
-        },
-        "netty" : {
-            "base-sleep-ms" : 100,
-            "buffer-size" : 5242880,
-            "flush-check-interval" : 10,
-            "max-retries" : 30,
-            "max-sleep-ms" : 1000,
-            "message-batch-size" : 262144
-        },
-        "netty-dispatcher" : "akka.actor.default-dispatcher",
-        "scheduling" : {
-            "scheduler-class" : 
"org.apache.gearpump.cluster.scheduler.PriorityScheduler"
-        },
-        "serializers" : {
-            "[B" : "",
-            "[C" : "",
-            "[D" : "",
-            "[F" : "",
-            "[I" : "",
-            "[J" : "",
-            "[Ljava.lang.String;" : "",
-            "[S" : "",
-            "[Z" : "",
-            "org.apache.gearpump.Message" : 
"org.apache.gearpump.streaming.MessageSerializer",
-            "org.apache.gearpump.streaming.task.Ack" : 
"org.apache.gearpump.streaming.AckSerializer",
-            "org.apache.gearpump.streaming.task.AckRequest" : 
"org.apache.gearpump.streaming.AckRequestSerializer",
-            "org.apache.gearpump.streaming.task.LatencyProbe" : 
"org.apache.gearpump.streaming.LatencyProbeSerializer",
-            "org.apache.gearpump.streaming.task.TaskId" : 
"org.apache.gearpump.streaming.TaskIdSerializer",
-            "scala.Tuple1" : "",
-            "scala.Tuple2" : "",
-            "scala.Tuple3" : "",
-            "scala.Tuple4" : "",
-            "scala.Tuple5" : "",
-            "scala.Tuple6" : "",
-            "scala.collection.immutable.$colon$colon" : "",
-            "scala.collection.immutable.List" : ""
-        },
-        "services" : {
-            # gear.conf: 112
-            "host" : "127.0.0.1",
-            # gear.conf: 113
-            "http" : 8090,
-            # gear.conf: 114
-            "ws" : 8091
-        },
-        "task-dispatcher" : "akka.actor.pined-dispatcher",
-        "worker" : {
-            # reference.conf: 100
-            # # How many slots each worker contains
-            "slots" : 100
-        }
-    }
-}
-
-```
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/metrics/&lt;query_path&gt;?readLatest=&lt;true|false&gt;&aggregator=&lt;aggregator_class&gt;
-Query metrics information of a specific application appId, filtered by the 
metrics path query_path.
-
-aggregator points to an aggregator class, which aggregates the current 
metrics and returns a smaller set.
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/metrics/app1?readLatest=true&aggregator=org.apache.gearpump.streaming.metrics.ProcessorAggregator
-```
-
-Sample Response:
-
-```
-{
-    "path": "worker",
-    "metrics": [{
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.total.used",
-            "value": "152931440"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker1:thread.daemon.count", "value": "18"}
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.heap.used",
-            "value": "123139640"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.total.max",
-            "value": "997457920"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.heap.committed",
-            "value": "179830784"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker0:thread.count", "value": "28"}
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker0:memory.heap.max", "value": "880017408"}
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker1:memory.heap.max", "value": "880017408"}
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.total.committed",
-            "value": "210239488"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker0:memory.total.used",
-            "value": "152931440"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker1:thread.count", "value": "28"}
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.total.max",
-            "value": "997457920"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.heap.committed",
-            "value": "179830784"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.total.committed",
-            "value": "210239488"
-        }
-    }, {
-        "time": "1450759137860",
-        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", 
"name": "worker0:thread.daemon.count", "value": "18"}
-    }, {
-        "time": "1450759137860",
-        "value": {
-            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-            "name": "worker1:memory.heap.used",
-            "value": "123139640"
-        }
-    }]
-}
-```
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/errors
-Get task error messages
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/errors
-```
-
-Sample Response:
-
-```
-{"time":"0","error":null}
-```
-
-### POST api/v1.0/appmaster/&lt;appId&gt;/restart
-Restart the application
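-
-Example (a sketch following the pattern of the other endpoints, assuming application Id 1):
-
-```bash
-curl -X POST [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/restart
-```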
-
-## Executor Service
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/executor/&lt;executorid&gt;/config
-Get executor config
-
-Example:
-
-```bash
-curl http://127.0.0.1:8090/api/v1.0/appmaster/1/executor/1/config
-```
-
-Sample Response:
-
-```
-{
-  "extensions": [
-    "akka.contrib.datareplication.DataReplication$"
-  ]
-  "akka": {
-    "loglevel": "INFO"
-    "log-dead-letters": "off"
-    "log-dead-letters-during-shutdown": "off"
-    "actor": {
-      ## Master forms a akka cluster
-      "provider": "akka.cluster.ClusterActorRefProvider"
-    }
-    "cluster": {
-      "roles": ["master"]
-      "auto-down-unreachable-after": "15s"
-    }
-    "remote": {
-      "log-remote-lifecycle-events": "off"
-    }
-  }
-}
-```
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/executor/&lt;executorid&gt;
-Get executor information.
-
-Example:
-
-```bash
-curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/executor/1
-```
-
-Sample Response:
-
-```
-{
-  "id": 1,
-  "workerId": "0",
-  "actorPath": 
"akka.tcp://[email protected]:52241/remote/akka.tcp/[email protected]:52212/user/daemon/appdaemon1/$c/appmaster/executors/1",
-  "logFile": "logs/",
-  "status": "active",
-  "taskCount": 1,
-  "tasks": [
-    [
-      0,
-      [
-        {
-          "processorId": 0,
-          "index": 0
-        }
-      ]
-    ]
-  ],
-  "jvmName": "21304@lisa"
-}
-```

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/dev-storm.md
----------------------------------------------------------------------
diff --git a/docs/dev-storm.md b/docs/dev-storm.md
deleted file mode 100644
index ea5ecbd..0000000
--- a/docs/dev-storm.md
+++ /dev/null
@@ -1,222 +0,0 @@
----
-layout: global
-title: Storm Compatibility
----
-
-Gearpump provides **binary compatibility** for Apache Storm applications. That 
is to say, users can easily grab an existing Storm jar and run it 
-on Gearpump. This documentation illustrates Gearpump's compatibility with 
Storm.  
-
-## What Storm features are supported on Gearpump 
-
-### Storm 0.9.x
-
-| Feature | Support |
-| ------- | ------- |
-| basic topology | yes |
-| DRPC | yes |
-| multi-lang | yes |
-| storm-kafka | yes |
-| Trident | no |
-
-### Storm 0.10.x
-
-| Feature | Support |
-| ----------- | -------------|
-| basic topology | yes | 
-| DRPC | yes |
-| multi-lang | yes |
-| storm-kafka | yes |
-| storm-hdfs| yes | 
-| storm-hbase | yes |
-| storm-hive | yes |
-| storm-jdbc | yes |
-| storm-redis | yes |
-| flux | yes |
-| storm-eventhubs | not verified |
-| Trident | no |
-
-### At Least Once support
-
-With ackers enabled, there are two kinds of At Least Once support in both 
Storm 0.9.x and Storm 0.10.x:
-
-1. the spout will replay messages on message loss as long as the spout is alive
-2. if `KafkaSpout` is used, messages can be replayed from Kafka even if the 
spout crashes. 
-
-Gearpump supports the second for both Storm versions. 
-
-### Security support 
-
-Storm 0.10.x adds security support for the following connectors: 
-
-* 
[storm-hdfs](https://github.com/apache/storm/blob/0.10.x-branch/external/storm-hdfs/README.md)
-* 
[storm-hive](https://github.com/apache/storm/blob/0.10.x-branch/external/storm-hive/README.md)
-* 
[storm-hbase](https://github.com/apache/storm/blob/0.10.x-branch/external/storm-hbase/README.md)
-
-That means users can access Kerberos-enabled HDFS, Hive and HBase with these connectors. Generally, Storm provides two approaches (please refer to the above links for more information):
-
-1. configure Nimbus to automatically get delegation tokens on behalf of the topology submitter user
-2. Kerberos keytabs are already distributed on worker hosts; users configure the keytab path and principal
-
-Gearpump supports the second approach, and users need to add the classpath of HDFS/Hive/HBase to `gearpump.executor.extraClasspath` in `gear.conf` on each node. For example,
-
-```
-  ###################
-  ### Executor argument configuration
-  ### An executor JVM can contain multiple tasks
-  ###################
-  executor {
-    vmargs = "-server -Xms512M -Xmx1024M -Xss1M 
-XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC 
-XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -XX:NewRatio=3  
-Djava.rmi.server.hostname=localhost"
-    extraClasspath = "/etc/hadoop/conf"
-  }
-```
-
-## How to run a Storm application on Gearpump
-
-This section shows how to run an existing Storm jar in a local Gearpump 
cluster.
-
-1. launch a local cluster
-  
-   ```
-   bin/local
-   ```
-
-2. start a Gearpump Nimbus server 
-
-   Users need the server's address (`nimbus.host` and `nimbus.thrift.port`) to submit topologies later. The address is written to a YAML config file set with the `-output` option. 
-   Users can provide an existing config file, in which case only the address will be overwritten. If not provided, a new file `app.yaml` is created with the config.
-
-   ```
-   bin/storm nimbus -output [conf <custom yaml config>]
-   ```
-   
-3. submit Storm applications
-  
-   Users can either submit Storm applications through command line or UI. 
-   
-   a. submit Storm applications through command line
-
-     ```
-     bin/storm app -verbose -config app.yaml -jar 
storm-starter-${STORM_VERSION}.jar storm.starter.ExclamationTopology 
exclamation 
-     ```
-  
-     Users can configure their applications through the following options
-   
-     * `jar` - set the path of a Storm application jar
-     * `config` - submit the custom configuration file generated when 
launching Nimbus
-  
-   b. submit Storm application through UI
-   
-     1. Click on the "Create" button on the applications page on UI. 
-     2. Click on the "Submit Storm Application" item in the pull down menu.
-     3. In the popup console, upload the Storm application jar and the 
configuration file generated when launching Nimbus,
-         and fill in `storm.starter.ExclamationTopology exclamation` as 
arguments.
-     4. Click on the "Submit" button   
-
-   Either way, check the dashboard and you should see data flowing through 
your topology. 
-  
-## How is it different from running on Storm
-
-### Topology submission
-
-When a client submits a Storm topology, Gearpump locally launches a simplified version of Storm's Nimbus server, `GearpumpNimbus`. `GearpumpNimbus` then translates the topology into a Gearpump directed acyclic graph (DAG), which is submitted to the Gearpump master and deployed as a Gearpump application. 
-
-![storm_gearpump_cluster](img/storm_gearpump_cluster.png)
-
-`GearpumpNimbus` supports the following methods
-  
-* `submitTopology` / `submitTopologyWithOpts`
-* `killTopology` / `killTopologyWithOpts`
-* `getTopology` / `getUserTopology`
-* `getClusterInfo`
-
-### Topology translation
-
-Here's an example of `WordCountTopology` with acker bolts (ackers) being 
translated into a Gearpump DAG.
-
-![storm_gearpump_dag](img/storm_gearpump_dag.png)
-
-Gearpump creates a `StormProducer` for each Storm spout and a `StormProcessor` 
for each Storm bolt (except for ackers) with the same parallelism, and wires 
them together using the same grouping strategy (partitioning in Gearpump) as in 
Storm. 
-
-At runtime, spouts and bolts are running inside `StormProducer` tasks and 
`StormProcessor` tasks respectively. Messages emitted by spout are passed to 
`StormProducer`, transferred to `StormProcessor` and passed down to bolt.  
Messages are serialized / de-serialized with Storm serializers.
-
-Storm ackers are dropped since Gearpump has a different mechanism of message 
tracking and flow control. 
-
-### Task execution
-
-Each Storm task is executed by a dedicated thread, while all Gearpump tasks of an executor share a thread pool. Generally, we can achieve better performance with a shared thread pool. It's possible, however, that some tasks block and take up all the threads. In that case, we can fall back to the Storm way by setting `gearpump.task-dispatcher` to `"gearpump.single-thread-dispatcher"` in `gear.conf`.
-
-### Message tracking 
-
-Storm tracks the lineage of each message with ackers to guarantee at-least-once message delivery. Failed messages are re-sent from the spout.
-
-Gearpump [tracks messages between a sender and receiver in an efficient 
way](gearpump-internals.html#how-do-we-detect-message-loss). Message loss 
causes the whole application to replay from the [minimum timestamp of all 
pending messages in the 
system](gearpump-internals.html#application-clock-and-global-clock-service). 
-
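The replay point described above can be pictured with a small sketch (a simplification for illustration, not Gearpump's actual clock service): on message loss, the application rolls back to the minimum timestamp among all pending messages.

```java
/**
 * Simplified sketch (not Gearpump's actual clock service) of the idea above:
 * on message loss, the application replays from the minimum timestamp among
 * all pending messages in the system.
 */
public class ReplayClock {

    /** Returns the replay point, i.e. the minimum pending timestamp. */
    public static long replayPoint(long[] pendingTimestamps) {
        long min = Long.MAX_VALUE; // no pending messages -> nothing to replay
        for (long t : pendingTimestamps) {
            min = Math.min(min, t);
        }
        return min;
    }

    public static void main(String[] args) {
        // three in-flight messages; replay restarts from timestamp 3
        System.out.println(replayPoint(new long[]{5L, 3L, 9L}));
    }
}
```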
-### Flow control
-
-Storm throttles the flow rate at the spout, which stops sending messages if the number of unacked messages exceeds `topology.max.spout.pending`. 
-
-Gearpump has flow control between tasks such that a [sender cannot flood the receiver](gearpump-internals.html#how-do-we-do-flow-control); backpressure is propagated all the way back to the source. 
-
-### Configurations
-
-All Storm configurations are respected with the following priority order 
-
-```
-defaults.yaml < custom file config < application config < component config
-```
-
-where
-
-* application config is submitted from the Storm application along with the topology 
-* component config is set in spout / bolt with `getComponentConfiguration`
-* custom file config is specified with the `-config` option when submitting a Storm application from the command line, or uploaded from the UI
-
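The priority chain above can be sketched as a layered merge (an illustration with hypothetical config keys, not Gearpump's actual config loader): later layers overwrite earlier ones.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of the priority chain above, with hypothetical config keys
 * (this is an illustration, not Gearpump's actual config loader):
 * later layers overwrite earlier ones.
 */
public class ConfigPriority {

    @SafeVarargs
    public static Map<String, String> merge(Map<String, String>... layers) {
        Map<String, String> effective = new LinkedHashMap<>();
        for (Map<String, String> layer : layers) {
            effective.putAll(layer); // later layer wins on key conflicts
        }
        return effective;
    }

    public static void main(String[] args) {
        // defaults.yaml < custom file config < application config < component config
        Map<String, String> effective = merge(
            Map.of("topology.workers", "1", "topology.debug", "false"), // defaults.yaml
            Map.of("topology.workers", "2"),                            // custom file config
            Map.of("topology.debug", "true"),                           // application config
            Map.of("topology.workers", "4"));                           // component config
        System.out.println(effective); // the highest-priority layer wins per key
    }
}
```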
-## StreamCQL Support
-
-[StreamCQL](https://github.com/HuaweiBigData/StreamCQL) is a continuous query language for real-time computation systems, open-sourced by Huawei.
-Since StreamCQL already supports Storm, it's straightforward to run StreamCQL over Gearpump.
-
-1. Install StreamCQL as in the official 
[README](https://github.com/HuaweiBigData/StreamCQL#install-streamcql)
-
-2. Launch Gearpump Nimbus Server as before 
-
-3. Go to the installed stream-cql-binary, and change the following settings in `conf/streaming-site.xml` with the output Nimbus configs from Step 2.
-
-   ```xml
-    <property>
-      <name>streaming.storm.nimbus.host</name>
-      <value>${nimbus.host}</value>
-    </property>
-    <property>
-      <name>streaming.storm.nimbus.port</name>
-      <value>${nimbus.thrift.port}</value>
-    </property>
-   ```
- 
-4. Open CQL client shell and execute a simple cql example
-   
-   ```
-   bin/cql
-   ```
-   
-   ```sql
-   Streaming> CREATE INPUT STREAM s
-       (id INT, name STRING, type INT)
-   SOURCE randomgen
-       PROPERTIES ( timeUnit = "SECONDS", period = "1",
-           eventNumPerperiod = "1", isSchedule = "true" );
-   
-   CREATE OUTPUT STREAM rs
-       (type INT, cc INT)
-   SINK consoleOutput;
-   
-   INSERT INTO STREAM rs SELECT type, COUNT(id) as cc
-       FROM s[RANGE 20 SECONDS BATCH]
-       WHERE id > 5 GROUP BY type;
-   
-   SUBMIT APPLICATION example;    
-   ```
-   
-5. Check the dashboard and you should see data flowing through a topology of 3 
components
-

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/dev-write-1st-app.md
----------------------------------------------------------------------
diff --git a/docs/dev-write-1st-app.md b/docs/dev-write-1st-app.md
deleted file mode 100644
index 2c640d3..0000000
--- a/docs/dev-write-1st-app.md
+++ /dev/null
@@ -1,411 +0,0 @@
----
-layout: global
-displayTitle: Write Your 1st Gearpump App
-title: Write Your 1st Gearpump App
-description: Write Your 1st Gearpump App
----
-
-We'll use 
[wordcount](https://github.com/apache/incubator-gearpump/tree/master/examples/streaming/wordcount/src/main/scala/org/apache/gearpump/streaming/examples/wordcount)
 as an example to illustrate how to write Gearpump applications.
-
-### Maven/Sbt Settings
-
-Repository and library dependencies can be found at [Maven 
Setting](maven-setting.html).
-
-### IDE Setup (Optional)
-You can get your preferred IDE ready for Gearpump by following [this 
guide](dev-ide-setup.html).
-
-### Decide which language and API to use
-Gearpump supports two levels of APIs:
-
-1. Low level API, which is closer to Akka programming and operates on individual events. The API document can be found at [Low Level API Doc](http://gearpump.apache.org/releases/latest/api/scala/index.html#org.apache.gearpump.streaming.package).
-
-2. High level API (aka DSL), which operates on streams instead of individual events. The API document can be found at [DSL API Doc](http://gearpump.apache.org/releases/latest/api/scala/index.html#org.apache.gearpump.streaming.dsl.package).
-
-Both APIs have Java and Scala versions.
-
-So, before writing your first Gearpump application, you need to decide which API and which language to use.
-
-## DSL version for Wordcount
-
-The easiest way to write your streaming application is with the Gearpump DSL. 
-Below we demonstrate how to write a WordCount application using the Gearpump DSL.
-
-
-<div class="codetabs">
-<div data-lang="scala"  markdown="1" >
-
-
-```scala
-/** WordCount with High level DSL */
-object WordCount extends AkkaApp with ArgumentsParser {
-
-  override val options: Array[(String, CLIOption[Any])] = Array.empty
-
-  override def main(akkaConf: Config, args: Array[String]): Unit = {
-    val context = ClientContext(akkaConf)
-    val app = StreamApp("dsl", context)
-    val data = "This is a good start, bingo!! bingo!!"
-
-    //count for each word and output to log
-    app.source(data.lines.toList, 1, "source").
-      // word => (word, count)
-      flatMap(line => line.split("[\\s]+")).map((_, 1)).
-      // (word, count1), (word, count2) => (word, count1 + count2)
-      groupByKey().sum.log
-
-    val appId = context.submit(app)
-    context.close()
-  }
-}
-```
-
-</div>
-
-<div data-lang="java" markdown="1">
-
-```java
-
-/** Java version of WordCount with high level DSL API */
-public class WordCount {
-
-  public static void main(String[] args) throws InterruptedException {
-    main(ClusterConfig.defaultConfig(), args);
-  }
-
-  public static void main(Config akkaConf, String[] args) throws 
InterruptedException {
-    ClientContext context = new ClientContext(akkaConf);
-    JavaStreamApp app = new JavaStreamApp("JavaDSL", context, 
UserConfig.empty());
-    List<String> source = Lists.newArrayList("This is a good start, bingo!! 
bingo!!");
-
-    //create a stream from the string list.
-    JavaStream<String> sentence = app.source(source, 1, UserConfig.empty(), 
"source");
-
-    //tokenize the strings and create a new stream
-    JavaStream<String> words = sentence.flatMap(new FlatMapFunction<String, 
String>() {
-      @Override
-      public Iterator<String> apply(String s) {
-        return Lists.newArrayList(s.split("\\s+")).iterator();
-      }
-    }, "flatMap");
-
-    //map each string as (string, 1) pair
-    JavaStream<Tuple2<String, Integer>> ones = words.map(new 
MapFunction<String, Tuple2<String, Integer>>() {
-      @Override
-      public Tuple2<String, Integer> apply(String s) {
-        return new Tuple2<String, Integer>(s, 1);
-      }
-    }, "map");
-
-    //group by according to string
-    JavaStream<Tuple2<String, Integer>> groupedOnes = ones.groupBy(new 
GroupByFunction<Tuple2<String, Integer>, String>() {
-      @Override
-      public String apply(Tuple2<String, Integer> tuple) {
-        return tuple._1();
-      }
-    }, 1, "groupBy");
-
-    //for each group, make the sum
-    JavaStream<Tuple2<String, Integer>> wordcount = groupedOnes.reduce(new 
ReduceFunction<Tuple2<String, Integer>>() {
-      @Override
-      public Tuple2<String, Integer> apply(Tuple2<String, Integer> t1, 
Tuple2<String, Integer> t2) {
-        return new Tuple2<String, Integer>(t1._1(), t1._2() + t2._2());
-      }
-    }, "reduce");
-
-    //output result using log
-    wordcount.log();
-
-    app.run();
-    context.close();
-  }
-}
-```
-
-</div>
-
-</div>
-
-## Low level API based Wordcount
-
-### Define Processor(Task) class and Partitioner class
-
-An application is a Directed Acyclic Graph (DAG) of processors. In the wordcount example, we will first define two processors, `Split` and `Sum`, and then weave them together.
-
-
-#### Split processor
-
-In the `Split` processor, we simply split a predefined text (the content is 
simplified for conciseness) and send out each split word to `Sum`.
-
-<div class="codetabs">
-<div data-lang="scala"  markdown="1" >
-
-
-```scala
-class Split(taskContext : TaskContext, conf: UserConfig) extends 
Task(taskContext, conf) {
-  import taskContext.output
-
-  override def onStart(startTime : StartTime) : Unit = {
-    self ! Message("start")
-  }
-
-  override def onNext(msg : Message) : Unit = {
-    Split.TEXT_TO_SPLIT.lines.foreach { line =>
-      line.split("[\\s]+").filter(_.nonEmpty).foreach { msg =>
-        output(new Message(msg, System.currentTimeMillis()))
-      }
-    }
-    self ! Message("continue", System.currentTimeMillis())
-  }
-}
-
-object Split {
-  val TEXT_TO_SPLIT = "some text"
-}
-```
-
-</div>
-
-<div data-lang="java" markdown="1">
-```java
-public class Split extends Task {
-
-  public static String TEXT = "This is a good start for java! bingo! bingo! ";
-
-  public Split(TaskContext taskContext, UserConfig userConf) {
-    super(taskContext, userConf);
-  }
-
-  private Long now() {
-    return System.currentTimeMillis();
-  }
-
-  @Override
-  public void onStart(StartTime startTime) {
-    self().tell(new Message("start", now()), self());
-  }
-
-  @Override
-  public void onNext(Message msg) {
-
-    // Split the TEXT to words
-    String[] words = TEXT.split(" ");
-    for (int i = 0; i < words.length; i++) {
-      context.output(new Message(words[i], now()));
-    }
-    self().tell(new Message("next", now()), self());
-  }
-}
-```
-
-</div>
-
-</div>
-
-Essentially, each processor consists of two descriptions:
-
-1. A `Task` to define the operation.
-
-2. A parallelism level to define the number of tasks of this processor in 
parallel. 
- 
-Just like `Split`, every processor extends `Task`. The `onStart` method is called once before any message comes in; the `onNext` method is called to process every incoming message. Note that Gearpump employs a message-driven model, which is why `Split` sends itself a message at the end of `onStart` and `onNext` to trigger the next round of message processing.
-
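The self-messaging pattern described above can be pictured with a toy sketch (plain Java with a local mailbox, not Gearpump's actual Task API): a task only runs when a message arrives, so it sends itself a message to trigger the next round.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Toy illustration (not Gearpump code) of the message-driven model described
 * above: a task only runs when a message arrives, so it re-sends itself a
 * message to trigger the next round of processing.
 */
public class SelfMessagingTask {

    private final Deque<String> mailbox = new ArrayDeque<>();
    private int rounds = 0;

    void onStart() {
        mailbox.add("start"); // kick off processing with a self-message
    }

    void onNext(String msg) {
        rounds++;
        if (rounds < 3) {
            mailbox.add("continue"); // self-message triggers the next round
        }
    }

    int run() {
        onStart();
        while (!mailbox.isEmpty()) {
            onNext(mailbox.poll()); // each delivery drives one onNext call
        }
        return rounds;
    }

    public static void main(String[] args) {
        System.out.println(new SelfMessagingTask().run());
    }
}
```

Without the self-message at the end of `onNext`, the loop would stop after the first round; this is the same reason `Split` messages itself above.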
-#### Sum Processor
-
-The structure of the `Sum` processor is similar. `Sum` does not need to send messages to itself, since it receives messages from `Split`.
-
-<div class="codetabs">
-<div data-lang="scala"  markdown="1" >
-
-```scala
-class Sum (taskContext : TaskContext, conf: UserConfig) extends 
Task(taskContext, conf) {
-  private[wordcount] val map : mutable.HashMap[String, Long] = new 
mutable.HashMap[String, Long]()
-
-  private[wordcount] var wordCount : Long = 0
-  private var snapShotTime : Long = System.currentTimeMillis()
-  private var snapShotWordCount : Long = 0
-
-  private var scheduler : Cancellable = null
-
-  override def onStart(startTime : StartTime) : Unit = {
-    scheduler = taskContext.schedule(new FiniteDuration(5, TimeUnit.SECONDS),
-      new FiniteDuration(5, TimeUnit.SECONDS))(reportWordCount)
-  }
-
-  override def onNext(msg : Message) : Unit = {
-    if (null == msg) {
-      return
-    }
-    val current = map.getOrElse(msg.msg.asInstanceOf[String], 0L)
-    wordCount += 1
-    map.put(msg.msg.asInstanceOf[String], current + 1)
-  }
-
-  override def onStop() : Unit = {
-    if (scheduler != null) {
-      scheduler.cancel()
-    }
-  }
-
-  def reportWordCount() : Unit = {
-    val current : Long = System.currentTimeMillis()
-    LOG.info(s"Task ${taskContext.taskId} Throughput: ${(wordCount - 
snapShotWordCount, (current - snapShotTime) / 1000)} (words, second)")
-    snapShotWordCount = wordCount
-    snapShotTime = current
-  }
-}
-```
-
-</div>
-<div data-lang="java" markdown="1">
-
-```java
-public class Sum extends Task {
-
-  private Logger LOG = super.LOG();
-  private HashMap<String, Integer> wordCount = new HashMap<String, Integer>();
-
-  public Sum(TaskContext taskContext, UserConfig userConf) {
-    super(taskContext, userConf);
-  }
-
-  @Override
-  public void onStart(StartTime startTime) {
-    //skip
-  }
-
-  @Override
-  public void onNext(Message messagePayLoad) {
-    String word = (String) (messagePayLoad.msg());
-    Integer current = wordCount.get(word);
-    if (current == null) {
-      current = 0;
-    }
-    Integer newCount = current + 1;
-    wordCount.put(word, newCount);
-  }
-}
-```
-
-</div>
-
-</div>
-
-Besides counting the sum, in the Scala version we also define a scheduler to report throughput every 5 seconds. The scheduler should be cancelled when the computation completes, which can be accomplished by overriding the `onStop` method. The default implementation of `onStop` is a no-op.
-
-#### Partitioner
-
-A processor can be parallelized into a list of tasks. A `Partitioner` defines how the data is shuffled among the tasks of `Split` and `Sum`. Gearpump provides two built-in partitioners:
-
-* `HashPartitioner`: partitions data based on the message's hashcode
-* `ShufflePartitioner`: partitions data in a round-robin way.
-
-You could define your own partitioner by extending the `Partitioner` 
trait/interface and overriding the `getPartition` method.
-
-
-```scala
-trait Partitioner extends Serializable {
-  def getPartition(msg : Message, partitionNum : Int) : Int
-}
-```
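As an illustration, here is a sketch of a custom partitioner that routes words starting with the same letter to the same task. It uses simplified stand-ins for Gearpump's `Message` and `Partitioner` types, which live in the Gearpump jars and are not reproduced exactly here.

```java
/**
 * A sketch of a custom partitioner, using simplified stand-ins for
 * Gearpump's Message and Partitioner types (the real ones live in the
 * Gearpump jars).
 */
public class CustomPartitionerSketch {

    static final class Message {
        final Object msg;
        Message(Object msg) { this.msg = msg; }
    }

    interface Partitioner {
        int getPartition(Message msg, int partitionNum);
    }

    /** Routes words starting with the same letter to the same task. */
    static final class FirstCharPartitioner implements Partitioner {
        @Override
        public int getPartition(Message msg, int partitionNum) {
            String word = (String) msg.msg;
            int c = word.isEmpty() ? 0 : word.charAt(0);
            return c % partitionNum; // always in [0, partitionNum)
        }
    }

    public static void main(String[] args) {
        Partitioner p = new FirstCharPartitioner();
        System.out.println(p.getPartition(new Message("apple"), 4));
    }
}
```

The returned value must always fall in `[0, partitionNum)`; Gearpump uses it to pick the downstream task for the message.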
-
-### Wrap up as an application 
-
-Now, we are able to write our application class, weaving the above components 
together.
-
-The application class extends `App` and `ArgumentsParser`, which makes it easier to parse arguments and run the main function.
-
-<div class="codetabs">
-<div data-lang="scala"  markdown="1" >
-
-```scala
-object WordCount extends App with ArgumentsParser {
-  private val LOG: Logger = LogUtil.getLogger(getClass)
-  val RUN_FOR_EVER = -1
-
-  override val options: Array[(String, CLIOption[Any])] = Array(
-    "split" -> CLIOption[Int]("<how many split tasks>", required = false, 
defaultValue = Some(1)),
-    "sum" -> CLIOption[Int]("<how many sum tasks>", required = false, 
defaultValue = Some(1))
-  )
-
-  def application(config: ParseResult) : StreamApplication = {
-    val splitNum = config.getInt("split")
-    val sumNum = config.getInt("sum")
-    val partitioner = new HashPartitioner()
-    val split = Processor[Split](splitNum)
-    val sum = Processor[Sum](sumNum)
-    val app = StreamApplication("wordCount", Graph[Processor[_ <: Task], 
Partitioner](split ~ partitioner ~> sum), UserConfig.empty)
-    app
-  }
-
-  val config = parse(args)
-  val context = ClientContext()
-  val appId = context.submit(application(config))
-  context.close()
-}
-
-```
-
-We override the `options` value and define an array of command line arguments to parse. Here, application users can pass in the parallelism of the split and sum tasks. We also specify whether an option is `required` and provide a `defaultValue` for some arguments.
-
-</div>
-
-<div data-lang="java" markdown="1">
-
-```java
-
-/** Java version of WordCount with Processor Graph API */
-public class WordCount {
-
-  public static void main(String[] args) throws InterruptedException {
-    main(ClusterConfig.defaultConfig(), args);
-  }
-
-  public static void main(Config akkaConf, String[] args) throws 
InterruptedException {
-
-    // For the split processor, configure two tasks
-    int splitTaskNumber = 2;
-    Processor split = new 
Processor(Split.class).withParallelism(splitTaskNumber);
-
-    // For the sum processor, configure two sum tasks
-    int sumTaskNumber = 2;
-    Processor sum = new Processor(Sum.class).withParallelism(sumTaskNumber);
-
-    // construct the graph
-    Graph graph = new Graph();
-    graph.addVertex(split);
-    graph.addVertex(sum);
-
-    Partitioner partitioner = new HashPartitioner();
-    graph.addEdge(split, partitioner, sum);
-
-    UserConfig conf = UserConfig.empty();
-    StreamApplication app = new StreamApplication("wordcountJava", conf, 
graph);
-
-    // create master client
-    // It will read the master settings under gearpump.cluster.masters
-    ClientContext masterClient = new ClientContext(akkaConf);
-
-    masterClient.submit(app);
-
-    masterClient.close();
-  }
-}
-```
-
-</div>
-
-</div>
-
-
-
-## Submit application
-
-After all this, you need to package everything into an uber jar and submit the jar to the Gearpump cluster. Please check the [Application submission tool](commandline.html) for the command line tool syntax.
-
-## Advanced topic
-For a real application, you will likely need to define your own customized messages passed between processors.
-A customized message needs a customized serializer so it can be passed over the wire.
-Check [this guide](dev-custom-serializer.html) for how to customize a serializer.
-
-### Gearpump for Non-Streaming Usage
-Gearpump can also serve as a base platform for developing non-streaming applications. See [this guide](dev-non-streaming-example.html) on how to use Gearpump to develop a distributed shell.

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/api/java.md
----------------------------------------------------------------------
diff --git a/docs/docs/api/java.md b/docs/docs/api/java.md
new file mode 100644
index 0000000..3b94f91
--- /dev/null
+++ b/docs/docs/api/java.md
@@ -0,0 +1 @@
+Placeholder

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/api/scala.md
----------------------------------------------------------------------
diff --git a/docs/docs/api/scala.md b/docs/docs/api/scala.md
new file mode 100644
index 0000000..3b94f91
--- /dev/null
+++ b/docs/docs/api/scala.md
@@ -0,0 +1 @@
+Placeholder

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/deployment/deployment-configuration.md
----------------------------------------------------------------------
diff --git a/docs/docs/deployment/deployment-configuration.md 
b/docs/docs/deployment/deployment-configuration.md
new file mode 100644
index 0000000..1dadbd7
--- /dev/null
+++ b/docs/docs/deployment/deployment-configuration.md
@@ -0,0 +1,84 @@
+## Master and Worker configuration
+
+Master and Worker daemons will only read configuration from `conf/gear.conf`.
+
+The Master reads configuration from the `master` and `gearpump` sections:
+
+       :::bash
+       master {
+       }
+       gearpump{
+       }
+
+
+The Worker reads configuration from the `worker` and `gearpump` sections:
+
+       :::bash
+       worker {
+       }
+       gearpump{
+       }
+       
+
+## Configuration for user submitted application job
+
+A user application job reads the configuration files `gear.conf` and `application.conf` from the classpath, with `application.conf` taking higher priority.
+The default classpath contains:
+
+1. `conf/`
+2. the current working directory.
+
+For example, you can put an `application.conf` in your working directory, and it will take effect when you submit a new application job.
+
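The lookup order above can be sketched as follows (hypothetical keys and values for illustration; the real loader reads config files from the classpath): entries in `application.conf` shadow the same entries in `gear.conf`.

```java
import java.util.Map;
import java.util.Optional;

/**
 * Sketch of the lookup order described above (hypothetical values):
 * keys in application.conf shadow the same keys in gear.conf.
 */
public class JobConfigLookup {

    static final Map<String, String> GEAR_CONF =
        Map.of("akka.loglevel", "INFO", "gearpump.worker.slots", "1000");
    static final Map<String, String> APPLICATION_CONF =
        Map.of("akka.loglevel", "DEBUG");

    public static Optional<String> lookup(String key) {
        // application.conf has higher priority; fall back to gear.conf
        return Optional.ofNullable(
            APPLICATION_CONF.getOrDefault(key, GEAR_CONF.get(key)));
    }

    public static void main(String[] args) {
        System.out.println(lookup("akka.loglevel").orElse("?"));         // overridden
        System.out.println(lookup("gearpump.worker.slots").orElse("?")); // falls back
    }
}
```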
+## Logging
+
+To change the log level, you need to change both `gear.conf`, and 
`log4j.properties`.
+
+### To change the log level for master and worker daemon
+
+Please change `log4j.rootLevel` in `log4j.properties`, 
`gearpump-master.akka.loglevel` and `gearpump-worker.akka.loglevel` in 
`gear.conf`.
+
+### To change the log level for application job
+
+Please change `log4j.rootLevel` in `log4j.properties`, and `akka.loglevel` in 
`gear.conf` or `application.conf`.
+
+## Gearpump Default Configuration
+
+This is the default configuration for `gear.conf`.
+
+| config item    | default value  | description      |
+| -------------- | -------------- | ---------------- |
+| gearpump.hostname | "127.0.0.1" | hostname of current machine. If you are 
using local mode, then set this to 127.0.0.1. If you are using cluster mode, 
make sure this hostname can be accessed by other machines. |
+| gearpump.cluster.masters | ["127.0.0.1:3000"] | Config to set the master nodes of the cluster. If there are multiple masters in the list, the master nodes run in HA mode. For example, you may start three masters, on node1: `bin/master -ip node1 -port 3000`, on node2: `bin/master -ip node2 -port 3000`, on node3: `bin/master -ip node3 -port 3000`, then set `gearpump.cluster.masters = ["node1:3000","node2:3000","node3:3000"]` |
+| gearpump.task-dispatcher | "gearpump.shared-thread-pool-dispatcher" | 
default dispatcher for task actor |
+| gearpump.metrics.enabled | true | flag to enable the metrics system |
+| gearpump.metrics.sample-rate | 1 | We take one sample every `gearpump.metrics.sample-rate` data points. Note that sampling may make the statistics on the UI portal inaccurate. Keep it at 1 if you want accurate metrics in the UI |
+| gearpump.metrics.report-interval-ms | 15000 | we will report once every 15 
seconds |
+| gearpump.metrics.reporter  | "akka" | available value: "graphite", "akka", 
"logfile" which write the metrics data to different places. |
+| gearpump.retainHistoryData.hours | 72 | max hours of history data to retain. Note: due to an implementation limitation (we store all history in memory), please don't set this too big, as it may exhaust memory. |
+| gearpump.retainHistoryData.intervalMs | 3600000 |  time interval between two 
data points for history data (unit: ms). Usually this is set to a big value so 
that we only store coarse-grain data |
+| gearpump.retainRecentData.seconds | 300 | max seconds of recent data to 
retain. This is for the fine-grain data |
+| gearpump.retainRecentData.intervalMs | 15000 | time interval between two 
data points for recent data (unit: ms) |
+| gearpump.log.daemon.dir | "logs" | The log directory for daemon 
processes(relative to current working directory) |
+| gearpump.log.application.dir | "logs" | The log directory for 
applications(relative to current working directory) |
+| gearpump.serializers | a map | custom serializer for streaming application, 
e.g. `"scala.Array" = ""` |
+| gearpump.worker.slots | 1000 | How many slots each worker contains |
+| gearpump.appmaster.vmargs | "-server  -Xss1M -XX:+HeapDumpOnOutOfMemoryError 
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC 
-XX:NewRatio=3 -Djava.rmi.server.hostname=localhost" | JVM arguments for 
AppMaster |
+| gearpump.appmaster.extraClasspath | "" | JVM default class path for 
AppMaster |
+| gearpump.executor.vmargs | "-server -Xss1M -XX:+HeapDumpOnOutOfMemoryError 
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC 
-XX:NewRatio=3  -Djava.rmi.server.hostname=localhost" | JVM arguments for 
executor |
+| gearpump.executor.extraClasspath | "" | JVM default class path for executor |
+| gearpump.jarstore.rootpath | "jarstore/" |   Define where the submitted jar 
file will be stored. This path follows the hadoop path schema. For HDFS, use 
`hdfs://host:port/path/`, and HDFS HA, `hdfs://namespace/path/`; if you want to 
store on master nodes, then use local directory. `jarstore.rootpath = 
"jarstore/"` will point to relative directory where master is started. 
`jarstore.rootpath = "/jarstore/"` will point to absolute directory on master 
server |
+| gearpump.scheduling.scheduler-class 
|"org.apache.gearpump.cluster.scheduler.PriorityScheduler" | Class to schedule 
the applications. |
+| gearpump.services.host | "127.0.0.1" | dashboard UI host address |
+| gearpump.services.port | 8090 | dashboard UI host port |
+| gearpump.netty.buffer-size | 5242880 | netty connection buffer size |
+| gearpump.netty.max-retries | 30 | maximum number of retries for a netty 
client to connect to remote host |
+| gearpump.netty.base-sleep-ms | 100 | base sleep time for a netty client to 
retry a connection. Actual sleep time is a multiple of this value |
+| gearpump.netty.max-sleep-ms | 1000 | maximum sleep time for a netty client 
to retry a connection |
+| gearpump.netty.message-batch-size | 262144 | netty max batch size |
+| gearpump.netty.flush-check-interval | 10 | max flush interval for the netty 
layer, in milliseconds |
+| gearpump.netty.dispatcher | "gearpump.shared-thread-pool-dispatcher" | 
default dispatcher for netty client and server |
+| gearpump.shared-thread-pool-dispatcher | default Dispatcher with 
"fork-join-executor" | default shared thread pool dispatcher |
+| gearpump.single-thread-dispatcher | PinnedDispatcher | default single thread 
dispatcher |
+| gearpump.serialization-framework | 
"org.apache.gearpump.serializer.FastKryoSerializationFramework" | Gearpump has 
built-in serialization framework using Kryo. Users are allowed to use a 
different serialization framework, like Protobuf. See 
`org.apache.gearpump.serializer.FastKryoSerializationFramework` to find how a 
custom serialization framework can be defined |
+| worker.executor-share-same-jvm-as-worker | false | whether the executor actor is started in the same JVM (process) as the worker actor. The intention of this setting is the convenience of single-machine debugging; however, the app jar needs to be added to the worker's classpath when you set it to true and have a 'real' worker in the cluster |
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/deployment/deployment-docker.md
----------------------------------------------------------------------
diff --git a/docs/docs/deployment/deployment-docker.md 
b/docs/docs/deployment/deployment-docker.md
new file mode 100644
index 0000000..c71ed9d
--- /dev/null
+++ b/docs/docs/deployment/deployment-docker.md
@@ -0,0 +1,5 @@
+## Gearpump Docker Container
+
+There is a pre-built Docker container available at the [Docker Repo](https://hub.docker.com/r/gearpump/gearpump/).
+
+Check the documents there to find out how to launch a Gearpump cluster in one line.

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/deployment/deployment-ha.md
----------------------------------------------------------------------
diff --git a/docs/docs/deployment/deployment-ha.md 
b/docs/docs/deployment/deployment-ha.md
new file mode 100644
index 0000000..9e907c0
--- /dev/null
+++ b/docs/docs/deployment/deployment-ha.md
@@ -0,0 +1,75 @@
+To support HA, we allow the master to be started on multiple nodes. They form a quorum to decide consistency. For example, if we start masters on 5 nodes and 2 nodes are down, the cluster is still consistent and functional.
+
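The quorum rule above can be sketched as a simple majority check:

```java
/**
 * Quorum sketch: a cluster of n masters stays functional while a strict
 * majority (more than n/2) of them is alive.
 */
public class MasterQuorum {

    public static boolean hasQuorum(int total, int alive) {
        return alive > total / 2;
    }

    public static void main(String[] args) {
        // 5 masters with 2 down: 3 alive out of 5 is still a majority
        System.out.println(hasQuorum(5, 3));
        // losing 3 of 5 breaks the quorum
        System.out.println(hasQuorum(5, 2));
    }
}
```

This is why an odd number of masters (3 or 5) is a common choice: it maximizes the failures tolerated per node deployed.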
+Here are the steps to enable the HA mode:
+
+### 1. Configure.
+
+#### Select master machines
+
+Distribute the package to all nodes. Modify `conf/gear.conf` on all nodes. You 
MUST configure
+
+       :::bash
+       gearpump.hostname
+
+to make it point to your hostname (or IP), and
+
+       :::bash
+       gearpump.cluster.masters
+       
+to a list of master nodes. For example, if you have 3 master nodes (node1, node2, and node3), then `gearpump.cluster.masters` can be set as
+
+       :::bash
+       gearpump.cluster {
+         masters = ["node1:3000", "node2:3000", "node3:3000"]
+       }
+       
+
+#### Configure distributed storage to store application jars.
+In `conf/gear.conf`, for the entry `gearpump.jarstore.rootpath`, please choose the storage folder for application jars. You need to make sure this jar storage is highly available. We support two storage systems:
+
+  1). HDFS
+  
+  You need to configure the `gearpump.jarstore.rootpath` like this
+
+       :::bash
+       hdfs://host:port/path/
+
+
+  For HDFS HA,
+  
+       :::bash
+       hdfs://namespace/path/
+
+
+  2). Shared NFS folder
+  
+  First, mount the NFS directory at the same local path on all master nodes.
+Then set `gearpump.jarstore.rootpath` like this:
+
+       :::bash
+       file:///your_nfs_mapping_directory
+
+
+  3). If you don't set this value, the local directory of the master node will be used.
+  **NOTE:** There is no HA guarantee in this case, which means running applications cannot be recovered when the master goes down.
+
+### 2. Start Daemon.
+
+On node1, node2 and node3, start the master:
+
+       :::bash
+       ## on node1
+       bin/master -ip node1 -port 3000
+
+       ## on node2
+       bin/master -ip node2 -port 3000
+
+       ## on node3
+       bin/master -ip node3 -port 3000
+  
+
+### 3. Done!
+
+Now you have a highly available cluster. You can kill any master node, and master HA will take effect.
+
+**NOTE**: It can take up to 15 seconds for a master node to fail over. You can change the fail-over timeout by setting `gearpump-master.akka.cluster.auto-down-unreachable-after` in `gear.conf` to a smaller value.
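For example, to shorten the fail-over window, a fragment like the following could be added to `gear.conf` (the value is only an illustration; too small a timeout risks false positives on a congested network):

```
gearpump-master.akka.cluster.auto-down-unreachable-after = 10s
```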

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/deployment/deployment-local.md
----------------------------------------------------------------------
diff --git a/docs/docs/deployment/deployment-local.md 
b/docs/docs/deployment/deployment-local.md
new file mode 100644
index 0000000..81e6029
--- /dev/null
+++ b/docs/docs/deployment/deployment-local.md
@@ -0,0 +1,34 @@
+You can start the Gearpump service in a single JVM (local mode) or in a distributed cluster (cluster mode). To start the cluster in local mode, you can use the `local`/`local.bat` helper scripts; this is very useful for development and troubleshooting.
+
+Below are the steps to start a Gearpump service in **Local** mode:
+
+### Step 1: Get your Gearpump binary ready
+To get your Gearpump service running in local mode, you first need to have a Gearpump distribution binary ready.
+Please follow [this guide](get-gearpump-distribution) to get the binary.
+
+### Step 2: Start the cluster
+You can start a local mode cluster in a single line:
+
+       :::bash
+       ## Start the master and 2 workers in a single JVM. The master will listen on port 3000.
+       ## Press Ctrl+C to kill the local cluster after you finish the startup tutorial.
+       bin/local
+       
+
+**NOTE:** You may need to execute `chmod +x bin/*` in shell to make the script 
file `local` executable.
+
+**NOTE:** You can change the default port by changing config 
`gearpump.cluster.masters` in `conf/gear.conf`.
+
+**NOTE: Change the working directory**. Log files are by default generated under the current working directory, so please `cd` to the desired working directory before running the shell commands.
+
+**NOTE: Run as Daemon**. You can run it as a background process. For example, 
use [nohup](http://linux.die.net/man/1/nohup) on Linux.
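For example, a minimal sketch of daemonizing the local cluster with `nohup` (the log and pid file names are arbitrary choices):

```shell
# Run the local cluster in the background, surviving shell logout;
# stdout/stderr are redirected to a log file.
nohup bin/local > local-cluster.log 2>&1 &
echo $! > local-cluster.pid   # remember the PID so the cluster can be stopped later

# To stop it later:
#   kill "$(cat local-cluster.pid)"
```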
+
+### Step 3: Start the Web UI server
+Open another shell,
+
+       :::bash
+       bin/services
+       
+You can manage the applications in the UI at [http://127.0.0.1:8090](http://127.0.0.1:8090) or with the [Command Line tool](../introduction/commandline).
+The default username and password are "admin:admin"; check
+[UI Authentication](../deployment/deployment-ui-authentication) to find out how to manage users.

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/deployment/deployment-msg-delivery.md
----------------------------------------------------------------------
diff --git a/docs/docs/deployment/deployment-msg-delivery.md 
b/docs/docs/deployment/deployment-msg-delivery.md
new file mode 100644
index 0000000..53e8c2e
--- /dev/null
+++ b/docs/docs/deployment/deployment-msg-delivery.md
@@ -0,0 +1,60 @@
+## How to deploy for At Least Once Message Delivery?
+
+As introduced in [What is At Least Once Message Delivery](../introduction/message-delivery#what-is-at-least-once-message-delivery), Gearpump has a built-in KafkaSource. To get at least once message delivery, users should deploy a Kafka cluster as the offset store along with the Gearpump cluster.
+
+Here's an example to deploy a local Kafka cluster. 
+
+1. Download the latest Kafka from the official website and extract it to a local directory (`$KAFKA_HOME`)
+
+2. Boot up the single-node Zookeeper instance packaged with Kafka. 
+
+       :::bash
+       $KAFKA_HOME/bin/zookeeper-server-start.sh 
$KAFKA_HOME/config/zookeeper.properties
+    
+ 
+3. Start a Kafka broker
+
+           :::bash
+           $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
+             
+
+4. When creating an offset store for `KafkaSource`, set the zookeeper connect string to `localhost:2181` and the broker list to `localhost:9092` in `KafkaStorageFactory`.
+
+           :::scala
+           val offsetStorageFactory = new 
KafkaStorageFactory("localhost:2181", "localhost:9092")
+           val source = new KafkaSource("topic1", "localhost:2181", 
offsetStorageFactory)
+           
+
+## How to deploy for Exactly Once Message Delivery?
+
+Exactly Once Message Delivery requires both an offset store and a checkpoint store. For the offset store, a Kafka cluster should be deployed as in the previous section. As for the checkpoint store, Gearpump has built-in support for Hadoop file systems, like HDFS. Hence, users should deploy an HDFS cluster alongside the Gearpump cluster.
+
+Here's an example to deploy a local HDFS cluster.
+
+1. Download Hadoop 2.6 from the official website and extract it to a local directory (`$HADOOP_HOME`)
+
+2. Add the following configuration to `$HADOOP_HOME/etc/hadoop/core-site.xml`
+
+           :::xml
+           <configuration>
+             <property>
+               <name>fs.defaultFS</name>
+               <value>hdfs://localhost:9000</value>
+             </property>
+           </configuration>
+           
+
+3. Start HDFS
+
+           :::bash
+           $HADOOP_HOME/sbin/start-dfs.sh
+           
+   
+4. When creating a `HadoopCheckpointStore`, set the hadoop configuration as in 
the `core-site.xml`
+
+           :::scala
+           val hadoopConfig = new Configuration
+           hadoopConfig.set("fs.defaultFS", "hdfs://localhost:9000")
+           val checkpointStoreFactory = new HadoopCheckpointStoreFactory("MessageCount", hadoopConfig, new FileSizeRotation(1000))
+
+    
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/deployment/deployment-resource-isolation.md
----------------------------------------------------------------------
diff --git a/docs/docs/deployment/deployment-resource-isolation.md 
b/docs/docs/deployment/deployment-resource-isolation.md
new file mode 100644
index 0000000..ee47802
--- /dev/null
+++ b/docs/docs/deployment/deployment-resource-isolation.md
@@ -0,0 +1,112 @@
+CGroup (abbreviated from control groups) is a Linux kernel feature to limit, account, and isolate resource usage (CPU, memory, disk I/O, etc.) of process groups. In Gearpump, cgroups are used to manage CPU resources.
+
+## Start CGroup Service 
+
+The CGroup feature is only supported on Linux with kernel version 2.6.18 or later. Please also make sure SELinux is disabled before starting CGroup.
+
+The following steps are supposed to be executed by the root user.
+
+1. Check whether `/etc/cgconfig.conf` exists. If it does not, please `yum install libcgroup`.
+
+2. Run the following command to see whether the **cpu** subsystem is already mounted to the file system.
+ 
+               :::bash
+               lssubsys -m
+               
+    Each subsystem in CGroup will have a corresponding mount file path in 
local file system. For example, the following output shows that **cpu** 
subsystem is mounted to file path `/sys/fs/cgroup/cpu`
+   
+               :::bash
+               cpu /sys/fs/cgroup/cpu
+               net_cls /sys/fs/cgroup/net_cls
+               blkio /sys/fs/cgroup/blkio
+               perf_event /sys/fs/cgroup/perf_event
+  
+   
+3. If you want to assign permission to user **gear** to launch Gearpump workers and applications with resource isolation enabled, you need to check gear's uid and gid in the `/etc/passwd` file; let's take **500** as an example.
+
+4. Add the following content to `/etc/cgconfig.conf`
+    
+               
+               # The mount point of cpu subsystem.
+               # If your system already mounted it, this segment should be 
eliminated.
+               mount {    
+                 cpu = /cgroup/cpu;
+               }
+                   
+               # Here the group name "gearpump" represents a node in CGroup's 
hierarchy tree.
+               # When the CGroup service is started, there will be a folder 
generated under the mount point of cpu subsystem,
+               # whose name is "gearpump".
+                   
+               group gearpump {
+                  perm {
+                      task {
+                          uid = 500;
+                          gid = 500;
+                       }
+                      admin {
+                          uid = 500;
+                          gid = 500;
+                      }
+                  }
+                  cpu {
+                  }
+               }
+          
+   
+   Please note that if the output of step 2 shows that **cpu** subsystem is 
already mounted, then the `mount` segment should not be included.
+   
+5. Then start the cgroup service
+   
+               :::bash
+               sudo service cgconfig restart
+   
+   
+6. There should be a folder **gearpump** generated under the mount point of the cpu subsystem, and its owner should be **gear:gear**.
+  
+7. Repeat the above-mentioned steps on each machine where you want to launch Gearpump.
+
+## Enable Cgroups in Gearpump 
+1. Log in as user **gear** to a machine that has CGroup prepared.
+
+               :::bash
+               ssh gear@node
+   
+
+2. Enter Gearpump's home folder and edit `gear.conf` under `${GEARPUMP_HOME}/conf/`
+
+               :::bash
+               gearpump.worker.executor-process-launcher = 
"org.apache.gearpump.cluster.worker.CGroupProcessLauncher"
+   
+               gearpump.cgroup.root = "gearpump"
+   
+
+   Please note that `gearpump.cgroup.root` (**gearpump**) must be consistent with the group name in `/etc/cgconfig.conf`.
+
+3. Repeat the above-mentioned steps on each machine where you want to launch 
Gearpump
+
+4. To start the Gearpump cluster, please refer to [Deploy Gearpump in Standalone Mode](deployment-standalone)
+
+## Launch Application From Command Line
+1. Log in to a machine that has the Gearpump distribution.
+
+2. Enter Gearpump's home folder and edit `gear.conf` under `${GEARPUMP_HOME}/conf/`
+   
+               :::bash
+               gearpump.cgroup.cpu-core-limit-per-executor = 
${your_preferred_int_num}
+   
+  
+   This configuration sets the number of CPU cores each executor can use; -1 means no limitation.
+
+3. Submit application
+
+               :::bash
+               bin/gear app -jar 
examples/sol-{{SCALA_BINARY_VERSION}}-{{GEARPUMP_VERSION}}-assembly.jar 
-streamProducer 10 -streamProcessor 10 
+   
+
+4. Then you can run the `top` command to monitor CPU usage.
+
+## Launch Application From Dashboard
+If you want to submit the application from the dashboard, the `gearpump.cgroup.cpu-core-limit-per-executor` setting is by default inherited from the Worker's configuration. You can provide your own conf file to override it.
+
+## Limitations
+Windows and Mac OS X don't support CGroup, so resource isolation will not work even if you turn it on; there will be no limit on a single executor's CPU usage.

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/deployment/deployment-security.md
----------------------------------------------------------------------
diff --git a/docs/docs/deployment/deployment-security.md 
b/docs/docs/deployment/deployment-security.md
new file mode 100644
index 0000000..e20fc67
--- /dev/null
+++ b/docs/docs/deployment/deployment-security.md
@@ -0,0 +1,80 @@
+Currently Gearpump supports deployment in a secured YARN cluster and writing to secured HBase, where "secured" means Kerberos-enabled.
+Further security-related features are in progress.
+
+## How to launch Gearpump in a secured Yarn cluster
+Suppose user `gear` will launch Gearpump on YARN; then the corresponding principal `gear` should be created on the KDC server.
+
+1. Create Kerberos principal for user `gear`, on the KDC machine
+ 
+               :::bash 
+               sudo kadmin.local
+   
+       In the kadmin.local or kadmin shell, create the principal
+   
+               :::bash
+               kadmin:  addprinc 
gear/[email protected]
+   
+       Remember that user `gear` must exist on every node of Yarn. 
+
+2. Upload the gearpump-{{SCALA_BINARY_VERSION}}-{{GEARPUMP_VERSION}}.zip to a remote HDFS folder; we suggest putting it under `/usr/lib/gearpump/gearpump-{{SCALA_BINARY_VERSION}}-{{GEARPUMP_VERSION}}.zip`
+
+3. Create the HDFS folder `/user/gear/` and make sure read-write rights are granted to user `gear`
+
+               :::bash
+               drwxr-xr-x - gear gear 0 2015-11-27 14:03 /user/gear
+   
+   
+4. Put the YARN configurations under the classpath.
+  Before calling `yarnclient launch`, make sure you have put all YARN configuration files under the classpath. Typically, you can just copy all files under `$HADOOP_HOME/etc/hadoop` from one of the YARN cluster machines to `conf/yarnconf` of Gearpump. `$HADOOP_HOME` points to the Hadoop installation directory.
+  
+5. Get Kerberos credentials to submit the job:
+
+               :::bash
+               kinit gearpump/[email protected]
+   
+   
+       Here you can log in with a keytab or password. Please refer to the Kerberos documentation for details. Then launch Gearpump on YARN:
+    
+               :::bash
+               yarnclient launch -package 
/usr/lib/gearpump/gearpump-{{SCALA_BINARY_VERSION}}-{{GEARPUMP_VERSION}}.zip
+   
+  
+## How to write to secured HBase
+When the remote HBase is security-enabled, a Kerberos keytab and the corresponding principal name need to be
+provided for the gearpump-hbase connector. Specifically, the `UserConfig` object passed into the HBaseSink should contain
+`{("gearpump.keytab.file", "\\$keytab"), ("gearpump.kerberos.principal", "\\$principal")}`. Example code for writing to secured HBase:
+
+       :::scala
+       val principal = "gearpump/[email protected]"
+       val keytabContent = Files.toByteArray(new File("path_to_keytab_file"))
+       val appConfig = UserConfig.empty
+             .withString("gearpump.kerberos.principal", principal)
+             .withBytes("gearpump.keytab.file", keytabContent)
+       val sink = new HBaseSink(appConfig, "$tableName")
+       val sinkProcessor = DataSinkProcessor(sink, "$sinkNum")
+       val split = Processor[Split]("$splitNum")
+       val computation = split ~> sinkProcessor
+       val application = StreamApplication("HBase", Graph(computation), 
UserConfig.empty)
+
+
+Note that the keytab file set into the config should be a byte array.
+
+## Future Plan
+
+### More external components support
+1. HDFS
+2. Kafka
+
+### Authentication(Kerberos)
+Since Gearpump’s Master-Worker structure is similar to HDFS’s 
NameNode-DataNode and Yarn’s ResourceManager-NodeManager, we may follow the 
way they use.
+
+1. User creates kerberos principal and keytab for Gearpump.
+2. Deploy the keytab files to all the cluster nodes.
+3. Configure Gearpump's conf file, specifying the Kerberos principal and the local keytab file location.
+4. Start Master and Worker.
+
+Every application has a submitter/user. We will separate the applications of different users, e.g. with different log folders for different applications.
+Only authenticated users can submit applications to Gearpump's Master.
+
+### Authorization
+Hopefully more on this soon

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/761e04c6/docs/docs/deployment/deployment-standalone.md
----------------------------------------------------------------------
diff --git a/docs/docs/deployment/deployment-standalone.md 
b/docs/docs/deployment/deployment-standalone.md
new file mode 100644
index 0000000..c9d5549
--- /dev/null
+++ b/docs/docs/deployment/deployment-standalone.md
@@ -0,0 +1,59 @@
+Standalone mode is a distributed cluster mode; that is, Gearpump runs as a service without help from other services (e.g. YARN).
+
+To deploy Gearpump in cluster mode, please first check that the 
[Pre-requisites](hardware-requirement) are met.
+
+### How to Install
+You need to have Gearpump binary at hand. Please refer to [How to get gearpump 
distribution](get-gearpump-distribution) to get the Gearpump binary.
+
+We suggest unzipping the package to the same directory path on every machine where you plan to install Gearpump.
+To install Gearpump, you at least need to change the configuration in `conf/gear.conf`.
+
+Config | Default value | Description
+------------ | ---------------|------------
+gearpump.hostname | "127.0.0.1" | Host or IP address of the current machine. The ip/host needs to be reachable from the other machines in the cluster.
+gearpump.cluster.masters | ["127.0.0.1:3000"] | List of all master nodes, each item representing the host and port of one master.
+gearpump.worker.slots | 1000 | How many slots this worker has.
+
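Putting the table together, a minimal `conf/gear.conf` fragment for a single-master setup might look like this (the hostname is a placeholder):

```
gearpump {
  ## Hostname of this machine, reachable from the other cluster members
  hostname = "node1"

  cluster {
    masters = ["node1:3000"]
  }

  worker {
    slots = 1000
  }
}
```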
+Besides these, there are other optional configurations related to logs, metrics, transports and UI. You can refer to the [Configuration Guide](deployment-configuration) for more details.
+
+### Start the Cluster Daemons in Standalone Mode
+In Standalone mode, you can start master and worker in different JVMs.
+
+##### To start master:
+
+       :::bash
+       bin/master -ip xx -port xx
+
+The ip and port will be checked against the settings in `conf/gear.conf`, so you need to make sure they are consistent.
+
+**NOTE:** You may need to execute `chmod +x bin/*` in shell to make the script 
file `master` executable.
+
+**NOTE**: for high availability, please check [Master HA Guide](deployment-ha)
+
+##### To start worker:
+
+       :::bash
+       bin/worker
+
+### Start UI
+
+       :::bash
+       bin/services
+       
+
+After UI is started, you can browse to `http://{web_ui_host}:8090` to view the 
cluster status.
+The default username and password is "admin:admin", you can check
+[UI Authentication](deployment-ui-authentication) to find how to manage users.
+
+![Dashboard](../img/dashboard.gif)
+
+**NOTE:** The UI port can be configured in `gear.conf`. Check [Configuration 
Guide](deployment-configuration) for information.
+
+### Bash tool to start cluster
+
+There is a bash tool, `bin/start-cluster.sh`, which can launch the cluster conveniently. You need to change the files `conf/masters`, `conf/workers` and `conf/dashboard` to specify the corresponding machines.
+Before running the bash tool, please make sure the Gearpump package is already unzipped to the same directory path on every machine.
+`bin/stop-cluster.sh` is used to stop the whole cluster, of course.
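For illustration, assuming the common one-host-per-line layout for these files (please verify against the sample files shipped in `conf/`), the cluster topology could be written out like this; the hostnames are hypothetical:

```shell
mkdir -p conf
# conf/masters: machines that run bin/master
printf '%s\n' node1 node2 node3 > conf/masters
# conf/workers: machines that run bin/worker
printf '%s\n' node4 node5 node6 > conf/workers
# conf/dashboard: machine that runs bin/services (the web UI)
printf '%s\n' node1 > conf/dashboard
```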
+
+The bash tool is able to launch the cluster without changing `conf/gear.conf` on every machine; the script sets `gearpump.cluster.masters` and other configurations via JAVA_OPTS.
+However, please note that when you log into any of these unconfigured machines and try to launch the dashboard or submit an application, you still need to modify `conf/gear.conf` manually because the JAVA_OPTS are missing.
