This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git


The following commit(s) were added to refs/heads/main by this push:
     new 887415b  Added versioning
887415b is described below

commit 887415b12e53b808884f88cda09033b0f84b806b
Author: yumengwang03 <[email protected]>
AuthorDate: Tue Jul 12 21:03:04 2022 +0800

    Added versioning
---
 docs/DataModels/01-DevLakeDomainLayerSchema.md     |   2 +-
 docs/Glossary.md                                   |   4 +-
 docs/Overview/01-WhatIsDevLake.md                  |   8 +-
 docs/Plugins/github.md                             |   2 +-
 docusaurus.config.js                               |  28 +-
 .../Dashboards/AverageRequirementLeadTime.md       |   9 +
 .../version-0.11/Dashboards/CommitCountByAuthor.md |   9 +
 .../version-0.11/Dashboards/DetailedBugInfo.md     |   9 +
 .../version-0.11/Dashboards/GitHubBasic.md         |   9 +
 .../GitHubReleaseQualityAndContributionAnalysis.md |   9 +
 versioned_docs/version-0.11/Dashboards/Jenkins.md  |   9 +
 .../version-0.11/Dashboards/WeeklyBugRetro.md      |   9 +
 .../version-0.11/Dashboards/_category_.json        |   4 +
 .../DataModels/01-DevLakeDomainLayerSchema.md      |   2 +-
 .../version-0.11/DataModels/02-DataSupport.md      |  62 +++++
 .../version-0.11/DataModels/_category_.json        |   4 +
 .../DeveloperManuals/04-DeveloperSetup.md          | 130 +++++++++
 .../version-0.11/DeveloperManuals/Dal.md           | 173 ++++++++++++
 .../version-0.11/DeveloperManuals/MIGRATIONS.md    |  36 +++
 .../version-0.11/DeveloperManuals/NOTIFICATION.md  |  33 +++
 .../version-0.11/DeveloperManuals/PluginCreate.md  | 292 +++++++++++++++++++++
 .../version-0.11/DeveloperManuals/_category_.json  |   4 +
 versioned_docs/version-0.11/EngineeringMetrics.md  | 195 ++++++++++++++
 {docs => versioned_docs/version-0.11}/Glossary.md  |   4 +-
 .../version-0.11}/Overview/01-WhatIsDevLake.md     |   8 +-
 .../version-0.11/Overview/02-Architecture.md       |  39 +++
 versioned_docs/version-0.11/Overview/03-Roadmap.md |  36 +++
 .../version-0.11/Overview/_category_.json          |   4 +
 .../version-0.11/Plugins/_category_.json           |   4 +
 versioned_docs/version-0.11/Plugins/dbt.md         |  67 +++++
 versioned_docs/version-0.11/Plugins/feishu.md      |  66 +++++
 versioned_docs/version-0.11/Plugins/gitee.md       | 114 ++++++++
 .../version-0.11/Plugins/gitextractor.md           |  65 +++++
 .../Plugins/github-connection-in-config-ui.png     | Bin 0 -> 51159 bytes
 .../version-0.11}/Plugins/github.md                |   2 +-
 .../Plugins/gitlab-connection-in-config-ui.png     | Bin 0 -> 66616 bytes
 versioned_docs/version-0.11/Plugins/gitlab.md      |  94 +++++++
 versioned_docs/version-0.11/Plugins/jenkins.md     |  61 +++++
 .../Plugins/jira-connection-config-ui.png          | Bin 0 -> 76052 bytes
 .../Plugins/jira-more-setting-in-config-ui.png     | Bin 0 -> 300823 bytes
 versioned_docs/version-0.11/Plugins/jira.md        | 253 ++++++++++++++++++
 versioned_docs/version-0.11/Plugins/refdiff.md     | 118 +++++++++
 versioned_docs/version-0.11/Plugins/tapd.md        |  12 +
 .../version-0.11/QuickStart/01-LocalSetup.md       |  43 +++
 .../version-0.11/QuickStart/02-KubernetesSetup.md  |  32 +++
 .../version-0.11/QuickStart/_category_.json        |   4 +
 .../version-0.11/UserManuals/03-TemporalSetup.md   |  35 +++
 versioned_docs/version-0.11/UserManuals/GRAFANA.md | 120 +++++++++
 .../version-0.11/UserManuals/_category_.json       |   4 +
 .../create-pipeline-in-advanced-mode.md            |  89 +++++++
 .../UserManuals/github-user-guide-v0.10.0.md       | 118 +++++++++
 .../version-0.11/UserManuals/recurring-pipeline.md |  30 +++
 versioned_sidebars/version-0.11-sidebars.json      |   8 +
 versions.json                                      |   3 +
 54 files changed, 2457 insertions(+), 18 deletions(-)

diff --git a/docs/DataModels/01-DevLakeDomainLayerSchema.md 
b/docs/DataModels/01-DevLakeDomainLayerSchema.md
index 80bd41b..2ffa512 100644
--- a/docs/DataModels/01-DevLakeDomainLayerSchema.md
+++ b/docs/DataModels/01-DevLakeDomainLayerSchema.md
@@ -33,7 +33,7 @@ This is the up-to-date domain layer schema for DevLake 
v0.10.x. Tables (entities
 
 
 ### Schema Diagram
-![Domain Layer Schema](../../static/img/schema-diagram.png)
+![Domain Layer Schema](/img/schema-diagram.png)
 
 When reading the schema, you'll notice that many tables' primary key is called 
`id`. Unlike auto-increment id or UUID, `id` is a string composed of several 
parts to uniquely identify similar entities (e.g. repo) from different 
platforms (e.g. Github/Gitlab) and allow them to co-exist in a single table.
 
diff --git a/docs/Glossary.md b/docs/Glossary.md
index dc348f6..4ca3117 100644
--- a/docs/Glossary.md
+++ b/docs/Glossary.md
@@ -25,7 +25,7 @@ The following terms are arranged in the order of their 
appearance in the actual
 
 The relationship among Blueprint, Data Connections, Data Scope and 
Transformation Rules is explained as follows:
 
-![Blueprint ERD](../static/img/blueprint-erd.svg)
+![Blueprint ERD](/img/blueprint-erd.svg)
 - Each blueprint can have multiple data connections.
 - Each data connection can have multiple sets of data scope.
 - Each set of data scope only consists of one GitHub/GitLab project or Jira 
board, along with their corresponding data entities.
@@ -88,7 +88,7 @@ For detailed information about the relationship between data 
sources and data pl
 **A pipeline is an orchestration of [tasks](Glossary.md#tasks) of data `collection`, `extraction`, `conversion` and `enrichment`, defined in the DevLake API.** A pipeline is composed of one or multiple [stages](Glossary.md#stages) that are executed in sequential order. Any error occurring during the execution of any stage, task or subtask will cause the pipeline to fail immediately.
 
 The composition of a pipeline is explained as follows:
-![Blueprint ERD](../static/img/pipeline-erd.svg)
+![Blueprint ERD](/img/pipeline-erd.svg)
 Notice: **You can manually orchestrate the pipeline in Configuration UI 
Advanced Mode and the DevLake API; whereas in Configuration UI regular mode, an 
optimized pipeline orchestration will be automatically generated for you.**
 
 
diff --git a/docs/Overview/01-WhatIsDevLake.md 
b/docs/Overview/01-WhatIsDevLake.md
index 9f998cc..75c64a1 100755
--- a/docs/Overview/01-WhatIsDevLake.md
+++ b/docs/Overview/01-WhatIsDevLake.md
@@ -21,21 +21,21 @@ You can easily set up Apache DevLake by following our 
step-by step instruction f
 ### 2. Create a Blueprint
 The DevLake Configuration UI will guide you through the process (a Blueprint) 
to define the data connections, data scope, transformation and sync frequency 
of the data you wish to collect.
 
-![img](../../static/img/userflow1.svg)
+![img](/img/userflow1.svg)
 
 ### 3. Track the Blueprint's progress
 You can track the progress of the Blueprint you have just set up.
 
-![img](../../static/img/userflow2.svg)
+![img](/img/userflow2.svg)
 
 ### 4. View the pre-built dashboards
 Once the first run of the Blueprint is completed, you can view the 
corresponding dashboards.
 
-![img](../../static/img/userflow3.png)
+![img](/img/userflow3.png)
 
 ### 5. Customize the dashboards with SQL
 If the pre-built dashboards are limited for your use cases, you can always 
customize or create your own metrics or dashboards with SQL.
 
-![img](../../static/img/userflow4.png)
+![img](/img/userflow4.png)
 
 
diff --git a/docs/Plugins/github.md b/docs/Plugins/github.md
index 8dac21b..463f9de 100644
--- a/docs/Plugins/github.md
+++ b/docs/Plugins/github.md
@@ -24,7 +24,7 @@ Here are some examples metrics using `GitHub` data:
 
 ## Screenshot
 
-![image](../../static/img/github-demo.png)
+![image](/img/github-demo.png)
 
 
 ## Configuration
diff --git a/docusaurus.config.js b/docusaurus.config.js
index c104ebb..0046c50 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -1,5 +1,7 @@
 const lightCodeTheme = require('prism-react-renderer/themes/github');
 const darkCodeTheme = require('prism-react-renderer/themes/dracula');
+const versions = require('./versions.json');
+
 
 // With JSDoc @type annotations, IDEs can provide config autocompletion
 /** @type {import('@docusaurus/types').DocusaurusConfig} */
@@ -24,6 +26,14 @@ const darkCodeTheme = 
require('prism-react-renderer/themes/dracula');
           sidebarPath: require.resolve('./sidebars.js'),
           // set to undefined to remove Edit this Page
           editUrl: 
'https://github.com/apache/incubator-devlake-website/edit/main',
+          versions: {
+            current: {
+                path: '',
+            },
+            [versions[0]]: {
+                path: versions[0],
+            }
+          }
         },
         blog: {
           showReadingTime: true,
@@ -76,10 +86,24 @@ const darkCodeTheme = 
require('prism-react-renderer/themes/dracula');
         },
         items: [
           {
-            type: 'doc',
-            docId: 'Overview/WhatIsDevLake',
+            // type: 'docsVersionDropdown',
+            // docId: 'Overview/WhatIsDevLake',
             position: 'right',
             label: 'Docs',
+            items: [
+              ...versions.slice(0, versions.length - 2).map((version) => ({
+                label: version,
+                to: `docs/${version}/Overview/WhatIsDevLake`,
+              })),
+              ...versions.slice(versions.length - 2, versions.length).map((version) => ({
+                label: (version === "1.x") ? "1.x(Not Apache Release)" : version,
+                to: `docs/${version}/Overview/WhatIsDevLake`,
+              })),
+              {
+                label: "Next",
+                to: "/docs/Overview/WhatIsDevLake",
+              }
+            ]
           },
          {
             type: 'doc',
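The dropdown logic in the hunk above can be sketched as plain JavaScript. This is a minimal illustration of how the `versions.json` array maps to navbar items — the version strings below are made up for the example; the real list comes from `versions.json`:

```javascript
// Illustrative stand-in for require('./versions.json')
const versions = ['v0.11', '0.12', '1.x'];

const items = [
  // All but the last two versions get plain labels.
  ...versions.slice(0, versions.length - 2).map((version) => ({
    label: version,
    to: `docs/${version}/Overview/WhatIsDevLake`,
  })),
  // The last two versions special-case the non-Apache "1.x" release.
  ...versions.slice(versions.length - 2).map((version) => ({
    label: version === '1.x' ? '1.x(Not Apache Release)' : version,
    to: `docs/${version}/Overview/WhatIsDevLake`,
  })),
  // "Next" always points at the unversioned (current) docs.
  { label: 'Next', to: '/docs/Overview/WhatIsDevLake' },
];

console.log(items.map((i) => i.label).join(', '));
// → v0.11, 0.12, 1.x(Not Apache Release), Next
```

Note that every versioned entry links to the same landing page (`Overview/WhatIsDevLake`), only under a version-prefixed path.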
diff --git 
a/versioned_docs/version-0.11/Dashboards/AverageRequirementLeadTime.md 
b/versioned_docs/version-0.11/Dashboards/AverageRequirementLeadTime.md
new file mode 100644
index 0000000..0710335
--- /dev/null
+++ b/versioned_docs/version-0.11/Dashboards/AverageRequirementLeadTime.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 6
+title: "Average Requirement Lead Time by Assignee"
+description: >
+  DevLake Live Demo
+---
+
+# Average Requirement Lead Time by Assignee
+<iframe src="https://grafana-lake.demo.devlake.io/d/q27fk7cnk/demo-average-requirement-lead-time-by-assignee?orgId=1&from=1635945684845&to=1651584084846" width="100%" height="940px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11/Dashboards/CommitCountByAuthor.md 
b/versioned_docs/version-0.11/Dashboards/CommitCountByAuthor.md
new file mode 100644
index 0000000..04e029c
--- /dev/null
+++ b/versioned_docs/version-0.11/Dashboards/CommitCountByAuthor.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 2
+title: "Commit Count by Author"
+description: >
+  DevLake Live Demo
+---
+
+# Commit Count by Author
+<iframe src="https://grafana-lake.demo.devlake.io/d/F0iYknc7z/demo-commit-count-by-author?orgId=1&from=1634911190615&to=1650635990615" width="100%" height="820px"></iframe>
diff --git a/versioned_docs/version-0.11/Dashboards/DetailedBugInfo.md 
b/versioned_docs/version-0.11/Dashboards/DetailedBugInfo.md
new file mode 100644
index 0000000..b777617
--- /dev/null
+++ b/versioned_docs/version-0.11/Dashboards/DetailedBugInfo.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+title: "Detailed Bug Info"
+description: >
+  DevLake Live Demo
+---
+
+# Detailed Bug Info
+<iframe src="https://grafana-lake.demo.devlake.io/d/s48Lzn5nz/demo-detailed-bug-info?orgId=1&from=1635945709579&to=1651584109579" width="100%" height="800px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11/Dashboards/GitHubBasic.md 
b/versioned_docs/version-0.11/Dashboards/GitHubBasic.md
new file mode 100644
index 0000000..7ea28cd
--- /dev/null
+++ b/versioned_docs/version-0.11/Dashboards/GitHubBasic.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+title: "GitHub Basic Metrics"
+description: >
+  DevLake Live Demo
+---
+
+# GitHub Basic Metrics
+<iframe src="https://grafana-lake.demo.devlake.io/d/KXWvOFQnz/github_basic_metrics?orgId=1&from=1635945132339&to=1651583532339" width="100%" height="3080px"></iframe>
\ No newline at end of file
diff --git 
a/versioned_docs/version-0.11/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
 
b/versioned_docs/version-0.11/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
new file mode 100644
index 0000000..61db78f
--- /dev/null
+++ 
b/versioned_docs/version-0.11/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 5
+title: "GitHub Release Quality and Contribution Analysis"
+description: >
+  DevLake Live Demo
+---
+
+# GitHub Release Quality and Contribution Analysis
+<iframe src="https://grafana-lake.demo.devlake.io/d/2xuOaQUnk1/github_release_quality_and_contribution_analysis?orgId=1&from=1635945847658&to=1651584247658" width="100%" height="2800px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11/Dashboards/Jenkins.md 
b/versioned_docs/version-0.11/Dashboards/Jenkins.md
new file mode 100644
index 0000000..506a3c9
--- /dev/null
+++ b/versioned_docs/version-0.11/Dashboards/Jenkins.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 7
+title: "Jenkins"
+description: >
+  DevLake Live Demo
+---
+
+# Jenkins
+<iframe src="https://grafana-lake.demo.devlake.io/d/W8AiDFQnk/jenkins?orgId=1&from=1635945337632&to=1651583737632" width="100%" height="1060px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11/Dashboards/WeeklyBugRetro.md 
b/versioned_docs/version-0.11/Dashboards/WeeklyBugRetro.md
new file mode 100644
index 0000000..adbc4e8
--- /dev/null
+++ b/versioned_docs/version-0.11/Dashboards/WeeklyBugRetro.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+title: "Weekly Bug Retro"
+description: >
+  DevLake Live Demo
+---
+
+# Weekly Bug Retro
+<iframe src="https://grafana-lake.demo.devlake.io/d/-5EKA5w7k/weekly-bug-retro?orgId=1&from=1635945873174&to=1651584273174" width="100%" height="2240px"></iframe>
diff --git a/versioned_docs/version-0.11/Dashboards/_category_.json 
b/versioned_docs/version-0.11/Dashboards/_category_.json
new file mode 100644
index 0000000..b27df44
--- /dev/null
+++ b/versioned_docs/version-0.11/Dashboards/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Dashboards (Live Demo)",
+  "position": 9
+}
diff --git a/docs/DataModels/01-DevLakeDomainLayerSchema.md 
b/versioned_docs/version-0.11/DataModels/01-DevLakeDomainLayerSchema.md
similarity index 99%
copy from docs/DataModels/01-DevLakeDomainLayerSchema.md
copy to versioned_docs/version-0.11/DataModels/01-DevLakeDomainLayerSchema.md
index 80bd41b..2ffa512 100644
--- a/docs/DataModels/01-DevLakeDomainLayerSchema.md
+++ b/versioned_docs/version-0.11/DataModels/01-DevLakeDomainLayerSchema.md
@@ -33,7 +33,7 @@ This is the up-to-date domain layer schema for DevLake 
v0.10.x. Tables (entities
 
 
 ### Schema Diagram
-![Domain Layer Schema](../../static/img/schema-diagram.png)
+![Domain Layer Schema](/img/schema-diagram.png)
 
 When reading the schema, you'll notice that many tables' primary key is called 
`id`. Unlike auto-increment id or UUID, `id` is a string composed of several 
parts to uniquely identify similar entities (e.g. repo) from different 
platforms (e.g. Github/Gitlab) and allow them to co-exist in a single table.
 
diff --git a/versioned_docs/version-0.11/DataModels/02-DataSupport.md 
b/versioned_docs/version-0.11/DataModels/02-DataSupport.md
new file mode 100644
index 0000000..7067da1
--- /dev/null
+++ b/versioned_docs/version-0.11/DataModels/02-DataSupport.md
@@ -0,0 +1,62 @@
+---
+title: "Data Support"
+linkTitle: "Data Support"
+tags: []
+categories: []
+weight: 2
+description: >
+  Data sources that DevLake supports
+---
+
+
+## Data Sources and Data Plugins
+DevLake supports the following data sources. The data from each data source is collected with one or more plugins. There are 9 data plugins in total: `ae`, `feishu`, `gitextractor`, `github`, `gitlab`, `jenkins`, `jira`, `refdiff` and `tapd`.
+
+
+| Data Source | Versions                             | Plugins |
+|-------------|--------------------------------------|---------|
+| AE          |                                      | `ae`    |
+| Feishu      | Cloud                                | `feishu` |
+| GitHub      | Cloud                                | `github`, `gitextractor`, `refdiff` |
+| Gitlab      | Cloud, Community Edition 13.x+       | `gitlab`, `gitextractor`, `refdiff` |
+| Jenkins     | 2.263.x+                             | `jenkins` |
+| Jira        | Cloud, Server 8.x+, Data Center 8.x+ | `jira` |
+| TAPD        | Cloud                                | `tapd` |
+
+
+
+## Data Collection Scope By Each Plugin
+This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./01-DevLakeDomainLayerSchema.md).
+
+| Domain Layer Entities | ae             | gitextractor | github         | gitlab  | jenkins | jira    | refdiff | tapd    |
+| --------------------- | -------------- | ------------ | -------------- | ------- | ------- | ------- | ------- | ------- |
+| commits               | update commits | default      | not-by-default | default |         |         |         |         |
+| commit_parents        |                | default      |                |         |         |         |         |         |
+| commit_files          |                | default      |                |         |         |         |         |         |
+| pull_requests         |                |              | default        | default |         |         |         |         |
+| pull_request_commits  |                |              | default        | default |         |         |         |         |
+| pull_request_comments |                |              | default        | default |         |         |         |         |
+| pull_request_labels   |                |              | default        |         |         |         |         |         |
+| refs                  |                | default      |                |         |         |         |         |         |
+| refs_commits_diffs    |                |              |                |         |         |         | default |         |
+| refs_issues_diffs     |                |              |                |         |         |         | default |         |
+| ref_pr_cherry_picks   |                |              |                |         |         |         | default |         |
+| repos                 |                |              | default        | default |         |         |         |         |
+| repo_commits          |                | default      | default        |         |         |         |         |         |
+| board_repos           |                |              |                |         |         |         |         |         |
+| issue_commits         |                |              |                |         |         |         |         |         |
+| issue_repo_commits    |                |              |                |         |         |         |         |         |
+| pull_request_issues   |                |              |                |         |         |         |         |         |
+| boards                |                |              | default        |         |         | default |         | default |
+| board_issues          |                |              | default        |         |         | default |         | default |
+| issue_changelogs      |                |              |                |         |         | default |         | default |
+| issues                |                |              | default        |         |         | default |         | default |
+| issue_comments        |                |              |                |         |         | default |         | default |
+| issue_labels          |                |              | default        |         |         |         |         |         |
+| sprints               |                |              |                |         |         | default |         | default |
+| issue_worklogs        |                |              |                |         |         | default |         | default |
+| users                 |                |              | default        |         |         | default |         | default |
+| builds                |                |              |                |         | default |         |         |         |
+| jobs                  |                |              |                |         | default |         |         |         |
+
diff --git a/versioned_docs/version-0.11/DataModels/_category_.json 
b/versioned_docs/version-0.11/DataModels/_category_.json
new file mode 100644
index 0000000..e678e71
--- /dev/null
+++ b/versioned_docs/version-0.11/DataModels/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Data Models",
+  "position": 5
+}
diff --git a/versioned_docs/version-0.11/DeveloperManuals/04-DeveloperSetup.md 
b/versioned_docs/version-0.11/DeveloperManuals/04-DeveloperSetup.md
new file mode 100644
index 0000000..cb27440
--- /dev/null
+++ b/versioned_docs/version-0.11/DeveloperManuals/04-DeveloperSetup.md
@@ -0,0 +1,130 @@
+---
+title: "Developer Setup"
+description: >
+  The steps to install DevLake in developer mode.
+---
+
+
+#### Requirements
+
+- <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
+- <a href="https://golang.org/doc/install" target="_blank">Golang v1.17+</a>
+- Make
+  - Mac (Already installed)
+  - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
+  - Ubuntu: `sudo apt-get install build-essential libssl-dev`
+
+#### How to setup dev environment
+1. Navigate to where you would like to install this project and clone the 
repository:
+
+   ```sh
+   git clone https://github.com/apache/incubator-devlake
+   cd incubator-devlake
+   ```
+
+2. Install dependencies for plugins:
+
+   - [RefDiff](../Plugins/refdiff.md#development)
+
+3. Install Go packages
+
+    ```sh
+    go get
+    ```
+
+4. Copy the sample config file to new local file:
+
+    ```sh
+    cp .env.example .env
+    ```
+
+5. Update the following variables in the file `.env`:
+
+    * `DB_URL`: Replace `mysql:3306` with `127.0.0.1:3306`
+
+6. Start the MySQL and Grafana containers:
+
+    > Make sure the Docker daemon is running before this step.
+
+    ```sh
+    docker-compose up -d mysql grafana
+    ```
+
+7. Run lake and config UI in dev mode in two separate terminals:
+
+    ```sh
+    # install mockery
+    go install github.com/vektra/mockery/v2@latest
+    # generate mocking stubs
+    make mock
+    # run lake
+    make dev
+    # run config UI
+    make configure-dev
+    ```
+
+    Q: I got an error saying: `libgit2.so.1.3: cannot open shared object file: No such file or directory`
+
+    A: Make sure your program can find `libgit2.so.1.3`. If `libgit2.so.1.3` is located at `/usr/local/lib`, `LD_LIBRARY_PATH` can be set like this:
+
+    ```sh
+    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
+    ```
+
+8. Visit config UI at `localhost:4000` to configure data connections.
+    - Navigate to desired plugins pages on the Integrations page
+    - Enter the required information for the plugins you intend to use.
+    - Refer to the following for more details on how to configure each one:
+        - [Jira](../Plugins/jira.md)
+        - [GitLab](../Plugins/gitlab.md)
+        - [Jenkins](../Plugins/jenkins.md)
+        - [GitHub](../Plugins/github.md): For users who'd like to collect 
GitHub data, we recommend reading our [GitHub data collection 
guide](../UserManuals/github-user-guide-v0.10.0.md) which covers the following 
steps in detail.
+    - Submit the form to update the values by clicking on the **Save 
Connection** button on each form page
+
+9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data 
collection.
+
+
+   Pipeline runs can be initiated from the new "Create Run" interface. Simply enable the **Data Connection Providers** you wish to collect from, and specify the data you want to collect, for instance, **Project ID** for GitLab and **Repository Name** for GitHub.
+
+   Once a valid pipeline configuration has been created, press **Create Run** 
to start/run the pipeline.
+   After the pipeline starts, you will be automatically redirected to the 
**Pipeline Activity** screen to monitor collection activity.
+
+   **Pipelines** is accessible from the main menu of the config-ui for easy 
access.
+
+   - Manage All Pipelines: `http://localhost:4000/pipelines`
+   - Create Pipeline RUN: `http://localhost:4000/pipelines/create`
+   - Track Pipeline Activity: 
`http://localhost:4000/pipelines/activity/[RUN_ID]`
+
+   For advanced use cases and complex pipelines, please use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API Endpoint.
+
+    ```json
+    [
+        [
+            {
+                "plugin": "github",
+                "options": {
+                    "repo": "lake",
+                    "owner": "merico-dev"
+                }
+            }
+        ]
+    ]
+    ```
+
+   Please refer to [Pipeline Advanced Mode](../UserManuals/create-pipeline-in-advanced-mode.md) for an in-depth explanation.
+
+
+10. Click the *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
+
+   We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
+
+   All the details on provisioning and customizing a dashboard can be found in 
the [Grafana Doc](../UserManuals/GRAFANA.md).
+
+11. (Optional) To run the tests:
+
+    ```sh
+    make test
+    ```
+
+12. For DB migrations, please refer to [Migration 
Doc](../DeveloperManuals/MIGRATIONS.md).
+<br/><br/><br/>
diff --git a/versioned_docs/version-0.11/DeveloperManuals/Dal.md 
b/versioned_docs/version-0.11/DeveloperManuals/Dal.md
new file mode 100644
index 0000000..da27a55
--- /dev/null
+++ b/versioned_docs/version-0.11/DeveloperManuals/Dal.md
@@ -0,0 +1,173 @@
+---
+title: "Dal"
+sidebar_position: 4
+description: >
+  The Dal (Data Access Layer) is designed to decouple the hard dependency on 
`gorm` in v0.12
+---
+
+## Summary
+
+The Dal (Data Access Layer) is designed to decouple the hard dependency on 
`gorm` in v0.12.  The advantages of introducing this isolation are:
+
+ - Unit Test: mocking an interface is easier and more reliable than patching a pointer.
+ - Clean Code: DB operations are more consistent than using `gorm` directly.
+ - Replaceable: it would be easier to replace `gorm` in the future if needed.
+
+## The Dal Interface
+
+```go
+type Dal interface {
+       AutoMigrate(entity interface{}, clauses ...Clause) error
+       Exec(query string, params ...interface{}) error
+       RawCursor(query string, params ...interface{}) (*sql.Rows, error)
+       Cursor(clauses ...Clause) (*sql.Rows, error)
+       Fetch(cursor *sql.Rows, dst interface{}) error
+       All(dst interface{}, clauses ...Clause) error
+       First(dst interface{}, clauses ...Clause) error
+       Count(clauses ...Clause) (int64, error)
+       Pluck(column string, dest interface{}, clauses ...Clause) error
+       Create(entity interface{}, clauses ...Clause) error
+       Update(entity interface{}, clauses ...Clause) error
+       CreateOrUpdate(entity interface{}, clauses ...Clause) error
+       CreateIfNotExist(entity interface{}, clauses ...Clause) error
+       Delete(entity interface{}, clauses ...Clause) error
+       AllTables() ([]string, error)
+}
+```
+
+
+## How to use
+
+### Query
+```go
+// Get a database cursor
+user := &models.User{}
+cursor, err := db.Cursor(
+  dal.From(user),
+  dal.Where("department = ?", "R&D"),
+  dal.Orderby("id DESC"),
+)
+if err != nil {
+  return err
+}
+for cursor.Next() {
+  err = db.Fetch(cursor, user)  // fetch one record at a time
+  ...
+}
+
+// Get a database cursor by raw sql query
+cursor, err := db.RawCursor("SELECT * FROM users")
+
+// USE WITH CAUTION: loading a big table at once is slow and dangerous
+// Load all records from the database at once.
+users := make([]models.Users, 0)
+err := db.All(&users, dal.Where("department = ?", "R&D"))
+
+// Load a column as Scalar or Slice
+var email string
+err := db.Pluck("email", &email, dal.Where("id = ?", 1))
+var emails []string
+err := db.Pluck("email", &emails)
+
+// Execute query
+err := db.Exec("UPDATE users SET department = ? WHERE department = ?", 
"Research & Development", "R&D")
+```
+
+### Insert
+```go
+err := db.Create(&models.User{
+  Email: "[email protected]", // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Update
+```go
+err := db.Update(&models.User{
+  Email: "[email protected]", // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+### Insert or Update
+```go
+err := db.CreateOrUpdate(&models.User{
+  Email: "[email protected]",  // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Insert if the record (by primary key) doesn't exist
+```go
+err := db.CreateIfNotExist(&models.User{
+  Email: "[email protected]",  // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Delete
+```go
+err := db.Delete(&models.User{
+  Email: "[email protected]",  // assuming this is the Primary key
+})
+```
+
+### DDL and others
+```go
+// Returns all table names
+allTables, err := db.AllTables()
+
+// Automigrate: create/add missing table/columns
+// Note: it won't delete any existing columns, nor does it update column definitions
+err := db.AutoMigrate(&models.User{})
+```
+
+## How to do Unit Test
+First, run `make mock` to generate the mocking stubs; the generated source files will appear in the `mocks` folder.
+```
+mocks
+├── ApiResourceHandler.go
+├── AsyncResponseHandler.go
+├── BasicRes.go
+├── CloseablePluginTask.go
+├── ConfigGetter.go
+├── Dal.go
+├── DataConvertHandler.go
+├── ExecContext.go
+├── InjectConfigGetter.go
+├── InjectLogger.go
+├── Iterator.go
+├── Logger.go
+├── Migratable.go
+├── PluginApi.go
+├── PluginBlueprintV100.go
+├── PluginInit.go
+├── PluginMeta.go
+├── PluginTask.go
+├── RateLimitedApiClient.go
+├── SubTaskContext.go
+├── SubTaskEntryPoint.go
+├── SubTask.go
+└── TaskContext.go
+```
+With these mocking stubs, you can start writing your test cases using `mocks.Dal`.
+```go
+import "github.com/apache/incubator-devlake/mocks"
+
+func TestCreateUser(t *testing.T) {
+    mockDal := new(mocks.Dal)
+    mockDal.On("Create", mock.Anything, mock.Anything).Return(nil).Once()
+    userService := &services.UserService{
+        Dal: mockDal,
+    }
+    userService.Post(map[string]interface{}{
+        "email": "[email protected]",
+        "name": "hello",
+        "department": "R&D",
+    })
+    mockDal.AssertExpectations(t)
+}
+```
+
diff --git a/versioned_docs/version-0.11/DeveloperManuals/MIGRATIONS.md 
b/versioned_docs/version-0.11/DeveloperManuals/MIGRATIONS.md
new file mode 100644
index 0000000..edab4ca
--- /dev/null
+++ b/versioned_docs/version-0.11/DeveloperManuals/MIGRATIONS.md
@@ -0,0 +1,36 @@
+---
+title: "DB Migration"
+description: >
+  DB Migration
+---
+
+# Migrations (Database)
+
+## Summary
+Starting from v0.10.0, DevLake provides a lightweight migration tool for executing migration scripts.
+Both the framework itself and its plugins define migration scripts in their own migration folders.
+The migration scripts are written with gorm in Golang to support different SQL dialects.
+
+
+## Migration script
+A migration script describes how to perform a database migration.
+Scripts implement the `Script` interface.
+When DevLake starts, scripts register themselves with the framework by invoking the `Register` function.
+
+```go
+type Script interface {
+       Up(ctx context.Context, db *gorm.DB) error
+       Version() uint64
+       Name() string
+}
+```
+
+## Table `migration_history`
+
+The table tracks the execution of migration scripts and schema changes, from which DevLake can figure out the current state of the database schemas.
+
+## How it Works
+1. Check the `migration_history` table and determine which migration scripts need to be executed.
+2. Sort scripts by Version in ascending order.
+3. Execute scripts.
+4. Save results in the `migration_history` table.
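The four steps above can be sketched as a self-contained Go program. This is a simplified sketch: the real runner passes a `*gorm.DB` to `Up` and persists results in the `migration_history` table, whereas here history is an in-memory map and the two script types are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
)

// Script mirrors the migration interface described above, with the
// *gorm.DB parameter omitted to keep this sketch self-contained.
type Script interface {
	Up() error
	Version() uint64
	Name() string
}

// migrate executes every script whose Version is not yet recorded in
// history, in ascending Version order, and records each success.
func migrate(scripts []Script, history map[uint64]bool) ([]string, error) {
	// Step 1: filter out scripts that have already been executed.
	pending := []Script{}
	for _, s := range scripts {
		if !history[s.Version()] {
			pending = append(pending, s)
		}
	}
	// Step 2: sort by Version in ascending order.
	sort.Slice(pending, func(i, j int) bool {
		return pending[i].Version() < pending[j].Version()
	})
	// Steps 3-4: execute scripts and save results.
	executed := []string{}
	for _, s := range pending {
		if err := s.Up(); err != nil {
			return executed, err
		}
		history[s.Version()] = true
		executed = append(executed, s.Name())
	}
	return executed, nil
}

// Two hypothetical scripts, registered out of Version order on purpose.
type createUserTable struct{}

func (createUserTable) Up() error       { return nil }
func (createUserTable) Version() uint64 { return 20220101000001 }
func (createUserTable) Name() string    { return "create user table" }

type addEmailColumn struct{}

func (addEmailColumn) Up() error       { return nil }
func (addEmailColumn) Version() uint64 { return 20220101000002 }
func (addEmailColumn) Name() string    { return "add email column" }

func main() {
	history := map[uint64]bool{}
	ran, _ := migrate([]Script{addEmailColumn{}, createUserTable{}}, history)
	fmt.Println(ran) // scripts run in Version order regardless of registration order
}
```

Because ordering is by `Version()`, scripts can be registered in any order and the runner still applies them deterministically.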
diff --git a/versioned_docs/version-0.11/DeveloperManuals/NOTIFICATION.md 
b/versioned_docs/version-0.11/DeveloperManuals/NOTIFICATION.md
new file mode 100644
index 0000000..d5ebd2b
--- /dev/null
+++ b/versioned_docs/version-0.11/DeveloperManuals/NOTIFICATION.md
@@ -0,0 +1,33 @@
+---
+title: "Notifications"
+description: >
+  Notifications
+---
+
+# Notification
+
+## Request
+Example request
+```
+POST /lake/notify?nouce=3-FDXxIootApWxEVtz&sign=424c2f6159bd9e9828924a53f9911059433dc14328a031e91f9802f062b495d5
+
+{"TaskID":39,"PluginName":"jenkins","CreatedAt":"2021-09-30T15:28:00.389+08:00","UpdatedAt":"2021-09-30T15:28:00.785+08:00"}
+```
+
+## Configuration
+If you want to use the notification feature, you should add two configuration keys to the `.env` file.
+```shell
+# .env
+# notification request url, e.g.: http://example.com/lake/notify
+NOTIFICATION_ENDPOINT=
+# secret is used to calculate signature
+NOTIFICATION_SECRET=
+```
+
+## Signature
+You should verify the signature before accepting a notification request. We use the SHA-256 algorithm to calculate the checksum.
+```go
+// calculate checksum
+sum := sha256.Sum256([]byte(requestBody + NOTIFICATION_SECRET + nouce))
+return hex.EncodeToString(sum[:])
+```
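Putting the snippet above into a runnable form (a sketch; the `sign` and `verify` helper names are illustrative, not part of the DevLake codebase):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sign reproduces the checksum from the snippet above: the request body,
// the shared secret, and the nouce are concatenated and hashed with SHA-256.
func sign(requestBody, secret, nouce string) string {
	sum := sha256.Sum256([]byte(requestBody + secret + nouce))
	return hex.EncodeToString(sum[:])
}

// verify compares the signature taken from the `sign` query parameter
// against a locally computed one.
func verify(requestBody, secret, nouce, got string) bool {
	return sign(requestBody, secret, nouce) == got
}

func main() {
	body := `{"TaskID":39,"PluginName":"jenkins"}`
	sig := sign(body, "my-secret", "3-FDXxIootApWxEVtz")
	fmt.Println(len(sig), verify(body, "my-secret", "3-FDXxIootApWxEVtz", sig))
}
```

The signature is always 64 hex characters. In production code, prefer a constant-time comparison (e.g. `hmac.Equal` over the decoded bytes) when checking signatures.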
diff --git a/versioned_docs/version-0.11/DeveloperManuals/PluginCreate.md 
b/versioned_docs/version-0.11/DeveloperManuals/PluginCreate.md
new file mode 100644
index 0000000..3f2a4ce
--- /dev/null
+++ b/versioned_docs/version-0.11/DeveloperManuals/PluginCreate.md
@@ -0,0 +1,292 @@
+---
+title: "How to Implement a DevLake plugin?"
+sidebar_position: 1
+description: >
+  How to Implement a DevLake plugin.
+---
+
+## How to Implement a DevLake plugin?
+
+If your favorite DevOps tool is not yet supported by DevLake, don't worry. 
It's not difficult to implement a DevLake plugin. In this post, we'll go 
through the basics of DevLake plugins and build an example plugin from scratch 
together.
+
+## What is a plugin?
+
+A DevLake plugin is a shared library built with Go's `plugin` package that 
hooks up to DevLake core at run-time.
+
+A plugin may extend DevLake's capability in three ways:
+
+1. Integrating with new data sources
+2. Transforming/enriching existing data
+3. Exporting DevLake data to other data systems
+
+
+## How do plugins work?
+
+A plugin mainly consists of a collection of subtasks that can be executed by DevLake core. For data source plugins, a subtask may be collecting a single entity from the data source (e.g., issues from Jira). Besides the subtasks, there are hooks that a plugin can implement to customize its initialization, migration, and more. See below for a list of the most important interfaces:
+
+1. [PluginMeta](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_meta.go) contains the minimal interface that a plugin should implement, with only two functions
+   - Description() returns the description of a plugin
+   - RootPkgPath() returns the root package path of a plugin
+2. [PluginInit](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_init.go) allows a plugin to customize its initialization
+3. [PluginTask](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_task.go) enables a plugin to prepare data prior to subtask execution
+4. [PluginApi](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_api.go) lets a plugin expose some self-defined APIs
+5. [Migratable](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_db_migration.go) is where a plugin manages its database migrations
+
+The diagram below shows the control flow of executing a plugin:
+
+```mermaid
+flowchart TD;
+    subgraph S4[Step4 sub-task extractor running process];
+    direction LR;
+    D4[DevLake];
+    D4 -- Step4.1 create a new\n ApiExtractor\n and execute it --> 
E["ExtractXXXMeta.\nEntryPoint"];
+    E <-- Step4.2 read from\n raw table --> RawDataSubTaskArgs.\nTable;
+    E -- "Step4.3 call with RawData" --> ApiExtractor.Extract;
+    ApiExtractor.Extract -- "decode and return gorm models" --> E
+    end
+    subgraph S3[Step3 sub-task collector running process]
+    direction LR
+    D3[DevLake]
+    D3 -- Step3.1 create a new\n ApiCollector\n and execute it --> 
C["CollectXXXMeta.\nEntryPoint"];
+    C <-- Step3.2 create\n raw table --> RawDataSubTaskArgs.\nRAW_BBB_TABLE;
+    C <-- Step3.3 build query\n before sending requests --> 
ApiCollectorArgs.\nQuery/UrlTemplate;
+    C <-. Step3.4 send requests by ApiClient \n and return HTTP response.-> 
A1["HTTP APIs"];
+    C <-- "Step3.5 call and \nreturn decoded data \nfrom HTTP response" --> 
ResponseParser;
+    end
+    subgraph S2[Step2 DevLake register custom plugin]
+    direction LR
+    D2[DevLake]
+    D2 <-- "Step2.1 function `Init` \nneed to do init jobs" --> plugin.Init;
+    D2 <-- "Step2.2 (Optional) call \nand return migration scripts" --> 
plugin.MigrationScripts;
+    D2 <-- "Step2.3 (Optional) call \nand return taskCtx" --> 
plugin.PrepareTaskData;
+    D2 <-- "Step2.4 call and \nreturn subTasks for execting" --> 
plugin.SubTaskContext;
+    end
+    subgraph S1[Step1 Run DevLake]
+    direction LR
+    main -- Transfer of control \nby `runner.DirectRun` --> D1[DevLake];
+    end
+    S1-->S2-->S3-->S4
+```
+There's a lot of information in the diagram, but we don't expect you to digest it right away; simply use it as a reference when you go through the example below.
+
+## A step-by-step guide towards your first plugin
+
+In this guide, we'll walk through how to create a data source plugin from 
scratch. 
+
+The example in this tutorial comes from DevLake's own needs of managing 
[CLAs](https://en.wikipedia.org/wiki/Contributor_License_Agreement). Whenever 
DevLake receives a new PR on GitHub, we need to check if the author has signed 
a CLA by referencing `https://people.apache.org/public/icla-info.json`. This 
guide will demonstrate how to collect the ICLA info from Apache API, cache the 
raw response, and extract the raw data into a relational table ready to be 
queried.
+
+### Step 1: Bootstrap the new plugin
+
+**Note:** Please make sure you have DevLake up and running before proceeding.
+
+> More info about plugins:
+> Generally, we need these folders in a plugin folder: `api`, `models` and `tasks`
+> `api` interacts with `config-ui` to test/get/save the connection of a data source
+>       - connection [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/api/connection.go)
+>       - connection model [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/models/connection.go)
+> `models` stores all `data entities` and `data migration scripts`.
+>       - entity 
+>       - data migrations [template](https://github.com/apache/incubator-devlake/tree/main/generator/template/migrationscripts)
+> `tasks` contains all of a plugin's `sub tasks`
+>       - task data [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data.go-template)
+>       - api client [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data_with_api_client.go-template)
+
+Don't worry if you cannot figure out what these concepts mean immediately. 
We'll explain them one by one later. 
+
+DevLake provides a generator to create a plugin conveniently. Let's scaffold 
our new plugin by running `go run generator/main.go create-plugin icla`, which 
would ask for `with_api_client` and `Endpoint`.
+
+* `with_api_client` specifies whether we need to request HTTP APIs through an API client.
+* `Endpoint` is the base URL of the site we will request; in our case, it should be `https://people.apache.org/`.
+
+![create plugin](https://i.imgur.com/itzlFg7.png)
+
+Now we have three files in our plugin. `api_client.go` and `task_data.go` are 
in subfolder `tasks/`.
+![plugin files](https://i.imgur.com/zon5waf.png)
+
+Try running this plugin via the function `main` in `plugin_main.go`. You should see a result like this:
+```
+$go run plugins/icla/plugin_main.go
+[2022-06-02 18:07:30]  INFO failed to create dir logs: mkdir logs: file exists
+press `c` to send cancel signal
+[2022-06-02 18:07:30]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-02 18:07:30]  INFO  [icla] scheduler for api 
https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-02 18:07:30]  INFO  [icla] total step: 0
+```
+How exciting. It works! The plugin defined and initialized in `plugin_main.go` uses some options from `task_data.go`. Together they make up the most straightforward plugin in Apache DevLake, and `api_client.go` will be used in the next step to request HTTP APIs.
+
+### Step 2: Create a sub-task for data collection
+Before we start, it is helpful to know how a collection task is executed:
+1. First, Apache DevLake calls `plugin_main.PrepareTaskData()` to prepare the data needed before any sub-tasks run. We need to create an API client here.
+2. Then Apache DevLake calls the sub-tasks returned by `plugin_main.SubTaskMetas()`. A sub-task is an independent unit of work, such as requesting an API or processing data.
+
+> Each sub-task must be defined as a SubTaskMeta, and implement 
SubTaskEntryPoint of SubTaskMeta. SubTaskEntryPoint is defined as 
+> ```go
+> type SubTaskEntryPoint func(c SubTaskContext) error
+> ```
+> More info at: https://devlake.apache.org/blog/how-apache-devlake-runs/
+
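To make the interfaces above concrete, here is a self-contained sketch of how sub-tasks are shaped and driven. It is simplified: the real `SubTaskContext` and `SubTaskMeta` in `plugins/core` carry many more fields, and the reduced versions here mirror them for illustration only.

```go
package main

import "fmt"

// SubTaskContext is reduced to a logger here; the real interface also
// carries configuration, DB access, progress reporting, and more.
type SubTaskContext interface {
	Log(msg string)
}

// SubTaskEntryPoint matches the shape shown in the blockquote above.
type SubTaskEntryPoint func(c SubTaskContext) error

// SubTaskMeta pairs an entry point with its metadata, mirroring how a
// plugin registers sub-tasks via SubTaskMetas().
type SubTaskMeta struct {
	Name       string
	EntryPoint SubTaskEntryPoint
}

type consoleCtx struct{}

func (consoleCtx) Log(msg string) { fmt.Println(msg) }

// runSubTasks executes each registered sub-task in order, stopping at the
// first error, which is roughly how the framework drives a plugin.
func runSubTasks(metas []SubTaskMeta, c SubTaskContext) error {
	for i, m := range metas {
		c.Log(fmt.Sprintf("executing subtask %s (%d/%d)", m.Name, i+1, len(metas)))
		if err := m.EntryPoint(c); err != nil {
			return fmt.Errorf("subtask %s failed: %w", m.Name, err)
		}
	}
	return nil
}

func main() {
	metas := []SubTaskMeta{
		{Name: "CollectCommitter", EntryPoint: func(c SubTaskContext) error { c.Log("collecting"); return nil }},
		{Name: "ExtractCommitter", EntryPoint: func(c SubTaskContext) error { c.Log("extracting"); return nil }},
	}
	if err := runSubTasks(metas, consoleCtx{}); err != nil {
		fmt.Println(err)
	}
}
```

Registration order matters: collectors are listed before the extractors that consume their raw tables, just as the generator wires them up in `SubTaskMetas`.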
+#### Step 2.1 Create a sub-task(Collector) for data collection
+
+Let's run `go run generator/main.go create-collector icla committer` and confirm the prompts. This sub-task is activated automatically by being registered in `plugin_main.go/SubTaskMetas`.
+
+![](https://i.imgur.com/tkDuofi.png)
+
+> - Collector will collect data from HTTP or other data sources, and save the 
data into the raw layer. 
+> - Inside the func `SubTaskEntryPoint` of `Collector`, we use 
`helper.NewApiCollector` to create an object of 
[ApiCollector](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/api_collector.go-template),
 then call `execute()` to do the job. 
+
+Now you can see that `data.ApiClient` is initialized in `plugin_main.go/PrepareTaskData.ApiClient`. `PrepareTaskData` creates a new `ApiClient`, which is the tool Apache DevLake recommends for requesting data from HTTP APIs. This tool supports some valuable features for HTTP APIs, such as rate limiting, proxying and retries. Of course, you may use the standard `http` library instead if you like, but it will be more tedious.
+
+Let's move forward to use it.
+
+1. To collect data from `https://people.apache.org/public/icla-info.json`,
+we have filled `https://people.apache.org/` into 
`tasks/api_client.go/ENDPOINT` in Step 1.
+
+![](https://i.imgur.com/q8Zltnl.png)
+
+2. Then fill `public/icla-info.json` into `UrlTemplate`, delete the unnecessary iterator, and add `println("receive data:", res)` in `ResponseParser` to see if the collection was successful.
+
+![](https://i.imgur.com/ToLMclH.png)
+
+Ok, now the collector sub-task has been added to the plugin, and we can kick 
it off by running `main` again. If everything goes smoothly, the output should 
look like this:
+```bash
+[2022-06-06 12:24:52]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 12:24:52]  INFO  [icla] scheduler for api 
https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 12:24:52]  INFO  [icla] total step: 1
+[2022-06-06 12:24:52]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 12:24:52]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 0x140005763f0
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 12:24:55]  INFO  [icla] finished step: 1 / 1
+```
+
+Great! Now we can see data pulled from the server without any problem. The 
last step is to decode the response body in `ResponseParser` and return it to 
the framework, so it can be stored in the database.
+```go
+ResponseParser: func(res *http.Response) ([]json.RawMessage, error) {
+    body := &struct {
+        LastUpdated string          `json:"last_updated"`
+        Committers  json.RawMessage `json:"committers"`
+    }{}
+    err := helper.UnmarshalResponse(res, body)
+    if err != nil {
+        return nil, err
+    }
+    println("receive data:", len(body.Committers))
+    return []json.RawMessage{body.Committers}, nil
+},
+
+```
+Ok, run the function `main` once again. The output should look like this, and we should be able to see some records show up in the table `_raw_icla_committer`.
+```bash
+……
+receive data: 272956 /* <- the number means 272956 models received */
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 13:46:57]  INFO  [icla] finished step: 1 / 1
+```
+
+![](https://i.imgur.com/aVYNMRr.png)
+
+#### Step 2.2 Create a sub-task(Extractor) to extract data from the raw layer
+
+> - Extractor will extract data from the raw layer and save it into the tool db table.
+> - Except for some pre-processing, the main flow is similar to the collector.
+
+We have already collected data from the HTTP API and saved it into the DB table `_raw_XXXX`. In this step, we will extract the names of committers from the raw data. As you may infer from the name, raw tables are temporary and not easy to use directly.
+
+Apache DevLake recommends saving data via [gorm](https://gorm.io/docs/index.html), so we will create a gorm model and add it into `plugin_main.go/AutoSchemas.Up()`.
+
+plugins/icla/models/committer.go
+```go
+package models
+
+import (
+       "github.com/apache/incubator-devlake/models/common"
+)
+
+type IclaCommitter struct {
+       UserName     string `gorm:"primaryKey;type:varchar(255)"`
+       Name         string `gorm:"primaryKey;type:varchar(255)"`
+       common.NoPKModel
+}
+
+func (IclaCommitter) TableName() string {
+       return "_tool_icla_committer"
+}
+```
+
+plugins/icla/plugin_main.go
+![](https://i.imgur.com/4f0zJty.png)
+
+
+Ok, run the plugin, and table `_tool_icla_committer` will be created 
automatically just like the snapshot below:
+![](https://i.imgur.com/7Z324IX.png)
+
+Next, let's run `go run generator/main.go create-extractor icla committer` and 
type in what the command prompt asks for.
+
+![](https://i.imgur.com/UyDP9Um.png)
+
+Let's look at the function `extract` in the newly created `committer_extractor.go`; some code needs to be written here. `resData.data` is obviously the raw data, so we can decode it with JSON and create new `IclaCommitter` records to save it.
+```go
+Extract: func(resData *helper.RawData) ([]interface{}, error) {
+    names := &map[string]string{}
+    err := json.Unmarshal(resData.Data, names)
+    if err != nil {
+        return nil, err
+    }
+    extractedModels := make([]interface{}, 0)
+    for userName, name := range *names {
+        extractedModels = append(extractedModels, &models.IclaCommitter{
+            UserName: userName,
+            Name:     name,
+        })
+    }
+    return extractedModels, nil
+},
+```
+
+Ok, run it then we get:
+```
+[2022-06-06 15:39:40]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 15:39:40]  INFO  [icla] scheduler for api 
https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 15:39:40]  INFO  [icla] total step: 2
+[2022-06-06 15:39:40]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 15:39:40]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 272956
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 15:39:44]  INFO  [icla] finished step: 1 / 2
+[2022-06-06 15:39:44]  INFO  [icla] executing subtask ExtractCommitter
+[2022-06-06 15:39:46]  INFO  [icla] [ExtractCommitter] finished records: 1
+[2022-06-06 15:39:46]  INFO  [icla] finished step: 2 / 2
+```
+Now the committer data has been saved in `_tool_icla_committer`.
+![](https://i.imgur.com/6svX0N2.png)
+
+#### Step 2.3 Convertor
+
+Note: this step is optional (you may open-source the plugin or keep it for yourself), but we encourage it, because convertors and the domain layer will significantly help with building dashboards. More info about the domain layer at: https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/
+
+> - Convertor will convert data from the tool layer and save it into the 
domain layer.
+> - We use `helper.NewDataConverter` to create an object of [DataConvertor], 
then call `execute()`. 
+
+#### Step 2.4 Let's try it
+Sometimes an open API is protected by a token or another auth mechanism, and we need to log in to obtain a token before visiting it. For example, only after logging in as `[email protected]` could we gather the data about contributors signing the ICLA. Here we briefly introduce how to authorize DevLake to collect data.
+
+Let's look at `api_client.go`. `NewIclaApiClient` loads the config `ICLA_TOKEN` from `.env`, so we can add `ICLA_TOKEN=XXXXXX` to `.env` and use it in `apiClient.SetHeaders()` to mock the login status. Code as below:
+![](https://i.imgur.com/dPxooAx.png)
+
+Of course, we could also use a `username/password` login to obtain a token and mock the login that way. Just try it and adjust according to the actual situation.
+
+Look for more related details at https://github.com/apache/incubator-devlake
+
+#### Final step: Submit the code as open source code
+Good ideas are welcome, and we encourage contributions~ Let's learn about migration scripts and domain layers to write normative and platform-neutral code. More info at https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema or contact us for help.
+
+
+## Done!
+
+Congratulations! The first plugin has been created! 🎖 
\ No newline at end of file
diff --git a/versioned_docs/version-0.11/DeveloperManuals/_category_.json 
b/versioned_docs/version-0.11/DeveloperManuals/_category_.json
new file mode 100644
index 0000000..fe67a68
--- /dev/null
+++ b/versioned_docs/version-0.11/DeveloperManuals/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Developer Manuals",
+  "position": 4
+}
diff --git a/versioned_docs/version-0.11/EngineeringMetrics.md 
b/versioned_docs/version-0.11/EngineeringMetrics.md
new file mode 100644
index 0000000..2d9a42a
--- /dev/null
+++ b/versioned_docs/version-0.11/EngineeringMetrics.md
@@ -0,0 +1,195 @@
+---
+sidebar_position: 06
+title: "Engineering Metrics"
+linkTitle: "Engineering Metrics"
+tags: []
+description: >
+  The definition, values and data required for the 20+ engineering metrics 
supported by DevLake.
+---
+
+<table>
+    <tr>
+        <th><b>Category</b></th>
+        <th><b>Metric Name</b></th>
+        <th><b>Definition</b></th>
+        <th><b>Data Required</b></th>
+        <th style={{width:'70%'}}><b>Use Scenarios and Recommended 
Practices</b></th>
+        
<th><b>Value&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</b></th>
+    </tr>
+    <tr>
+        <td rowspan="10">Delivery Velocity</td>
+        <td>Requirement Count</td>
+        <td>Number of issues in type "Requirement"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="2">
+1. Analyze the number of requirements and delivery rate of different time 
cycles to find the stability and trend of the development process.
+<br/>2. Analyze and compare the number of requirements delivered and delivery 
rate of each project/team, and compare the scale of requirements of different 
projects.
+<br/>3. Based on historical data, establish a baseline of the delivery 
capacity of a single iteration (optimistic, probable and pessimistic values) to 
provide a reference for iteration estimation.
+<br/>4. Drill down to analyze the number and percentage of requirements in 
different phases of SDLC. Analyze rationality and identify the requirements 
stuck in the backlog.</td>
+        <td rowspan="2">1. Based on historical data, establish a baseline of 
the delivery capacity of a single iteration to improve the organization and 
planning of R&D resources.
+<br/>2. Evaluate whether the delivery capacity matches the business phase and 
demand scale. Identify key bottlenecks and reasonably allocate resources.</td>
+    </tr>
+    <tr>
+        <td>Requirement Delivery Rate</td>
+        <td>Ratio of delivered requirements to all requirements</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+    </tr>
+    <tr>
+        <td>Requirement Lead Time</td>
+        <td>Lead time of issues with type "Requirement"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td>
+1. Analyze the trend of requirement lead time to observe if it has improved 
over time.
+<br/>2. Analyze and compare the requirement lead time of each project/team to 
identify key projects with abnormal lead time.
+<br/>3. Drill down to analyze a requirement's staying time in different phases 
of SDLC. Analyze the bottleneck of delivery velocity and improve the 
workflow.</td>
+        <td>1. Analyze key projects and critical points, identify 
good/to-be-improved practices that affect requirement lead time, and reduce the 
risk of delays
+<br/>2. Focus on the end-to-end velocity of value delivery process; coordinate 
different parts of R&D to avoid efficiency shafts; make targeted improvements 
to bottlenecks.</td>
+    </tr>
+    <tr>
+        <td>Requirement Granularity</td>
+        <td>Number of story points associated with an issue</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td>
+1. Analyze the story points/requirement lead time of requirements to evaluate whether the ticket size, i.e. requirement complexity, is optimal.
+<br/>2. Compare the estimated requirement granularity with the actual situation and evaluate whether the difference is reasonable by combining more microscopic workload metrics (e.g. lines of code/code equivalents)</td>
+        <td>1. Promote product teams to split requirements carefully, improve 
requirements quality, help developers understand requirements clearly, deliver 
efficiently and with high quality, and improve the project management 
capability of the team.
+<br/>2. Establish a data-supported workload estimation model to help R&D teams 
calibrate their estimation methods and more accurately assess the granularity 
of requirements, which is useful to achieve better issue planning in project 
management.</td>
+    </tr>
+    <tr>
+        <td>Commit Count</td>
+        <td>Number of Commits</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td>
+1. Identify the main reasons for the unusual number of commits and the 
possible impact on the number of commits through comparison
+<br/>2. Evaluate whether the number of commits is reasonable in conjunction 
with more microscopic workload metrics (e.g. lines of code/code 
equivalents)</td>
+        <td>1. Identify potential bottlenecks that may affect output
+<br/>2. Encourage R&D practices of small step submissions and develop 
excellent coding habits</td>
+    </tr>
+    <tr>
+        <td>Added Lines of Code</td>
+        <td>Accumulated number of added lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td rowspan="2">
+1. From the project/team dimension, observe the accumulated change in Added 
lines to assess the team activity and code growth rate
+<br/>2. From version cycle dimension, observe the active time distribution of 
code changes, and evaluate the effectiveness of project development model.
+<br/>3. From the member dimension, observe the trend and stability of code 
output of each member, and identify the key points that affect code output by 
comparison.</td>
+        <td rowspan="2">1. identify potential bottlenecks that may affect the 
output
+<br/>2. Encourage the team to implement a development model that matches the 
business requirements; develop excellent coding habits</td>
+    </tr>
+    <tr>
+        <td>Deleted Lines of Code</td>
+        <td>Accumulated number of deleted lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Time</td>
+        <td>Time from Pull/Merge created time until merged</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td>
+1. Observe the mean and distribution of code review time from the 
project/team/individual dimension to assess the rationality of the review 
time</td>
+        <td>1. Take inventory of project/team code review resources to avoid 
lack of resources and backlog of review sessions, resulting in long waiting time
+<br/>2. Encourage teams to implement an efficient and responsive code review 
mechanism</td>
+    </tr>
+    <tr>
+        <td>Bug Age</td>
+        <td>Lead time of issues in type "Bug"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="2">
+1. Observe the trend of bug age and locate the key reasons.<br/>
+2. According to the severity level, type (business, functional 
classification), affected module, source of bugs, count and observe the length 
of bug and incident age.</td>
+        <td rowspan="2">1. Help the team to establish an effective 
hierarchical response mechanism for bugs and incidents. Focus on the resolution 
of important problems in the backlog.<br/>
+2. Improve team's and individual's bug/incident fixing efficiency. Identify 
good/to-be-improved practices that affect bug age or incident age</td>
+    </tr>
+    <tr>
+        <td>Incident Age</td>
+        <td>Lead time of issues in type "Incident"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+    </tr>
+    <tr>
+        <td rowspan="8">Delivery Quality</td>
+        <td>Pull Request Count</td>
+        <td>Number of Pull/Merge Requests</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td rowspan="3">
+1. From the developer dimension, we evaluate the code quality of developers by 
combining the task complexity with the metrics related to the number of review 
passes and review rounds.<br/>
+2. From the reviewer dimension, we observe the reviewer's review style by 
taking into account the task complexity, the number of passes and the number of 
review rounds.<br/>
+3. From the project/team dimension, we combine the project phase and team task 
complexity to aggregate the metrics related to the number of review passes and 
review rounds, and identify the modules with abnormal code review process and 
possible quality risks.</td>
+        <td rowspan="3">1. Code review metrics are process indicators to 
provide quick feedback on developers' code quality<br/>
+2. Promote the team to establish a unified coding specification and 
standardize the code review criteria<br/>
+3. Identify modules with low-quality risks in advance, optimize practices, and 
precipitate into reusable knowledge and tools to avoid technical debt 
accumulation</td>
+    </tr>
+    <tr>
+        <td>Pull Request Pass Rate</td>
+        <td>Ratio of merged Pull/Merge Requests to all Pull/Merge Requests</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Rounds</td>
+        <td>Number of cycles of commits followed by comments/final merge</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Count</td>
+        <td>Number of Pull/Merge Reviewers</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td>1. As a secondary indicator, assess the cost of labor invested in 
the code review process</td>
+        <td>1. Take inventory of project/team code review resources to avoid 
long waits for review sessions due to insufficient resource input</td>
+    </tr>
+    <tr>
+        <td>Bug Count</td>
+        <td>Number of bugs found during testing</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="4">
+1. From the project or team dimension, observe the statistics on the total number of defects, the distribution of defects by severity level/type/owner, the cumulative trend of defects, and the change trend of the defect rate per thousand lines of code, etc.<br/>
+2. From the version cycle dimension, observe the cumulative trend of the defect count/defect rate, which can be used to determine whether the growth of defects is slowing down and converging, an important reference for judging the stability of a software version's quality<br/>
+3. From the time dimension, analyze the trends of the test defect count and defect rate to locate key items/key points<br/>
+4. Evaluate whether the software quality and test plan are reasonable by 
referring to CMMI standard values</td>
+        <td rowspan="4">1. Defect drill-down analysis to inform the 
development of design and code review strategies and to improve the internal QA 
process<br/>
+2. Assist teams to locate projects/modules with higher defect severity and 
density, and clean up technical debts<br/>
+3. Analyze critical points, identify good/to-be-improved practices that affect 
defect count or defect rate, to reduce the amount of future defects</td>
+    </tr>
+    <tr>
+        <td>Incident Count</td>
+        <td>Number of Incidents found after shipping</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Bugs Count per 1k Lines of Code</td>
+        <td>Amount of bugs per 1,000 lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Incidents Count per 1k Lines of Code</td>
+        <td>Amount of incidents per 1,000 lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Delivery Cost</td>
+        <td>Commit Author Count</td>
+        <td>Number of Contributors who have committed code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td>1. As a secondary indicator, this helps assess the labor cost of 
participating in coding</td>
+        <td>1. Take inventory of project/team R&D resource inputs, assess 
input-output ratio, and rationalize resource deployment</td>
+    </tr>
+    <tr>
+        <td rowspan="3">Delivery Capability</td>
+        <td>Build Count</td>
+        <td>The number of builds started</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLabCI</a> builds, etc</td>
+        <td rowspan="3">1. From the project dimension, compare the number of 
builds and success rate by combining the project phase and the complexity of 
tasks<br/>
+2. From the time dimension, analyze the trend of the number of builds and 
success rate to see if it has improved over time</td>
+        <td rowspan="3">1. As a process indicator, it reflects the value flow efficiency of upstream development stages<br/>
+2. Identify excellent/to-be-improved practices that impact the build, and drive the team to develop reusable tools and mechanisms as infrastructure for fast and high-frequency delivery</td>
+    </tr>
+    <tr>
+        <td>Build Duration</td>
+        <td>The duration of successful builds</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLabCI</a> builds, etc</td>
+    </tr>
+    <tr>
+        <td>Build Success Rate</td>
+        <td>The percentage of successful builds</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLabCI</a> builds, etc</td>
+    </tr>
+</table>
+<br/><br/><br/>
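As a sketch of how the ratio metrics above are derived from raw counts (the figures below are hypothetical illustrations, not DevLake output):

```python
# Hypothetical illustration of two ratio metrics defined above.

def build_success_rate(successful_builds: int, total_builds: int) -> float:
    """Build Success Rate: the percentage of successful builds."""
    return 100.0 * successful_builds / total_builds if total_builds else 0.0

def bugs_per_1k_loc(bug_count: int, lines_of_code: int) -> float:
    """Bugs Count per 1k Lines of Code."""
    return 1000.0 * bug_count / lines_of_code if lines_of_code else 0.0

print(build_success_rate(45, 50))  # 90.0
print(bugs_per_1k_loc(12, 24000))  # 0.5
```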
diff --git a/docs/Glossary.md b/versioned_docs/version-0.11/Glossary.md
similarity index 98%
copy from docs/Glossary.md
copy to versioned_docs/version-0.11/Glossary.md
index dc348f6..4ca3117 100644
--- a/docs/Glossary.md
+++ b/versioned_docs/version-0.11/Glossary.md
@@ -25,7 +25,7 @@ The following terms are arranged in the order of their 
appearance in the actual
 
 The relationship among Blueprint, Data Connections, Data Scope and 
Transformation Rules is explained as follows:
 
-![Blueprint ERD](../static/img/blueprint-erd.svg)
+![Blueprint ERD](/img/blueprint-erd.svg)
 - Each blueprint can have multiple data connections.
 - Each data connection can have multiple sets of data scope.
 - Each set of data scope only consists of one GitHub/GitLab project or Jira 
board, along with their corresponding data entities.
@@ -88,7 +88,7 @@ For detailed information about the relationship between data 
sources and data pl
 **A pipeline is an orchestration of [tasks](Glossary.md#tasks) of data 
`collection`, `extraction`, `conversion` and `enrichment`, defined in the 
DevLake API.** A pipeline is composed of one or multiple 
[stages](Glossary.md#stages) that are executed in a sequential order. Any error 
occurring during the execution of any stage, task or subtask will cause the 
immediate fail of the pipeline.
 
 The composition of a pipeline is explained as follows:
-![Blueprint ERD](../static/img/pipeline-erd.svg)
+![Blueprint ERD](/img/pipeline-erd.svg)
 Notice: **You can manually orchestrate the pipeline in Configuration UI 
Advanced Mode and the DevLake API; whereas in Configuration UI regular mode, an 
optimized pipeline orchestration will be automatically generated for you.**
 
 
diff --git a/docs/Overview/01-WhatIsDevLake.md 
b/versioned_docs/version-0.11/Overview/01-WhatIsDevLake.md
similarity index 92%
copy from docs/Overview/01-WhatIsDevLake.md
copy to versioned_docs/version-0.11/Overview/01-WhatIsDevLake.md
index 9f998cc..75c64a1 100755
--- a/docs/Overview/01-WhatIsDevLake.md
+++ b/versioned_docs/version-0.11/Overview/01-WhatIsDevLake.md
@@ -21,21 +21,21 @@ You can easily set up Apache DevLake by following our 
step-by step instruction f
 ### 2. Create a Blueprint
 The DevLake Configuration UI will guide you through the process (a Blueprint) 
to define the data connections, data scope, transformation and sync frequency 
of the data you wish to collect.
 
-![img](../../static/img/userflow1.svg)
+![img](/img/userflow1.svg)
 
 ### 3. Track the Blueprint's progress
 You can track the progress of the Blueprint you have just set up.
 
-![img](../../static/img/userflow2.svg)
+![img](/img/userflow2.svg)
 
 ### 4. View the pre-built dashboards
 Once the first run of the Blueprint is completed, you can view the 
corresponding dashboards.
 
-![img](../../static/img/userflow3.png)
+![img](/img/userflow3.png)
 
 ### 5. Customize the dashboards with SQL
 If the pre-built dashboards are limited for your use cases, you can always 
customize or create your own metrics or dashboards with SQL.
 
-![img](../../static/img/userflow4.png)
+![img](/img/userflow4.png)
 
 
diff --git a/versioned_docs/version-0.11/Overview/02-Architecture.md 
b/versioned_docs/version-0.11/Overview/02-Architecture.md
new file mode 100755
index 0000000..8daa859
--- /dev/null
+++ b/versioned_docs/version-0.11/Overview/02-Architecture.md
@@ -0,0 +1,39 @@
+---
+title: "Architecture"
+linkTitle: "Architecture"
+description: >
+  Understand the architecture of Apache DevLake.
+---
+
+## Architecture Overview
+
+<p align="center"><img src="/img/arch-component.svg" /></p>
+<p align="center">DevLake Components</p>
+
+A DevLake installation typically consists of the following components:
+
+- Config UI: A handy user interface to create, trigger, and debug Blueprints. 
A Blueprint specifies the where (data connection), what (data scope), how 
(transformation rule), and when (sync frequency) of a data pipeline.
+- API Server: The main programmatic interface of DevLake.
+- Runner: The runner does all the heavy-lifting for executing tasks. In the 
default DevLake installation, it runs within the API Server, but DevLake 
provides a temporal-based runner (beta) for production environments.
+- Database: The database stores both DevLake's metadata and user data 
collected by data pipelines. DevLake supports MySQL and PostgreSQL as of v0.11.
+- Plugins: Plugins enable DevLake to collect and analyze dev data from any 
DevOps tools with an accessible API. DevLake community is actively adding 
plugins for popular DevOps tools, but if your preferred tool is not covered 
yet, feel free to open a GitHub issue to let us know or check out our doc on 
how to build a new plugin by yourself.
+- Dashboards: Dashboards deliver data and insights to DevLake users. A 
dashboard is simply a collection of SQL queries along with corresponding 
visualization configurations. DevLake's official dashboard tool is Grafana and 
pre-built dashboards are shipped in Grafana's JSON format. Users are welcome to 
swap for their own choice of dashboard/BI tool if desired.
+
+## Dataflow
+
+<p align="center"><img src="/img/arch-dataflow.svg" /></p>
+<p align="center">DevLake Dataflow</p>
+
+A typical plugin's dataflow is illustrated below:
+
+1. The Raw layer stores the API responses from data sources (DevOps tools) in 
JSON. This saves developers' time if the raw data is to be transformed 
differently later on. Please note that communicating with data sources' APIs is 
usually the most time-consuming step.
+2. The Tool layer extracts raw data from JSONs into a relational schema that's 
easier to consume by analytical tasks. Each DevOps tool would have a schema 
that's tailored to their data structure, hence the name, the Tool layer.
+3. The Domain layer attempts to build a layer of abstraction on top of the 
Tool layer so that analytics logics can be re-used across different tools. For 
example, GitHub's Pull Request (PR) and GitLab's Merge Request (MR) are similar 
entities. They each have their own table name and schema in the Tool layer, but 
they're consolidated into a single entity in the Domain layer, so that 
developers only need to implement metrics like Cycle Time and Code Review 
Rounds once against the domain la [...]
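The three layers can be illustrated with a toy transformation; the field names below are hypothetical and simplified, not DevLake's actual schema:

```python
import json

# Raw layer: the API response is stored verbatim as JSON (hypothetical PR).
raw = '{"number": 42, "title": "Fix parser", "merged_at": "2022-07-01T00:00:00Z"}'

# Tool layer: extract the JSON into a tool-specific relational shape,
# e.g. a row in a github_pull_requests table.
tool_pr = json.loads(raw)

# Domain layer: GitHub PRs and GitLab MRs are consolidated into one
# cross-tool "pull request" entity that metrics are written against once.
domain_pr = {
    "id": f"github:{tool_pr['number']}",
    "title": tool_pr["title"],
    "merged_date": tool_pr["merged_at"],
}
print(domain_pr["id"])  # github:42
```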
+
+## Principles
+
+1. Extensible: DevLake's plugin system allows users to integrate with any 
DevOps tool. DevLake also provides a dbt plugin that enables users to define 
their own data transformation and analysis workflows.
+2. Portable: DevLake has a modular design and provides multiple options for 
each module. Users of different setups can freely choose the right 
configuration for themselves.
+3. Robust: DevLake provides an SDK to help plugins efficiently and reliably 
collect data from data sources while respecting their API rate limits and 
constraints.
+
+<br/>
diff --git a/versioned_docs/version-0.11/Overview/03-Roadmap.md 
b/versioned_docs/version-0.11/Overview/03-Roadmap.md
new file mode 100644
index 0000000..f10b62e
--- /dev/null
+++ b/versioned_docs/version-0.11/Overview/03-Roadmap.md
@@ -0,0 +1,36 @@
+---
+title: "Roadmap"
+linkTitle: "Roadmap"
+tags: []
+categories: []
+weight: 3
+description: >
+  The goals and roadmap for DevLake in 2022.
+---
+
+
+## Goals
+DevLake has joined the Apache Incubator and is aiming to become a top-level 
project. To achieve this goal, the Apache DevLake (Incubating) community will 
continue to make efforts in helping development teams to analyze and improve 
their engineering productivity. In the 2022 Roadmap, we have summarized three 
major goals followed by the feature breakdown to invite the broader community 
to join us and grow together.
+
+1. As a dev data analysis application, discover and implement 3 (or even 
more!) usage scenarios:
+   - A collection of metrics to track the contribution, quality and growth of 
open-source projects
+   - DORA metrics for DevOps engineers
+   - To be decided ([let us 
know](https://join.slack.com/t/devlake-io/shared_invite/zt-17b6vuvps-x98pqseoUagM7EAmKC82xQ)
 if you have any suggestions!)
+2. As dev data infrastructure, provide robust data collection modules, 
customizable data models, and data extensibility.
+3. Design better user experience for end-users and contributors.
+
+## Feature Breakdown
+Apache DevLake is currently under rapid development. You are more than welcome to use the following table to explore the features you are interested in and make contributions. We deeply appreciate the collective effort of our community to make this project possible!
+
+| Category | Features|
+| --- | --- |
+| More data sources across different [DevOps 
domains](../DataModels/01-DevLakeDomainLayerSchema.md) (Goal No.1 & 2)| 
Features in **bold** are of higher priority <br/><br/> Issue/Task Management: 
<ul><li>**Jira server** [#886 
(closed)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira
 data center** [#1687 
(closed)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>GitLab
 Issues [#715 
(closed)](https://github.com/apache/incubator-devlake/issues/715)</li> [...]
+| Improved data collection, [data 
models](../DataModels/01-DevLakeDomainLayerSchema.md) and data extensibility 
(Goal No.2)| Data Collection: <br/> <ul><li>Complete the logging 
system</li><li>Implement a good error handling mechanism during data 
collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create 
and modify the domain layer schema. [#1479 
(closed)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design
 the data models for 5 new domains, please  [...]
+| Better user experience (Goal No.3) | For new users: <ul><li> Iterate on a 
clearer step-by-step guide to improve the pre-configuration 
experience.</li><li>Provide a new Config UI to reduce frictions for data 
configuration [#1700 
(in-progress)](https://github.com/apache/incubator-devlake/issues/1700)</li><li>
 Showcase dashboard live demos to let users explore and learn about the 
dashboards. [#1784 
(open)](https://github.com/apache/incubator-devlake/issues/1784)</li></ul>For 
returning use [...]
+
+
+## How to Influence the Roadmap
+A roadmap is only useful when it captures real user needs. We are glad to hear 
from you if you have specific use cases, feedback, or ideas. You can submit an 
issue to let us know!
+Also, if you plan to work (or are already working) on a new or existing 
feature, tell us, so that we can update the roadmap accordingly. We are happy 
to share knowledge and context to help your feature land successfully.
+<br/><br/><br/>
+
diff --git a/versioned_docs/version-0.11/Overview/_category_.json 
b/versioned_docs/version-0.11/Overview/_category_.json
new file mode 100644
index 0000000..e224ed8
--- /dev/null
+++ b/versioned_docs/version-0.11/Overview/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Overview",
+  "position": 1
+}
diff --git a/versioned_docs/version-0.11/Plugins/_category_.json 
b/versioned_docs/version-0.11/Plugins/_category_.json
new file mode 100644
index 0000000..534bad8
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Plugins",
+  "position": 7
+}
diff --git a/versioned_docs/version-0.11/Plugins/dbt.md 
b/versioned_docs/version-0.11/Plugins/dbt.md
new file mode 100644
index 0000000..059bf12
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/dbt.md
@@ -0,0 +1,67 @@
+---
+title: "DBT"
+description: >
+  DBT Plugin
+---
+
+
+## Summary
+
+dbt (data build tool) enables analytics engineers to transform data in their 
warehouses by simply writing select statements. dbt handles turning these 
select statements into tables and views.
+dbt does the T in ELT (Extract, Load, Transform) processes – it doesn’t 
extract or load data, but it’s extremely good at transforming data that’s 
already loaded into your warehouse.
+
+## User setup<a id="user-setup"></a>
+- If you plan to use this plugin, you need to install some dependencies first.
+
+#### Required Packages to Install<a id="user-setup-requirements"></a>
+- [python3.7+](https://www.python.org/downloads/)
+- [dbt-mysql](https://pypi.org/project/dbt-mysql/#configuring-your-profile)
+
+#### Commands to run in your terminal and in the dbt project<a id="user-setup-commands"></a>
+1. `pip install dbt-mysql`
+2. `dbt init demoapp` (`demoapp` is the project name)
+3. Create your SQL transformations and data models
+
+## Convert Data By DBT
+
+Use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**: `POST` the following request to the DevLake API Endpoint.
+
+```json
+[
+  [
+    {
+      "plugin": "dbt",
+      "options": {
+          "projectPath": "/Users/abeizn/demoapp",
+          "projectName": "demoapp",
+          "projectTarget": "dev",
+          "selectedModels": ["my_first_dbt_model","my_second_dbt_model"],
+          "projectVars": {
+            "demokey1": "demovalue1",
+            "demokey2": "demovalue2"
+          }
+      }
+    }
+  ]
+]
+```
+
+- `projectPath`: the absolute path of the dbt project. (required)
+- `projectName`: the name of the dbt project. (required)
+- `projectTarget`: the default target your dbt project will use. (optional)
+- `selectedModels`: a model is a select statement. Models are defined in .sql files, typically in your models directory. (required)
+  `selectedModels` accepts one or more arguments. Each argument can be one of:
+  1. a package name: runs all models in the package, e.g. `example`
+  2. a model name: runs a specific model, e.g. `my_first_dbt_model`
+  3. a fully-qualified path to a directory of models
+- `projectVars`: variables used to parametrize dbt models. (optional)
+  For example, given the model `select * from events where event_type = '{{ var("event_type") }}'`, you need to set a value for `event_type` before executing it.
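The options above can also be assembled programmatically before POSTing; a sketch with placeholder values (the paths and model names are illustrative, not required names):

```python
import json

# Build the Advanced Mode payload for the dbt plugin (placeholder values).
payload = [[{
    "plugin": "dbt",
    "options": {
        "projectPath": "/Users/abeizn/demoapp",
        "projectName": "demoapp",
        "projectTarget": "dev",
        "selectedModels": ["my_first_dbt_model", "my_second_dbt_model"],
        "projectVars": {"event_type": "page_view"},
    },
}]]

body = json.dumps(payload)  # send this string as the request body
```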
+
+### Resources:
+- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
+- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked 
questions and answers
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-0.11/Plugins/feishu.md 
b/versioned_docs/version-0.11/Plugins/feishu.md
new file mode 100644
index 0000000..f19e4b0
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/feishu.md
@@ -0,0 +1,66 @@
+---
+title: "Feishu"
+description: >
+  Feishu Plugin
+---
+
+# Feishu
+
+## Summary
+
+This plugin collects Feishu meeting data through [Feishu 
Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction).
+
+## Configuration
+
+In order to fully use this plugin, you will need to get `app_id` and `app_secret` from a Feishu administrator (for help on App info, please see the official [Feishu Docs](https://open.feishu.cn/document/ukTMukTMukTM/ukDNz4SO0MjL5QzM/auth-v3/auth/tenant_access_token_internal)), then set these two parameters via DevLake's `.env`.
+
+### By `.env`
+
+The connection aspect of the configuration screen requires the following key 
fields to connect to the Feishu API. As Feishu is a single-source data provider 
at the moment, the connection name is read-only as there is only one instance 
to manage. As we continue our development roadmap we may enable multi-source 
connections for Feishu in the future.
+
+```
+FEISHU_APPID=app_id
+FEISHU_APPSCRECT=app_secret
+```
+
+## Collect data from Feishu
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and 
paste a JSON config like the following:
+
+
+```json
+[
+  [
+    {
+      "plugin": "feishu",
+      "options": {
+        "numOfDaysToCollect" : 80,
+        "rateLimitPerSecond" : 5
+      }
+    }
+  ]
+]
+```
+
+> `numOfDaysToCollect`: The number of days of data you want to collect
+
+> `rateLimitPerSecond`: The number of requests to send per second (maximum is 8)
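`rateLimitPerSecond` caps the request rate on the client side; the idea can be sketched as a minimal limiter (illustrative only, not DevLake's actual implementation):

```python
import time

class RateLimiter:
    """Space calls at least 1/per_second seconds apart (illustrative)."""

    def __init__(self, per_second: int):
        self.interval = 1.0 / per_second
        self.last = 0.0

    def wait(self) -> None:
        # Sleep just long enough to honor the configured rate.
        delay = self.last + self.interval - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()

limiter = RateLimiter(per_second=5)  # matches "rateLimitPerSecond": 5
for _ in range(3):
    limiter.wait()  # after the first call, each call waits ~0.2s
```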
+
+You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "feishu 20211126",
+    "tasks": [[{
+      "plugin": "feishu",
+      "options": {
+        "numOfDaysToCollect" : 80,
+        "rateLimitPerSecond" : 5
+      }
+    }]]
+}
+'
+```
\ No newline at end of file
diff --git a/versioned_docs/version-0.11/Plugins/gitee.md 
b/versioned_docs/version-0.11/Plugins/gitee.md
new file mode 100644
index 0000000..0c4307a
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/gitee.md
@@ -0,0 +1,114 @@
+---
+title: "Gitee(WIP)"
+description: >
+  Gitee Plugin
+---
+
+# Gitee
+
+## Summary
+
+## Configuration
+
+### Provider (Datasource) Connection
+The connection aspect of the configuration screen requires the following key 
fields to connect to the **Gitee API**. As gitee is a _single-source data 
provider_ at the moment, the connection name is read-only as there is only one 
instance to manage. As we continue our development roadmap we may enable 
_multi-source_ connections for gitee in the future.
+
+- **Connection Name** [`READONLY`]
+    - ⚠️ Defaults to "**Gitee**" and may not be changed.
+- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
+    - This should be a valid REST API Endpoint, e.g. `https://gitee.com/api/v5/`
+    - ⚠️ URL should end with `/`
+- **Auth Token(s)** (Personal Access Token)
+    - For help on **Creating a personal access token**
+    - Provide at least one token for authentication with the API. This field accepts a comma-separated list of values for multiple tokens. The data collection will take longer for Gitee since they have a **rate limit of 2k requests per hour**. You can accelerate the process by configuring _multiple_ personal access tokens.
+
+For API requests using `Basic Authentication` or `OAuth`, if you need a higher API rate limit, you can set multiple tokens in the config file and all of them will be used.
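Token rotation can be sketched as a simple round-robin over the comma-separated list; this is illustrative, not the plugin's actual scheduling logic:

```python
from itertools import cycle

# Rotate through a comma-separated token list (placeholder tokens).
tokens = cycle("token_a,token_b,token_c".split(","))

def auth_header() -> dict:
    """Return an Authorization header using the next token in the rotation."""
    return {"Authorization": f"token {next(tokens)}"}

print(auth_header()["Authorization"])  # token token_a
print(auth_header()["Authorization"])  # token token_b
```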
+
+For an overview of the **gitee REST API**, please see official [gitee Docs on 
REST](https://gitee.com/api/v5/swagger)
+
+Click **Save Connection** to update connection settings.
+
+
+### Provider (Datasource) Settings
+Manage additional settings and options for the gitee Datasource Provider. 
Currently there is only one **optional** setting, *Proxy URL*. If you are 
behind a corporate firewall or VPN you may need to utilize a proxy server.
+
+**gitee Proxy URL [ `Optional`]**
+Enter a valid proxy server address on your Network, e.g. 
`http://your-proxy-server.com:1080`
+
+Click **Save Settings** to update additional settings.
+
+### Regular Expression Configuration
+Define the regex pattern in `.env`:
+- `GITEE_PR_BODY_CLOSE_PATTERN`: defines the keyword that associates an issue in the PR body; please check the example in `.env.example`
+
+## Sample Request
+In order to collect data, you have to compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
+1. Configure-UI Mode
+```json
+[
+  [
+    {
+      "plugin": "gitee",
+      "options": {
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+Or, if you only want to perform certain subtasks:
+```json
+[
+  [
+    {
+      "plugin": "gitee",
+      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
+      "options": {
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+
+2. Curl Mode:
+   You can also trigger data collection by making a POST request to 
`/pipelines`.
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee 20211126",
+    "tasks": [[{
+        "plugin": "gitee",
+        "options": {
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
+Or, if you only want to perform certain subtasks:
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee 20211126",
+    "tasks": [[{
+        "plugin": "gitee",
+        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
+        "options": {
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
diff --git a/versioned_docs/version-0.11/Plugins/gitextractor.md 
b/versioned_docs/version-0.11/Plugins/gitextractor.md
new file mode 100644
index 0000000..ac97fa3
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/gitextractor.md
@@ -0,0 +1,65 @@
+---
+title: "GitExtractor"
+description: >
+  GitExtractor Plugin
+---
+
+# Git Repo Extractor
+
+## Summary
+This plugin extracts commits and references from a remote or local git 
repository. It then saves the data into the database or csv files.
+
+## Steps to make this plugin work
+
+1. Use the Git repo extractor to retrieve data about commits and branches from 
your repository.
+2. Use the GitHub plugin to retrieve data about GitHub issues and PRs from your repository.
+NOTE: you can run only one issue collection stage, as described in the GitHub Plugin README.
+3. Use the [RefDiff](./refdiff.md#development) plugin to calculate version 
diff, which will be stored in `refs_commits_diffs` table.
+
+## Sample Request
+
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "git repo extractor",
+    "tasks": [
+        [
+            {
+                "Plugin": "gitextractor",
+                "Options": {
+                    "url": "https://github.com/merico-dev/lake.git",
+                    "repoId": "github:GithubRepo:384111310"
+                }
+            }
+        ]
+    ]
+}
+'
+```
+- `url`: the location of the git repository. It should start with 
`http`/`https` for a remote git repository and with `/` for a local one.
+- `repoId`: column `id` of the `repos` table.
+- `proxy`: optional, http proxy, e.g. `http://your-proxy-server.com:1080`.
+- `user`: optional, for cloning private repository using HTTP/HTTPS
+- `password`: optional, for cloning private repository using HTTP/HTTPS
+- `privateKey`: optional, for SSH cloning, base64 encoded `PEM` file
+- `passphrase`: optional, passphrase for the private key
+
+
+## Standalone Mode
+
+You can also run this plugin in standalone mode, without any DevLake service running, using the following command:
+
+```
+go run plugins/gitextractor/main.go -url 
https://github.com/merico-dev/lake.git -id github:GithubRepo:384111310 -db 
"merico:merico@tcp(127.0.0.1:3306)/lake?charset=utf8mb4&parseTime=True"
+```
+
+For more options (e.g., saving to a csv file instead of a db), please read 
`plugins/gitextractor/main.go`.
+
+## Development
+
+This plugin depends on `libgit2`; you need to install version 1.3.0 in order to run and debug this plugin on your local machine. [Click here](./refdiff.md#development) for a brief guide.
+
+<br/><br/><br/>
diff --git 
a/versioned_docs/version-0.11/Plugins/github-connection-in-config-ui.png 
b/versioned_docs/version-0.11/Plugins/github-connection-in-config-ui.png
new file mode 100644
index 0000000..5359fb1
Binary files /dev/null and 
b/versioned_docs/version-0.11/Plugins/github-connection-in-config-ui.png differ
diff --git a/docs/Plugins/github.md 
b/versioned_docs/version-0.11/Plugins/github.md
similarity index 98%
copy from docs/Plugins/github.md
copy to versioned_docs/version-0.11/Plugins/github.md
index 8dac21b..463f9de 100644
--- a/docs/Plugins/github.md
+++ b/versioned_docs/version-0.11/Plugins/github.md
@@ -24,7 +24,7 @@ Here are some examples metrics using `GitHub` data:
 
 ## Screenshot
 
-![image](../../static/img/github-demo.png)
+![image](/img/github-demo.png)
 
 
 ## Configuration
diff --git 
a/versioned_docs/version-0.11/Plugins/gitlab-connection-in-config-ui.png 
b/versioned_docs/version-0.11/Plugins/gitlab-connection-in-config-ui.png
new file mode 100644
index 0000000..7aacee8
Binary files /dev/null and 
b/versioned_docs/version-0.11/Plugins/gitlab-connection-in-config-ui.png differ
diff --git a/versioned_docs/version-0.11/Plugins/gitlab.md 
b/versioned_docs/version-0.11/Plugins/gitlab.md
new file mode 100644
index 0000000..21a86d7
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/gitlab.md
@@ -0,0 +1,94 @@
+---
+title: "GitLab"
+description: >
+  GitLab Plugin
+---
+
+
+## Metrics
+
+| Metric Name                 | Description                                                   |
+|:----------------------------|:--------------------------------------------------------------|
+| Pull Request Count          | Number of Pull/Merge Requests                                 |
+| Pull Request Pass Rate      | Ratio of Pull/Merge Review requests to merged                 |
+| Pull Request Reviewer Count | Number of Pull/Merge Reviewers                                |
+| Pull Request Review Time    | Time from Pull/Merge created time until merged                |
+| Commit Author Count         | Number of Contributors                                        |
+| Commit Count                | Number of Commits                                             |
+| Added Lines                 | Accumulated Number of New Lines                               |
+| Deleted Lines               | Accumulated Number of Removed Lines                           |
+| Pull Request Review Rounds  | Number of cycles of commits followed by comments/final merge  |
+
+## Configuration
+
+### Provider (Datasource) Connection
+The connection section of the configuration screen requires the following key 
fields to connect to the **GitLab API**.
+
+![connection-in-config-ui](gitlab-connection-in-config-ui.png)
+
+- **Connection Name** [`READONLY`]
+  - ⚠️ Defaults to "**GitLab**" and may not be changed. As GitLab is a 
_single-source data provider_ at the moment, the connection name is read-only 
as there is only one instance to manage. As we advance on our development 
roadmap we may enable _multi-source_ connections for GitLab in the future.
+- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
+  - This should be a valid REST API Endpoint, e.g. `https://gitlab.example.com/api/v4/`
+  - ⚠️ URL should end with `/`
+- **Personal Access Token** (HTTP Basic Auth)
+  - Login to your GitLab Account and create a **Personal Access Token** to 
authenticate with the API using HTTP Basic Authentication. The token must be 20 
characters long. Save the personal access token somewhere safe. After you leave 
the page, you no longer have access to the token.
+
+    1. In the top-right corner, select your **avatar**.
+    2. Click on **Edit profile**.
+    3. On the left sidebar, select **Access Tokens**.
+    4. Enter a **name** and optional **expiry date** for the token.
+    5. Select the desired **scopes**.
+    6. Click on **Create personal access token**.
+
+    For help on **Creating a personal access token**, please see official 
[GitLab Docs on Personal 
Tokens](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html).
+    For an overview of the **GitLab REST API**, please see official [GitLab 
Docs on 
REST](https://docs.gitlab.com/ee/development/documentation/restful_api_styleguide.html#restful-api)
+
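As a quick sanity check before saving, the endpoint rules above (valid REST URL, trailing `/`) can be verified locally; a minimal sketch in shell, using the doc's example URL:

```shell
# Check that the endpoint starts with http(s):// and ends with "/",
# per the Endpoint URL rules above. Substitute your own endpoint.
ENDPOINT="https://gitlab.example.com/api/v4/"
case "$ENDPOINT" in
  http://*/ | https://*/) echo "endpoint looks valid" ;;
  *) echo "endpoint must start with http(s):// and end with /" ;;
esac
```
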
+Click **Save Connection** to update connection settings.
+
+### Provider (Datasource) Settings
+There are no additional settings for the GitLab Datasource Provider at this 
time.
+
+> NOTE: `GitLab Project ID` Mappings feature has been deprecated.
+
+## Gathering Data with GitLab
+
+To collect data, you can make a POST request to `/pipelines`
+
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitlab 20211126",
+    "tasks": [[{
+        "plugin": "gitlab",
+        "options": {
+            "projectId": <Your gitlab project id>
+        }
+    }]]
+}
+'
+```
+
+## Finding Project Id
+
+To get the project id for a specific `GitLab` repository:
+- Visit the repository page on GitLab
+- Find the project id just below the title
+
+  ![Screen Shot 2021-08-06 at 4 32 53 
PM](https://user-images.githubusercontent.com/3789273/128568416-a47b2763-51d8-4a6a-8a8b-396512bffb03.png)
+
+> Use this project id in your requests to collect data from this project.
+
+## ⚠️ (WIP) Create a GitLab API Token <a id="gitlab-api-token"></a>
+
+1. When logged into `GitLab`, visit `https://gitlab.com/-/profile/personal_access_tokens`
+2. Give the token any name, no expiration date, and all scopes (excluding write access)
+
+    ![Screen Shot 2021-08-06 at 4 44 01 
PM](https://user-images.githubusercontent.com/3789273/128569148-96f50d4e-5b3b-4110-af69-a68f8d64350a.png)
+
+3. Click the **Create Personal Access Token** button
+4. Save the API token into the `.env` file via `config-ui`, or edit the file directly.
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-0.11/Plugins/jenkins.md 
b/versioned_docs/version-0.11/Plugins/jenkins.md
new file mode 100644
index 0000000..26e72a6
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/jenkins.md
@@ -0,0 +1,61 @@
+---
+title: "Jenkins"
+description: >
+  Jenkins Plugin
+---
+
+# Jenkins
+
+## Summary
+
+This plugin collects Jenkins data through [Remote Access 
API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then 
computes and visualizes various DevOps metrics from the Jenkins data.
+
+![image](https://user-images.githubusercontent.com/61080/141943122-dcb08c35-cb68-4967-9a7c-87b63c2d6988.png)
+
+## Metrics
+
+| Metric Name        | Description                         |
+|:-------------------|:------------------------------------|
+| Build Count        | The number of builds created        |
+| Build Success Rate | The percentage of successful builds |
+
+## Configuration
+
+In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui`.
+
+### By `config-ui`
+
+The connection section of the configuration screen requires the following key 
fields to connect to the Jenkins API.
+
+- Connection Name [READONLY]
+  - ⚠️ Defaults to "Jenkins" and may not be changed. As Jenkins is a 
_single-source data provider_ at the moment, the connection name is read-only 
as there is only one instance to manage. As we advance on our development 
roadmap we may enable multi-source connections for Jenkins in the future.
+- Endpoint URL (REST URL, starts with `https://` or `http://`, ends with `/`)
+  - This should be a valid REST API Endpoint eg. `https://ci.jenkins.io/`
+- Username (E-mail)
+  - Your User ID for the Jenkins Instance.
+- Password (Secret Phrase or API Access Token)
+  - Secret password for common credentials.
+  - For help on Username and Password, please see official Jenkins Docs on 
Using Credentials
+  - Or you can use **API Access Token** for this field, which can be generated 
at `User` -> `Configure` -> `API Token` section on Jenkins.
+
+Click Save Connection to update connection settings.
+
+## Collect Data From Jenkins
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and 
paste a JSON config like the following:
+
+```json
+[
+  [
+    {
+      "plugin": "jenkins",
+      "options": {}
+    }
+  ]
+]
+```
+
+## Relationship between job and build
+
+A build is a snapshot of a job: each time a job runs, it creates a build.
+<br/><br/><br/>
diff --git a/versioned_docs/version-0.11/Plugins/jira-connection-config-ui.png 
b/versioned_docs/version-0.11/Plugins/jira-connection-config-ui.png
new file mode 100644
index 0000000..df2e8e3
Binary files /dev/null and 
b/versioned_docs/version-0.11/Plugins/jira-connection-config-ui.png differ
diff --git 
a/versioned_docs/version-0.11/Plugins/jira-more-setting-in-config-ui.png 
b/versioned_docs/version-0.11/Plugins/jira-more-setting-in-config-ui.png
new file mode 100644
index 0000000..dffb0c9
Binary files /dev/null and 
b/versioned_docs/version-0.11/Plugins/jira-more-setting-in-config-ui.png differ
diff --git a/versioned_docs/version-0.11/Plugins/jira.md 
b/versioned_docs/version-0.11/Plugins/jira.md
new file mode 100644
index 0000000..8ac28d6
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/jira.md
@@ -0,0 +1,253 @@
+---
+title: "Jira"
+description: >
+  Jira Plugin
+---
+
+
+## Summary
+
+This plugin collects Jira data through Jira Cloud REST API. It then computes 
and visualizes various engineering metrics from the Jira data.
+
+<img width="2035" alt="jira metric display" 
src="https://user-images.githubusercontent.com/2908155/132926143-7a31d37f-22e1-487d-92a3-cf62e402e5a8.png";
 />
+
+## Project Metrics This Covers
+
+| Metric Name                         | Description                                                                                     |
+|:------------------------------------|:------------------------------------------------------------------------------------------------|
+| Requirement Count                   | Number of issues with type "Requirement"                                                        |
+| Requirement Lead Time               | Lead time of issues with type "Requirement"                                                     |
+| Requirement Delivery Rate           | Ratio of delivered requirements to all requirements                                             |
+| Requirement Granularity             | Number of story points associated with an issue                                                 |
+| Bug Count                           | Number of issues with type "Bug"<br/><i>bugs are found during testing</i>                       |
+| Bug Age                             | Lead time of issues with type "Bug"                                                             |
+| Bugs Count per 1k Lines of Code     | Amount of bugs per 1000 lines of code<br/><i>both new and deleted lines count</i>               |
+| Incident Count                      | Number of issues with type "Incident"<br/><i>incidents are found when running in production</i> |
+| Incident Age                        | Lead time of issues with type "Incident"                                                        |
+| Incident Count per 1k Lines of Code | Amount of incidents per 1000 lines of code                                                      |
+
+## Configuration
+
+In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui` service. Open `config-ui` in your browser (by default the URL is http://localhost:4000) and go to the **Data Integrations / JIRA** page. The JIRA plugin currently supports multiple data connections; here you can **add** a new JIRA connection or **update** the settings of an existing one if needed.
+
+For each connection, you will need to set up following items first:
+
+![connection at config ui](jira-connection-config-ui.png)
+
+- Connection Name: This allows you to distinguish between different connections.
+- Endpoint URL: The JIRA instance API endpoint, for JIRA Cloud Service: 
`https://<mydomain>.atlassian.net/rest`. DevLake officially supports JIRA Cloud 
Service on atlassian.net, but may or may not work for JIRA Server Instance.
+- Basic Auth Token: First, generate a **JIRA API TOKEN** for your JIRA account 
on the JIRA console (see [Generating API token](#generating-api-token)), then, 
in `config-ui` click the KEY icon on the right side of the input to generate a 
full `HTTP BASIC AUTH` token for you.
+- Proxy URL: Only needed when you want to collect data through a proxy or VPN.
+
+### More custom configuration
+If you want to customize the configuration further, click "Settings" to change the following items:
+![More config in config ui](jira-more-setting-in-config-ui.png)
+- Issue Type Mapping: JIRA is highly customizable, each JIRA instance may have 
a different set of issue types than others. In order to compute and visualize 
metrics for different instances, you need to map your issue types to standard 
ones. See [Issue Type Mapping](#issue-type-mapping) for detail.
+- Epic Key: Unfortunately, the epic relationship in JIRA is implemented via a `custom field`, which varies from instance to instance. Please see [Find Out Custom Fields](#find-out-custom-fields).
+- Story Point Field: Same as Epic Key.
+- Remotelink Commit SHA: A regular expression that matches commit links, used to determine whether an external link is a link to a commit. Taking GitLab as an example, to match all commits similar to https://gitlab.com/merico-dev/ce/example-repository/-/commit/8ab8fb319930dbd8615830276444b8545fd0ad24, you can use the regular expression **/commit/([0-9a-f]{40})$**
+
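The suggested regex can be sanity-checked locally; a sketch using `grep` as a stand-in for the plugin's matcher, with the example commit URL from above:

```shell
# Extract the 40-character commit SHA part from a GitLab commit link
# using the regex suggested above.
URL="https://gitlab.com/merico-dev/ce/example-repository/-/commit/8ab8fb319930dbd8615830276444b8545fd0ad24"
echo "$URL" | grep -oE '/commit/[0-9a-f]{40}$'
# prints: /commit/8ab8fb319930dbd8615830276444b8545fd0ad24
```
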
+
+### Generating API token
+1. Once logged into Jira, visit the url 
`https://id.atlassian.com/manage-profile/security/api-tokens`
+2. Click the **Create API Token** button, and give it any label name
+![image](https://user-images.githubusercontent.com/27032263/129363611-af5077c9-7a27-474a-a685-4ad52366608b.png)
+
+
+### Issue Type Mapping
+
+DevLake supports 3 standard types; all metrics are computed based on these types:
+
+ - `Bug`: Problems found during the `test` phase, before they can reach the 
production environment.
+ - `Incident`: Problems that slipped through the `test` phase and got deployed into the production environment.
+ - `Requirement`: Normally, it would be `Story` on your instance if you 
adopted SCRUM.
+
+You can map arbitrary **YOUR OWN ISSUE TYPE** to a single **STANDARD ISSUE 
TYPE**. Normally, one would map `Story` to `Requirement`, but you could map 
both `Story` and `Task` to `Requirement` if that was your case. Unspecified 
types are copied directly for your convenience, so you don't need to map your 
`Bug` to standard `Bug`.
+
+Type mapping is critical for some metrics, like **Requirement Count**, so make sure to map your custom types correctly.
+
+### Find Out Custom Fields
+
+Please follow this guide: [How to find the custom field ID in 
Jira?](https://github.com/apache/incubator-devlake/wiki/How-to-find-the-custom-field-ID-in-Jira)
+
+
+## Collect Data From JIRA
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and 
paste a JSON config like the following:
+
+> <font color="#ED6A45">Warning: Data collection only supports single-task 
execution, and the results of concurrent multi-task execution may not meet 
expectations.</font>
+
+```
+[
+  [
+    {
+      "plugin": "jira",
+      "options": {
+          "connectionId": 1,
+          "boardId": 8,
+          "since": "2006-01-02T15:04:05Z"
+      }
+    }
+  ]
+]
+```
+
+- `connectionId`: The `ID` field from **JIRA Integration** page.
+- `boardId`: JIRA board id, see "Find Board Id" for details.
+- `since`: optional, download data since a specified date only.
+
+
+### Find Board Id
+
+1. Navigate to the Jira board in the browser
+2. In the URL bar, get the board id from the parameter `?rapidView=`
+
+**Example:**
+
+`https://{your_jira_endpoint}/secure/RapidBoard.jspa?rapidView=51`
+
+![Screenshot](https://user-images.githubusercontent.com/27032263/129363083-df0afa18-e147-4612-baf9-d284a8bb7a59.png)
+
+Your board id is used in all REST requests to Apache DevLake. You do not need 
to configure this at the data connection level.
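If you prefer the command line, the board id can also be pulled out of the URL; a small sketch using the example URL above:

```shell
# Grab the numeric board id from the rapidView query parameter.
URL="https://your_jira_endpoint/secure/RapidBoard.jspa?rapidView=51"
echo "$URL" | grep -oE 'rapidView=[0-9]+' | cut -d= -f2
# prints: 51
```
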
+
+
+
+## API
+
+### Data Connections
+
+1. Get all data connections
+
+```GET /plugins/jira/connections
+[
+  {
+    "ID": 14,
+    "CreatedAt": "2021-10-11T11:49:19.029Z",
+    "UpdatedAt": "2021-10-11T11:49:19.029Z",
+    "name": "test-jira-connection",
+    "endpoint": "https://merico.atlassian.net/rest";,
+    "basicAuthEncoded": "basicAuth",
+    "epicKeyField": "epicKeyField",
+      "storyPointField": "storyPointField"
+  }
+]
+```
+
+2. Create a new data connection
+
+```POST /plugins/jira/connections
+{
+    "name": "jira data connection name",
+    "endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+    "epicKeyField": "name of customfield of epic key",
+    "storyPointField": "name of customfield of story point",
+    "typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+        "userType": {
+            "standardType": "devlake standard type"
+        }
+    }
+}
+```
+
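The `basicAuthEncoded` value can be generated with the command described above; a sketch with placeholder credentials ("user@example.com" and "my-jira-token" are not real — substitute your own Jira login email and API token):

```shell
# base64 of "{jira login email}:{jira token}", as described above.
# printf is used instead of echo -n for portability.
EMAIL="user@example.com"
TOKEN="my-jira-token"
printf '%s:%s' "$EMAIL" "$TOKEN" | base64
# prints: dXNlckBleGFtcGxlLmNvbTpteS1qaXJhLXRva2Vu
```
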
+
+3. Update data connection
+
+```PUT /plugins/jira/connections/:connectionId
+{
+    "name": "jira data connection name",
+    "endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+    "epicKeyField": "name of customfield of epic key",
+    "storyPointField": "name of customfield of story point",
+    "typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+        "userType": {
+            "standardType": "devlake standard type"
+        }
+    }
+}
+```
+
+4. Get data connection detail
+```GET /plugins/jira/connections/:connectionId
+{
+    "name": "jira data connection name",
+    "endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+    "epicKeyField": "name of customfield of epic key",
+    "storyPointField": "name of customfield of story point",
+    "typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+        "userType": {
+            "standardType": "devlake standard type"
+        }
+    }
+}
+```
+
+5. Delete data connection
+
+```DELETE /plugins/jira/connections/:connectionId
+```
+
+
+### Type mappings
+
+1. Get all type mappings
+```GET /plugins/jira/connections/:connectionId/type-mappings
+[
+  {
+    "jiraConnectionId": 16,
+    "userType": "userType",
+    "standardType": "standardType"
+  }
+]
+```
+
+2. Create a new type mapping
+
+```POST /plugins/jira/connections/:connectionId/type-mappings
+{
+    "userType": "userType",
+    "standardType": "standardType"
+}
+```
+
+3. Update type mapping
+
+```PUT /plugins/jira/connections/:connectionId/type-mapping/:userType
+{
+    "standardType": "standardTypeUpdated"
+}
+```
+
+
+4. Delete type mapping
+
+```DELETE /plugins/jira/connections/:connectionId/type-mapping/:userType
+```
+
+5. API forwarding
+For example:
+Requests to 
`http://your_devlake_host/plugins/jira/connections/1/proxy/rest/agile/1.0/board/8/sprint`
+would be forwarded to `https://your_jira_host/rest/agile/1.0/board/8/sprint`
+
+```GET /plugins/jira/connections/:connectionId/proxy/rest/*path
+{
+    "maxResults": 1,
+    "startAt": 0,
+    "isLast": false,
+    "values": [
+        {
+            "id": 7,
+            "self": "https://merico.atlassian.net/rest/agile/1.0/sprint/7";,
+            "state": "closed",
+            "name": "EE Sprint 7",
+            "startDate": "2020-06-12T00:38:51.882Z",
+            "endDate": "2020-06-26T00:38:00.000Z",
+            "completeDate": "2020-06-22T05:59:58.980Z",
+            "originBoardId": 8,
+            "goal": ""
+        }
+    ]
+}
+```
diff --git a/versioned_docs/version-0.11/Plugins/refdiff.md 
b/versioned_docs/version-0.11/Plugins/refdiff.md
new file mode 100644
index 0000000..35d3049
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/refdiff.md
@@ -0,0 +1,118 @@
+---
+title: "RefDiff"
+description: >
+  RefDiff Plugin
+---
+
+# RefDiff
+
+
+## Summary
+
+For development workload analysis, we often need to know how many commits have been created between 2 releases. This plugin calculates which commits differ between 2 refs (branches/tags), and the result will be stored back into the database for further analysis.
+
+## Important Note
+
+You need to run the `gitextractor` plugin before the `refdiff` plugin. The `gitextractor` plugin should create records in the `refs` table in your DB before this plugin can be run.
+
+## Configuration
+
+This is an enrichment plugin based on Domain Layer data; no configuration is needed.
+
+## How to use
+
+In order to trigger the enrichment, you need to insert a new task into your 
pipeline.
+
+1. Make sure `commits` and `refs` are collected into your database; the `refs` table should contain records like the following:
+```
+id                                            ref_type
+github:GithubRepo:384111310:refs/tags/0.3.5   TAG
+github:GithubRepo:384111310:refs/tags/0.3.6   TAG
+github:GithubRepo:384111310:refs/tags/0.5.0   TAG
+github:GithubRepo:384111310:refs/tags/v0.0.1  TAG
+github:GithubRepo:384111310:refs/tags/v0.2.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.3.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.4.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.6.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.6.1  TAG
+```
+2. If you want to run `calculateIssuesDiff`, please configure `GITHUB_PR_BODY_CLOSE_PATTERN` in `.env`; you can check the example in `.env.example` (we provide a default value; please make sure your pattern is enclosed in single quotes `''`)
+3. If you want to run `calculatePrCherryPick`, please configure `GITHUB_PR_TITLE_PATTERN` in `.env`; you can check the example in `.env.example` (we provide a default value; please make sure your pattern is enclosed in single quotes `''`)
+4. Then trigger a pipeline like the following. You can also define sub-tasks: calculateRefDiff will calculate the commits between two refs, and creatRefBugStats will create a table showing the list of bugs between the two refs:
+```
+curl -v -XPOST http://localhost:8080/pipelines --data @- <<'JSON'
+{
+    "name": "test-refdiff",
+    "tasks": [
+        [
+            {
+                "plugin": "refdiff",
+                "options": {
+                    "repoId": "github:GithubRepo:384111310",
+                    "pairs": [
+                       { "newRef": "refs/tags/v0.6.0", "oldRef": 
"refs/tags/0.5.0" },
+                       { "newRef": "refs/tags/0.5.0", "oldRef": 
"refs/tags/0.4.0" }
+                    ],
+                    "tasks": [
+                        "calculateCommitsDiff",
+                        "calculateIssuesDiff",
+                        "calculatePrCherryPick",
+                    ]
+                }
+            }
+        ]
+    ]
+}
+JSON
+```
+
+## Development
+
+This plugin depends on `libgit2`, you need to install version 1.3.0 in order 
to run and debug this plugin on your local
+machine.
+
+### Ubuntu
+
+```
+apt install cmake
+git clone https://github.com/libgit2/libgit2.git
+cd libgit2
+git checkout v1.3.0
+mkdir build
+cd build
+cmake ..
+make
+make install
+```
+
+### MacOS
+1. [MacPorts](https://guide.macports.org/#introduction) install
+```
+port install libgit2@1.3.0
+```
+2. Source install
+```
+brew install cmake
+git clone https://github.com/libgit2/libgit2.git
+cd libgit2
+git checkout v1.3.0
+mkdir build
+cd build
+cmake ..
+make
+make install
+```
+
+#### Troubleshooting (MacOS)
+
+> Q: I got an error saying: `pkg-config: exec: "pkg-config": executable file 
not found in $PATH`
+
+> A:
+> 1. Make sure you have pkg-config installed:
+>
+> `brew install pkg-config`
+>
+> 2. Make sure your pkg config path covers the installation:
+> `export 
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib:/usr/local/lib/pkgconfig`
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-0.11/Plugins/tapd.md 
b/versioned_docs/version-0.11/Plugins/tapd.md
new file mode 100644
index 0000000..fc93539
--- /dev/null
+++ b/versioned_docs/version-0.11/Plugins/tapd.md
@@ -0,0 +1,12 @@
+# TAPD
+
+## Summary
+
+This plugin collects TAPD data.
+
+This plugin is still under development, so you can't modify its settings in `config-ui` yet.
+
+## Configuration
+
+In order to fully use this plugin, you will need to set the `endpoint`, `basic_auth_encoded` and `rate_limit` values in the `_tool_tapd_connections` table.
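For example, such a row could be inserted with SQL like the following (a sketch only: the column names are the ones mentioned above, while the values — including the `rate_limit` of 5 — are placeholders to adjust for your TAPD instance):

```sql
INSERT INTO _tool_tapd_connections (endpoint, basic_auth_encoded, rate_limit)
VALUES ('<your tapd api endpoint>', '<your base64-encoded credentials>', 5);
```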
+
diff --git a/versioned_docs/version-0.11/QuickStart/01-LocalSetup.md 
b/versioned_docs/version-0.11/QuickStart/01-LocalSetup.md
new file mode 100644
index 0000000..9b81bc9
--- /dev/null
+++ b/versioned_docs/version-0.11/QuickStart/01-LocalSetup.md
@@ -0,0 +1,43 @@
+---
+title: "Deploy Locally"
+description: >
+  The steps to install DevLake locally.
+---
+
+
+#### Prerequisites
+
+- [Docker v19.03.10+](https://docs.docker.com/get-docker)
+- [docker-compose v2.2.3+](https://docs.docker.com/compose/install/)
+
+#### Launch DevLake
+
+- Commands written `like this` are to be run in your terminal.
+
+1. Download `docker-compose.yml` and `env.example` from [latest release 
page](https://github.com/apache/incubator-devlake/releases/latest) into a 
folder.
+2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv 
env.example .env` in the terminal.
+3. Run `docker-compose up -d` to launch DevLake.
+
+#### Configure data connections and collect data
+
+1. Visit `config-ui` at `http://localhost:4000` in your browser to configure 
data connections.
+   - Navigate to desired plugins on the Integrations page
+   - Please reference the following for more details on how to configure each 
one:<br/>
+      - [Jira](../Plugins/jira.md)
+      - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/github-user-guide-v0.10.0.md), which covers the configuration steps in detail.
+      - [GitLab](../Plugins/gitlab.md)
+      - [Jenkins](../Plugins/jenkins.md)
+   - Submit the form to update the values by clicking on the **Save 
Connection** button on each form page
+   - `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
+2. Create pipelines to trigger data collection in `config-ui`
+3. Click the *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
+   - We use [Grafana](https://grafana.com/) as a visualization tool to build 
charts for the [data](../DataModels/02-DataSupport.md) stored in our database.
+   - Using SQL queries, we can add panels to build, save, and edit customized 
dashboards.
+   - All the details on provisioning and customizing a dashboard can be found 
in the [Grafana Doc](../UserManuals/GRAFANA.md).
+4. To synchronize data periodically, you can set up recurring pipelines; see DevLake's [pipeline blueprint](../UserManuals/recurring-pipeline.md) for details.
+
+#### Upgrade to a newer version
+
+Support for database schema migration was introduced to DevLake in v0.10.0. From v0.10.0 onwards, users can upgrade their instance smoothly to a newer version. However, versions prior to v0.10.0 do not support upgrading to a newer version with a different database schema; we recommend deploying a new instance in that case.
+
+<br/>
diff --git a/versioned_docs/version-0.11/QuickStart/02-KubernetesSetup.md 
b/versioned_docs/version-0.11/QuickStart/02-KubernetesSetup.md
new file mode 100644
index 0000000..19bdc4d
--- /dev/null
+++ b/versioned_docs/version-0.11/QuickStart/02-KubernetesSetup.md
@@ -0,0 +1,32 @@
+---
+title: "Deploy to Kubernetes"
+description: >
+  The steps to install Apache DevLake in Kubernetes.
+---
+
+
+We provide a sample 
[k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml)
 for users interested in deploying Apache DevLake on a k8s cluster.
+
+[k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) will create a namespace `devlake` on your k8s cluster, and use `nodePort 30004` for `config-ui` and `nodePort 30002` for `grafana` dashboards. If you would like to use a certain version of Apache DevLake, please update the image tags of the `grafana`, `devlake` and `config-ui` services to a specific version like `v0.10.1`.
+
+Here's the step-by-step guide:
+
+1. Download 
[k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml)
 to local machine
+2. Some key points:
+   - `config-ui` deployment:
+     * `GRAFANA_ENDPOINT`: FQDN of the grafana service that can be reached from the user's browser
+     * `DEVLAKE_ENDPOINT`: FQDN of the devlake service that can be reached within the k8s cluster; normally you don't need to change it unless the namespace was changed
+     * `ADMIN_USER`/`ADMIN_PASS`: Not required, but highly recommended
+   - `devlake-config` config map:
+     * `MYSQL_USER`: shared between `mysql` and `grafana` service
+     * `MYSQL_PASSWORD`: shared between `mysql` and `grafana` service
+     * `MYSQL_DATABASE`: shared between `mysql` and `grafana` service
+     * `MYSQL_ROOT_PASSWORD`: set root password for `mysql`  service
+   - `devlake` deployment:
+     * `DB_URL`: update this value if  `MYSQL_USER`, `MYSQL_PASSWORD` or 
`MYSQL_DATABASE` were changed
+3. The `devlake` deployment stores its configuration in `/app/.env`. In our sample yaml, we use a `hostPath` volume, so please make sure the directory `/var/lib/devlake` exists on your k8s workers, or employ other techniques to persist the `/app/.env` file. Please do NOT mount the entire `/app` directory, because plugins are located in the `/app/bin` folder.
+4. Finally, execute the following command, and Apache DevLake should be up and running:
+    ```sh
+    kubectl apply -f k8s-deploy.yaml
+    ```
+<br/><br/><br/>
diff --git a/versioned_docs/version-0.11/QuickStart/_category_.json 
b/versioned_docs/version-0.11/QuickStart/_category_.json
new file mode 100644
index 0000000..133c30f
--- /dev/null
+++ b/versioned_docs/version-0.11/QuickStart/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Quick Start",
+  "position": 2
+}
diff --git a/versioned_docs/version-0.11/UserManuals/03-TemporalSetup.md 
b/versioned_docs/version-0.11/UserManuals/03-TemporalSetup.md
new file mode 100644
index 0000000..f893a83
--- /dev/null
+++ b/versioned_docs/version-0.11/UserManuals/03-TemporalSetup.md
@@ -0,0 +1,35 @@
+---
+title: "Temporal Setup"
+sidebar_position: 5
+description: >
+  The steps to install DevLake in Temporal mode.
+---
+
+
+Normally, DevLake executes pipelines on a local machine (we call it `local mode`), which is sufficient most of the time. However, when you have too many pipelines that need to be executed in parallel, this can be problematic, as the horsepower and throughput of a single machine are limited.
+
+`temporal mode` was added to support distributed pipeline execution: you can fire up an arbitrary number of workers on multiple machines to carry out those pipelines in parallel and overcome the limitations of a single machine.
+
+But be careful: many API services like JIRA/GitHub have a request rate limit mechanism. Collecting data in parallel against the same API service with the same identity will most likely hit such a limit.
+
+## How it works
+
+1. DevLake Server and Workers connect to the same temporal server by setting 
up `TEMPORAL_URL`
+2. DevLake Server sends a `pipeline` to the temporal server, and one of the Workers picks it up and executes it
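In `.env` terms, the server and every worker simply share the same setting (a sketch; `TEMPORAL_URL` is the variable named above, while the host and port shown are placeholders for your temporal frontend service):

```
TEMPORAL_URL=temporal-frontend:7233
```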
+
+
+**IMPORTANT: This feature is at an early stage of development. Please use with caution**
+
+
+## Temporal Demo
+
+### Requirements
+
+- [Docker](https://docs.docker.com/get-docker)
+- [docker-compose](https://docs.docker.com/compose/install/)
+- [temporalio](https://temporal.io/)
+
+### How to setup
+
+1. Clone and fire up [temporalio](https://temporal.io/) services
+2. Clone this repo, and fire up DevLake with the command `docker-compose -f docker-compose-temporal.yml up -d`
\ No newline at end of file
diff --git a/versioned_docs/version-0.11/UserManuals/GRAFANA.md 
b/versioned_docs/version-0.11/UserManuals/GRAFANA.md
new file mode 100644
index 0000000..bd81651
--- /dev/null
+++ b/versioned_docs/version-0.11/UserManuals/GRAFANA.md
@@ -0,0 +1,120 @@
+---
+title: "How to use Grafana"
+sidebar_position: 1
+description: >
+  How to use Grafana
+---
+
+
+# Grafana
+
+<img src="https://user-images.githubusercontent.com/3789273/128533901-3107e9bf-c3e3-4320-ba47-879fe2b0ea4d.png" width="450px" />
+
+When first visiting Grafana, you will be provided with a sample dashboard with some basic charts set up from the database.
+
+## Contents
+
+Section | Link
+:------------ | :-------------
+Logging In | [View Section](#logging-in)
+Viewing All Dashboards | [View Section](#viewing-all-dashboards)
+Customizing a Dashboard | [View Section](#customizing-a-dashboard)
+Dashboard Settings | [View Section](#dashboard-settings)
+Provisioning a Dashboard | [View Section](#provisioning-a-dashboard)
+Troubleshooting DB Connection | [View Section](#troubleshooting-db-connection)
+
+## Logging In<a id="logging-in"></a>
+
+Once the app is up and running, visit `http://localhost:3002` to view the 
Grafana dashboard.
+
+Default login credentials are:
+
+- Username: `admin`
+- Password: `admin`
+
+## Viewing All Dashboards<a id="viewing-all-dashboards"></a>
+
+To see all dashboards created in Grafana, visit `/dashboards`.
+
+Or, use the sidebar and click on **Manage**:
+
+![Screen Shot 2021-08-06 at 11 27 08 
AM](https://user-images.githubusercontent.com/3789273/128534617-1992c080-9385-49d5-b30f-be5c96d5142a.png)
+
+
+## Customizing a Dashboard<a id="customizing-a-dashboard"></a>
+
+When viewing a dashboard, click the top bar of a panel, and go to **edit**
+
+![Screen Shot 2021-08-06 at 11 35 36 
AM](https://user-images.githubusercontent.com/3789273/128535505-a56162e0-72ad-46ac-8a94-70f1c7a910ed.png)
+
+**Edit Dashboard Panel Page:**
+
+![grafana-sections](https://user-images.githubusercontent.com/3789273/128540136-ba36ee2f-a544-4558-8282-84a7cb9df27a.png)
+
+### 1. Preview Area
+- **Top Left** is the variable select area (custom dashboard variables, used 
for switching projects, or grouping data)
+- **Top Right** we have a toolbar with some buttons related to the display of 
the data:
+  - View data results in a table
+  - Time range selector
+  - Refresh data button
+- **The Main Area** will display the chart and should update in real time
+
+> Note: Data should refresh automatically, but may require a refresh using the 
button in some cases
+
+### 2. Query Builder
+Here we form the SQL query that pulls data from our database into the chart.
+- Ensure the **Data Source** is the correct database
+
+  ![Screen Shot 2021-08-06 at 10 14 22 
AM](https://user-images.githubusercontent.com/3789273/128545278-be4846e0-852d-4bc8-8994-e99b79831d8c.png)
+
+- Use the **Format as Table** and **Edit SQL** buttons to write/edit queries as SQL
+
+  ![Screen Shot 2021-08-06 at 10 17 52 
AM](https://user-images.githubusercontent.com/3789273/128545197-a9ff9cb3-f12d-4331-bf6a-39035043667a.png)
+
+- The **Main Area** is where the queries are written, and in the top right is 
the **Query Inspector** button (to inspect returned data)
+
+  ![Screen Shot 2021-08-06 at 10 18 23 
AM](https://user-images.githubusercontent.com/3789273/128545557-ead5312a-e835-4c59-b9ca-dd5c08f2a38b.png)
+
+### 3. Main Panel Toolbar
+In the top right of the window are buttons for:
+- Dashboard settings (regarding entire dashboard)
+- Save/apply changes (to specific panel)
+
+### 4. Grafana Parameter Sidebar
+- Change chart style (bar/line/pie chart etc)
+- Edit legends, chart parameters
+- Modify chart styling
+- Other Grafana specific settings
+
+## Dashboard Settings<a id="dashboard-settings"></a>
+
+When viewing a dashboard, click the settings icon to view dashboard settings. There are 2 important sections to use here:
+
+![Screen Shot 2021-08-06 at 1 51 14 
PM](https://user-images.githubusercontent.com/3789273/128555763-4d0370c2-bd4d-4462-ae7e-4b140c4e8c34.png)
+
+- Variables
+  - Create variables, also built on SQL queries, to use throughout the dashboard panels
+
+  ![Screen Shot 2021-08-06 at 2 02 40 
PM](https://user-images.githubusercontent.com/3789273/128553157-a8e33042-faba-4db4-97db-02a29036e27c.png)
+
+- JSON Model
+  - Copy the `json` code here and save it to a new file with a unique name in `/grafana/dashboards/` in the `lake` repo. This allows dashboards to persist when we load the app
+
+  ![Screen Shot 2021-08-06 at 2 02 52 
PM](https://user-images.githubusercontent.com/3789273/128553176-65a5ae43-742f-4abf-9c60-04722033339e.png)
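+
+For illustration, a dashboard variable listing repositories could be backed by a query like the following (the `repos` table and `name` column are assumptions; use whatever your schema provides):
+
+```sql
+-- The values returned by this query populate the variable's dropdown
+SELECT DISTINCT name FROM repos ORDER BY name;
+```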
+
+## Provisioning a Dashboard<a id="provisioning-a-dashboard"></a>
+
+To save a dashboard in the `lake` repo and load it:
+
+1. Create a dashboard in browser (visit `/dashboard/new`, or use sidebar)
+2. Save dashboard (in top right of screen)
+3. Go to dashboard settings (in top right of screen)
+4. Click on _JSON Model_ in sidebar
+5. Copy code into a new `.json` file in `/grafana/dashboards`
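+
+A provisioned dashboard file might start out like this minimal, illustrative skeleton (a real JSON Model export contains many more fields):
+
+```json
+{
+  "title": "My Custom Dashboard",
+  "uid": "my-custom-dashboard",
+  "panels": [],
+  "schemaVersion": 30,
+  "version": 1
+}
+```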
+
+## Troubleshooting DB Connection<a id="troubleshooting-db-connection"></a>
+
+To ensure the database is properly connected as a data source in Grafana, check the database settings in `./grafana/datasources/datasource.yml`, specifically:
+- `database`
+- `user`
+- `secureJsonData/password`
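+
+For reference, a MySQL data source entry in that file generally follows Grafana's provisioning format. The values below are placeholders, not DevLake's actual defaults:
+
+```yaml
+apiVersion: 1
+datasources:
+  - name: mysql
+    type: mysql
+    url: mysql:3306        # host:port of your database
+    database: lake         # must match your DevLake database name
+    user: merico           # placeholder username
+    secureJsonData:
+      password: merico     # placeholder password
+```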
diff --git a/versioned_docs/version-0.11/UserManuals/_category_.json 
b/versioned_docs/version-0.11/UserManuals/_category_.json
new file mode 100644
index 0000000..b47bdfd
--- /dev/null
+++ b/versioned_docs/version-0.11/UserManuals/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "User Manuals",
+  "position": 3
+}
diff --git 
a/versioned_docs/version-0.11/UserManuals/create-pipeline-in-advanced-mode.md 
b/versioned_docs/version-0.11/UserManuals/create-pipeline-in-advanced-mode.md
new file mode 100644
index 0000000..14afd01
--- /dev/null
+++ 
b/versioned_docs/version-0.11/UserManuals/create-pipeline-in-advanced-mode.md
@@ -0,0 +1,89 @@
+---
+title: "Create Pipeline in Advanced Mode"
+sidebar_position: 2
+description: >
+  Create Pipeline in Advanced Mode
+---
+
+
+## Why advanced mode?
+
+Advanced mode allows users to create any pipeline by writing JSON. This is 
useful for users who want to:
+
+1. Collect multiple GitHub/GitLab repos or Jira projects within a single 
pipeline
+2. Have fine-grained control over what entities to collect or what subtasks to 
run for each plugin
+3. Orchestrate a complex pipeline that consists of multiple stages of plugins.
+
+Advanced mode gives the most flexibility to users by exposing the JSON API.
+
+## How to use advanced mode to create pipelines?
+
+1. Visit the "Create Pipeline Run" page on `config-ui`
+
+![image](https://user-images.githubusercontent.com/2908155/164569669-698da2f2-47c1-457b-b7da-39dfa7963e09.png)
+
+2. Scroll to the bottom and toggle on the "Advanced Mode" button
+
+![image](https://user-images.githubusercontent.com/2908155/164570039-befb86e2-c400-48fe-8867-da44654194bd.png)
+
+3. The pipeline editor expects a 2D array of plugins. The first dimension represents the stages of the pipeline and the second dimension describes the plugins in each stage. Stages run in sequential order, and plugins within the same stage run in parallel. We provide some templates to help users get started. Please also see the next section for some examples.
+
+![image](https://user-images.githubusercontent.com/2908155/164576122-fc015fea-ca4a-48f2-b2f5-6f1fae1ab73c.png)
+
+## Examples
+
+1. Collect multiple GitLab repos sequentially.
+
+>When there are multiple collection tasks against a single data source, we recommend running them sequentially, since the collection speed is mostly limited by the API rate limit of the data source.
+>Running multiple tasks against the same data source in parallel is unlikely to speed up the process and may overwhelm the data source.
+
+
+Below is an example of collecting 2 GitLab repos sequentially. It has 2 stages, each containing a GitLab task.
+
+
+```json
+[
+  [
+    {
+      "Plugin": "gitlab",
+      "Options": {
+        "projectId": 15238074
+      }
+    }
+  ],
+  [
+    {
+      "Plugin": "gitlab",
+      "Options": {
+        "projectId": 11624398
+      }
+    }
+  ]
+]
+```
+
+
+2. Collect a GitHub repo and a Jira board in parallel
+
+Below is an example of collecting a GitHub repo and a Jira board in parallel. It has a single stage with a GitHub task and a Jira task. Since users can configure multiple Jira connections, a `connectionId` is required for the Jira task to specify which connection to use.
+
+```json
+[
+  [
+    {
+      "Plugin": "github",
+      "Options": {
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    },
+    {
+      "Plugin": "jira",
+      "Options": {
+        "connectionId": 1,
+        "boardId": 76
+      }
+    }
+  ]
+]
+```
diff --git 
a/versioned_docs/version-0.11/UserManuals/github-user-guide-v0.10.0.md 
b/versioned_docs/version-0.11/UserManuals/github-user-guide-v0.10.0.md
new file mode 100644
index 0000000..9a9014b
--- /dev/null
+++ b/versioned_docs/version-0.11/UserManuals/github-user-guide-v0.10.0.md
@@ -0,0 +1,118 @@
+---
+title: "GitHub User Guide v0.10.0"
+sidebar_position: 4
+description: >
+  GitHub User Guide v0.10.0
+---
+
+## Summary
+
+GitHub has a rate limit of 5,000 API calls per hour for their REST API.
+As a result, it may take hours to collect commit data via the GitHub API for a repo with 10,000+ commits.
+To accelerate the process, DevLake introduces GitExtractor, a new plugin that 
collects git data by cloning the git repo instead of by calling GitHub APIs.
+
+Starting from v0.10.0, DevLake will collect GitHub data in 2 separate plugins:
+
+- GitHub plugin (via GitHub API): collect repos, issues, pull requests
+- GitExtractor (via cloning repos):  collect commits, refs
+
+Note that the GitLab plugin still collects commits via API by default, since GitLab has a much higher API rate limit.
+
+This doc details the process of collecting GitHub data in v0.10.0. We're working on simplifying this process in upcoming releases.
+
+Before you start, please make sure all services are running.
+
+## GitHub Data Collection Procedure
+
+There are 3 steps, plus an optional fourth:
+
+1. Configure GitHub connection
+2. Create a pipeline to run GitHub plugin
+3. Create a pipeline to run GitExtractor plugin
+4. [Optional] Set up a recurring pipeline to keep data fresh
+
+### Step 1 - Configure GitHub connection
+
+1. Visit `config-ui` at `http://localhost:4000` and click the GitHub icon
+
+2. Click the default connection 'Github' in the list
+    
![image](https://user-images.githubusercontent.com/14050754/163591959-11d83216-057b-429f-bb35-a9d845b3de5a.png)
+
+3. Configure connection by providing your GitHub API endpoint URL and your 
personal access token(s).
+    
![image](https://user-images.githubusercontent.com/14050754/163592015-b3294437-ce39-45d6-adf6-293e620d3942.png)
+
+- Endpoint URL: Leave this unchanged if you're using github.com. Otherwise 
replace it with your own GitHub instance's REST API endpoint URL. This URL 
should end with '/'.
+- Auth Token(s): Fill in your personal access token(s). For how to generate personal access tokens, please see GitHub's [official documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token). You can provide multiple tokens to speed up data collection; simply concatenate the tokens with commas.
+- GitHub Proxy URL: This is optional. Enter a valid proxy server address on your network, e.g. http://your-proxy-server.com:1080
+
+4. Click 'Test Connection' to verify it's working, then click 'Save Connection'.
+
+5. [Optional] Help DevLake understand your GitHub data by customizing data 
enrichment rules shown below.
+    
![image](https://user-images.githubusercontent.com/14050754/163592506-1873bdd1-53cb-413b-a528-7bda440d07c5.png)
+
+   1. Pull Request Enrichment Options
+
+      1. `Type`: For PRs with a label that matches the given regular expression, the `type` property will be set to the value of the first submatch. For example, with Type set to `type/(.*)$`, a PR with label `type/bug` gets its `type` set to `bug`, and one with label `type/doc` gets `doc`.
+      2. `Component`: Same as above, but for `component` property.
+
+   2. Issue Enrichment Options
+
+      1. `Severity`: Same as above, but for `issue.severity`.
+
+      2. `Component`: Same as above.
+
+      3. `Priority`: Same as above.
+
+      4. **Requirement**: Issues with a label that matches the given regular expression will have their `type` set to `REQUIREMENT`. Unlike `PR.type`, the submatch does nothing here, because for issue management analysis people tend to focus on 3 types (Requirement/Bug/Incident). However, the concrete naming varies from repo to repo and time to time, so we decided to standardize them to help analysts build general-purpose metrics.
+
+      5. **Bug**: Same as above, with `type` set to `BUG`.
+
+      6. **Incident**: Same as above, with `type` set to `INCIDENT`.
+
+6. Click 'Save Settings'
+
+### Step 2 - Create a pipeline to collect GitHub data
+
+1. Select 'Pipelines > Create Pipeline Run' from `config-ui`
+
+![image](https://user-images.githubusercontent.com/14050754/163592542-8b9d86ae-4f16-492c-8f90-12f1e90c5772.png)
+
+2. Toggle on the GitHub plugin and enter the repo you'd like to collect data from.
+
+![image](https://user-images.githubusercontent.com/14050754/163592606-92141c7e-e820-4644-b2c9-49aa44f10871.png)
+
+3. Click 'Run Pipeline'
+
+You'll be redirected to the newly created pipeline:
+
+![image](https://user-images.githubusercontent.com/14050754/163592677-268e6b77-db3f-4eec-8a0e-ced282f5a361.png)
+
+
+Wait until the pipeline finishes (progress 100%):
+
+![image](https://user-images.githubusercontent.com/14050754/163592709-cce0d502-92e9-4c19-8504-6eb521b76169.png)
+
+### Step 3 - Create a pipeline to run GitExtractor plugin
+
+1. Enable the `GitExtractor` plugin, enter your `Git URL`, and select the `Repository ID` from the dropdown menu.
+
+![image](https://user-images.githubusercontent.com/2908155/164125950-37822d7f-6ee3-425d-8523-6f6b6213cb89.png)
+
+2. Click 'Run Pipeline' and wait until it's finished.
+
+3. Click `View Dashboards` in the top left corner of `config-ui`. The default username and password of Grafana are both `admin`.
+
+![image](https://user-images.githubusercontent.com/61080/163666814-e48ac68d-a0cc-4413-bed7-ba123dd291c8.png)
+
+4. See dashboards populated with GitHub data.
+
+### Step 4 - [Optional] Set up a recurring pipeline to keep data fresh
+
+Please see [How to create recurring pipelines](./recurring-pipeline.md) for 
details.
+
+
+
+
+
+
diff --git a/versioned_docs/version-0.11/UserManuals/recurring-pipeline.md 
b/versioned_docs/version-0.11/UserManuals/recurring-pipeline.md
new file mode 100644
index 0000000..3e92349
--- /dev/null
+++ b/versioned_docs/version-0.11/UserManuals/recurring-pipeline.md
@@ -0,0 +1,30 @@
+---
+title: "Create Recurring Pipelines"
+sidebar_position: 3
+description: >
+  Create Recurring Pipelines
+---
+
+## How to create recurring pipelines?
+
+Once you've verified that a pipeline works, you'll most likely want to run it periodically to keep data fresh. DevLake's pipeline blueprint feature has you covered.
+
+
+1. Click 'Create Pipeline Run' and:
+  - Toggle on the plugins you'd like to run; here we use the GitHub and GitExtractor plugins as an example
+  - Toggle on Automate Pipeline
+    
![image](https://user-images.githubusercontent.com/14050754/163596590-484e4300-b17e-4119-9818-52463c10b889.png)
+
+
+2. Click 'Add Blueprint'. Fill in the form and 'Save Blueprint'.
+
+    - **NOTE**: The schedule syntax is standard unix cron syntax; [Crontab.guru](https://crontab.guru/) is a useful reference
+    - **IMPORTANT**: The scheduler runs in the `UTC` timezone. If you want data collection to happen at 3 AM New York time (UTC-04:00) every day, use **Custom Schedule** and set it to `0 7 * * *`
+
+    
![image](https://user-images.githubusercontent.com/14050754/163596655-db59e154-405f-4739-89f2-7dceab7341fe.png)
+
+3. Click 'Save Blueprint'.
+
+4. Click 'Pipeline Blueprints'; you can view and edit the new blueprint in the blueprint list.
+
+    
![image](https://user-images.githubusercontent.com/14050754/163596773-4fb4237e-e3f2-4aef-993f-8a1499ca30e2.png)
\ No newline at end of file
diff --git a/versioned_sidebars/version-0.11-sidebars.json 
b/versioned_sidebars/version-0.11-sidebars.json
new file mode 100644
index 0000000..39332bf
--- /dev/null
+++ b/versioned_sidebars/version-0.11-sidebars.json
@@ -0,0 +1,8 @@
+{
+  "docsSidebar": [
+    {
+      "type": "autogenerated",
+      "dirName": "."
+    }
+  ]
+}
diff --git a/versions.json b/versions.json
new file mode 100644
index 0000000..fff9bee
--- /dev/null
+++ b/versions.json
@@ -0,0 +1,3 @@
+[
+  "0.11"
+]
