This is an automated email from the ASF dual-hosted git repository.

hez pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake.git


The following commit(s) were added to refs/heads/main by this push:
     new d1a6c2eb docs: update all readmes to website link (#1933)
d1a6c2eb is described below

commit d1a6c2eb92f488e50ef77002b11059b7ff53057f
Author: Louis.z <[email protected]>
AuthorDate: Thu May 19 01:56:30 2022 +0800

    docs: update all readmes to website link (#1933)
    
    Co-authored-by: Startrekzky <[email protected]>
---
 README-zh-CN.md                                    | 314 --------------------
 README.md                                          | 315 +++------------------
 docs/GRAFANA.md                                    | 112 --------
 docs/MIGRATIONS.md                                 |  30 --
 docs/NOTIFICATION.md                               |  28 --
 docs/create-pipeline-advanced-mode.md              |  81 ------
 docs/github-user-guide-v0.10.0.md                  | 113 --------
 docs/godoc.md                                      |  10 -
 docs/recurring-pipeline.md                         |  23 --
 img/logo.svg                                       |   7 +
 .../wechat_community_barcode.png                   | Bin
 plugins/README-zh-CN.md                            | 102 -------
 plugins/ae/README.md                               |  97 -------
 plugins/feishu/README-zh-CN.md                     |  65 -----
 plugins/github/README-zh-CN.md                     |  97 -------
 plugins/gitlab/README-zh-CN.md                     | 102 -------
 plugins/jenkins/README-zh-CN.md                    |  60 ----
 plugins/jira/README-zh-CN.md                       | 243 ----------------
 plugins/refdiff/README-zh-CN.md                    |  62 ----
 19 files changed, 48 insertions(+), 1813 deletions(-)

diff --git a/README-zh-CN.md b/README-zh-CN.md
deleted file mode 100644
index 20044ecd..00000000
--- a/README-zh-CN.md
+++ /dev/null
@@ -1,314 +0,0 @@
-<div align="center">
-<br />
-<img src="https://user-images.githubusercontent.com/3789273/128085813-92845abd-7c26-4fa2-9f98-928ce2246616.png" width="120px">
-
-# DevLake
-
-[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat&logo=github&color=2370ff&labelColor=454545)](http://makeapullrequest.com)
-[![Discord](https://img.shields.io/discord/844603288082186240.svg?style=flat?label=&logo=discord&logoColor=ffffff&color=747df7&labelColor=454545)](https://discord.gg/83rDG6ydVZ)
-![badge](https://github.com/merico-dev/lake/actions/workflows/test.yml/badge.svg)
-[![Go Report Card](https://goreportcard.com/badge/github.com/merico-dev/lake)](https://goreportcard.com/report/github.com/merico-dev/lake)
-
-
-| [English](README.md) | Chinese |
-| --- | --- |
-
-</div>
-<br>
-<div align="left">
-
-### What is DevLake?
-DevLake brings your DevOps data into one practical, customized, extensible view. Ingest, analyze, and visualize data from an ever-growing list of developer tools.
-
-DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. With DevLake, you can ask your development process any question: just connect your data and query.
-
-
-### [See the demo based on this repo's data](https://grafana-lake.demo.devlake.io/d/0Rjxknc7z/demo-homepage?orgId=1)
-
-
-#### Get started with DevLake
-<table>
-  <tr>
-    <td valign="middle"><a href="#user-setup">Run DevLake</a></td>
-  </tr>
-</table>
-
-
-
-<br>
-
-<div align="left">
-<img src="https://user-images.githubusercontent.com/14050754/142356580-40637a30-5578-48ed-8e4a-128cd0738e3e.png" width="100%" alt="User Flow" style="border-radius:15px;"/>
-<p align="center">User Flow</p><br>
-
-
-
-### What can be accomplished with DevLake?
-1. Collect DevOps data across the entire SDLC process and connect data silos
-2. A standard <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema" target="_blank">data model</a> and out-of-the-box <a href="https://github.com/merico-dev/lake/wiki/Metric-Cheatsheet" target="_blank">engineering metrics</a>
-3. A flexible <a href="https://github.com/merico-dev/lake/blob/main/ARCHITECTURE.md">framework</a> for data collection and ETL, supporting customized analysis
-
-
-
-<br>
-
-### Supported data sources
-
-| Data Source | Versions                             |
-|---------|--------------------------------------|
-| Feishu  | Cloud                                |
-| GitHub  | Cloud                                |
-| Gitlab  | Cloud, Community Edition 13.x+       |
-| Jenkins | 2.263.x+                             |
-| Jira    | Cloud, Server 8.x+, Data Center 8.x+ |
-| TAPD    | Cloud                                |
-
-## User setup<a id="user-setup"></a>
-
-- If you only plan to run DevLake, this is the only section you need to read<br>
-- This section describes 2 ways to install DevLake: [local setup](#local-setup) and [Kubernetes setup](#k8s-setup)
-- If you want to run DevLake in the cloud, see the [setup guide](https://github.com/merico-dev/lake/wiki/How-to-Set-Up-Dev-Lake-with-Tin-zh-CN) and click <a valign="middle" href="https://www.teamcode.com/tin/clone?applicationId=259777118600769536">
-        <img
-          src="https://static01.teamcode.com/badge/teamcode-badge-run-in-cloud-cn.svg"
-          width="120px"
-          alt="Teamcode" valign="middle"
-        />
-      </a> to finish the setup
-- Commands written `like this` are to be run in your terminal
-
-  
-### Local setup<a id="local-setup"></a>
-#### Prerequisites<a id="user-setup-requirements"></a>
-
-- [Docker v19.03.10+](https://docs.docker.com/get-docker)
-- [docker-compose v2.2.3+](https://docs.docker.com/compose/install/)
-
-Note: after installing Docker, you may need to run the Docker application and restart your terminal
-
-#### Run the following commands in your terminal<a id="user-setup-commands"></a>
-
-**IMPORTANT (new users can skip this): DevLake does not support forward compatibility yet. When the DB schema changes, updating an existing instance in place may fail. Users who have already installed DevLake are advised to redeploy the instance and re-import their data when upgrading.**
-
-1. Download `docker-compose.yml` and `env.example` from the [latest release page](https://github.com/merico-dev/lake/releases/latest)
-2. Rename `env.example` to `.env`. Mac/Linux users can run `mv env.example .env` in the terminal
-3. Start Docker, then run `docker-compose up -d` to launch the services
-4. Visit `localhost:4000` to configure DevLake
-   >- Navigate to the desired data sources on the Integrations page
-   >- Learn how to configure each data source:<br>
-      > <a href="plugins/jira/README-zh-CN.md" target="_blank">Jira</a><br>
-      > <a href="plugins/gitlab/README-zh-CN.md" target="_blank">GitLab</a><br>
-      > <a href="plugins/jenkins/README-zh-CN.md" target="_blank">Jenkins</a><br>
-      > <a href="plugins/github/README-zh-CN.md" target="_blank">GitHub</a><br>
-   >- Submit the form to update the values by clicking the **Save Connection** button on each form page
-   >- `devlake` takes a while to fully boot up. If `config-ui` complains that the API is unreachable, please wait a few seconds and try refreshing the page.
-
-5. Visit `localhost:4000/pipelines/create` to create a pipeline run and trigger data collection
-
-   Pipeline runs can be initiated from the new "Create Run" interface. Simply enable the **data sources** you wish to collect from, and specify the scope of the collection, such as the project ID for Gitlab and the repository name for GitHub.
-
-   Once a valid pipeline run configuration has been created, press **Create Run** to start/run the pipeline.
-   After the pipeline run starts, you will be automatically redirected to the **Pipeline Activity** screen to monitor collection activity.
-
-   **Pipelines** is accessible from the main menu of the config-ui.
-
-   - Manage all pipelines: `http://localhost:4000/pipelines`
-   - Create a pipeline run: `http://localhost:4000/pipelines/create`
-   - Track pipeline activity: `http://localhost:4000/pipelines/activity/[RUN_ID]`
-
-   For advanced use cases and complex pipelines, use the Raw JSON API to configure tasks. Manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API endpoint.
-
-   >   ```json
-   >   [
-   >     [
-   >       {
-   >         "Plugin": "github",
-   >         "Options": {
-   >           "repo": "lake",
-   >           "owner": "merico-dev"
-   >         }
-   >       }
-   >     ]
-   >   ]
-   >   ```
-
-   Please refer to the wiki [How to trigger data collection](https://github.com/merico-dev/lake/wiki/How-to-use-the-triggers-page).
-
-6. When data collection is done, click the *View Dashboards* button in the top left of the config UI, or visit `localhost:3002`, to access Grafana (username: `admin`, password: `admin`)
-
-   We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
-
-   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](docs/GRAFANA.md).
-
-#### Set up a cron job
-To synchronize data periodically, we provide [`lake-cli`](./cmd/lake-cli/README.md) for sending data collection requests, along with a [cron job](./devops/sync/README.md) to trigger the cli tool on a schedule.
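A cron trigger like the one above can be sketched as a single crontab entry that POSTs a pipeline request on a schedule. This is only a hedged illustration: the API port, the `/pipelines` path, and the payload are assumptions borrowed from the pipeline section; check them against your deployment:

```
# m h dom mon dow  command — run a GitHub collection at the top of every hour
0 * * * * curl -sS -X POST http://localhost:8080/pipelines -H 'Content-Type: application/json' -d '[[{"Plugin":"github","Options":{"repo":"lake","owner":"merico-dev"}}]]' >> /tmp/lake-sync.log 2>&1
```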
-
-<br>
-
-
-### Kubernetes setup<a id="k8s-setup"></a>
-
-You can also deploy DevLake to a Kubernetes cluster. The only prerequisite is a working Kubernetes cluster with your local kubeconfig configured correctly. Then run the following command to deploy:
-
-```sh
-kubectl apply -f https://raw.githubusercontent.com/merico-dev/lake/main/k8s-deploy.yaml
-```
-
-The remaining setup is the same as the docker-compose deployment in the previous section, except that, due to the default NodePort port-range limits of Kubernetes:
-
-1. DevLake's port 4000 is accessed via 30004
-2. Grafana's port 3000 is accessed via 30002
-
-<br>
-
-## Developer setup<a id="dev-setup"></a>
-
-#### Requirements
-
-- <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
-- <a href="https://golang.org/doc/install" target="_blank">Golang</a>
-- Make
-  - Mac (already installed)
-  - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
-  - Ubuntu: `sudo apt-get install build-essential libssl-dev`
-
-#### How to set up the dev environment
-
-1. Navigate to where you would like to install this project and clone the repository
-
-   ```sh
-   git clone https://github.com/merico-dev/lake.git
-   cd lake
-   ```
-
-2. Install dependencies for plugins
-
-   - [RefDiff](plugins/refdiff#development)
-
-3. Install Go packages
-
-   ```sh
-   go get
-   ```
-
-4. Copy the sample config file to a new local file
-
-    ```sh
-    cp .env.example .env
-    ```
-
-5. In the `.env` file, find the line starting with `DB_URL` and replace `mysql:3306` with `127.0.0.1:3306`
-
-6. Start MySQL and Grafana
-
-    > Make sure Docker is running before this step.
-
-    ```sh
-    docker-compose up -d mysql grafana
-    ```
-
-7. Run lake and the config UI in dev mode in two separate terminals:
-
-    ```sh
-    # run lake
-    make dev
-    # run config UI
-    make configure-dev
-    ```
-
-    Q: I got an error running `make dev`: `libgit2.so.1.3: cannot open shared object file: No such file or directory`
-
-    A: Make sure the program can find `libgit2.so.1.3` at runtime. If `libgit2.so.1.3` is located under `/usr/local/lib` on your machine, you can run:
-
-    ```sh
-    export LD_LIBRARY_PATH=/usr/local/lib
-    ```
-
-8. Visit the config UI at `localhost:4000` to configure DevLake data sources
-
-   >- Navigate to the desired plugin pages on the "Integration" page
-   >- You will need to enter the required information for the plugins you intend to use
-   >- Please reference the following for more details on how to configure each one:
-   >-> <a href="plugins/jira/README-zh-CN.md" target="_blank">Jira</a>
-   >-> <a href="plugins/gitlab/README-zh-CN.md" target="_blank">GitLab</a>
-   >-> <a href="plugins/jenkins/README-zh-CN.md" target="_blank">Jenkins</a>
-   >-> <a href="plugins/github/README-zh-CN.md" target="_blank">GitHub</a>
-
-9. Visit `localhost:4000/pipelines/create` to create a pipeline run and trigger data collection
-
-   Pipeline runs can be initiated from the new "Create Run" interface. Simply enable the data sources you wish to collect from, and specify the scope of the collection, such as the project ID for Gitlab and the repository name for GitHub.
-
-   Once a valid pipeline run configuration has been created, press **Create Run** to start/run the pipeline.
-   After the pipeline run starts, you will be automatically redirected to the **Pipeline Activity** screen to monitor collection activity.
-
-   **Pipelines** is accessible from the main menu of the config-ui.
-
-   - Manage all pipelines: `http://localhost:4000/pipelines`
-   - Create a pipeline run: `http://localhost:4000/pipelines/create`
-   - Track pipeline activity: `http://localhost:4000/pipelines/activity/[RUN_ID]`
-
-   For advanced use cases and complex pipelines, use the Raw JSON API to configure tasks. Manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API endpoint.
-
-   >   ```json
-   >   [
-   >     [
-   >       {
-   >         "Plugin": "github",
-   >         "Options": {
-   >           "repo": "lake",
-   >           "owner": "merico-dev"
-   >         }
-   >       }
-   >     ]
-   >   ]
-   >   ```
-
-   Please refer to the wiki [How to trigger data collection](https://github.com/merico-dev/lake/wiki/How-to-use-the-triggers-page).
-
-10. When data collection is done, click the *View Dashboards* button in the top left of the config UI, or visit `localhost:3002` (username: `admin`, password: `admin`)
-
-    We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
-
-    All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](docs/GRAFANA.md).
-
-11. (Optional) Run the tests:
-
-    ```sh
-    make test
-    ```
-
-12. For DB migrations, please refer to the [Migration Doc](docs/MIGRATIONS.md).
-<br>
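The `DB_URL` edit in step 5 can also be scripted. A minimal sketch on a throwaway copy (the sample `DB_URL` value below is illustrative; in practice point `sed` at your real `.env`):

```shell
# Work on a temporary copy so the demo never touches a real config.
demo=/tmp/devlake-env-demo
printf 'DB_URL=mysql://merico:merico@mysql:3306/lake?charset=utf8mb4\n' > "$demo"
# Rewrite the DB host for local development (a .bak backup is kept).
sed -i.bak 's|mysql:3306|127.0.0.1:3306|' "$demo"
cat "$demo"
```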
-
-## Project roadmap
-- <a href="https://github.com/merico-dev/lake/wiki/Roadmap-2022" target="_blank">Roadmap 2022</a>: goals and detailed project roadmaps for 2022
-- Data sources already supported by DevLake:
-    - <a href="plugins/jira/README.md" target="_blank">Jira(Cloud)</a>
-    - <a href="plugins/gitextractor/README.md" target="_blank">Git</a>
-    - <a href="plugins/github/README.md" target="_blank">GitHub</a>
-    - <a href="plugins/gitlab/README.md" target="_blank">GitLab(Cloud)</a>
-    - <a href="plugins/jenkins/README.md" target="_blank">Jenkins</a>
-- <a href="https://github.com/merico-dev/lake/wiki/Metric-Cheatsheet" target="_blank">Supported metrics</a>: provide rich perspectives for observation and analysis
-
-<br>
-
-## How to contribute
-This section lists all the documents related to contributing to DevLake
-
-- [Architecture](ARCHITECTURE.md): architecture of DevLake
-- [Add a Plugin](/plugins/README.md): how to add a new plugin
-- [Add metrics](/plugins/HOW-TO-ADD-METRICS.md): how to add new metrics in a plugin
-- [Contribution guidelines](CONTRIBUTING.md): start here if you want to contribute code to DevLake
-
-<br>
-
-## Community
-
-- <a href="https://discord.com/invite/83rDG6ydVZ" target="_blank">Discord</a>: message us on Discord
-- <a href="https://github.com/merico-dev/lake/wiki/FAQ" target="_blank">FAQ</a>: frequently asked questions
-- Wechat group QR code<br>
-![](DevLake 社区群(小)活码.png)
-<br>
-
-### License<a id="license"></a>
-
-This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
diff --git a/README.md b/README.md
index 4f719da9..dcf14286 100644
--- a/README.md
+++ b/README.md
@@ -1,323 +1,90 @@
 <div align="center">
-<br />
-<img src="https://user-images.githubusercontent.com/3789273/128085813-92845abd-7c26-4fa2-9f98-928ce2246616.png" width="120px">
+<br/>
+<img src="img/logo.svg" width="120px">
+<br/>
 
-# DevLake
+# Apache DevLake (Incubating)
 
 [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat&logo=github&color=2370ff&labelColor=454545)](http://makeapullrequest.com)
-![badge](https://github.com/merico-dev/lake/actions/workflows/test.yml/badge.svg)
-[![Go Report Card](https://goreportcard.com/badge/github.com/merico-dev/lake)](https://goreportcard.com/report/github.com/merico-dev/lake)
+![badge](https://github.com/apache/incubator-devlake/actions/workflows/test.yml/badge.svg)
+[![Go Report Card](https://goreportcard.com/badge/github.com/apache/incubator-devlake)](https://goreportcard.com/report/github.com/apache/incubator-devlake)
 
 [![Slack](https://img.shields.io/badge/slack-join_chat-success.svg?logo=slack)](https://join.slack.com/t/devlake-io/shared_invite/zt-17b6vuvps-x98pqseoUagM7EAmKC82xQ)
-
-| English | [Chinese](README-zh-CN.md) |
-| --- | --- |
 </div>
 <br>
 <div align="left">
 
-### What is DevLake?
-DevLake brings your DevOps data into one practical, customized, extensible view. Ingest, analyze, and visualize data from an ever-growing list of developer tools, with our open source product.
+### What is Apache DevLake?
+Apache DevLake is an open-source dev data platform that ingests, analyzes, and visualizes the fragmented data from DevOps tools to distill insights for engineering productivity.
+
+Apache DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask Apache DevLake many questions regarding your development process. Just connect and query.
 
-DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask DevLake many questions regarding your development process. Just connect and query.
+### Demo
+See the [demo](https://grafana-lake.demo.devlake.io/d/0Rjxknc7z/demo-homepage?orgId=1). The data in the demo comes from this repo.
 
-### See [demo based on this repo](https://grafana-lake.demo.devlake.io/d/0Rjxknc7z/demo-homepage?orgId=1)
 
 #### Get started with just a few clicks
 <table>
   <tr>
-    <td valign="middle"><a href="#user-setup">Run DevLake</a></td>
+    <td valign="middle"><a href="#user-setup">Run Apache DevLake</a></td>
   </tr>
 </table>
 
-
-<br>
-
+<br/>
 
 <div align="left">
 <img src="https://user-images.githubusercontent.com/14050754/145056261-ceaf7044-f5c5-420f-80ca-54e56eb8e2a7.png" width="100%" alt="User Flow" style="border-radius:15px;"/>
 <p align="center">User Flow</p>
 
+<br/>
 
 
-### What can be accomplished with DevLake?
-1. Collect DevOps data across the entire SDLC process and connect data silos
-2. A standard <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data model</a> and out-of-the-box <a href="https://github.com/merico-dev/lake/wiki/Metric-Cheatsheet">metrics</a> for software engineering
-3. Flexible <a href="https://github.com/merico-dev/lake/blob/main/ARCHITECTURE.md">framework</a> for data collection and ETL, supporting customized analysis
-
-
-<br>
-
-### Supported data sources
-
-| Data Source | Versions                             |
-|-------------|--------------------------------------|
-| Feishu      | Cloud                                |
-| GitHub      | Cloud                                |
-| Gitlab      | Cloud, Community Edition 13.x+       |
-| Jenkins     | 2.263.x+                             |
-| Jira        | Cloud, Server 8.x+, Data Center 8.x+ |
-| TAPD        | Cloud                                |
-
-## User setup<a id="user-setup"></a>
-
-- If you only plan to run the product locally, this is the **ONLY** section you should need.
-- If you want to run in a cloud environment, click <a valign="middle" href="https://www.teamcode.com/tin/clone?applicationId=259777118600769536">
-        <img
-          src="https://static01.teamcode.com/badge/teamcode-badge-run-in-cloud-en.svg"
-          width="120px"
-          alt="Teamcode" valign="middle"
-        />
-      </a> to set up. Here is the detailed [guide](https://github.com/merico-dev/lake/wiki/How-to-Set-Up-Dev-Lake-with-Tin).
-- Commands written `like this` are to be run in your terminal.
-
-#### Prerequisites
-
-- [Docker v19.03.10+](https://docs.docker.com/get-docker)
-- [docker-compose v2.2.3+](https://docs.docker.com/compose/install/)
-
-#### Launch DevLake
-
-1. Download `docker-compose.yml` and `env.example` from the [latest release page](https://github.com/merico-dev/lake/releases/latest) into a folder.
-2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv env.example .env` in the terminal.
-3. Run `docker-compose up -d` to launch DevLake.
-
-#### Configure data connections and collect data
-
-1. Visit `config-ui` at `http://localhost:4000` in your browser to configure data connections. **For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](./docs/github-user-guide-v0.10.0.md), which covers the following steps in detail.**
-   >- Navigate to desired plugins on the Integrations page
-   >- Please reference the following for more details on how to configure each one:<br>
-      > <a href="plugins/jira/README.md" target="_blank">Jira</a><br>
-      > <a href="plugins/gitlab/README.md" target="_blank">GitLab</a><br>
-      > <a href="plugins/jenkins/README.md" target="_blank">Jenkins</a><br>
-      > <a href="plugins/github/README.md" target="_blank">GitHub</a><br>
-   >- Submit the form to update the values by clicking on the **Save Connection** button on each form page
-   >- `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
-2. Create pipelines to trigger data collection in `config-ui`
-3. Click the *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
-
-   We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
-
-   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](docs/GRAFANA.md).
-4. To synchronize data periodically, users can set up recurring pipelines with DevLake's [pipeline blueprint](./docs/recurring-pipeline.md); see that page for details.
-
-#### Upgrade to a newer version
-
-Support for database schema migration was introduced to DevLake in v0.10.0. From v0.10.0 onwards, users can smoothly upgrade their instance to a newer version. However, versions prior to v0.10.0 do not support upgrading to a newer version with a different database schema. We recommend deploying a new instance if needed.
-
-#### Deploy to Kubernetes
-
-We provide a sample [k8s-deploy.yaml](k8s-deploy.yaml) for users interested in deploying DevLake on a k8s cluster.
-
-[k8s-deploy.yaml](k8s-deploy.yaml) will create a namespace `devlake` on your k8s cluster, and use `nodePort 30004` for `config-ui` and `nodePort 30002` for `grafana` dashboards. If you would like to use a certain version of DevLake, please update the image tags of the `grafana`, `devlake`, and `config-ui` services to specify versions like `v0.10.1`.
-
-Here's the step-by-step guide:
-
-1. Download [k8s-deploy.yaml](k8s-deploy.yaml) to your local machine
-2. Some key points:
-   - `config-ui` deployment:
-     * `GRAFANA_ENDPOINT`: FQDN of the grafana service, reachable from the user's browser
-     * `DEVLAKE_ENDPOINT`: FQDN of the devlake service, reachable within the k8s cluster; normally you don't need to change it unless the namespace was changed
-     * `ADMIN_USER`/`ADMIN_PASS`: not required, but highly recommended
-   - `devlake-config` config map:
-     * `MYSQL_USER`: shared between the `mysql` and `grafana` services
-     * `MYSQL_PASSWORD`: shared between the `mysql` and `grafana` services
-     * `MYSQL_DATABASE`: shared between the `mysql` and `grafana` services
-     * `MYSQL_ROOT_PASSWORD`: sets the root password for the `mysql` service
-   - `devlake` deployment:
-     * `DB_URL`: update this value if `MYSQL_USER`, `MYSQL_PASSWORD`, or `MYSQL_DATABASE` was changed
-3. The `devlake` deployment stores its configuration in `/app/.env`. In our sample yaml, we use a `hostPath` volume, so please make sure the directory `/var/lib/devlake` exists on your k8s workers, or employ other techniques to persist the `/app/.env` file. Please do NOT mount the entire `/app` directory, because plugins are located in the `/app/bin` folder.
-4. Finally, execute the following command, and DevLake should be up and running:
-    ```sh
-    kubectl apply -f k8s-deploy.yaml
-    ```
-
-
-## Developer Setup<a id="dev-setup"></a>
-
-#### Requirements
-
-- <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
-- <a href="https://golang.org/doc/install" target="_blank">Golang v1.17+</a>
-- Make
-  - Mac (already installed)
-  - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
-  - Ubuntu: `sudo apt-get install build-essential libssl-dev`
-
-#### How to set up the dev environment
-1. Navigate to where you would like to install this project and clone the repository:
-
-   ```sh
-   git clone https://github.com/merico-dev/lake.git
-   cd lake
-   ```
-
-2. Install dependencies for plugins:
-
-   - [RefDiff](plugins/refdiff#development)
-
-3. Install Go packages
-
-    ```sh
-    go get
-    ```
-
-4. Copy the sample config file to a new local file:
-
-    ```sh
-    cp .env.example .env
-    ```
-
-5. Update the following variables in the file `.env`:
-
-    * `DB_URL`: Replace `mysql:3306` with `127.0.0.1:3306`
-
-6. Start the MySQL and Grafana containers:
+## What can be accomplished with Apache DevLake?
+1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema).
+2. Provide out-of-the-box engineering [metrics](https://devlake.apache.org/docs/EngineeringMetrics) to be visualized in a series of dashboards.
+3. Allow a flexible [framework](https://devlake.apache.org/docs/Overview/Architecture) for data collection and ETL to support customizable data analysis.
 
-    > Make sure the Docker daemon is running before this step.
 
-    ```sh
-    docker-compose up -d mysql grafana
-    ```
+## Supported Data Sources
 
-7. Run lake and config UI in dev mode in two separate terminals:
+| Data Source                                                | Domain                                                      | Versions                             |
+| ---------------------------------------------------------- | ----------------------------------------------------------- | ------------------------------------ |
+| [Feishu](https://devlake.apache.org/docs/Plugins/feishu)   | Documentation                                               | Cloud                                |
+| [GitHub](https://devlake.apache.org/docs/Plugins/github)   | Source Code Management, Code Review, Issue/Task Management   | Cloud                                |
+| [Gitlab](https://devlake.apache.org/docs/Plugins/gitlab)   | Source Code Management, Code Review, Issue/Task Management   | Cloud, Community Edition 13.x+       |
+| [Jenkins](https://devlake.apache.org/docs/Plugins/jenkins) | CI/CD                                                       | 2.263.x+                             |
+| [Jira](https://devlake.apache.org/docs/Plugins/jira)       | Issue/Task Management                                       | Cloud, Server 8.x+, Data Center 8.x+ |
+| TAPD                                                       | Issue/Task Management                                       | Cloud                                |
 
-    ```sh
-    # run lake
-    make dev
-    # run config UI
-    make configure-dev
-    ```
 
-    Q: I got an error saying: `libgit2.so.1.3: cannot open shared object file: No such file or directory`
-
-    A: Make sure your program can find `libgit2.so.1.3`. If your `libgit2.so.1.3` is located at `/usr/local/lib`, `LD_LIBRARY_PATH` can be assigned like this:
-
-    ```sh
-    export LD_LIBRARY_PATH=/usr/local/lib
-    ```
-
-8. Visit the config UI at `localhost:4000` to configure data connections.
-   >- Navigate to desired plugin pages on the Integrations page
-   >- You will need to enter the required information for the plugins you intend to use.
-   >- Please reference the following for more details on how to configure each one:
-   >-> <a href="plugins/jira/README.md" target="_blank">Jira</a>
-   >-> <a href="plugins/gitlab/README.md" target="_blank">GitLab</a>
-   >-> <a href="plugins/jenkins/README.md" target="_blank">Jenkins</a>
-   >-> <a href="plugins/github/README.md" target="_blank">GitHub</a>
-
-   >- Submit the form to update the values by clicking on the **Save Connection** button on each form page
-
-9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
-
-   Pipeline runs can be initiated from the new "Create Run" interface. Simply enable the **Data Connection Providers** you wish to run collection for, and specify the data you want to collect, for instance, **Project ID** for Gitlab and **Repository Name** for GitHub.
-
-   Once a valid pipeline configuration has been created, press **Create Run** to start/run the pipeline.
-   After the pipeline starts, you will be automatically redirected to the **Pipeline Activity** screen to monitor collection activity.
-
-   **Pipelines** is accessible from the main menu of the config-ui for easy access.
-
-   - Manage All Pipelines: `http://localhost:4000/pipelines`
-   - Create Pipeline RUN: `http://localhost:4000/pipelines/create`
-   - Track Pipeline Activity: `http://localhost:4000/pipelines/activity/[RUN_ID]`
-
-   For advanced use cases and complex pipelines, please use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API Endpoint.
-
-    ```json
-    [
-        [
-            {
-                "plugin": "github",
-                "options": {
-                    "repo": "lake",
-                    "owner": "merico-dev"
-                }
-            }
-        ]
-    ]
-    ```
-
-   Please refer to [Pipeline Advanced Mode](docs/create-pipeline-advanced-mode.md) for an in-depth explanation.
-
-
-10. Click the *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
-
-    We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
-
-    All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](docs/GRAFANA.md).
-
-
-11. (Optional) To run the tests:
-
-    ```sh
-    make test
-    ```
-
-12. For DB migrations, please refer to [Migration Doc](docs/MIGRATIONS.md).
-<br>
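The Raw JSON request in step 9 can be scripted with cURL. A sketch, shown as a dry run; the endpoint path `/pipelines` and the port are assumptions, so check them against your deployment before removing the `echo`:

```shell
DEVLAKE_ENDPOINT="http://localhost:8080"   # assumed address of the DevLake API
PAYLOAD='[[{"plugin":"github","options":{"repo":"lake","owner":"merico-dev"}}]]'
# Dry run: print the request instead of sending it; drop 'echo' to really POST.
echo curl -sS -X POST "$DEVLAKE_ENDPOINT/pipelines" \
  -H 'Content-Type: application/json' \
  -d "$PAYLOAD"
```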
-
-
-## Temporal Mode
-
-Normally, DevLake executes pipelines on the local machine (we call it `local mode`), which is sufficient most of the time. However, when you have too many pipelines that need to be executed in parallel, it can become problematic, limited by either the horsepower or the throughput of a single machine.
-
-`temporal mode` was added to support distributed pipeline execution; you can fire up arbitrary workers on multiple machines to carry out those pipelines in parallel without hitting single-machine limitations.
-
-But be careful: many API services like JIRA/GITHUB have request rate limits, so collecting data in parallel against the same API service with the same identity would most likely hit the wall.
-
-### How it works
-
-1. The DevLake server and workers connect to the same temporal server by setting `TEMPORAL_URL`
-2. The DevLake server sends a `pipeline` to the temporal server, and one of the workers picks it up and executes it
-
-
-**IMPORTANT: This feature is in an early stage of development. Use with caution.**
-
-
-### Temporal Demo
-
-#### Requirements
-
-- [Docker](https://docs.docker.com/get-docker)
-- [docker-compose](https://docs.docker.com/compose/install/)
-- [temporalio](https://temporal.io/)
-
-#### How to set up
-
-1. Clone and fire up [temporalio](https://temporal.io/) services
-2. Clone this repo, and fire up DevLake with the command `docker-compose -f docker-compose-temporal.yml up -d`
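As a sketch of the wiring described under "How it works": both the DevLake server and each worker read the same `TEMPORAL_URL` (the variable name comes from this section; the address below is illustrative):

```
# .env fragment shared by the DevLake server and all workers
TEMPORAL_URL=temporal:7233
```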
+## Quick Start
+- [Deploy Locally](https://devlake.apache.org/docs/QuickStart/LocalSetup)
+- [Deploy to Kubernetes](https://devlake.apache.org/docs/QuickStart/KubernetesSetup)
+- [Deploy in Temporal Mode](https://devlake.apache.org/docs/QuickStart/TemporalSetup)
+- [Deploy in Developer Mode](https://devlake.apache.org/docs/QuickStart/DeveloperSetup)
 
 
 ## Project Roadmap
-- <a href="https://github.com/merico-dev/lake/wiki/Roadmap-2022" target="_blank">Roadmap 2022</a>: Detailed project roadmaps for 2022.
-- DevLake already supports the following data sources:
-    - <a href="plugins/jira/README.md" target="_blank">Jira(Cloud)</a>
-    - <a href="plugins/gitextractor/README.md" target="_blank">Git</a>
-    - <a href="plugins/github/README.md" target="_blank">GitHub</a>
-    - <a href="plugins/gitlab/README.md" target="_blank">GitLab(Cloud)</a>
-    - <a href="plugins/jenkins/README.md" target="_blank">Jenkins</a>
-- <a href="https://github.com/merico-dev/lake/wiki/Metric-Cheatsheet" target="_blank">Supported engineering metrics</a>: provide rich perspectives to observe and analyze SDLC.
+- <a href="https://devlake.apache.org/docs/Overview/Roadmap" target="_blank">Roadmap 2022</a>: Detailed project roadmaps for 2022.
+- <a href="https://devlake.apache.org/docs/EngineeringMetrics" target="_blank">Supported engineering metrics</a>: provide rich perspectives to observe and analyze the SDLC.
 
-<br>
 
 ## How to Contribute
 This section lists all the documents to help you contribute to the repo.
 
-- [Architecture](ARCHITECTURE.md): Architecture of DevLake
-- [Data Model](https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema): Domain Layer Schema
+- [Architecture](https://devlake.apache.org/docs/Overview/Architecture): Architecture of Apache DevLake
+- [Data Model](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema): Domain Layer Schema
 - [Add a Plugin](/plugins/README.md): Guide to add a plugin
 - [Add metrics](/plugins/HOW-TO-ADD-METRICS.md): Guide to add metrics in a plugin
 - [Contribution guidelines](CONTRIBUTING.md): Start from here if you want to make a contribution
 
-<br>
 
 ## Community
 
 - <a href="https://join.slack.com/t/devlake-io/shared_invite/zt-17b6vuvps-x98pqseoUagM7EAmKC82xQ" target="_blank">Slack</a>: Message us on Slack
-- <a href="https://github.com/merico-dev/lake/wiki/FAQ"; 
target="_blank">FAQ</a>: Frequently Asked Questions
-- Wechat Group QR Code<br>
-![](DevLake 社区群(小)活码.png)
-<br>
+- <a href="https://github.com/apache/incubator-devlake/wiki/FAQ" target="_blank">FAQ</a>: Frequently Asked Questions
+- Wechat Community:<br>
+  ![](img/wechat_community_barcode.png)
+
 
 ## License<a id="license"></a>
 
diff --git a/docs/GRAFANA.md b/docs/GRAFANA.md
deleted file mode 100644
index f6ebf92d..00000000
--- a/docs/GRAFANA.md
+++ /dev/null
@@ -1,112 +0,0 @@
-# Grafana
-
-<img 
src="https://user-images.githubusercontent.com/3789273/128533901-3107e9bf-c3e3-4320-ba47-879fe2b0ea4d.png";
 width="450px" />
-
-When you first visit Grafana, you will see a sample dashboard with some basic charts built from the database.
-
-## Contents
-
-Section | Link
-:------------ | :-------------
-Logging In | [View Section](#logging-in)
-Viewing All Dashboards | [View Section](#viewing-all-dashboards)
-Customizing a Dashboard | [View Section](#customizing-a-dashboard)
-Dashboard Settings | [View Section](#dashboard-settings)
-Provisioning a Dashboard | [View Section](#provisioning-a-dashboard)
-Troubleshooting DB Connection | [View Section](#troubleshooting-db-connection)
-
-## Logging In<a id="logging-in"></a>
-
-Once the app is up and running, visit `http://localhost:3002` to view the 
Grafana dashboard.
-
-Default login credentials are:
-
-- Username: `admin`
-- Password: `admin`
-
-## Viewing All Dashboards<a id="viewing-all-dashboards"></a>
-
-To see all dashboards created in Grafana visit `/dashboards`
-
-Or, use the sidebar and click on **Manage**:
-
-![Screen Shot 2021-08-06 at 11 27 08 
AM](https://user-images.githubusercontent.com/3789273/128534617-1992c080-9385-49d5-b30f-be5c96d5142a.png)
-
-
-## Customizing a Dashboard<a id="customizing-a-dashboard"></a>
-
-When viewing a dashboard, click the top bar of a panel, and go to **edit**
-
-![Screen Shot 2021-08-06 at 11 35 36 
AM](https://user-images.githubusercontent.com/3789273/128535505-a56162e0-72ad-46ac-8a94-70f1c7a910ed.png)
-
-**Edit Dashboard Panel Page:**
-
-![grafana-sections](https://user-images.githubusercontent.com/3789273/128540136-ba36ee2f-a544-4558-8282-84a7cb9df27a.png)
-
-### 1. Preview Area
-- **Top Left** is the variable select area (custom dashboard variables, used for switching projects or grouping data)
-- **Top Right** is a toolbar with some buttons related to the display of the data:
-  - View data results in a table
-  - Time range selector
-  - Refresh data button
-- **The Main Area** displays the chart and should update in real time
-
-> Note: Data should refresh automatically, but may require a refresh using the button in some cases
-
-### 2. Query Builder
-Here we build the SQL query that pulls data from our database into the chart
-- Ensure the **Data Source** is the correct database
-
-  ![Screen Shot 2021-08-06 at 10 14 22 
AM](https://user-images.githubusercontent.com/3789273/128545278-be4846e0-852d-4bc8-8994-e99b79831d8c.png)
-
-- Use the **Format as Table** and **Edit SQL** buttons to write/edit queries as SQL
-
-  ![Screen Shot 2021-08-06 at 10 17 52 
AM](https://user-images.githubusercontent.com/3789273/128545197-a9ff9cb3-f12d-4331-bf6a-39035043667a.png)
-
-- The **Main Area** is where the queries are written, and in the top right is 
the **Query Inspector** button (to inspect returned data)
-
-  ![Screen Shot 2021-08-06 at 10 18 23 
AM](https://user-images.githubusercontent.com/3789273/128545557-ead5312a-e835-4c59-b9ca-dd5c08f2a38b.png)
-
-### 3. Main Panel Toolbar
-In the top right of the window are buttons for:
-- Dashboard settings (regarding entire dashboard)
-- Save/apply changes (to specific panel)
-
-### 4. Grafana Parameter Sidebar
-- Change chart style (bar/line/pie chart etc)
-- Edit legends, chart parameters
-- Modify chart styling
-- Other Grafana specific settings
-
-## Dashboard Settings<a id="dashboard-settings"></a>
-
-When viewing a dashboard, click the settings icon to open the dashboard settings. There are two important sections to use here:
-
-![Screen Shot 2021-08-06 at 1 51 14 
PM](https://user-images.githubusercontent.com/3789273/128555763-4d0370c2-bd4d-4462-ae7e-4b140c4e8c34.png)
-
-- Variables
-  - Create variables, also built on SQL queries, to use throughout the dashboard panels
-
-  ![Screen Shot 2021-08-06 at 2 02 40 
PM](https://user-images.githubusercontent.com/3789273/128553157-a8e33042-faba-4db4-97db-02a29036e27c.png)
-
-- JSON Model
-  - Copy the `json` code here and save it to a new file with a unique name under `/grafana/dashboards/` in the `lake` repo. This allows dashboards to persist when we load the app
-
-  ![Screen Shot 2021-08-06 at 2 02 52 
PM](https://user-images.githubusercontent.com/3789273/128553176-65a5ae43-742f-4abf-9c60-04722033339e.png)
-
-## Provisioning a Dashboard<a id="provisioning-a-dashboard"></a>
-
-To save a dashboard in the `lake` repo and load it:
-
-1. Create a dashboard in browser (visit `/dashboard/new`, or use sidebar)
-2. Save dashboard (in top right of screen)
-3. Go to dashboard settings (in top right of screen)
-4. Click on _JSON Model_ in sidebar
-5. Copy code into a new `.json` file in `/grafana/dashboards`
-
-## Troubleshooting DB Connection<a id="troubleshooting-db-connection"></a>
-
-To ensure we have properly connected our database to the data source in 
Grafana, check database settings in `./grafana/datasources/datasource.yml`, 
specifically:
-- `database`
-- `user`
-- `secureJsonData/password`
diff --git a/docs/MIGRATIONS.md b/docs/MIGRATIONS.md
deleted file mode 100644
index ccdf238e..00000000
--- a/docs/MIGRATIONS.md
+++ /dev/null
@@ -1,30 +0,0 @@
-# Migrations (Database)
-
-## Summary
-Starting in v0.10.0, DevLake provides a lightweight migration tool for executing migration scripts.
-Both the framework itself and plugins define their migration scripts in their own migration folders.
-The migration scripts are written in Go with GORM to support different SQL dialects.
-
-
-## Migration script
-A migration script describes how to perform a database migration.
-Each script implements the `Script` interface.
-When DevLake starts, scripts register themselves with the framework by invoking the `Register` function:
-
-```go
-type Script interface {
-       Up(ctx context.Context, db *gorm.DB) error
-       Version() uint64
-       Name() string
-}
-```
-
-## Table `migration_history`
-
-This table tracks migration script execution and schema changes, from which DevLake can determine the current state of the database schema.
-
-## How it Works
-1. Check the `migration_history` table and determine which migration scripts need to be executed.
-2. Sort the scripts by Version in ascending order.
-3. Execute the scripts.
-4. Save the results in the `migration_history` table.
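The steps above can be sketched in Go as follows. This is a simplified sketch: the `executed` set stands in for the `migration_history` table, and `Script` is trimmed to just what the runner needs:

```go
package main

import (
	"fmt"
	"sort"
)

// Script is trimmed down to what the runner below needs.
type Script interface {
	Version() uint64
	Name() string
	Up() error
}

// migrate runs every script not yet recorded in `executed`
// (standing in for the migration_history table), in Version order.
func migrate(scripts []Script, executed map[uint64]bool) ([]uint64, error) {
	// 1. figure out which scripts still need to run
	var pending []Script
	for _, s := range scripts {
		if !executed[s.Version()] {
			pending = append(pending, s)
		}
	}
	// 2. sort by Version in ascending order
	sort.Slice(pending, func(i, j int) bool {
		return pending[i].Version() < pending[j].Version()
	})
	// 3. execute, and 4. record each result
	var ran []uint64
	for _, s := range pending {
		if err := s.Up(); err != nil {
			return ran, fmt.Errorf("migration %q failed: %w", s.Name(), err)
		}
		executed[s.Version()] = true
		ran = append(ran, s.Version())
	}
	return ran, nil
}

// noop is a do-nothing script used to exercise the runner.
type noop struct{ v uint64 }

func (n noop) Version() uint64 { return n.v }
func (n noop) Name() string    { return fmt.Sprintf("noop-%d", n.v) }
func (n noop) Up() error       { return nil }

func main() {
	// version 1 was already executed, so only 2 and 3 run, in order
	ran, _ := migrate([]Script{noop{3}, noop{1}, noop{2}}, map[uint64]bool{1: true})
	fmt.Println(ran)
}
```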
diff --git a/docs/NOTIFICATION.md b/docs/NOTIFICATION.md
deleted file mode 100644
index 50011499..00000000
--- a/docs/NOTIFICATION.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# Notification
-
-
-## Request
-example request
-```
-POST 
/lake/notify?nouce=3-FDXxIootApWxEVtz&sign=424c2f6159bd9e9828924a53f9911059433dc14328a031e91f9802f062b495d5
-
-{"TaskID":39,"PluginName":"jenkins","CreatedAt":"2021-09-30T15:28:00.389+08:00","UpdatedAt":"2021-09-30T15:28:00.785+08:00"}
-```
-
-## Configuration
-If you want to use the notification feature, you need to add two configuration keys to the `.env` file.
-```shell
-# .env
-# endpoint is the notification request url, eg: http://example.com/lake/notify
-NOTIFICATION_ENDPOINT=
-# secret is used to calculate the signature
-NOTIFICATION_SECRET=
-```
-
-## Signature
-You should check the signature before accepting the notification request. We use the SHA-256 algorithm to calculate the checksum.
-```go
-// calculate checksum
-sum := sha256.Sum256([]byte(requestBody + NOTIFICATION_SECRET + nouce))
-return hex.EncodeToString(sum[:])
-```
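Assuming the endpoint receives the raw request body plus the `nouce` and `sign` query parameters, the verification might look like this sketch (the function names are hypothetical; the checksum formula is the one shown above):

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// signature reproduces the checksum from the snippet above:
// sha256(requestBody + secret + nouce), hex encoded.
func signature(requestBody, secret, nouce string) string {
	sum := sha256.Sum256([]byte(requestBody + secret + nouce))
	return hex.EncodeToString(sum[:])
}

// verify recomputes the signature and compares it in constant time.
func verify(requestBody, secret, nouce, sign string) bool {
	expected := signature(requestBody, secret, nouce)
	return subtle.ConstantTimeCompare([]byte(expected), []byte(sign)) == 1
}

func main() {
	body := `{"TaskID":39,"PluginName":"jenkins"}`
	sign := signature(body, "my-secret", "3-FDXxIootApWxEVtz")
	fmt.Println(len(sign) == 64, verify(body, "my-secret", "3-FDXxIootApWxEVtz", sign)) // true true
}
```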
diff --git a/docs/create-pipeline-advanced-mode.md 
b/docs/create-pipeline-advanced-mode.md
deleted file mode 100644
index d0ad3f2d..00000000
--- a/docs/create-pipeline-advanced-mode.md
+++ /dev/null
@@ -1,81 +0,0 @@
-## Why advanced mode?
-
-Advanced mode allows users to create any pipeline by writing JSON. This is 
most useful for users who'd like to:
-
-1. Collect multiple GitHub/GitLab repos or Jira projects within a single 
pipeline
-2. Have fine-grained control over what entities to collect or what subtasks to 
run for each plugin
-3. Orchestrate a complex pipeline that consists of multiple stages of plugins.
-
-Advanced mode gives users the most flexibility by exposing the JSON API.
-
-## How to use advanced mode to create pipelines?
-
-1. Visit the "Create Pipeline Run" page on `config-ui`
-
-![image](https://user-images.githubusercontent.com/2908155/164569669-698da2f2-47c1-457b-b7da-39dfa7963e09.png)
-
-2. Scroll to the bottom and toggle on the "Advanced Mode" button
-
-![image](https://user-images.githubusercontent.com/2908155/164570039-befb86e2-c400-48fe-8867-da44654194bd.png)
-
-3. The pipeline editor expects a 2D array of plugins. The first dimension represents the different stages of the pipeline and the second dimension describes the plugins in each stage. Stages run in sequential order and plugins within the same stage run in parallel. We provide some templates to help users get started. Please also see the next section for some examples.
-
-![image](https://user-images.githubusercontent.com/2908155/164576122-fc015fea-ca4a-48f2-b2f5-6f1fae1ab73c.png)
-
-## Examples
-
-1. Collect multiple GitLab repos sequentially. 
-
->When there are multiple collection tasks against a single data source, we recommend running these tasks sequentially, since the collection speed is mostly limited by the API rate limit of the data source.
->Running multiple tasks against the same data source is unlikely to speed up the process and may overwhelm the data source.
-
-
-Below is an example of collecting 2 GitLab repos sequentially. It has 2 stages, each containing a GitLab task.
-
-
-```
-[
-  [
-    {
-      "Plugin": "gitlab",
-      "Options": {
-        "projectId": 15238074
-      }
-    }
-  ],
-  [
-    {
-      "Plugin": "gitlab",
-      "Options": {
-        "projectId": 11624398
-      }
-    }
-  ]
-]
-```
-
-
-2. Collect a GitHub repo and a Jira board in parallel
-
-Below is an example of collecting a GitHub repo and a Jira board in parallel. It has a single stage with a GitHub task and a Jira task. Since users can configure multiple Jira connections, a `connectionId` is required for the Jira task to specify which connection to use.
-
-```
-[
-  [
-    {
-      "Plugin": "github",
-      "Options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    },
-    {
-      "Plugin": "jira",
-      "Options": {
-        "connectionId": 1,
-        "boardId": 76
-      }
-    }
-  ]
-]
-```
diff --git a/docs/github-user-guide-v0.10.0.md 
b/docs/github-user-guide-v0.10.0.md
deleted file mode 100644
index 56272229..00000000
--- a/docs/github-user-guide-v0.10.0.md
+++ /dev/null
@@ -1,113 +0,0 @@
-## Summary
-
-GitHub has a rate limit of 2,000 API calls per hour for their REST API.
-As a result, it may take hours to collect commit data from the GitHub API for a repo that has 10,000+ commits.
-To accelerate the process, DevLake introduces GitExtractor, a new plugin that 
collects git data by cloning the git repo instead of by calling GitHub APIs.
-
-Starting from v0.10.0, DevLake will collect GitHub data in 2 separate plugins: 
-
-- GitHub plugin (via GitHub API): collect repos, issues, pull requests
-- GitExtractor (via cloning repos):  collect commits, refs
-
-Note that the GitLab plugin still collects commits via the API by default, since GitLab has a much higher API rate limit.
-
-This doc details the process of collecting GitHub data in v0.10.0. We're 
working on simplifying this process in the next releases.
-
-Before starting, please make sure all services are up and running.
-
-## GitHub Data Collection Procedure
-
-There are 4 steps, the last of which is optional.
-
-1. Configure GitHub connection
-2. Create a pipeline to run GitHub plugin
-3. Create a pipeline to run GitExtractor plugin
-4. [Optional] Set up a recurring pipeline to keep data fresh
-
-### Step 1 - Configure GitHub connection
-
-1. Visit `config-ui` at `http://localhost:4000`, click the GitHub icon
-
-2. Click the default connection 'Github' in the list
-    
![image](https://user-images.githubusercontent.com/14050754/163591959-11d83216-057b-429f-bb35-a9d845b3de5a.png)
-    
-3. Configure connection by providing your GitHub API endpoint URL and your 
personal access token(s).
-    
![image](https://user-images.githubusercontent.com/14050754/163592015-b3294437-ce39-45d6-adf6-293e620d3942.png)
-
-    > Endpoint URL: Leave this unchanged if you're using github.com. Otherwise 
replace it with your own GitHub instance's REST API endpoint URL. This URL 
should end with '/'.
-    >
-    > Auth Token(s): Fill in your personal access token(s). For how to generate personal access tokens, please see GitHub's [official documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
-    > You can provide multiple tokens to speed up the data collection process; simply concatenate the tokens with commas.
-    >
-    > GitHub Proxy URL: This is optional. Enter a valid proxy server address on your network, e.g. http://your-proxy-server.com:1080
-    
-4. Click 'Test Connection' to verify it works, then click 'Save Connection'.
-
-5. [Optional] Help DevLake understand your GitHub data by customizing data 
enrichment rules shown below.
-    
![image](https://user-images.githubusercontent.com/14050754/163592506-1873bdd1-53cb-413b-a528-7bda440d07c5.png)
-   
-   1. Pull Request Enrichment Options
-   
-      1. `Type`: PRs whose label matches the given regular expression will have their `type` property set to the value of the first submatch. For example, with Type set to `type/(.*)$`, a PR labeled `type/bug` gets its `type` set to `bug`, and one labeled `type/doc` gets `doc`.
-      2. `Component`: Same as above, but for `component` property.
-   
-   2. Issue Enrichment Options
-   
-      1. `Severity`: Same as above, but for `issue.severity`.
-   
-      2. `Component`: Same as above.
-   
-      3. `Priority`: Same as above.
-   
-      4. **Requirement**: Issues whose label matches the given regular expression will have their `type` property set to `REQUIREMENT`. Unlike `PR.type`, the submatch is ignored, because issue management analysis tends to focus on 3 types (Requirement/Bug/Incident). Since the concrete naming varies from repo to repo and over time, we standardize these types to help analysts build general-purpose metrics.
-   
-      5. **Bug**: Same as above, with `type` set to `BUG`
-   
-      6. **Incident**: Same as above, with `type` set to `INCIDENT`
-   
-6. Click 'Save Settings'
-
-### Step 2 - Create a pipeline to collect GitHub data
-
-1. Select 'Pipelines > Create Pipeline Run' from `config-ui`
-
-![image](https://user-images.githubusercontent.com/14050754/163592542-8b9d86ae-4f16-492c-8f90-12f1e90c5772.png)
-
-2. Toggle on GitHub plugin, enter the repo you'd like to collect data from.
-
-![image](https://user-images.githubusercontent.com/14050754/163592606-92141c7e-e820-4644-b2c9-49aa44f10871.png)
-
-3. Click 'Run Pipeline'
-
-You'll be redirected to the newly created pipeline:
-
-![image](https://user-images.githubusercontent.com/14050754/163592677-268e6b77-db3f-4eec-8a0e-ced282f5a361.png)
-
-
-Wait for the pipeline to finish (progress 100%):
-
-![image](https://user-images.githubusercontent.com/14050754/163592709-cce0d502-92e9-4c19-8504-6eb521b76169.png)
-
-### Step 3 - Create a pipeline to run GitExtractor plugin
-
-1. Enable the `GitExtractor` plugin, enter your `Git URL`, and select the `Repository ID` from the dropdown menu.
-
-![image](https://user-images.githubusercontent.com/2908155/164125950-37822d7f-6ee3-425d-8523-6f6b6213cb89.png)
-
-2. Click 'Run Pipeline' and wait until it's finished.
-
-3. Click `View Dashboards` on the top left corner of `config-ui`
-
-![image](https://user-images.githubusercontent.com/61080/163666814-e48ac68d-a0cc-4413-bed7-ba123dd291c8.png)
-
-4. See dashboards populated with GitHub data.
-
-### Step 4 - [Optional] Set up a recurring pipeline to keep data fresh
-
-Please see [How to create recurring pipelines](./recurring-pipeline.md) for 
details.
-
-
-
-
-
-
diff --git a/docs/godoc.md b/docs/godoc.md
deleted file mode 100644
index 72f4dda4..00000000
--- a/docs/godoc.md
+++ /dev/null
@@ -1,10 +0,0 @@
-# How to preview document on local machine
-
-1. Install `godoc` by running command `go install 
golang.org/x/tools/cmd/godoc@latest`
-2. Go to project root folder, and run `godoc`
-3. Open `http://localhost:6060` in your browser
-4. Search for the package you need (e.g. "testhelper"; there may be more than one result, so make sure you pick the right one)
-
-# References
-
-- [godoc-tricks - a great resource on how to write godoc](https://pkg.go.dev/github.com/fluhus/godoc-tricks)
diff --git a/docs/recurring-pipeline.md b/docs/recurring-pipeline.md
deleted file mode 100644
index fe1dbfb8..00000000
--- a/docs/recurring-pipeline.md
+++ /dev/null
@@ -1,23 +0,0 @@
-## How to create recurring pipelines?
-
-Once you've verified that a pipeline works well, you'll most likely want to run it periodically to keep data fresh, and DevLake's pipeline blueprint feature has you covered.
-
-
-1. Click 'Create Pipeline Run' and 
-  - Toggle the plugins you'd like to run, here we use GitHub and GitExtractor 
plugin as an example
-  - Toggle on Automate Pipeline
-    
![image](https://user-images.githubusercontent.com/14050754/163596590-484e4300-b17e-4119-9818-52463c10b889.png)
-
-
-2. Click 'Add Blueprint'. Fill in the form and 'Save Blueprint'.
-    
-    - **NOTE**: The schedule syntax is standard unix cron syntax; [Crontab.guru](https://crontab.guru/) can be a useful reference
-    - **IMPORTANT**: The scheduler runs in the `UTC` timezone. If you prefer data collection to happen at 3am New York time (UTC-04:00) every day, use **Custom Schedule** and set it to `0 7 * * *`
-    
-    
![image](https://user-images.githubusercontent.com/14050754/163596655-db59e154-405f-4739-89f2-7dceab7341fe.png)
-    
-3. Click 'Save Blueprint'.
-    
-4. Click 'Pipeline Blueprints', you can view and edit the new blueprint in the 
blueprint list.
-    
-    
![image](https://user-images.githubusercontent.com/14050754/163596773-4fb4237e-e3f2-4aef-993f-8a1499ca30e2.png)
\ No newline at end of file
diff --git a/img/logo.svg b/img/logo.svg
new file mode 100644
index 00000000..9c233357
--- /dev/null
+++ b/img/logo.svg
@@ -0,0 +1,7 @@
+<svg width="86" height="64" viewBox="0 0 86 64" fill="none" xmlns="http://www.w3.org/2000/svg">
+<circle cx="32" cy="32" r="30" fill="#7497F7"/>
+<path d="M47.4455 37.3015C45.5675 37.3452 43.843 37.2221 42.2473 
36.9688C44.0458 35.348 45.1034 33.118 44.9725 30.7079C44.8485 28.4315 43.6852 
26.4196 41.9002 24.9929C44.2929 23.9743 47.0645 22.3845 48.4968 20.1165C50.1586 
17.4848 49.1404 15.266 48.3264 14.1222C48.0011 13.6649 47.2356 13.6227 46.7632 
14.0385L36.7274 22.8706C36.1965 22.8065 35.6521 22.7812 35.0987 22.7953C36.7816 
20.8059 38.4364 18.1876 38.8842 15.2449C39.4673 11.4145 37.4618 9.55034 36.1296 
8.7526C35.5972 8.43393 34.7895 [...]
+<path d="M31.2926 29.0562C31.2926 30.3224 32.32 31.3481 33.5868 
31.3481C34.8543 31.3481 35.8809 30.3224 35.8809 29.0562C35.8809 27.7899 34.8543 
26.7643 33.5868 26.7643C32.3193 26.7643 31.2926 27.7899 31.2926 29.0562Z" 
fill="#FF8B8B"/>
+<path fill-rule="evenodd" clip-rule="evenodd" d="M8.77372 50.9893C7.5519 
49.4966 6.47113 47.884 5.55176 46.1719C9.74951 44.9284 15.4211 43.7714 21.5836 
43.7714C26.244 43.7714 29.7677 44.6397 33.1977 45.488L33.2629 45.5041C36.6471 
46.3411 40.0078 47.1723 44.7083 47.3812C49.9395 47.6137 54.8697 46.4715 58.8981 
45.3002C57.6704 47.7782 56.1104 50.0623 54.2764 52.0943C51.2519 52.6689 47.9161 
53.0352 44.4638 52.8818C39.2151 52.6486 35.4146 51.7084 32.0158 50.8676L31.8757 
50.833C28.4649 49.9894 [...]
+<path d="M73.624 8.528V18.128H75.16V11.144H75.232L78.256 18.128H79.552L82.576 
11.144H82.648V18.128H84.184V8.528H82.288L78.952 16.112H78.88L75.52 
8.528H73.624ZM64.528 8.528V9.944H67.84V18.128H69.4V9.944H72.712V8.528H64.528Z" 
fill="#7497F7"/>
+</svg>
diff --git "a/DevLake 
\347\244\276\345\214\272\347\276\244\357\274\210\345\260\217\357\274\211\346\264\273\347\240\201.png"
 b/img/wechat_community_barcode.png
similarity index 100%
rename from "DevLake 
\347\244\276\345\214\272\347\276\244\357\274\210\345\260\217\357\274\211\346\264\273\347\240\201.png"
rename to img/wechat_community_barcode.png
diff --git a/plugins/README-zh-CN.md b/plugins/README-zh-CN.md
deleted file mode 100644
index 4f8c3fbf..00000000
--- a/plugins/README-zh-CN.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# 听说你想建立一个新的插件...
-
-...好消息是,这很容易!
-
-
-## 基本写法
-
-```golang
-type YourPlugin string
-
-func (plugin YourPlugin) Description() string {
-       return "To collect and enrich data from YourPlugin"
-}
-
-func (plugin YourPlugin) Execute(options map[string]interface{}, progress 
chan<- float32) {
-       logger.Print("Starting YourPlugin execution...")
-
-  // 检查选项中需要的字段
-       projectId, ok := options["projectId"]
-       if !ok {
-               logger.Print("projectId is required for YourPlugin execution")
-               return
-       }
-
-  // 开始收集
-  if err := tasks.CollectProject(projectId); err != nil {
-               logger.Error("Could not collect projects: ", err)
-               return
-       }
-  // 处理错误
-  if err != nil {
-    logger.Error(err)
-  }
-}
-
-// 导出一个名为 PluginEntry 的变量供 Framework 搜索和加载
-var PluginEntry YourPlugin //nolint
-```
-
-## 概要
-
-要建立一个新的插件,你将需要做下列事项。你应该选择一个你想看的数据的 API。首先考虑你想看到的指标,然后寻找能够支持这些指标的数据。
-
-## 收集(Collection)
-
-然后你要写一个 `Collection` 来收集数据。你需要阅读一些 API 文档,弄清楚你想在最后的 Grafana 
仪表盘中看到哪些指标(配置Grafana是最后一步)。
-
-## 构建一个 `Fetcher` 来执行请求
-
-Plugins/core文件夹包含一个 API 客户端,你可以在自己的插件中实现。它有一些方法,比如Get()。<br>
-每个API处理分页的方式不同,所以你可能需要实现一个 "带分页的获取 "方法。有一种方法是使用 "ant" 
包作为管理并发任务的方法:https://github.com/panjf2000/ants
-
-你的 collection 方法可能看起来像这样:
-
-```golang
-func Collect() error {
-       pluginApiClient := CreateApiClient()
-
-       return pluginApiClient.FetchWithPagination("<your_api_url>",
-               func(res *http.Response) error {
-                       pluginApiResponse := &ApiResponse{}
-      // 你必须解除对api的响应,才能使用这些结果
-                       err := helper.UnmarshalResponse(res, pluginApiResponse)
-                       if err != nil {
-                               logger.Error("Error: ", err)
-                               return nil
-                       }
-      // 将获取到的数据保存到数据库中
-                       for _, value := range *pluginApiResponse {
-                               pluginModel := &models.pluginModel{
-                                       pluginId:       value.pluginId,
-                                       Title:          value.Title,
-                                       Message:        value.Message,
-                               }
-
-                               err = lakeModels.Db.Clauses(clause.OnConflict{
-                                       UpdateAll: true,
-                               }).Create(&pluginModel).Error
-
-                               if err != nil {
-                                       logger.Error("Could not upsert: ", err)
-                               }
-                       }
-
-                       return nil
-               })
-}
-```
-
-请注意 "upsert" 的使用。这对于只保存修改过的记录是很有用的。
-
-## 数据处理(Enrichment)
-  
-一旦你通过 API 收集了数据,你可能想通过以下方式来对这些数据做 ETL。比如:
-
-  - 添加你目前没有的字段
-  - 计算你可能需要的指标字段
-  - 消除你不需要的字段
-
-## 你已经完成了!
-
-祝贺你! 你已经创建了你的第一个插件! 🎖
diff --git a/plugins/ae/README.md b/plugins/ae/README.md
deleted file mode 100644
index a744d98d..00000000
--- a/plugins/ae/README.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# Merico Analysis Engine (AE)
-
-THIS PLUGIN IS ONLY FOR MERICO EMPLOYEES AT THIS TIME. SOON IT WILL BE MADE 
PUBLIC.
-
-## Important notes
-
-### Some data looks like it is missing...
-
-The commit data is stored in Trino. The files can be deleted over time by the MinIO expiration strategy if they are too old.
-
-### How do I trigger analysis on my project?
-
-Add DevLake to the Merico Enterprise Edition and trigger an analysis. You can find this item by searching for "ae staging". You can log in to the AE staging server and restart an analysis of DevLake. (Login credentials for Merico employees are stored in 1Password.)
-
-### Who controls the api for merico analysis engine?
-
-Jingyang Liang and the Merico AE team
-
-### How do I authenticate and why?
-
-The AE API uses a scheme that shares ideas with [HTTP MAC authorization](https://tools.ietf.org/id/draft-ietf-oauth-v2-http-mac-02.html#rfc.section.1.1)
-
-This scheme prevents **replay attacks** while avoiding API server cache blow-up. It is required because AE will most likely be deployed without HTTPS, where replay attacks are expected. Keep in mind that this does not prevent eavesdropping attacks, which can only be solved by HTTPS (in a RESTful API context).
-
-1. `nonce`: a random string identifying a unique request; any API request whose `nonce` already exists in the API server's cache will be rejected. Using only a `nonce` can prevent replay attacks, but would eventually blow up the API server's cache.
-2. `timestamp`: to keep the API server's cache from filling up with `nonce` strings, a `timestamp` is required; any API request whose timestamp does not match the server's current time (within a small tolerance, e.g. 3 minutes) is rejected immediately, so the server can safely evict expired `nonce` values (normally after double the tolerance period). Why not use `timestamp` alone? The resolution of a unix timestamp is 1 second, so without a `nonce` the API server's QPS would be throttled to 1/s.
-3. `secret_key`: to authenticate API requests, a shared symmetric secret is required between server and client. The API client signs its requests with this key, and the server verifies the signature with the same key. The key should be generated by the API server and transferred to the client via a secure channel; never send it in any request.
-4. `app_id`: when the API server has multiple clients, this identifies which app is calling, and therefore which `secret_key` to use; it should of course have no mathematical link to its `secret_key`.
-5. `sign`: the signature itself; only a client holding the correct secret pair can generate a correct signature.
-
-In short, `nonce` provides request identity, `timestamp` lets the server evict expired nonces, and `sign` together with `app_id` provides authentication.
-
-
-## Data Gathered
-
-*Projects*
-
-```
-[
-  {
-    "id": 0,
-    "git_url": "string",
-    "priority": 0,
-    "create_time": "2021-11-23T17:28:10.286Z",
-    "update_time": "2021-11-23T17:28:10.286Z"
-  }
-]
-```
-
-*Commits*
-
-```
-[
-  {
-    "hexsha": "string",
-    "analysis_id": "string",
-    "author_email": "string",
-    "dev_eq": 0
-  }
-]
-```
-
-The most valuable data here is `dev_eq`, a Merico-owned measurement of code value.
-
-## Configuration
-
-You will need to set the following settings in order to run this plugin.
-
-These can be set in your .env file as:
-
-```
-AE_APP_ID=xxx
-AE_SECRET_KEY=xxx
-AE_ENDPOINT=xxx
-```
-
-TBD: How do non-Merico users get these keys?
-
-## Gathering Data with AE
-
-To collect data on a single project, you can make a POST request to 
`/pipelines`
-
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "ae 20211201",
-    "tasks": [[{
-        "plugin": "ae",
-        "options": {
-            "projectId": <Your project id>
-        }
-    }]]
-}
-'
-```
diff --git a/plugins/feishu/README-zh-CN.md b/plugins/feishu/README-zh-CN.md
deleted file mode 100644
index 03523577..00000000
--- a/plugins/feishu/README-zh-CN.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Feishu 插件
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## 简介
-
-本插件通过 [Feishu 
Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction)
 来收集 Feishu 数据。
-
-## 配置
-
-在使用本插件之前,您需要先找到飞书管理员获取app_id和app_secret(请参照 Feishu 
的官方文档中[相关说明](https://open.feishu.cn/document/ukTMukTMukTM/ukDNz4SO0MjL5QzM/auth-v3/auth/tenant_access_token_internal)
-),然后在 `.env` 上面对插件进行配置。
-
-### 编辑.env文件
-
-为了能访问到 Feishu 的 API ,请确保完成以下的必填设置项。目前 Feishu 
只支持单一数据源,列表只会显示一个连接,同时其名称是固定不可修改的。多数据源支持会在不久的将来实现。
-
-FEISHU_APPID=app_id
-
-FEISHU_APPSCRECT=app_secret
-
-## 数据收集及计算
-
-为了触发插件进行数据收集和计算,您需要构造一个 JSON, 通过 `Pipelines` 中的 `Create Pipeline Run` 选项来选择 
`Advanced Mode`, 发送请求触发收集计算任务:
-numOfDaysToCollect: 收集的天数
-rateLimitPerSecond: 每秒发送请求的数量(最大值为8)
-
-```json
-[
-  [
-    {
-      "plugin": "feishu",
-      "options": {
-        "numOfDaysToCollect" : 80,
-        "rateLimitPerSecond" : 5
-      }
-    }
-  ]
-]
-```
-
-你也可以通过向 `/pipelines` 发起一个POST请求来触发数据收集。
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "feishu 20211126",
-    "tasks": [[{
-      "plugin": "feishu",
-      "options": {
-        "numOfDaysToCollect" : 80,
-        "rateLimitPerSecond" : 5
-      }
-    }]]
-}
-'
-```
\ No newline at end of file
diff --git a/plugins/github/README-zh-CN.md b/plugins/github/README-zh-CN.md
deleted file mode 100644
index d294a2e9..00000000
--- a/plugins/github/README-zh-CN.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# Github插件
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## 概述
-
-此插件从`Github`收集数据并通过`Grafana`展示。我们可以为技术领导者回答诸如以下问题:
-- 本月是否比以往更高产?
-- 我们能多快地响应客户需求?
-- 质量是否有提升?
-
-## 指标
-
-以下是几个利用`Github`数据的例子:
-- 每个人的平均需求研发时间
-- 千行代码Bug数
-- 提交数依时间分布
-
-## 截图
-
-![image](https://user-images.githubusercontent.com/27032263/141855099-f218f220-1707-45fa-aced-6742ab4c4286.png)
-
-
-## 配置
-
-### 数据源连接配置
-配置界面需要填入以下字段
-- **Connection Name** [`只读`]
-    - ⚠️ 默认值为 "**Github**" 请不要改动。
-- **Endpoint URL** (REST URL, 以 `https://`或`http://`开头)
-    - 应当填入可用的REST API Endpoint。例如 `https://api.github.com/`
-    - ⚠️url应当以`/`结尾
-- **Auth Token(s)** (Personal Access Token)
-    - 如何创建**personal access token**,请参考官方文档[GitHub Docs on Personal 
Tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)
-    - 填入至少一个token,可以填入多个token并以英文逗号`,`间隔,填入多个token可以加快数据采集速度
-
-对于使用`Basic Authentication`或者`OAuth`的请求,限制为5000次/小时/token
-- https://docs.github.com/en/rest/overview/resources-in-the-rest-api
-通过在配置文件中设置多个token可以达到更高的请求速率
-
-注意: 如果使用付费的企业版`Github`可以达到15000次/小时/token。
-关于**GitHub REST API**的更多信息请参考官方文档[GitHub Docs on 
REST](https://docs.github.com/en/rest)
-
-点击**Save Connection**保存配置。
-
-
-### 数据源配置
-目前只有一个**可选**配置*Proxy URL*,如果你需要代理才能访问GitHub才需要配置此项
-- **GitHub Proxy URL [`可选`]**
-  - 输入可用的代理服务器地址,例如:`http://your-proxy-server.com:1080`
-
-点击**Save Settings**保存配置。
-
-### 正则配置
-在.env文件中,可以配置
-- GITHUB_PR_BODY_CLOSE_PATTERN: 定义了pr body关联issue的关键字,可查看.env.example里面的示例
-
-## 示例
-为了触发插件进行数据收集和计算,您需要构造一个 JSON, 通过 `Pipelines` 中的 `Create Pipeline Run` 选项来选择 
`Advanced Mode`, 发送请求触发收集计算任务:
-```json
-[
-  [
-    {
-      "plugin": "github",
-      "options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    }
-  ]
-]
-```
-
-你也可以通过向 `/pipelines` 发起一个POST请求来触发数据收集。
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "github 20211126",
-    "tasks": [[{
-        "plugin": "github",
-        "options": {
-            "repo": "lake",
-            "owner": "merico-dev"
-        }
-    }]]
-}
-'
-```
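For readers who prefer scripting the trigger, here is a hedged sketch of the same POST built with Python's standard library (the request is constructed but not sent here; `localhost:8080` assumes a locally running DevLake instance):

```python
import json
from urllib.request import Request

# Build the same pipeline-trigger request as the curl example above.
payload = {
    "name": "github 20211126",
    "tasks": [[{
        "plugin": "github",
        "options": {"repo": "lake", "owner": "merico-dev"},
    }]],
}
req = Request(
    "http://localhost:8080/pipelines",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib infers the POST method because a request body is attached.
print(req.get_method())
```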
diff --git a/plugins/gitlab/README-zh-CN.md b/plugins/gitlab/README-zh-CN.md
deleted file mode 100644
index 0ba46b1d..00000000
--- a/plugins/gitlab/README-zh-CN.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# Gitlab Plugin
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Metrics
-This plugin collects Gitlab data to calculate the following metrics.
-
-Metric Name | Description
-:------------ | :-------------
-PR Count | Number of PRs/MRs created
-PR Merge Rate | Ratio of PRs/MRs that get merged
-PR Reviewer Count | Number of people who reviewed PRs/MRs
-PR Review Time | Time from PR/MR creation to merge
-Commit Author Count | Number of people who committed code
-Commit Count | Number of commits
-Added Lines | Accumulated number of added lines of code
-Deleted Lines | Accumulated number of deleted lines of code
-PR Review Rounds | Number of review rounds between PR/MR creation and merge
-
-
-## Configuration
-
-### Data Source Connection
-Fill in the following fields on the configuration page:
-- **Connection Name** [`readonly`]
-    - ⚠️ Defaults to "**Gitlab**"; please do not change it.
-- **Endpoint URL** (REST URL, starting with `https://` or `http://`)
-    - This should be a valid REST API endpoint, e.g. `https://gitlab.com/api/v4/`
-    - ⚠️ The URL should end with `/`
-- **Personal Access Token** (HTTP Basic Auth)
-    - Log in to your Gitlab and create a **Personal Access Token**; the token must be 20 characters long. Store the generated token somewhere safe, as you will not be able to see it again after leaving the page.
-
-    1. In the top-right corner, select your **avatar**.
-    2. Select **Edit profile**.
-    3. In the left sidebar, select **Access Tokens**.
-    4. Enter a **name** and select an **expiry date** for the token.
-    5. Select the desired **scopes**.
-    6. Select **Create personal access token**.
-To create a **personal access token**, see the official [GitLab Docs on Personal Tokens](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html)
-
-For more information on the **GitLab REST API**, see the official [GitLab Docs on REST](https://docs.gitlab.com/ee/development/documentation/restful_api_styleguide.html#restful-api)
-
-Click **Save Connection** to save the configuration.
-
-### Data Source Settings
-Currently there is only one **optional** setting, which lets you associate JIRA Boards with GitLab Projects.
-
-- **JIRA Board Mappings [`Optional`]**
-  **Map JIRA Boards to GitLab**. Enter the mapping rules in the following format:
-```
-# Map JIRA Board ID 8 ==> Gitlab Projects 8967944,8967945
-<JIRA_BOARD>:<GITLAB_PROJECT_ID>,...; e.g. 8:8967944,8967945;9:8967946,8967947
-```
-Click **Save Settings** to save the configuration.
-
-
-## Gathering Data
-
-You can trigger data collection by sending a POST request to `/pipelines`.
-
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "gitlab 20211126",
-    "tasks": [[{
-        "plugin": "gitlab",
-        "options": {
-            "projectId": <Your gitlab project id>
-        }
-    }]]
-}
-'
-```
-
-## Finding Your Gitlab Project ID
-
-To get the project ID for a specific Gitlab repository:
-- Visit the repository page on Gitlab
-- Find the project ID below the title
-
-  ![Screen Shot 2021-08-06 at 4 32 53 PM](https://user-images.githubusercontent.com/3789273/128568416-a47b2763-51d8-4a6a-8a8b-396512bffb03.png)
-
-- Use this project ID in the request shown above to collect data from this project
-
-### Creating a Gitlab API Token <a id="gitlab-api-token"></a>
-
-1. After logging in to Gitlab, visit `https://gitlab.com/-/profile/personal_access_tokens`
-2. Give the token any name and do not set an expiry date. When setting the scopes, remove the "write" permissions
-
-   ![Screen Shot 2021-08-06 at 4 44 01 PM](https://user-images.githubusercontent.com/3789273/128569148-96f50d4e-5b3b-4110-af69-a68f8d64350a.png)
-
-3. Click the **Create Personal Access Token** button
-4. Copy and save the API Token via config-ui, or directly into the `.env` file
diff --git a/plugins/jenkins/README-zh-CN.md b/plugins/jenkins/README-zh-CN.md
deleted file mode 100644
index 3239047c..00000000
--- a/plugins/jenkins/README-zh-CN.md
+++ /dev/null
@@ -1,60 +0,0 @@
-# Jenkins Plugin
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Summary
-
-This plugin collects Jenkins data through the [Remote Access API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then computes and displays the relevant devops metrics from the collected raw data.
-
-![image](https://user-images.githubusercontent.com/61080/141943122-dcb08c35-cb68-4967-9a7c-87b63c2d6988.png)
-
-## Metrics
-
-Metric Name | Description
-:------------ | :-------------
-Build Count | Number of builds created
-Build Success Rate | Percentage of successful builds
-
-
-## Configuration
-
-Before using this plugin, you need to configure it via `config-ui`.
-
-### Configuration via `config-ui`
-
-To access the Jenkins API, make sure the required settings below are complete. Jenkins currently supports only a single data source, so the list shows only one connection and its name is fixed and cannot be changed. Multi-source support will be added in the near future.
-
-- Connection Name [readonly]
-  - ⚠️ Defaults to "Jenkins" and cannot be changed.
-- Endpoint URL (REST URL, must start with `https://` or `http://` and end with `/`)
-  - Must point to a valid REST API endpoint, e.g. `https://jenkins.example.com/`
-- Username (E-mail)
-  - A valid username on the Jenkins instance.
-- Password (password or API Access Token)
-  - The password for the username
-  - See "Using Credentials" in the official Jenkins documentation
-  - You can use an **API Access Token** instead of the password; generate one in the Jenkins dashboard via `User` -> `Configure` -> `API Token`.
-
-After completing the settings above, click the save button to update the connection.
-
-## Data Collection and Calculation
-
-To trigger data collection and enrichment, construct a JSON payload and send it via the `Triggers` feature in `config-ui`:
-
-```json
-[
-  [
-    {
-      "plugin": "jenkins",
-      "options": {}
-    }
-  ]
-]
-```
diff --git a/plugins/jira/README-zh-CN.md b/plugins/jira/README-zh-CN.md
deleted file mode 100644
index bc5a947d..00000000
--- a/plugins/jira/README-zh-CN.md
+++ /dev/null
@@ -1,243 +0,0 @@
-# Jira Plugin
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Summary
-
-This plugin collects Jira data through the Jira Cloud REST API. It then computes and visualizes various engineering metrics from the Jira data.
-
-<img width="2035" alt="Screen Shot 2021-09-10 at 4 01 55 PM" src="https://user-images.githubusercontent.com/2908155/132926143-7a31d37f-22e1-487d-92a3-cf62e402e5a8.png">
-
-## Project Metrics This Covers
-
-Metric Name | Description
-:------------ | :-------------
-Requirement Count | Number of issues with type "Requirement"
-Requirement Lead Time | Lead time of issues with type "Requirement", i.e. time from creation to completion
-Requirement Delivery Rate | Ratio of delivered requirements to all requirements
-Requirement Granularity | Average story points of a "Requirement" issue
-Bug Count | Number of issues with type "Bug"<br><i>Bugs found during testing</i>
-Bug Fix Time | Time taken to fix issues with type "Bug"
-Test Bug Rate (Lines of Code) | Number of "Bugs" per 1,000 lines of code<br><i>Including both added and deleted lines</i>
-Incident Count | Number of issues with type "Incident"<br><i>Problems found while running in production</i>
-Incident Fix Time | Time taken to fix issues with type "Incident"
-Incident Rate (Lines of Code) | Number of Incidents per 1,000 lines of code<br><i>Including both added and deleted lines</i>
-
-## Configuration
-
-Before the plugin runs, you need to complete its setup in the config UI provided by Dev Lake. Open `config-ui` in your browser (the default address is `http://localhost:4000`) and go to the **Data Integrations / JIRA** page. The JIRA plugin supports multiple data sources; on the settings page you can add new connections and modify existing ones.
-
-For each connection, you need to set the following:
-
-- Connection Name: The name of the connection, used to distinguish different data sources.
-- Endpoint URL: The API address of your JIRA instance. If you use JIRA Cloud, it has the form `https://<mydomain>.atlassian.net/rest`. devlake mainly supports the JIRA Cloud API hosted on atlassian.net; if you use the Server edition, it may not work.
-- Basic Auth Token: First, generate a **JIRA API TOKEN** for your account in the JIRA dashboard (see [Generating an API Token](#generating-an-api-token)). Then, in `config-ui`, click the key icon to the right of the text box, enter the account and token, and click Generate to produce the required **Basic Auth Token**.
-- Issue Type Mapping: JIRA is highly customizable, so every JIRA instance can have a completely different set of Issue Types. To compute and display the metrics correctly, you must map your custom Issue Types to the standard types. See [Issue Type Mapping](#issue-type-mapping) for details.
-- Epic Key: In JIRA, the association between an issue and an epic is implemented through a `customfield`, so the field name differs between instances and must be specified manually. See [Finding the Names of Custom Fields](#finding-the-names-of-custom-fields) for details.
-- Story Point Field: Same as above.
-- Remotelink Commit SHA: A regular expression that matches commit links, used to decide whether an external link points to a commit. Taking gitlab as an example, to match all commit links like https://gitlab.com/merico-dev/ce/example-repository/-/commit/8ab8fb319930dbd8615830276444b8545fd0ad24, you can use the regular expression **/commit/([0-9a-f]{40})$**
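As an editorial illustration, the sample pattern above can be sanity-checked quickly; this sketch just exercises the quoted regex against the example URL from the text:

```python
import re

# The Remotelink Commit SHA pattern quoted above: capture a 40-hex-character
# SHA at the end of a /commit/ link.
pattern = re.compile(r"/commit/([0-9a-f]{40})$")

url = ("https://gitlab.com/merico-dev/ce/example-repository/-/commit/"
       "8ab8fb319930dbd8615830276444b8545fd0ad24")
match = pattern.search(url)
print(match.group(1))  # the captured commit SHA
```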
-### Generating an API Token
-
-1. After logging in to Jira, visit `https://id.atlassian.com/manage-profile/security/api-tokens`
-2. Click the **Create API Token** button and give it any label
-![image](https://user-images.githubusercontent.com/27032263/129363611-af5077c9-7a27-474a-a685-4ad52366608b.png)
-
-
-### Issue Type Mapping
-
-Devlake supports three standard types, and all metrics are calculated based on them:
-
- - `Bug`: A defect found during the **testing phase**, not yet deployed to production.
- - `Incident`: A defect found in the **production environment**.
- - `Requirement`: If you follow the SCRUM process, this usually corresponds to the `Story` type.
-
-You can map any number of **custom types** to a given **standard type**. For example, `Story` is typically mapped to `Requirement`, but depending on your scenario you may also map both `Story` and `Task` to `Requirement`. For unmapped types, the converter fills the **standard type** field with the original **custom type**, so mappings like "map Bug to Bug" are unnecessary.
-
-Issue type mapping is critical for metrics such as **Requirement Count**, so make sure your custom types are mapped correctly.
-
-## Finding the Names of Custom Fields
-
-Please follow this guide: [How to find the custom field ID in Jira?](https://github.com/merico-dev/lake/wiki/How-to-find-the-custom-field-ID-in-Jira)
-
-## Data Collection and Calculation
-
-To trigger data collection and enrichment, construct a JSON payload and send it via the `Triggers` feature in `config-ui`:
-<font color="red">Warning: data collection only supports single-task execution; running multiple tasks concurrently may not produce the expected results.</font>
-
-```json
-[
-  [
-    {
-      "plugin": "jira",
-      "options": {
-        "connectionId": 1,
-        "boardId": 8,
-        "since": "2006-01-02T15:04:05Z"
-      }
-    }
-  ]
-]
-```
-- `connectionId`: The ID of the data source, i.e. the ID column of the Connection table under **JIRA Integration**.
-- `boardId`: The JIRA board id; see [How to Find the Jira Board Id](#how-to-find-the-jira-board-id).
-- `since`: Optional; only sync data changed after the specified date.
-
-The Board Id is specified when triggering the task; it does not need to be configured at the data source connection level.
-
-### How to Find the Jira Board Id
-1. Open your browser and navigate to the Jira board you want to import
-2. Get the board ID from the `?rapidView=` parameter in the URL
-
-
-For example: for `https://<your_jira_endpoint>/secure/RapidBoard.jspa?rapidView=39`, the board ID is 39
-
-![Screen Shot 2021-08-13 at 10 07 19 AM](https://user-images.githubusercontent.com/27032263/129363083-df0afa18-e147-4612-baf9-d284a8bb7a59.png)
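The URL-parsing step above can be sketched with the standard library (the domain below is a placeholder, not a real instance):

```python
from urllib.parse import urlparse, parse_qs

# Extract the board id from the ?rapidView= query parameter of a board URL.
def board_id(url: str) -> int:
    return int(parse_qs(urlparse(url).query)["rapidView"][0])

print(board_id("https://example.atlassian.net/secure/RapidBoard.jspa?rapidView=39"))
```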
-
-## API
-
-### Data Connection Management
-
-#### Data Connections
-
-- Get all data connections
-```
-GET /plugins/jira/connections
-
-
-[
-  {
-    "ID": 14,
-    "CreatedAt": "2021-10-11T11:49:19.029Z",
-    "UpdatedAt": "2021-10-11T11:49:19.029Z",
-    "name": "test-jira-connection",
-    "endpoint": "https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "basicAuth",
-    "epicKeyField": "epicKeyField",
-    "storyPointField": "storyPointField"
-]
-```
-- Create a new data connection
-```
-POST /plugins/jira/connections
-{
-    "name": "jira data connection name",
-    "endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "generated by `echo -n <jira login email>:<jira token> | base64`",
-    "epicKeyField": "name of customfield of epic key",
-    "storyPointField": "name of customfield of story point",
-    "typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-        "userType": {
-            "standardType": "devlake standard type"
-        }
-    }
-}
-```
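As a hedged illustration of the `basicAuthEncoded` field above: the shell command `echo -n <jira login email>:<jira token> | base64` is equivalent to this Python sketch (the email and token below are placeholders):

```python
import base64

# Equivalent of: echo -n <email>:<token> | base64
def basic_auth_encoded(email: str, token: str) -> str:
    return base64.b64encode(f"{email}:{token}".encode("utf-8")).decode("ascii")

print(basic_auth_encoded("user@example.com", "my-jira-token"))
```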
-- Update a data connection
-```
-PUT /plugins/jira/connections/:connectionId
-{
-    "name": "jira data connection name",
-    "endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "generated by `echo -n <jira login email>:<jira token> | base64`",
-    "epicKeyField": "name of customfield of epic key",
-    "storyPointField": "name of customfield of story point",
-    "typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-        "userType": {
-            "standardType": "devlake standard type"
-        }
-    }
-}
-```
-- Get the details of a given data connection
-```
-GET /plugins/jira/connections/:connectionId
-
-
-{
-    "name": "jira data connection name",
-    "endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "generated by `echo -n <jira login email>:<jira token> | base64`",
-    "epicKeyField": "name of customfield of epic key",
-    "storyPointField": "name of customfield of story point",
-    "typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-        "userType": {
-            "standardType": "devlake standard type"
-        }
-    }
-}
-```
-- Delete a data connection
-```
-DELETE /plugins/jira/connections/:connectionId
-```
-
-#### Issue Type Mappings
-
-- Get all type mappings of a data connection
-```
-GET /plugins/jira/connections/:connectionId/type-mappings
-
-
-[
-  {
-    "jiraConnectionId": 16,
-    "userType": "userType",
-    "standardType": "standardType"
-  }
-]
-```
-- Add a new type mapping to a data connection
-```
-POST /plugins/jira/connections/:connectionId/type-mappings
-{
-    "userType": "userType",
-    "standardType": "standardType"
-}
-```
-- Update a type mapping
-```
-PUT /plugins/jira/connections/:connectionId/type-mapping/:userType
-{
-    "standardType": "standardTypeUpdated"
-}
-```
-- Delete a type mapping
-```
-DELETE /plugins/jira/connections/:connectionId/type-mapping/:userType
-```
-- JIRA API proxy
-```
-GET /plugins/jira/connections/:connectionId/proxy/rest/*path
-
-For example:
-Requests to http://your_devlake_host/plugins/jira/connections/1/proxy/rest/agile/1.0/board/8/sprint
-would forward to
-https://your_jira_host/rest/agile/1.0/board/8/sprint
-
-{
-    "maxResults": 1,
-    "startAt": 0,
-    "isLast": false,
-    "values": [
-        {
-            "id": 7,
-            "self": "https://merico.atlassian.net/rest/agile/1.0/sprint/7",
-            "state": "closed",
-            "name": "EE Sprint 7",
-            "startDate": "2020-06-12T00:38:51.882Z",
-            "endDate": "2020-06-26T00:38:00.000Z",
-            "completeDate": "2020-06-22T05:59:58.980Z",
-            "originBoardId": 8,
-            "goal": ""
-        }
-    ]
-}
-```
diff --git a/plugins/refdiff/README-zh-CN.md b/plugins/refdiff/README-zh-CN.md
deleted file mode 100644
index fb3c935b..00000000
--- a/plugins/refdiff/README-zh-CN.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# RefDiff Plugin
-
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-
-## Summary
-
-When analyzing the code produced by development work, you often need to know how many commits exist between two versions. Based on the parent-child relationships of commits stored in the database, this plugin calculates the list of commits that differ between two refs (branch/tag). The results are stored back into the database for later cross-analysis.
-
-
-## Configuration
-
-This plugin enriches domain-layer data and requires no additional configuration.
-
-## How to Use
-
-To trigger the enrichment, you need to add a new task to your Pipeline
-
-1. Make sure the data in the `commits` and `refs` tables has been collected correctly; the `refs` table should contain data like this:
-```
-id                                                  ref_type
-github:GithubRepository:384111310:refs/tags/0.3.5   TAG
-github:GithubRepository:384111310:refs/tags/0.3.6   TAG
-github:GithubRepository:384111310:refs/tags/0.5.0   TAG
-github:GithubRepository:384111310:refs/tags/v0.0.1  TAG
-github:GithubRepository:384111310:refs/tags/v0.2.0  TAG
-github:GithubRepository:384111310:refs/tags/v0.3.0  TAG
-github:GithubRepository:384111310:refs/tags/v0.4.0  TAG
-github:GithubRepository:384111310:refs/tags/v0.6.0  TAG
-github:GithubRepository:384111310:refs/tags/v0.6.1  TAG
-```
-2. If you want to use calculateIssuesDiff, configure GITHUB_PR_BODY_CLOSE_PATTERN in the .env file; an example (which is also the current default) can be found in .env.example. Make sure your expression is wrapped in single quotes ('')
-3. If you want to use calculatePrCherryPick, configure GITHUB_PR_TITLE_PATTERN in the .env file; an example (which is also the current default) can be found in .env.example. Make sure your expression is wrapped in single quotes ('')
-4. Then trigger a pipeline with a command like the one below. In tasks you can define which tasks to run: calculateRefDiff calculates how many commits differ between two versions, and creatRefBugStats generates the list of issues between two versions
-```
-curl -v -XPOST http://localhost:8080/pipelines --data @- <<'JSON'
-{
-    "name": "test-refdiff",
-    "tasks": [
-        [
-            {
-                "plugin": "refdiff",
-                "options": {
-                    "repoId": "github:GithubRepository:384111310",
-                    "pairs": [
-                       { "newRef": "refs/tags/v0.6.0", "oldRef": "refs/tags/0.5.0" },
-                       { "newRef": "refs/tags/0.5.0", "oldRef": "refs/tags/0.4.0" }
-                    ],
-                    "tasks": [
-                        "calculateCommitsDiff",
-                        "calculateIssuesDiff",
-                        "calculatePrCherryPick"
-                    ]
-                }
-            }
-        ]
-    ]
-}
-JSON
-```
