This is an automated email from the ASF dual-hosted git repository.
klesh pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git
The following commit(s) were added to refs/heads/main by this push:
new 74f770ccf feat: move documents to where they belong (#342)
74f770ccf is described below
commit 74f770ccfe7ba625364b33d68feff585322e1637
Author: Klesh Wong <[email protected]>
AuthorDate: Fri Dec 9 12:59:47 2022 +0800
feat: move documents to where they belong (#342)
1. move "Security and Authentication" into "Getting Started"
2. rename "Glossary" to "Key Concepts" and move it into "Overview"
3. move "SupportedDataSources" into "Overview"
---
.../Authentication.md | 2 +-
docs/GettingStarted/DockerComposeSetup.md | 2 +-
docs/{Glossary.md => Overview/KeyConcepts.md} | 28 +++++++++++-----------
docs/{ => Overview}/SupportedDataSources.md | 4 ++--
package-lock.json | 2 +-
5 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/docs/UserManuals/Authentication.md b/docs/GettingStarted/Authentication.md
similarity index 99%
rename from docs/UserManuals/Authentication.md
rename to docs/GettingStarted/Authentication.md
index fe949858a..cd29553a5 100644
--- a/docs/UserManuals/Authentication.md
+++ b/docs/GettingStarted/Authentication.md
@@ -1,6 +1,6 @@
---
title: "Security and Authentication"
-sidebar_position: 6
+sidebar_position: 8
description: How to secure your deployment and enable Authentication
---
diff --git a/docs/GettingStarted/DockerComposeSetup.md b/docs/GettingStarted/DockerComposeSetup.md
index a5b1f973e..9c3a258c5 100644
--- a/docs/GettingStarted/DockerComposeSetup.md
+++ b/docs/GettingStarted/DockerComposeSetup.md
@@ -25,7 +25,7 @@ sidebar_position: 1
- Please follow the [tutorial](UserManuals/ConfigUI/Tutorial.md)
- The `devlake` container takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and refresh the page.
2. To view dashboards, click the *View Dashboards* button in the top left corner, or visit `localhost:3002` (username: `admin`, password: `admin`).
- We use [Grafana](https://grafana.com/) to visualize the DevOps [data](../SupportedDataSources.md) and build dashboards.
+ - We use [Grafana](https://grafana.com/) to visualize the DevOps [data](/Overview/SupportedDataSources.md) and build dashboards.
- For how to customize and provision dashboards, please see our [Grafana doc](../UserManuals/Dashboards/GrafanaUserGuide.md).
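For readers following the setup flow this hunk touches, a minimal shell sketch of the boot sequence (assumptions: the stock `docker-compose.yml` shipped with a DevLake release, `config-ui` on port 4000 and Grafana on port 3002; your ports may differ):

```bash
# Start all DevLake services in the background (devlake, config-ui, grafana, db)
docker-compose up -d

# The devlake container takes a while to boot; watch the logs until the API is up
docker-compose logs -f devlake

# Then open the Config UI and the Grafana dashboards (default Grafana login: admin/admin)
# Config UI:   http://localhost:4000
# Dashboards:  http://localhost:3002
```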
diff --git a/docs/Glossary.md b/docs/Overview/KeyConcepts.md
similarity index 72%
rename from docs/Glossary.md
rename to docs/Overview/KeyConcepts.md
index c3bad3dcf..5f738d8eb 100644
--- a/docs/Glossary.md
+++ b/docs/Overview/KeyConcepts.md
@@ -1,9 +1,9 @@
---
-sidebar_position: 7
-title: "Glossary"
-linkTitle: "Glossary"
+sidebar_position: 4
+title: "Key Concepts"
+linkTitle: "KeyConcepts"
description: >
- DevLake Glossary
+ DevLake Key Concepts
---
*Last updated: May 16 2022*
@@ -15,9 +15,9 @@ The following terms are arranged in the order of their appearance in the actual
### Blueprints
**A blueprint is the plan that covers all the work to get your raw data ready for query and metric computation in the dashboards.** Creating a blueprint consists of four steps:
-1. **Adding [Data Connections](Glossary.md#data-connections)**: For each [data source](Glossary.md#data-sources), one or more data connections can be added to a single blueprint, depending on the data you want to sync to DevLake.
-2. **Setting the [Data Scope](Glossary.md#data-scope)**: For each data connection, you need to configure the scope of data, such as GitHub projects, Jira boards, and their corresponding [data entities](Glossary.md#data-entities).
-3. **Adding [Transformation Rules](Glossary.md#transformation-rules) (optional)**: You can optionally apply transformation for the data scope you have just selected, in order to view more advanced metrics.
+1. **Adding [Data Connections](#data-connections)**: For each [data source](#data-sources), one or more data connections can be added to a single blueprint, depending on the data you want to sync to DevLake.
+2. **Setting the [Data Scope](#data-scope)**: For each data connection, you need to configure the scope of data, such as GitHub projects, Jira boards, and their corresponding [data entities](#data-entities).
+3. **Adding [Transformation Rules](#transformation-rules) (optional)**: You can optionally apply transformation for the data scope you have just selected, in order to view more advanced metrics.
4. **Setting the Sync Frequency**: You can specify the sync frequency for your blueprint to achieve recurring data syncs and transformation. Alternatively, you can set the frequency to manual if you wish to run the tasks in the blueprint manually.
The relationship among Blueprint, Data Connections, Data Scope and Transformation Rules is explained as follows:
@@ -31,7 +31,7 @@ The relationship among Blueprint, Data Connections, Data Scope and Transformatio
### Data Sources
**A data source is a specific DevOps tool from which you wish to sync your data, such as GitHub, GitLab, Jira and Jenkins.**
-DevLake normally uses one [data plugin](Glossary.md#data-plugins) to pull data for a single data source. However, in some cases, DevLake uses multiple data plugins for one data source to improve sync speed, among other advantages. For instance, when you pull data from GitHub or GitLab, aside from the GitHub or GitLab plugin, Git Extractor is also used to pull data from the repositories. In this case, DevLake still refers to GitHub or GitLab as a single data source.
+DevLake normally uses one [data plugin](#data-plugins) to pull data for a single data source. However, in some cases, DevLake uses multiple data plugins for one data source to improve sync speed, among other advantages. For instance, when you pull data from GitHub or GitLab, aside from the GitHub or GitLab plugin, Git Extractor is also used to pull data from the repositories. In this case, DevLake still refers to GitHub or GitLab as a single data source.
### Data Connections
**A data connection is a specific instance of a data source that stores information such as `endpoint` and `auth`.** A single data source can have one or more data connections (e.g. two Jira instances). Currently, DevLake supports one data connection for GitHub, GitLab and Jenkins, and multiple connections for Jira.
@@ -39,7 +39,7 @@ DevLake normally uses one [data plugin](Glossary.md#data-plugins) to pull data f
You can set up a new data connection either during the first step of creating a blueprint, or in the Connections page that can be accessed from the navigation bar. Because a single data connection can be reused in multiple blueprints, you can update the information of a particular data connection in Connections to ensure all its associated blueprints will run properly. For example, you may want to update your GitHub token in a data connection if it has expired.
### Data Scope
-**In a blueprint, each data connection can have multiple sets of data scope configurations, including GitHub or GitLab projects, Jira boards and their corresponding [data entities](Glossary.md#data-entities).** The fields for data scope configuration vary according to different data sources.
+**In a blueprint, each data connection can have multiple sets of data scope configurations, including GitHub or GitLab projects, Jira boards and their corresponding [data entities](#data-entities).** The fields for data scope configuration vary according to different data sources.
Each set of data scope refers to one GitHub or GitLab project, or one Jira board and the data entities you would like to sync for them, for the convenience of applying transformation in the next step. For instance, if you wish to sync 5 GitHub projects, you will have 5 sets of data scope for GitHub.
@@ -50,7 +50,7 @@ To learn more about the default data scope of all data sources and data plugins,
For instance, if you wish to pull Source Code Management data from GitHub and Issue Tracking data from Jira, you can check the corresponding data entities when setting the data scope of these two data connections.
-For more details, please refer to [Domain Layer Schema](./DataModels/DevLakeDomainLayerSchema.md).
+For more details, please refer to [Domain Layer Schema](/DataModels/DevLakeDomainLayerSchema.md).
### Transformation Rules
**Transformation rules are a collection of methods that allow you to customize how DevLake normalizes raw data for query and metric computation.** Each set of data scope is strictly accompanied by one set of transformation rules. However, for your convenience, transformation rules can also be duplicated across different sets of data scope.
@@ -58,12 +58,12 @@ To learn more details, please refer to [Domain Layer Schema](./DataModels/DevLak
DevLake uses these normalized values in the transformation to design more advanced dashboards, such as the Weekly Bug Retro dashboard. Although configuring transformation rules is not mandatory, if you leave the rules blank or have not configured them correctly, only the basic dashboards (e.g. GitHub Basic Metrics) will be displayed as expected, while the advanced dashboards will not.
### Historical Runs
-**A historical run of a blueprint is an actual execution of the data collection and transformation [tasks](Glossary.md#tasks) defined in the blueprint at its creation.** A list of historical runs of a blueprint is the entire running history of that blueprint, whether executed automatically or manually. Historical runs can be triggered in three ways:
+**A historical run of a blueprint is an actual execution of the data collection and transformation [tasks](#tasks) defined in the blueprint at its creation.** A list of historical runs of a blueprint is the entire running history of that blueprint, whether executed automatically or manually. Historical runs can be triggered in three ways:
- By the blueprint automatically according to its schedule in the Regular Mode of the Configuration UI
- By running the JSON in the Advanced Mode of the Configuration UI
- By calling the API `/pipelines` endpoint manually
-However, the name Historical Runs is only used in the Configuration UI. In the DevLake API, they are called [pipelines](Glossary.md#pipelines).
+However, the name Historical Runs is only used in the Configuration UI. In the DevLake API, they are called [pipelines](#pipelines).
## In Configuration UI (Advanced Mode) and API
@@ -82,7 +82,7 @@ For detailed information about the relationship between data sources and data pl
### Pipelines
-**A pipeline is an orchestration of [tasks](Glossary.md#tasks) of data `collection`, `extraction`, `conversion` and `enrichment`, defined in the DevLake API.** A pipeline is composed of one or multiple [stages](Glossary.md#stages) that are executed in a sequential order. Any error occurring during the execution of any stage, task or subtask will cause the immediate failure of the pipeline.
+**A pipeline is an orchestration of [tasks](#tasks) of data `collection`, `extraction`, `conversion` and `enrichment`, defined in the DevLake API.** A pipeline is composed of one or multiple [stages](#stages) that are executed in a sequential order. Any error occurring during the execution of any stage, task or subtask will cause the immediate failure of the pipeline.
The composition of a pipeline is explained as follows:

@@ -93,7 +93,7 @@ Notice: **You can manually orchestrate the pipeline in Configuration UI Advanced
**A stage is a collection of tasks performed by data plugins.** Stages are executed in a sequential order in a pipeline.
### Tasks
-**A task is a collection of [subtasks](Glossary.md#subtasks) that perform any of the `collection`, `extraction`, `conversion` and `enrichment` jobs of a particular data plugin.** Tasks are executed in parallel within a stage.
+**A task is a collection of [subtasks](#subtasks) that perform any of the `collection`, `extraction`, `conversion` and `enrichment` jobs of a particular data plugin.** Tasks are executed in parallel within a stage.
### Subtasks
**A subtask is the minimal work unit in a pipeline that performs any of the four roles: `Collectors`, `Extractors`, `Converters` and `Enrichers`.** Subtasks are executed in sequential order.
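The Pipelines, Stages and Tasks definitions above correspond to the JSON accepted by the `/pipelines` endpoint mentioned under Historical Runs. A hypothetical sketch of triggering a run manually (assumptions: the DevLake API listens on `localhost:8080`, a GitHub connection with `connectionId: 1` exists, and these plugin option names; the exact payload varies by plugin and version):

```bash
# POST a pipeline whose "plan" is a 2D array: outer arrays are stages
# (executed sequentially), inner arrays are tasks (executed in parallel).
curl -sX POST http://localhost:8080/pipelines \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "manual-run-example",
    "plan": [
      [
        { "plugin": "github", "options": { "connectionId": 1, "owner": "apache", "repo": "incubator-devlake" } }
      ]
    ]
  }'
```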
diff --git a/docs/SupportedDataSources.md b/docs/Overview/SupportedDataSources.md
similarity index 99%
rename from docs/SupportedDataSources.md
rename to docs/Overview/SupportedDataSources.md
index 0bccf2524..938d10f68 100644
--- a/docs/SupportedDataSources.md
+++ b/docs/Overview/SupportedDataSources.md
@@ -2,7 +2,7 @@
title: "Supported Data Sources"
description: >
Data sources that DevLake supports
-sidebar_position: 4
+sidebar_position: 5
---
@@ -24,7 +24,7 @@ Apache DevLake(incubating) supports the following data sources. The data from ea
## Data Collection Scope By Each Plugin
-This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./DataModels/DevLakeDomainLayerSchema.md).
+This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](/DataModels/DevLakeDomainLayerSchema.md).
| Domain Layer Entities | ae | gitextractor | github | gitlab | jenkins | jira | refdiff | tapd |
| --------------------- | -------------- | ------------ | -------------- | ------- | ------- | ------- | ------- | ------- |
diff --git a/package-lock.json b/package-lock.json
index 9a8c50ab6..2b760ba7a 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -18251,7 +18251,7 @@
},
"dev-website-tailwind-config": {
"version":
"git+ssh://[email protected]/merico-dev/dev-website-tailwind-config.git#62017898d43897acc108183cf8313e96e8083b25",
- "from":
"dev-website-tailwind-config@https://github.com/merico-dev/dev-website-tailwind-config"
+ "from":
"dev-website-tailwind-config@github:merico-dev/dev-website-tailwind-config"
},
"didyoumean": {
"version": "1.2.2",