This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git
The following commit(s) were added to refs/heads/main by this push:
     new 9c12901b6d docs: add failed-deployment-recovery-time (#733)
9c12901b6d is described below

commit 9c12901b6d7b7e4fb77cf825c1e4ae0808e5e03d
Author: abeizn <zikuan...@merico.dev>
AuthorDate: Fri Mar 15 19:42:35 2024 +0800

    docs: add failed-deployment-recovery-time (#733)

    * docs: add failed-deployment-recovery-time

    * fix: update image

    * docs: dora add fdrt
---
 docs/DORA.md                                      |   5 +-
 docs/Metrics/CFR.md                               |  52 ++++--
 docs/Metrics/DeploymentFrequency.md               |  26 +--
 docs/Metrics/FailedDeploymentRecoveryTime.md      | 201 +++++++++++++++++++++
 docs/Metrics/LeadTimeForChanges.md                |  28 +--
 docs/Metrics/MTTR.md                              |  46 ++---
 static/img/Metrics/cfr-text.png                   | Bin 0 -> 13018 bytes
 static/img/Metrics/deployment-frequency-text.png  | Bin 0 -> 13305 bytes
 .../failed-deployment-recovery-time-text.png      | Bin 0 -> 15873 bytes
 .../Metrics/failed-deployment-recovery-time.png   | Bin 0 -> 22269 bytes
 static/img/Metrics/lead-time-for-changes-text.png | Bin 0 -> 16155 bytes
 static/img/Metrics/mttr-text.png                  | Bin 0 -> 14746 bytes
 12 files changed, 292 insertions(+), 66 deletions(-)

diff --git a/docs/DORA.md b/docs/DORA.md
index 93d28450e0..114a52fdd7 100644
--- a/docs/DORA.md
+++ b/docs/DORA.md
@@ -20,10 +20,11 @@ Within velocity are two core metrics:
 - [Deployment Frequency](./Metrics/DeploymentFrequency.md): Number of successful deployments to production, how rapidly is your team releasing to users?
 - [Lead Time for Changes](./Metrics/LeadTimeForChanges.md): How long does it take from commit to the code running in production? This is important, as it reflects how quickly your team can respond to user requirements.

-Stability is composed of two core metrics:
+Stability is composed of three core metrics:

-- [Median Time to Restore Service](./Metrics/MTTR.md): How long does it take the team to properly recover from a failure once it is identified?
 - [Change Failure Rate](./Metrics/CFR.md): How often are your deployments causing a failure?
+- [Median Time to Restore Service (2021 DORA report)](./Metrics/MTTR.md): How long does it take the team to properly recover from a failure once it is identified?
+- [Failed Deployment Recovery Time (2023 DORA report)](./Metrics/FailedDeploymentRecoveryTime.md): How long does it take from the time of deployment to the resolution of the incident associated with the deployment?

diff --git a/docs/Metrics/CFR.md b/docs/Metrics/CFR.md
index 915961b707..a0f10d2cec 100644
--- a/docs/Metrics/CFR.md
+++ b/docs/Metrics/CFR.md
@@ -118,10 +118,10 @@ FROM

 If you want to measure in which category your team falls, run the following SQL in Grafana.

-
+

 ```
--- Metric 4: change failure rate
+-- Metric 3: change failure rate
 with _deployments as (
 -- When deploying multiple commits in one pipeline, GitLab and BitBucket may generate more than one deployment. However, DevLake considers these deployments as ONE production deployment and uses the last one's finished_date as the finished date.
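 -- (Illustrative aside, not part of the committed query: if one pipeline deploys commits A, B and C,
 -- cicd_deployment_commits may hold one row per commit, all sharing the same cicd_deployment_id;
 -- grouping by that id collapses them into a single production deployment whose finished date
 -- is max(finished_date).)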
 SELECT
@@ -131,7 +131,7 @@ with _deployments as (
     cicd_deployment_commits cdc
     JOIN project_mapping pm on cdc.cicd_scope_id = pm.row_id and pm.`table` = 'cicd_scopes'
   WHERE
-    pm.project_name in (${project:sqlstring}+'')
+    pm.project_name in (${project})
     and cdc.result = 'SUCCESS'
     and cdc.environment = 'PRODUCTION'
   GROUP BY 1
@@ -158,30 +158,52 @@ _change_failure_rate as (
     else sum(has_incident)/count(deployment_id) end as change_failure_rate
   FROM _failure_caused_by_deployments
+),
+
+_is_collected_data as(
+  SELECT
+    CASE
+      WHEN COUNT(i.id) = 0 AND COUNT(cdc.id) = 0 THEN 'No All'
+      WHEN COUNT(i.id) = 0 THEN 'No Incidents'
+      WHEN COUNT(cdc.id) = 0 THEN 'No Deployments'
+    END AS is_collected
+FROM
+  (SELECT 1) AS dummy
+LEFT JOIN
+  issues i ON i.type = 'INCIDENT'
+LEFT JOIN
+  cicd_deployment_commits cdc ON 1=1
 )
+
 SELECT
   CASE
-    WHEN ('$benchmarks') = '2023 report' THEN
+    WHEN ('$dora_report') = '2023' THEN
       CASE
-        WHEN change_failure_rate <= 5 THEN "0-5%(elite)"
-        WHEN change_failure_rate <= .10 THEN "5%-10%(high)"
-        WHEN change_failure_rate <= .15 THEN "10%-15%(medium)"
-        WHEN change_failure_rate > .15 THEN "> 15%(low)"
+        WHEN is_collected = "No All" THEN "N/A. Please check if you have collected deployments/incidents."
+        WHEN is_collected = "No Incidents" THEN "N/A. Please check if you have collected incidents."
+        WHEN is_collected = "No Deployments" THEN "N/A. Please check if you have collected deployments."
+        WHEN change_failure_rate <= .05 THEN CONCAT(round(change_failure_rate*100,1), "%(elite)")
+        WHEN change_failure_rate <= .10 THEN CONCAT(round(change_failure_rate*100,1), "%(high)")
+        WHEN change_failure_rate <= .15 THEN CONCAT(round(change_failure_rate*100,1), "%(medium)")
+        WHEN change_failure_rate > .15 THEN CONCAT(round(change_failure_rate*100,1), "%(low)")
         ELSE "N/A. Please check if you have collected deployments/incidents."
       END
-    WHEN ('$benchmarks') = '2021 report' THEN
+    WHEN ('$dora_report') = '2021' THEN
       CASE
-        WHEN change_failure_rate <= .15 THEN "0-15%(elite)"
-        WHEN change_failure_rate <= .20 THEN "16%-20%(high)"
-        WHEN change_failure_rate <= .30 THEN "21%-30%(medium)"
-        WHEN change_failure_rate > .30 THEN "> 30%(low)"
+        WHEN is_collected = "No All" THEN "N/A. Please check if you have collected deployments/incidents."
+        WHEN is_collected = "No Incidents" THEN "N/A. Please check if you have collected incidents."
+        WHEN is_collected = "No Deployments" THEN "N/A. Please check if you have collected deployments."
+        WHEN change_failure_rate <= .15 THEN CONCAT(round(change_failure_rate*100,1), "%(elite)")
+        WHEN change_failure_rate <= .20 THEN CONCAT(round(change_failure_rate*100,1), "%(high)")
+        WHEN change_failure_rate <= .30 THEN CONCAT(round(change_failure_rate*100,1), "%(medium)")
+        WHEN change_failure_rate > .30 THEN CONCAT(round(change_failure_rate*100,1), "%(low)")
         ELSE "N/A. Please check if you have collected deployments/incidents."
       END
-    ELSE 'Invalid Benchmarks'
+    ELSE 'Invalid dora report'
   END AS change_failure_rate
 FROM
-  _change_failure_rate
+  _change_failure_rate, _is_collected_data
 ```
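A minimal toy version of the pattern above may help (hypothetical, hard-coded values; not part of the committed dashboard): the final SELECT cross-joins the one-row `_change_failure_rate` CTE with the one-row `_is_collected_data` CTE, so the data-availability check can short-circuit the benchmark bucketing.

```
-- Toy sketch of the cross-join + guard pattern with made-up numbers.
with _change_failure_rate as (
  SELECT 0.12 as change_failure_rate      -- pretend 12% of deployments caused incidents
),
_is_collected_data as (
  SELECT NULL as is_collected             -- NULL stands in for "both deployments and incidents were collected"
)
SELECT
  CASE
    WHEN is_collected IS NOT NULL THEN CONCAT('N/A. ', is_collected)
    WHEN change_failure_rate <= .15 THEN CONCAT(round(change_failure_rate*100,1), '%(medium)')   -- returns 12.0%(medium)
    ELSE CONCAT(round(change_failure_rate*100,1), '%(low)')
  END as change_failure_rate
FROM
  _change_failure_rate, _is_collected_data
```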
 ## How to improve?

diff --git a/docs/Metrics/DeploymentFrequency.md b/docs/Metrics/DeploymentFrequency.md
index 780f0916b8..5f6f233c95 100644
--- a/docs/Metrics/DeploymentFrequency.md
+++ b/docs/Metrics/DeploymentFrequency.md
@@ -103,7 +103,7 @@ FROM

 If you want to measure in which category your team falls as in the picture shown below, run the following SQL in Grafana.

 Unlike monthly deployments which are based on the number of deployments, the metric below is based on `deployment days`.

-
+

 ```
 -- Metric 1: Deployment Frequency
@@ -133,7 +133,7 @@ _production_deployment_days as(
   FROM cicd_deployment_commits cdc
   JOIN project_mapping pm on cdc.cicd_scope_id = pm.row_id and pm.`table` = 'cicd_scopes'
   WHERE
-    pm.project_name in (${project:sqlstring}+'')
+    pm.project_name in (${project})
     and cdc.result = 'SUCCESS'
     and cdc.environment = 'PRODUCTION'
   GROUP BY 1
@@ -226,21 +226,21 @@ _median_number_of_deployment_days_per_six_months as(

 SELECT
   CASE
-    WHEN ('$benchmarks') = '2023 report' THEN
+    WHEN ('$dora_report') = '2023' THEN
       CASE
-        WHEN median_number_of_deployment_days_per_week >= 7 THEN 'On-demand(elite)'
-        WHEN median_number_of_deployment_days_per_week >= 1 THEN 'Between once per day and once per week(high)'
-        WHEN median_number_of_deployment_days_per_month >= 1 THEN 'Between once per week and once per month(medium)'
-        WHEN median_number_of_deployment_days_per_month < 1 and is_collected != NULL THEN 'Fewer than once per month(low)'
+        WHEN median_number_of_deployment_days_per_week >= 7 THEN CONCAT(median_number_of_deployment_days_per_week, ' deployment days per week(elite)')
+        WHEN median_number_of_deployment_days_per_week >= 1 THEN CONCAT(median_number_of_deployment_days_per_week, ' deployment days per week(high)')
+        WHEN median_number_of_deployment_days_per_month >= 1 THEN CONCAT(median_number_of_deployment_days_per_month, ' deployment days per month(medium)')
+        WHEN median_number_of_deployment_days_per_month < 1 and is_collected is not null THEN CONCAT(median_number_of_deployment_days_per_month, ' deployment days per month(low)')
         ELSE "N/A. Please check if you have collected deployments."
       END
-    WHEN ('$benchmarks') = '2021 report' THEN
+    WHEN ('$dora_report') = '2021' THEN
       CASE
-        WHEN median_number_of_deployment_days_per_week >= 7 THEN 'On-demand(elite)'
-        WHEN median_number_of_deployment_days_per_month >= 1 THEN 'Between once per day and once per month(high)'
-        WHEN median_number_of_deployment_days_per_six_months >= 1 THEN 'Between once per month and once every 6 months(medium)'
-        WHEN median_number_of_deployment_days_per_six_months < 1 and is_collected != NULL THEN 'Fewer than once per six months(low)'
+        WHEN median_number_of_deployment_days_per_week >= 7 THEN CONCAT(median_number_of_deployment_days_per_week, ' deployment days per week(elite)')
+        WHEN median_number_of_deployment_days_per_month >= 1 THEN CONCAT(median_number_of_deployment_days_per_month, ' deployment days per month(high)')
+        WHEN median_number_of_deployment_days_per_six_months >= 1 THEN CONCAT(median_number_of_deployment_days_per_six_months, ' deployment days per six months(medium)')
+        WHEN median_number_of_deployment_days_per_six_months < 1 and is_collected is not null THEN CONCAT(median_number_of_deployment_days_per_six_months, ' deployment days per six months(low)')
         ELSE "N/A. Please check if you have collected deployments."
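        -- (Illustrative aside, not part of the commit: replacing `is_collected != NULL` with
        -- `is_collected is not null` is a real fix, not a style change -- in SQL any comparison
        -- with NULL evaluates to NULL and never matches, so the old predicate could not be true.)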
       END
-    ELSE 'Invalid Benchmarks'
+    ELSE 'Invalid dora report'
  END AS 'Deployment Frequency'
FROM _median_number_of_deployment_days_per_week, _median_number_of_deployment_days_per_month, _median_number_of_deployment_days_per_six_months
 ```

diff --git a/docs/Metrics/FailedDeploymentRecoveryTime.md b/docs/Metrics/FailedDeploymentRecoveryTime.md
new file mode 100644
index 0000000000..e7343d4d3a
--- /dev/null
+++ b/docs/Metrics/FailedDeploymentRecoveryTime.md
@@ -0,0 +1,201 @@
+---
+title: "DORA - Failed Deployment Recovery Time"
+description: >
+  DORA - Failed Deployment Recovery Time
+sidebar_position: 28
+---
+
+## What is this metric?
+
+The time it takes a team to recover from a deployment that resulted in incidents, rollbacks, or any other type of production failure.
+
+## Why is it important?
+
+This metric is crucial in evaluating the resilience and efficiency of a team's deployment process. A shorter recovery time indicates a team's ability to swiftly detect issues, troubleshoot them, and restore the system to a functional state, minimizing downtime and impact on end-users.
+
+## Which dashboard(s) does it exist in
+
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
+
+## How is it calculated?
+
+The time from when a deployment finishes to when the incident caused by that deployment is resolved. For example, if a deployment finishes at 10:00 AM and the incident it caused is resolved at 11:00 AM, the failed deployment recovery time is one hour.
+
+Below are the 2023 DORA benchmarks for different development teams from Google's report. However, it's difficult to tell which group a team falls into when the team's failed deployment recovery time is `between one week and six months`. Therefore, DevLake provides its own benchmarks to address this problem:
+
+| Groups            | Benchmarks                   | DevLake Benchmarks           |
+| ----------------- | ---------------------------- | ---------------------------- |
+| Elite performers  | Less than one hour           | Less than one hour           |
+| High performers   | Less than one day            | Less than one day            |
+| Medium performers | Between one day and one week | Between one day and one week |
+| Low performers    | More than six months         | More than one week           |
+
+
+<b>Data Sources Required</b>
+
+- `Deployments` from Jenkins, GitLab CI, GitHub Actions, BitBucket Pipelines, webhooks, etc.
+- `Incidents` from Jira issues, GitHub issues, TAPD issues, PagerDuty Incidents, etc.
+
+<b>Transformation Rules Required</b>
+
+Define `deployment` and `incident` in [data transformations](https://devlake.apache.org/docs/Configuration/Tutorial#step-3---add-transformations-optional) while configuring the blueprint of a project to let DevLake know what CI/issue records can be regarded as deployments or incidents.
+
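As a quick sanity check of the calculation described under "How is it calculated?" above, the recovery time is simply the difference between the deployment's finish time and the resolution time of the incident it caused (the timestamps below are made up for illustration):

```
-- 10:00 AM deployment, incident resolved at 11:00 AM the same day -> 60 minutes, i.e. 1.0 hour
SELECT TIMESTAMPDIFF(MINUTE, '2024-03-15 10:00:00', '2024-03-15 11:00:00') / 60 AS recovery_time_in_hours;
```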
+
+<b>SQL Queries</b>
+
+If you want to measure the monthly trend of the Failed Deployment Recovery Time as the picture shown below, run the following SQL in Grafana.
+
+
+
+```
+-- ***** 2023 report ***** --
+-- Metric 4: Failed deployment recovery time
+with _deployments as (
+  SELECT
+    cdc.cicd_deployment_id as deployment_id,
+    max(cdc.finished_date) as deployment_finished_date
+  FROM
+    cicd_deployment_commits cdc
+    JOIN project_mapping pm on cdc.cicd_scope_id = pm.row_id and pm.`table` = 'cicd_scopes'
+  WHERE
+    pm.project_name in ($project)
+    and cdc.result = 'SUCCESS'
+    and cdc.environment = 'PRODUCTION'
+  GROUP BY 1
+  HAVING $__timeFilter(max(cdc.finished_date))
+),
+
+_incidents_for_deployments as (
+  SELECT
+    i.id as incident_id,
+    i.created_date as incident_create_date,
+    i.resolution_date as incident_resolution_date,
+    fd.deployment_id as caused_by_deployment,
+    fd.deployment_finished_date,
+    date_format(fd.deployment_finished_date,'%y/%m') as deployment_finished_month
+  FROM
+    issues i
+    left join project_issue_metrics pim on i.id = pim.id
+    join _deployments fd on pim.deployment_id = fd.deployment_id
+  WHERE
+    i.type = 'INCIDENT'
+    and $__timeFilter(i.resolution_date)
+),
+
+_recovery_time_ranks as (
+  SELECT *, percent_rank() over(PARTITION BY deployment_finished_month order by TIMESTAMPDIFF(MINUTE, deployment_finished_date, incident_resolution_date)) as ranks
+  FROM _incidents_for_deployments
+),
+
+_median_recovery_time as (
+  SELECT deployment_finished_month, max(TIMESTAMPDIFF(MINUTE, deployment_finished_date, incident_resolution_date)) as median_recovery_time
+  FROM _recovery_time_ranks
+  WHERE ranks <= 0.5
+  GROUP BY deployment_finished_month
+),
+
+_metric_recovery_time_2023_report as (
+  SELECT
+    cm.month,
+    case
+      when m.median_recovery_time is null then 0
+      else m.median_recovery_time/60
+    end as median_recovery_time_in_hour
+  FROM
+    calendar_months cm
+    LEFT JOIN _median_recovery_time m on cm.month = m.deployment_finished_month
+  WHERE $__timeFilter(cm.month_timestamp)
+)
+
+SELECT
+  cm.month,
+  CASE
+    WHEN '${dora_report}' = '2023' THEN mrt.median_recovery_time_in_hour
+  END AS '${title_value} In Hours'
+FROM
+  calendar_months cm
+  LEFT JOIN _metric_recovery_time_2023_report mrt ON cm.month = mrt.month
+WHERE
+  $__timeFilter(cm.month_timestamp)
+```
+
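One detail worth spelling out: the query above approximates the median with `percent_rank()` because MySQL has no median aggregate. A tiny self-contained sketch (made-up recovery times, not part of the committed file) shows what the `ranks <= 0.5` plus `max()` combination returns:

```
with _recovery_minutes as (
  SELECT 30 AS minutes UNION ALL SELECT 90 UNION ALL SELECT 240 UNION ALL SELECT 2000
),
_ranks as (
  SELECT minutes, percent_rank() over(order by minutes) as ranks
  FROM _recovery_minutes
)
-- ranks are 0, 0.33, 0.67 and 1.0; only 30 and 90 fall at or below 0.5,
-- so the query returns 90 -- the largest value not above the 50th percentile.
SELECT max(minutes) as median_recovery_time
FROM _ranks
WHERE ranks <= 0.5;
```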
+If you want to measure which category your team falls into, as in the picture shown below, run the following SQL in Grafana.
+
+
+
+```
+-- ***** 2023 report ***** --
+-- Metric 4: Failed deployment recovery time
+with _deployments as (
+  SELECT
+    cdc.cicd_deployment_id as deployment_id,
+    max(cdc.finished_date) as deployment_finished_date
+  FROM
+    cicd_deployment_commits cdc
+    JOIN project_mapping pm on cdc.cicd_scope_id = pm.row_id and pm.`table` = 'cicd_scopes'
+  WHERE
+    pm.project_name in ($project)
+    and cdc.result = 'SUCCESS'
+    and cdc.environment = 'PRODUCTION'
+  GROUP BY 1
+  HAVING $__timeFilter(max(cdc.finished_date))
+),
+
+_incidents_for_deployments as (
+  SELECT
+    i.id as incident_id,
+    i.created_date as incident_create_date,
+    i.resolution_date as incident_resolution_date,
+    fd.deployment_id as caused_by_deployment,
+    fd.deployment_finished_date,
+    date_format(fd.deployment_finished_date,'%y/%m') as deployment_finished_month
+  FROM
+    issues i
+    left join project_issue_metrics pim on i.id = pim.id
+    join _deployments fd on pim.deployment_id = fd.deployment_id
+  WHERE
+    i.type = 'INCIDENT'
+    and $__timeFilter(i.resolution_date)
+),
+
+_recovery_time_ranks as (
+  SELECT *, percent_rank() over(order by TIMESTAMPDIFF(MINUTE, deployment_finished_date, incident_resolution_date)) as ranks
+  FROM _incidents_for_deployments
+),
+
+_median_recovery_time as (
+  SELECT max(TIMESTAMPDIFF(MINUTE, deployment_finished_date, incident_resolution_date)) as median_recovery_time
+  FROM _recovery_time_ranks
+  WHERE ranks <= 0.5
+),
+
+_metric_recovery_time_2023_report as(
+  SELECT
+    CASE
+      WHEN ('$dora_report') = '2023' THEN
+        CASE
+          WHEN median_recovery_time < 60 THEN CONCAT(round(median_recovery_time/60,1), "(elite)")
+          WHEN median_recovery_time < 24 * 60 THEN CONCAT(round(median_recovery_time/60,1), "(high)")
+          WHEN median_recovery_time < 7 * 24 * 60 THEN CONCAT(round(median_recovery_time/60,1), "(medium)")
+          WHEN median_recovery_time >= 7 * 24 * 60 THEN CONCAT(round(median_recovery_time/60,1), "(low)")
+          ELSE "N/A. Please check if you have collected incidents."
+        END
+    END AS median_recovery_time
+  FROM
+    _median_recovery_time
+)
+
+SELECT
+  median_recovery_time AS median_time_in_hour
+FROM
+  _metric_recovery_time_2023_report
+WHERE
+  ('$dora_report') = '2023'
+
+```
+
+## How to improve?
+
+- Add unit tests for all new features
+- "Shift left", start QA early and introduce more automated tests
+- Enforce code review if it's not strictly executed

diff --git a/docs/Metrics/LeadTimeForChanges.md b/docs/Metrics/LeadTimeForChanges.md
index 84ff0d0ade..f9ae70910b 100644
--- a/docs/Metrics/LeadTimeForChanges.md
+++ b/docs/Metrics/LeadTimeForChanges.md
@@ -114,7 +114,7 @@ FROM

 If you want to measure in which category your team falls as in the picture shown below, run the following SQL in Grafana.
-
+

 ```
 -- Metric 2: median lead time for changes
@@ -129,7 +129,7 @@ with _pr_stats as (
 -- get the cycle time of PRs deployed by the deployments finished in the selected period
   SELECT
     distinct pr.id,
     ppm.pr_cycle_time
   FROM
     pull_requests pr
     join project_pr_metrics ppm on ppm.id = pr.id
     join project_mapping pm on pr.base_repo_id = pm.row_id and pm.`table` = 'repos'
     join cicd_deployment_commits cdc on ppm.deployment_commit_id = cdc.id
   WHERE
-    pm.project_name in (${project:sqlstring}+'')
+    pm.project_name in (${project})
     and pr.merged_date is not null
     and ppm.pr_cycle_time is not null
     and $__timeFilter(cdc.finished_date)
@@ -149,23 +149,23 @@ _median_change_lead_time as(

 SELECT
   CASE
-    WHEN ('$benchmarks') = '2023 report' THEN
+    WHEN ('$dora_report') = '2023' THEN
       CASE
-        WHEN median_change_lead_time < 24 * 60 THEN "Less than one day(elite)"
-        WHEN median_change_lead_time < 7 * 24 * 60 THEN "Between one day and one week(high)"
-        WHEN median_change_lead_time < 30 * 24 * 60 THEN "Between one week and one month(medium)"
-        WHEN median_change_lead_time >= 30 * 24 * 60 THEN "More than one month(low)"
+        WHEN median_change_lead_time < 24 * 60 THEN CONCAT(round(median_change_lead_time/60,1), "(elite)")
+        WHEN median_change_lead_time < 7 * 24 * 60 THEN CONCAT(round(median_change_lead_time/60,1), "(high)")
+        WHEN median_change_lead_time < 30 * 24 * 60 THEN CONCAT(round(median_change_lead_time/60,1), "(medium)")
+        WHEN median_change_lead_time >= 30 * 24 * 60 THEN CONCAT(round(median_change_lead_time/60,1), "(low)")
         ELSE "N/A. Please check if you have collected deployments/pull_requests."
       END
-    WHEN ('$benchmarks') = '2021 report' THEN
+    WHEN ('$dora_report') = '2021' THEN
       CASE
-        WHEN median_change_lead_time < 60 THEN "Less than one hour(elite)"
-        WHEN median_change_lead_time < 7 * 24 * 60 THEN "Less than one week(high)"
-        WHEN median_change_lead_time < 180 * 24 * 60 THEN "Between one week and six months(medium)"
-        WHEN median_change_lead_time >= 180 * 24 * 60 THEN "More than six months(low)"
-        ELSE "N/A. Please check if you have collected deployments/incidents."
+        WHEN median_change_lead_time < 60 THEN CONCAT(round(median_change_lead_time/60,1), "(elite)")
+        WHEN median_change_lead_time < 7 * 24 * 60 THEN CONCAT(round(median_change_lead_time/60,1), "(high)")
+        WHEN median_change_lead_time < 180 * 24 * 60 THEN CONCAT(round(median_change_lead_time/60,1), "(medium)")
+        WHEN median_change_lead_time >= 180 * 24 * 60 THEN CONCAT(round(median_change_lead_time/60,1), "(low)")
+        ELSE "N/A. Please check if you have collected deployments/pull_requests."
      END
-    ELSE 'Invalid Benchmarks'
+    ELSE 'Invalid dora report'
  END AS median_change_lead_time
 FROM _median_change_lead_time
 ```

diff --git a/docs/Metrics/MTTR.md b/docs/Metrics/MTTR.md
index a5f83407e1..3511e10e28 100644
--- a/docs/Metrics/MTTR.md
+++ b/docs/Metrics/MTTR.md
@@ -106,10 +106,11 @@ FROM

 If you want to measure in which category your team falls into as in the picture shown below, run the following SQL in Grafana.
-
+

 ```
--- Metric 3: Median time to restore service
+-- ***** 2021 report ***** --
+-- Metric 4: Median time to restore service
 with _incidents as (
 -- get the incidents created within the selected time period in the top-right corner
   SELECT
     distinct i.id,
     cast(lead_time_minutes as signed) as lead_time_minutes
   FROM
     issues i
     join board_issues bi on i.id = bi.issue_id
     join boards b on bi.board_id = b.id
     join project_mapping pm on b.id = pm.row_id and pm.`table` = 'boards'
   WHERE
-    pm.project_name in (${project:sqlstring}+'')
+    pm.project_name in (${project})
     and i.type = 'INCIDENT'
     and $__timeFilter(i.created_date)
 ),
@@ -135,30 +136,31 @@ _median_mttr as(
   SELECT max(lead_time_minutes) as median_time_to_resolve
   FROM _median_mttr_ranks
   WHERE ranks <= 0.5
-)
+),

-SELECT
-  CASE
-    WHEN ('$benchmarks') = '2023 report' THEN
-      CASE
-        WHEN median_time_to_resolve < 60 THEN "Less than one hour(elite)"
-        WHEN median_time_to_resolve < 24 * 60 THEN "Less than one day(high)"
-        WHEN median_time_to_resolve < 7 * 24 * 60 THEN "Between one day and one week(medium)"
-        WHEN median_time_to_resolve >= 7 * 24 * 60 THEN "More than one week(low)"
-        ELSE "N/A. Please check if you have collected incidents."
-      END
-    WHEN ('$benchmarks') = '2021 report' THEN
+_metric_mttr_2021_report as(
+  SELECT
+    CASE
+      WHEN ('$dora_report') = '2021' THEN
       CASE
-        WHEN median_time_to_resolve < 60 THEN "Less than one hour(elite)"
-        WHEN median_time_to_resolve < 24 * 60 THEN "Less than one day(high)"
-        WHEN median_time_to_resolve < 7 * 24 * 60 THEN "Between one day and one week(medium)"
-        WHEN median_time_to_resolve >= 7 * 24 * 60 THEN "More than one week(low)"
+        WHEN median_time_to_resolve < 60 THEN CONCAT(round(median_time_to_resolve/60,1), "(elite)")
+        WHEN median_time_to_resolve < 24 * 60 THEN CONCAT(round(median_time_to_resolve/60,1), "(high)")
+        WHEN median_time_to_resolve < 7 * 24 * 60 THEN CONCAT(round(median_time_to_resolve/60,1), "(medium)")
+        WHEN median_time_to_resolve >= 7 * 24 * 60 THEN CONCAT(round(median_time_to_resolve/60,1), "(low)")
         ELSE "N/A. Please check if you have collected incidents."
-      END
-    ELSE 'Invalid Benchmarks'
+      END
    END AS median_time_to_resolve
+  FROM
+    _median_mttr
+)
+
+SELECT
+  median_time_to_resolve AS median_time_to_resolve
 FROM
-  _median_mttr
+  _metric_mttr_2021_report
+WHERE
+  ('$dora_report') = '2021'
+
 ```
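A note on the guard at the end of the rewritten query: `$dora_report` is a Grafana dashboard variable that is interpolated before the SQL reaches the database, so this panel simply returns no rows when the 2023 benchmarks are selected (sketch below; the variable values are assumed from the queries in this commit).

```
-- After interpolation the trailing guard becomes a plain string comparison:
--   WHERE ('2021') = '2021'   -- 2021 benchmarks selected: the MTTR row is returned
--   WHERE ('2023') = '2021'   -- 2023 benchmarks selected: no rows, the panel stays empty
```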
 ## How to improve?

diff --git a/static/img/Metrics/cfr-text.png b/static/img/Metrics/cfr-text.png
new file mode 100644
index 0000000000..01a65bf2aa
Binary files /dev/null and b/static/img/Metrics/cfr-text.png differ
diff --git a/static/img/Metrics/deployment-frequency-text.png b/static/img/Metrics/deployment-frequency-text.png
new file mode 100644
index 0000000000..18e43bd945
Binary files /dev/null and b/static/img/Metrics/deployment-frequency-text.png differ
diff --git a/static/img/Metrics/failed-deployment-recovery-time-text.png b/static/img/Metrics/failed-deployment-recovery-time-text.png
new file mode 100644
index 0000000000..87230ccd1e
Binary files /dev/null and b/static/img/Metrics/failed-deployment-recovery-time-text.png differ
diff --git a/static/img/Metrics/failed-deployment-recovery-time.png b/static/img/Metrics/failed-deployment-recovery-time.png
new file mode 100644
index 0000000000..61636345a7
Binary files /dev/null and b/static/img/Metrics/failed-deployment-recovery-time.png differ
diff --git a/static/img/Metrics/lead-time-for-changes-text.png b/static/img/Metrics/lead-time-for-changes-text.png
new file mode 100644
index 0000000000..029474bb96
Binary files /dev/null and b/static/img/Metrics/lead-time-for-changes-text.png differ
diff --git a/static/img/Metrics/mttr-text.png b/static/img/Metrics/mttr-text.png
new file mode 100644
index 0000000000..79253d301b
Binary files /dev/null and b/static/img/Metrics/mttr-text.png differ