This is an automated email from the ASF dual-hosted git repository.

henrik pushed a commit to branch readme-hunter-cleansing
in repository https://gitbox.apache.org/repos/asf/otava.git

commit c7516c30a1199a4c03b010d7f3beae93f9bf9a53
Author: Henrik Ingo <[email protected]>
AuthorDate: Wed Mar 19 22:16:41 2025 +0200

    Review documentation, mainly to remove "Hunter" inspired language
    
    Actually there was only one "Hunts performance regressions" reference.
    The rest are just small editorial changes.
    
    Fixes: #39
---
 README.md               | 13 ++++++-------
 docs/BASICS.md          |  5 +++--
 docs/GETTING_STARTED.md |  4 ++--
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index cddc7c4..75b7e4d 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
-Otava – Hunts Performance Regressions
-======================================
+Otava – Change Detection for Continuous Performance Engineering
+===============================================================
 
 Otava performs statistical analysis of performance test results stored
 in CSV files, PostgreSQL, BigQuery, or Graphite database. It finds change-points and notifies about
@@ -7,7 +7,7 @@ possible performance regressions.
 
 A typical use-case of otava is as follows:
 
-- A set of performance tests is scheduled repeatedly.
+- A set of performance tests is scheduled repeatedly, such as after each commit is pushed.
 - The resulting metrics of the test runs are stored in a time series database (Graphite)
    or appended to CSV files.
 - Otava is launched by a Jenkins/Cron job (or an operator) to analyze the recorded
@@ -23,10 +23,9 @@ under test or in the environment.
 Unlike in threshold-based performance monitoring systems, there is no need to setup fixed warning
 threshold levels manually for each recorded metric. The level of accepted probability of
 false-positives, as well as the minimal accepted magnitude of changes are tunable. Otava is
-also capable of comparingthe level of performance recorded in two different periods of time – which
-is useful for e.g. validating the performance of the release candidate vs the previous release of your product.
-
-Backward compatibility may be broken any time.
+also capable of comparing the level of performance recorded in two different git histories.
+This can be used for example to validate a feature branch against the main branch, perhaps
+integrated with a pull request.
 
 See the documentation in [docs/README.md](docs/README.md).
 
diff --git a/docs/BASICS.md b/docs/BASICS.md
index 8fda0fe..66f2960 100644
--- a/docs/BASICS.md
+++ b/docs/BASICS.md
@@ -174,8 +174,9 @@ $ otava regressions <test or group> --branch <branch> --since-version <version>
 $ otava regressions <test or group> --branch <branch> --since-commit <commit>
 ```
 
-Sometimes when working on a feature branch, you may run the tests multiple times,
-creating more than one data point. To ignore the previous test results, and compare
+When comparing two branches, you generally want to compare the tails of both test histories, and
+specifically a stable sequence from the end that doesn't contain any changes in itself.
+To ignore the older test results, and compare
 only the last few points on the branch with the tail of the main branch,
 use the `--last <n>` selector. E.g. to check regressions on the last run of the tests
 on the feature branch:
diff --git a/docs/GETTING_STARTED.md b/docs/GETTING_STARTED.md
index 1f1754a..ff2bbf1 100644
--- a/docs/GETTING_STARTED.md
+++ b/docs/GETTING_STARTED.md
@@ -111,8 +111,8 @@ This command prints interesting results of all runs of the test and a list of ch
 A change-point is a moment when a metric value starts to differ significantly from the values of the earlier runs and
 when the difference is statistically significant.
 
-Otava calculates the probability (P-value) that the change point was caused by chance - the closer to zero, the more
-"sure" it is about the regression or performance improvement. The smaller is the actual magnitude of the change, the
+Otava calculates the probability (P-value) that the change point was not caused by chance - the closer to zero, the more
+certain it is about the regression or performance improvement. The smaller the magnitude of the change, the
 more data points are needed to confirm the change, therefore Otava may not notice the regression immediately after the first run
 that regressed.
 
