This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/airflow.git


The following commit(s) were added to refs/heads/main by this push:
     new ea9e1ec3b5e Fix typos and improve phrasing in unit testing guide 
(#58765)
ea9e1ec3b5e is described below

commit ea9e1ec3b5e956269e9142fcb7cab445fde666f8
Author: Parker Chang <[email protected]>
AuthorDate: Sun Nov 30 07:21:06 2025 +0800

    Fix typos and improve phrasing in unit testing guide (#58765)
    
    Refined the English grammar throughout the document to make it more natural 
and professional. Fixed specific issues like "That mean" -> "This means" and 
corrected various formatting inconsistencies.
---
 contributing-docs/testing/unit_tests.rst | 751 ++++++++++++++-----------------
 1 file changed, 349 insertions(+), 402 deletions(-)

diff --git a/contributing-docs/testing/unit_tests.rst 
b/contributing-docs/testing/unit_tests.rst
index f7d1e83d668..837ddd4269d 100644
--- a/contributing-docs/testing/unit_tests.rst
+++ b/contributing-docs/testing/unit_tests.rst
@@ -20,30 +20,27 @@ Airflow Unit Tests
 
 All unit tests for Apache Airflow are run using `pytest 
<http://doc.pytest.org/en/latest/>`_.
 
-**The outline for this document in GitHub is available at top-right corner 
button (with 3-dots and 3 lines).**
+**The outline for this document in GitHub is available via the button in the 
top-right corner (icon with 3 dots and 3 lines).**
 
 Writing Unit Tests
 ------------------
 
-Follow the guidelines when writing unit tests:
+Follow these guidelines when writing unit tests:
 
-* For standard unit tests that do not require integrations with external 
systems, make sure to simulate all communications.
-* All Airflow tests are run with ``pytest``. Make sure to set your IDE/runners 
(see below) to use ``pytest`` by default.
-* For tests, use standard "asserts" of Python and ``pytest`` 
decorators/context managers for testing
-  rather than ``unittest`` ones. See `pytest docs 
<http://doc.pytest.org/en/latest/assert.html>`__ for details.
-* Use a ``pytest.mark.parametrize`` marker for tests that have variations in 
parameters.
-  See `pytest docs 
<https://docs.pytest.org/en/latest/how-to/parametrize.html>`__ for details.
-* Use with ``pytest.warn`` to capture warnings rather than ``recwarn`` 
fixture. We are aiming for 0-warning in our
-  tests, so we run Pytest with ``--disable-warnings`` but instead we have 
custom warning capture system.
+* For standard unit tests that do not require integration with external 
systems, ensure all communications are simulated (mocked).
+* All Airflow tests are run with ``pytest``. Ensure your IDE or runners (see 
below) are configured to use ``pytest`` by default.
+* For tests, use standard Python "asserts" and ``pytest`` decorators/context 
managers for testing rather than ``unittest`` ones. See `pytest docs 
<http://doc.pytest.org/en/latest/assert.html>`__ for details.
+* Use the ``pytest.mark.parametrize`` marker for tests that have variations in 
parameters. See `pytest docs 
<https://docs.pytest.org/en/latest/how-to/parametrize.html>`__ for details.
+* Use ``pytest.warns`` to capture warnings instead of the ``recwarn`` fixture. 
We aim for zero warnings in our tests; therefore, we run pytest with 
``--disable-warnings`` and utilize a custom warning capture system.
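
For illustration, a minimal test following the guidelines above might look like the
sketch below (the ``my_hook`` module and its ``fetch_status`` function are hypothetical
and stand in for whatever code talks to an external system):

.. code-block:: python

    from unittest import mock

    import pytest

    import my_hook  # hypothetical module under test, assumed to use ``requests``


    @pytest.mark.parametrize("status_code, expected", [(200, True), (404, False)])
    def test_fetch_status(status_code, expected):
        # The external HTTP call is simulated instead of being performed for real.
        with mock.patch("my_hook.requests.get") as mock_get:
            mock_get.return_value.status_code = status_code
            # Plain asserts rather than unittest-style assertEqual.
            assert my_hook.fetch_status("https://example.com") is expected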
 
 Handling warnings
 .................
 
-By default, in the new tests selected warnings are prohibited:
+By default, specific warnings are prohibited in new tests:
 
 * ``airflow.exceptions.AirflowProviderDeprecationWarning``
 
-That mean if one of this warning appear during test run and do not captured 
the test will failed.
+Any test triggering this warning without capturing it will fail.
 
 .. code-block:: console
 
@@ -51,17 +48,13 @@ That mean if one of this warning appear during test run and 
do not captured the
     ...
     FAILED tests/models/test_dag.py::TestDag::test_clear_dag[None-None] - 
airflow.exceptions.RemovedInAirflow3Warning: Calling `DAG.create_dagrun()` 
without an explicit data interval is deprecated
 
-**NOTE:** As of Airflow 3.0 the test file ``tests/models/test_dag.py`` has 
been relocated to ``airflow-core/tests/unit/models/test_dag.py``.
+**NOTE:** As of Airflow 3.0, the test file ``tests/models/test_dag.py`` has 
been relocated to ``airflow-core/tests/unit/models/test_dag.py``.
 
-For avoid this make sure:
+To avoid this, ensure that:
 
-* You do not use deprecated method, classes and arguments in your test cases;
-* Your change do not affect other component, e.g. deprecate one part of 
Airflow Core or one of Community Supported
-  Providers might be a reason for new deprecation warnings. In this case 
changes should be also made in all affected
-  components in backward compatible way.
-* You use ``pytest.warn`` (see `pytest doc 
<https://docs.pytest.org/en/latest/how-to/capture-warnings.html#warns>`__
-  context manager for catch warning during the test deprecated components.
-  Yes we still need to test legacy/deprecated stuff until it completely 
removed)
+* You do not use deprecated methods, classes, or arguments in your test cases.
+* Your changes do not affect other components. For example, deprecating a part 
of Airflow Core or a Community Supported Provider might trigger new deprecation 
warnings. In this case, changes should also be made in all affected components 
in a backward-compatible way.
+* You use the ``pytest.warns`` context manager (see the `pytest doc 
<https://docs.pytest.org/en/latest/how-to/capture-warnings.html#warns>`__) to catch 
warnings when testing deprecated components. (Yes, we still need to test 
legacy/deprecated features until they are completely removed.)
 
 .. code-block:: python
 
@@ -76,8 +69,8 @@ Mocking sleep calls
 ...................
 
 To speed up test execution and avoid unnecessary delays, you should mock sleep 
calls in tests or set the sleep time to 0.
-If the method you're testing includes a call to ``time.sleep()`` or 
``asyncio.sleep()``, mock these calls instead.
-How to mock ``sleep()`` depends on how it's imported:
+If the method you are testing includes a call to ``time.sleep()`` or 
``asyncio.sleep()``, mock these calls.
+How to mock ``sleep()`` depends on how it is imported:
 
 * If ``time.sleep`` is imported as ``import time``:
 
@@ -95,16 +88,16 @@ How to mock ``sleep()`` depends on how it's imported:
     def test_your_test():
         pass
 
-For methods that use ``asyncio`` for async sleep calls you can proceed 
identically.
+For methods that use ``asyncio`` for async sleep calls, the process is 
identical.
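
For example, an ``asyncio.sleep()`` call could be patched like this (a sketch only:
``my_module`` is a hypothetical module that does ``import asyncio`` and awaits
``asyncio.sleep(...)`` internally, and the ``pytest.mark.asyncio`` marker assumes the
``pytest-asyncio`` plugin is available):

.. code-block:: python

    from unittest import mock

    import pytest


    @pytest.mark.asyncio
    @mock.patch("my_module.asyncio.sleep", new_callable=mock.AsyncMock)
    async def test_async_operation(mock_sleep):
        # Exercise the async code under test here; the patched sleep now
        # returns immediately instead of actually waiting.
        ...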
 
-**NOTE:** There are certain cases in which the method functioning correctly 
depends on actual time passing.
-In those cases the test with the mock will fail. Then it's okay to leave it 
unmocked.
+**NOTE:** There are certain cases where the method's correct functioning 
depends on actual time passing.
+In those cases, the test with the mock will fail. It is acceptable to leave it 
unmocked in such scenarios.
 Use your judgment and prefer mocking whenever possible.
 
 Controlling date and time
 .........................
 
-Some features rely on the current date and time, e.g a function that generates 
timestamps, or passing of time.
+Some features rely on the current date and time, e.g., a function that 
generates timestamps or measures the passing of time.
 To test such features reliably, we use the ``time-machine`` library to control 
the system's time:
 
 .. code-block:: python
@@ -122,15 +115,15 @@ If you want time to progress from a fixed starting point, 
you can set ``tick=Tru
 Airflow configuration for unit tests
 ------------------------------------
 
-Some of the unit tests require special configuration set as the ``default``. 
This is done automatically by
-adding ``AIRFLOW__CORE__UNIT_TEST_MODE=True`` to the environment variables in 
Pytest auto-used
-fixture. This in turn makes Airflow load test configuration from the file
+Some unit tests require special configuration set as the ``default``. This is 
done automatically by
+adding ``AIRFLOW__CORE__UNIT_TEST_MODE=True`` to the environment variables in a pytest ``autouse``
+fixture. This, in turn, makes Airflow load test configuration from the file
 ``airflow/config_templates/unit_tests.cfg``. Test configuration from there 
replaces the original
-defaults from ``airflow/config_templates/config.yml``. If you want to add some 
test-only configuration,
-as default for all tests you should add the value to this file.
+defaults from ``airflow/config_templates/config.yml``. If you want to add some 
test-only configuration
+as a default for all tests, you should add the value to this file.
 
-You can also - of course - override the values in individual test by patching 
environment variables following
-the usual ``AIRFLOW__SECTION__KEY`` pattern or ``conf_vars`` context manager.
+You can also override the values in individual tests by patching environment 
variables following
+the usual ``AIRFLOW__SECTION__KEY`` pattern or using the ``conf_vars`` context 
manager.
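
For example (a sketch; the exact import path of ``conf_vars`` may differ between
Airflow versions, so treat it as an assumption):

.. code-block:: python

    import os
    from unittest import mock

    from tests_common.test_utils.config import conf_vars  # import path is an assumption


    def test_with_overridden_config():
        # Override a single option via the usual AIRFLOW__SECTION__KEY pattern...
        with mock.patch.dict(os.environ, {"AIRFLOW__CORE__LOAD_EXAMPLES": "False"}):
            ...
        # ...or via the conf_vars context manager.
        with conf_vars({("core", "load_examples"): "False"}):
            ...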
 
 Airflow unit test types
 -----------------------
@@ -138,172 +131,146 @@ Airflow unit test types
 Airflow tests in the CI environment are split into several test types. You can 
narrow down which
 test types you want to use in various ``breeze testing`` sub-commands in three 
ways:
 
-* via specifying the ``--test-type`` when you run single test type in ``breeze 
testing core-tests``.
-  ``breeze testing providers-tests`` ``breeze testing integration-tests`` 
commands
-* via specifying space separating list of test types via 
``--parallel-test-types`` or
-  ``--excluded-parallel-test-types`` options when you run tests in parallel 
(in several testing commands)
+* By specifying the ``--test-type`` when running a single test type in 
``breeze testing core-tests``, ``breeze testing providers-tests``, or ``breeze 
testing integration-tests`` commands.
+* By specifying a space-separated list of test types via the 
``--parallel-test-types`` or ``--excluded-parallel-test-types`` options when 
running tests in parallel.
 
-Those test types are defined:
+The defined test types are:
 
-* ``Always`` - those are tests that should be always executed (always 
sub-folder)
-* ``API`` - Tests for the Airflow API (api, api_internal, api_fastapi 
sub-folders)
-* ``CLI`` - Tests for the Airflow CLI (cli folder)
-* ``Core`` - for the core Airflow functionality (core, executors, jobs, 
models, ti_deps, utils sub-folders)
-* ``Operators`` - tests for the operators (operators folder)
-* ``WWW`` - Tests for the Airflow webserver (www folder)
-* ``Providers`` - Tests for all Providers of Airflow (providers folder)
-* ``Other`` - all other tests remaining after the above tests are selected
+* ``Always`` - Tests that should always be executed (always sub-folder).
+* ``API`` - Tests for the Airflow API (api, api_internal, api_fastapi 
sub-folders).
+* ``CLI`` - Tests for the Airflow CLI (cli folder).
+* ``Core`` - Tests for core Airflow functionality (core, executors, jobs, 
models, ti_deps, utils sub-folders).
+* ``Operators`` - Tests for operators (operators folder).
+* ``WWW`` - Tests for the Airflow webserver (www folder).
+* ``Providers`` - Tests for all Airflow Providers (providers folder).
+* ``Other`` - All other tests remaining after the above tests are selected.
 
-We have also tests that run "all" tests (so they do not look at the folder, 
but at the ``pytest`` markers
-the tests are marked with to run with some filters applied.
+We also have types that run "all" tests (ignoring folders, but looking at 
``pytest`` markers with filters applied):
 
-* ``All-Postgres`` - tests that require Postgres database. They are only run 
when backend is Postgres (``backend("postgres")`` marker)
-* ``All-MySQL`` - tests that require MySQL database. They are only run when 
backend is MySQL (``backend("mysql")`` marker)
-* ``All-Quarantined`` - tests that are flaky and need to be fixed 
(``quarantined`` marker)
-* ``All`` - all tests are run (this is the default)
+* ``All-Postgres`` - Tests that require a Postgres database. Only run when the 
backend is Postgres (``backend("postgres")`` marker).
+* ``All-MySQL`` - Tests that require a MySQL database. Only run when the 
backend is MySQL (``backend("mysql")`` marker).
+* ``All-Quarantined`` - Tests that are flaky and need to be fixed 
(``quarantined`` marker).
+* ``All`` - All tests are run (this is the default).
 
+We also have ``Integration`` tests that run with external software via the 
``--integration`` flag in the ``breeze`` environment (via ``breeze testing 
integration-tests``).
 
-We also have ``Integration`` tests that are running Integration tests with 
external software that is run
-via ``--integration`` flag in ``breeze`` environment - via ``breeze testing 
integration-tests``.
+* ``Integration`` - Tests that require external integration images running in 
docker-compose.
 
-* ``Integration`` - tests that require external integration images running in 
docker-compose
+This structure exists for two reasons:
 
-This is done for two reasons:
-
-1. in order to selectively run only subset of the test types for some PRs
-2. in order to allow efficient parallel test execution of the tests on 
Self-Hosted runners
-
-For case 2. We can utilize memory and CPUs available on both CI and local 
development machines to run
-test in parallel, but we cannot use pytest xdist plugin for that - we need to 
split the tests into test
-types and run each test type with their own instance of database and separate 
container where the tests
-in each type are run with exclusive access to their database and each test 
within test type runs sequentially.
-By the nature of those tests - they rely on shared databases - and they 
update/reset/cleanup data in the
-databases while they are executing.
+1. To allow selectively running only a subset of test types for some PRs.
+2. To allow efficient parallel execution of tests on Self-Hosted runners.
 
+For case 2: We can utilize the memory and CPUs available on both CI and local 
development machines to run
+tests in parallel. However, we cannot use the pytest xdist plugin for this. 
Instead, we split the tests into test
+types and run each type with its own database instance and separate container. 
The tests in each type run with exclusive access to their database, and tests 
within a type run sequentially.
+This is necessary because these tests rely on shared databases and 
update/reset/cleanup data during execution.
 
 DB and non-DB tests
 -------------------
 
-There are two kinds of unit tests in Airflow - DB and non-DB tests. This 
chapter describe the differences
-between those two types.
+There are two kinds of unit tests in Airflow: DB and non-DB tests. This 
chapter describes the differences
+between these two types.
 
 Airflow non-DB tests
 ....................
 
-For the Non-DB tests, they are run once for each tested Python version with 
``none`` database backend (which
-causes any database access to fail. Those tests are run with ``pytest-xdist`` 
plugin in parallel which
-means that we can efficiently utilised multi-processor machines (including 
``self-hosted`` runners with
-8 CPUS we have to run the tests with maximum parallelism).
+Non-DB tests are run once for each tested Python version with the ``none`` 
database backend (which
+causes any database access to fail). These tests are run with the 
``pytest-xdist`` plugin in parallel, which
+means we can efficiently utilize multi-processor machines (including 
``self-hosted`` runners with
+8 CPUs, where we run tests with maximum parallelism).
 
-It's usually straightforward to run those tests in local virtualenv because 
they do not require any
-setup or running database. They also run much faster than DB tests. You can 
run them with ``pytest`` command
-or with ``breeze`` that has all the dependencies needed to run all tests 
automatically installed. Of course
-you can also select just specific test or folder or module for the Pytest to 
collect/run tests from there,
-the example below shows how to run all tests, parallelizing them with 
``pytest-xdist``
-(by specifying ``tests`` folder):
+It is usually straightforward to run these tests in a local virtualenv because 
they do not require any
+database setup. They also run much faster than DB tests. You can run them with 
the ``pytest`` command
+or with ``breeze`` (which has all dependencies automatically installed). You 
can also select specific tests, folders, or modules for Pytest to collect/run.
+The example below shows how to run all tests, parallelizing them with 
``pytest-xdist`` (by specifying the ``tests`` folder):
 
 .. code-block:: bash
 
     pytest airflow-core/tests --skip-db-tests -n auto
 
-
 The ``--skip-db-tests`` flag will only run tests that are not marked as DB 
tests.
 
-
-You can also run ``breeze`` command to run all the tests (they will run in a 
separate container,
-the selected python version and without access to any database). Adding 
``--use-xdist`` flag will run all
-tests in parallel using ``pytest-xdist`` plugin.
+You can also use the ``breeze`` command to run all the tests (they will run in 
a separate container,
+with the selected Python version and without access to any database). Adding 
the ``--use-xdist`` flag will run all
+tests in parallel using the ``pytest-xdist`` plugin.
 
 You can run parallel commands via ``breeze testing core-tests`` or ``breeze 
testing providers-tests``
-- by adding the parallel flags:
+by adding the parallel flags:
 
 .. code-block:: bash
 
     breeze testing core-tests --skip-db-tests --backend none --use-xdist
 
-You can pass ``--parallel-test-type`` list of test types to execute or 
``--exclude--parallel-test-types``
-to exclude them from the default set:.
+You can pass a list of test types to execute via ``--parallel-test-types`` or 
exclude them via ``--excluded-parallel-test-types``:
 
 .. code-block:: bash
 
     breeze testing providers-tests --run-in-parallel --skip-db-tests --backend 
none --parallel-test-types "Providers[google] Providers[amazon]"
 
-Also you can enter interactive shell with ``breeze`` and run tests from there 
if you want to iterate
-with the tests. Source files in ``breeze`` are mounted as volumes so you can 
modify them locally and
-rerun in Breeze as you will (``-n auto`` will parallelize tests using 
``pytest-xdist`` plugin):
+Additionally, you can enter an interactive shell with ``breeze`` and run tests 
from there to iterate. Source files in ``breeze`` are mounted as volumes, so 
you can modify them locally and
+rerun in Breeze as needed (``-n auto`` will parallelize tests using the 
``pytest-xdist`` plugin):
 
 .. code-block:: bash
 
     breeze shell --backend none --python 3.10
     > pytest airflow-core/tests --skip-db-tests -n auto
 
-
 Airflow DB tests
 ................
 
-Some of the tests of Airflow require a database to connect to in order to run. 
Those tests store and read data
-from Airflow DB using Airflow's core code and it's crucial to run the tests 
against all real databases
-that Airflow supports in order to check if the SQLAlchemy queries are correct 
and if the database schema is
-correct.
+Some Airflow tests require a database connection. These tests store and read 
data
+from the Airflow DB using Airflow's core code. It is crucial to run these 
tests against all real databases
+that Airflow supports to check if SQLAlchemy queries and the database schema 
are correct.
 
-Those tests should be marked with ``@pytest.mark.db`` decorator on one of the 
levels:
+These tests should be marked with the ``@pytest.mark.db_test`` decorator at 
one of the following levels:
 
-* test method can be marked with ``@pytest.mark.db`` decorator
-* test class can be marked with ``@pytest.mark.db`` decorator
-* test module can be marked with ``pytestmark = pytest.mark.db`` at the top 
level of the module
+* Test method level
+* Test class level
+* Module level (using ``pytestmark = pytest.mark.db_test`` at the top of the 
module)
 
-For the DB tests, they are run against the multiple databases Airflow support, 
multiple versions of those
-and multiple Python versions it supports. In order to save time for testing 
not all combinations are
-tested but enough various combinations are tested to detect potential problems.
+DB tests are run against multiple supported databases, database versions, and 
Python versions. To save time, not all combinations are
+tested, but enough variations are covered to detect potential problems.
 
-By default, the DB tests will use sqlite and the "airflow.db" database created 
and populated in the
-``${AIRFLOW_HOME}`` folder. You do not need to do anything to get the database 
created and initialized,
-but if you need to clean and restart the db, you can run tests with 
``-with-db-init`` flag - then the
-database will be re-initialized. You can also set 
``AIRFLOW__DATABASE__SQL_ALCHEMY_CONN`` environment
-variable to point to supported database (Postgres, MySQL, etc.) and the tests 
will use that database. You
+By default, DB tests use SQLite and the "airflow.db" database created in the 
``${AIRFLOW_HOME}`` folder. You do not need to do anything to create or 
initialize the database.
+However, if you need to clean and restart the DB, you can run tests with the 
``--with-db-init`` flag to re-initialize it. You can also set the 
``AIRFLOW__DATABASE__SQL_ALCHEMY_CONN`` environment
+variable to point to a supported database (Postgres, MySQL, etc.), and the 
tests will use that database. You
 might need to run ``airflow db reset`` to initialize the database in that case.
 
-The "non-DB" tests are perfectly fine to run when you have database around but 
if you want to just run
-DB tests (as happens in our CI for the ``Database`` runs) you can use 
``--run-db-tests-only`` flag to filter
-out non-DB tests (and obviously you can specify not only on the whole 
``tests`` directory but on any
-folders/files/tests selection, ``pytest`` supports).
+It is perfectly fine to run "non-DB" tests when you have a database 
configured. However, if you want to run *only*
+DB tests (as done in our CI for ``Database`` runs), you can use the 
``--run-db-tests-only`` flag to filter
+out non-DB tests. (You can specify the whole ``tests`` directory or any 
specific folder/file selection).
 
 .. code-block:: bash
 
     pytest airflow-core/tests --run-db-tests-only
 
-You can also run DB tests with ``breeze`` dockerized environment. You can 
choose backend to use with
-``--backend`` flag. The default is ``sqlite`` but you can also use others such 
as ``postgres`` or ``mysql``.
-You can also select backend version and Python version to use. You can specify 
the ``test-type`` to run -
-breeze will list the test types you can run with ``--help`` and provide 
auto-complete for them. Example
-below runs the ``Core`` tests with ``postgres`` backend and ``3.10`` Python 
version
+You can also run DB tests within the ``breeze`` dockerized environment. You 
can choose the backend with the
+``--backend`` flag. The default is ``sqlite``, but you can also use 
``postgres`` or ``mysql``.
+You can also select the backend version and Python version. Breeze will list 
the available test types via ``--help`` and provide auto-complete.
+The example below runs ``Core`` tests with the ``postgres`` backend and Python ``3.10``.
 
 You can also run the commands via ``breeze testing core-tests`` or ``breeze 
testing providers-tests``
-- by adding the parallel flags manually:
+by adding the parallel flags manually:
 
 .. code-block:: bash
 
     breeze testing core-tests --run-db-tests-only --backend postgres 
--run-in-parallel
 
-You can pass ``--parallel-test-type`` list of test types to execute or 
``--exclude--parallel-test-types``
-to exclude them from the default set:.
+You can pass a list of test types to execute via ``--parallel-test-types`` or 
exclude them via ``--excluded-parallel-test-types``:
 
 .. code-block:: bash
 
     breeze testing providers-tests --run-in-parallel --run-db-tests-only 
--parallel-test-types "Providers[google] Providers[amazon]"
 
-
-Also - if you want to iterate with the tests you can enter interactive shell 
and run the tests iteratively -
-either by package/module/test or by test type - whatever ``pytest`` supports.
+If you want to iterate on tests, you can enter the interactive shell and run 
tests iteratively, either by package/module/test or by test type, whatever 
``pytest`` supports.
 
 .. code-block:: bash
 
     breeze shell --backend postgres --python 3.10
     > pytest airflow-core/tests --run-db-tests-only
 
-As explained before, you cannot run DB tests in parallel using 
``pytest-xdist`` plugin, but ``breeze`` has
-support to split all the tests into test-types to run in separate containers 
and with separate databases
-and you can run the tests using ``--run-in-parallel`` flag.
+As explained before, you cannot run DB tests in parallel using the 
``pytest-xdist`` plugin. However, ``breeze`` supports splitting all tests into 
test-types to run in separate containers with separate databases using the 
``--run-in-parallel`` flag.
 
 .. code-block:: bash
 
@@ -312,16 +279,14 @@ and you can run the tests using ``--run-in-parallel`` 
flag.
 Examples of marking test as DB test
 ...................................
 
-You can apply the marker on method/function/class level with 
``@pytest.mark.db_test`` decorator or
-at the module level with ``pytestmark = pytest.mark.db_test`` at the top level 
of the module.
+You can apply the marker on the method/function/class level with the 
``@pytest.mark.db_test`` decorator or
+at the module level with ``pytestmark = pytest.mark.db_test`` at the top of 
the module.
 
-It's up to the author to decide whether to mark the test, class, or module as 
"DB-test" - generally the
-less DB tests - the better and if we can clearly separate the parts that are 
DB from non-DB, we should,
-but also it's ok if few tests are marked as DB tests when they are not but 
they are part of the class
-or module that is "mostly-DB".
+It is up to the author to decide whether to mark the test, class, or module as 
a "DB-test". Generally, the fewer DB tests, the better. If we can clearly 
separate DB parts from non-DB parts, we should.
+However, it is acceptable if a few non-DB tests are marked as DB tests because 
they are part of a class or module that is "mostly-DB".
 
-Sometimes, when your class can be clearly split to DB and non-DB parts, it's 
better to split the class
-into two separate classes and mark only the DB class as DB test.
+Sometimes, when a class can be clearly split into DB and non-DB parts, it is 
better to split the class
+into two separate classes and mark only the DB class as a DB test.
 
 Method level:
 
@@ -361,22 +326,14 @@ Module level (at the top of the module):
 Best practices for DB tests
 ...........................
 
-Usually when you add new tests you add tests "similar" to the ones that are 
already there. In most cases,
-therefore you do not have to worry about the test type - it will be 
automatically selected for you by the
-fact that the Test Class that you add the tests or the whole module will be 
marked with ``db_test`` marker.
+Usually, when adding new tests, you create tests similar to the existing ones. 
In most cases,
+you do not have to worry about the test type. It will be automatically 
selected for you because the Test Class or module you are adding to is already 
marked with the ``db_test`` marker.
 
-You should strive to write "pure" non-db unit tests (i.e. DB tests) but 
sometimes it's just better to plug-in
-the existing framework of DagRuns, Dags, Connections and Variables to use the 
Database directly rather
-than having to mock the DB access for example. It's up to you to decide.
+You should strive to write "pure" non-DB unit tests. However, sometimes it is 
better to plug into the existing framework of DagRuns, Dags, Connections, and 
Variables to use the Database directly rather than mocking all DB access. This 
decision is up to you.
 
-However, if you choose to write DB tests you have to make sure you add the 
``db_test`` marker - either to
-the test method, class (with decorator) or whole module (with pytestmark at 
the top level of the module).
+However, if you choose to write DB tests, you must ensure the ``db_test`` 
marker is added either to the test method, class (with decorator), or whole 
module (with pytestmark).
 
-In most cases when you add tests to existing modules or classes, you follow 
similar tests so you do not
-have to do anything, but in some cases you need to decide if your test should 
be marked as DB test or
-whether it should be changed to not use the database at all.
-
-If your test accesses the database but is not marked properly the Non-DB test 
in CI will fail with this message:
+If your test accesses the database but is not marked properly, the Non-DB test 
run in CI will fail with this message:
 
 .. code ::
 
@@ -384,31 +341,31 @@ If your test accesses the database but is not marked 
properly the Non-DB test in
     Either make sure your test does not use database or mark your test with 
`@pytest.mark.db_test`.
 
 
-How to verify if DB test is correctly classified
-................................................
+How to verify if a DB test is correctly classified
+..................................................
 
-If you want to see if your DB test is correctly classified, you can run the 
test or group
-of tests with ``--skip-db-tests`` flag.
+If you want to verify if your DB test is correctly classified, you can run the 
test or group
+of tests with the ``--skip-db-tests`` flag.
 
-You can run the all (or subset of) test types if you want to make sure all of 
the problems are fixed
+You can run all (or a subset of) test types to ensure all problems are fixed:
 
   .. code-block:: bash
 
      breeze testing core-tests --skip-db-tests tests/your_test.py
 
-For the whole test suite you can run:
+For the whole test suite:
 
   .. code-block:: bash
 
      breeze testing core-tests --skip-db-tests
 
-For selected test types (example - the tests will run for 
``Providers/API/CLI`` code only:
+For selected test types (e.g., only ``Providers/API/CLI`` code):
 
   .. code-block:: bash
 
      breeze testing providers-tests --skip-db-tests --parallel-test-types 
"Providers[google] Providers[amazon]"
 
-You can also enter interactive shell with ``--skip-db-tests`` flag and run the 
tests iteratively
+You can also enter the interactive shell with the ``--skip-db-tests`` flag and 
run tests iteratively:
 
   .. code-block:: bash
 
@@ -419,36 +376,33 @@ You can also enter interactive shell with 
``--skip-db-tests`` flag and run the t
 How to make your test not depend on DB
 ......................................
 
-This is tricky and there is no single solution. Sometimes we can mock-out the 
methods that require
-DB access or objects that normally require database. Sometimes we can decide 
to test just single method
-of class rather than more complex set of steps. Generally speaking it's good 
to have as many "pure"
-unit tests that require no DB as possible comparing to DB tests. They are 
usually faster an more
-reliable as well.
-
+This is tricky and there is no single solution. Sometimes we can mock out 
methods that require
+DB access or objects that normally require a database. Sometimes we can decide 
to test just a single method
+of a class rather than a complex set of steps. Generally speaking, it is 
better to have as many "pure"
+unit tests (requiring no DB) as possible compared to DB tests. They are 
usually faster and more reliable.
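
One common pattern is to mock out the DB-bound call itself, for example (a sketch;
the test body is a placeholder):

.. code-block:: python

    from unittest import mock


    @mock.patch("airflow.models.Variable.get", return_value="mocked-value")
    def test_logic_without_db(mock_variable_get):
        # The code under test can call Variable.get() freely; no database is touched.
        ...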
 
 Special cases
 .............
 
 There are some tricky test cases that require special handling. Here are some 
of them:
 
-
 Parameterized tests stability
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The parameterized tests require stable order of parameters if they are run via 
``xdist`` - because the parameterized
-tests are distributed among multiple processes and handled separately. In some 
cases the parameterized tests
-have undefined / random order (or parameters are not hashable - for example 
set of enums). In such cases
-the xdist execution of the tests will fail and you will get an error 
mentioning "Known Limitations of xdist".
-You can see details about the limitation `here 
<https://pytest-xdist.readthedocs.io/en/latest/known-limitations.html>`_
+Parameterized tests require a stable order of parameters if they are run via 
``xdist``. This is because parameterized
+tests are distributed among multiple processes and handled separately. In some 
cases, parameterized tests
+have an undefined/random order (or parameters are not hashable, e.g., a set of 
enums). In such cases,
+the xdist execution will fail, and you will get an error mentioning "Known 
Limitations of xdist".
+You can see details about the limitation `here 
<https://pytest-xdist.readthedocs.io/en/latest/known-limitations.html>`_.
 
 The error in this case will look similar to:
 
 .. code-block::
 
-   Different tests were collected between gw0 and gw7. The difference is:
+    Different tests were collected between gw0 and gw7. The difference is:
 
 
-The fix for that is to sort the parameters in ``parametrize``. For example 
instead of this:
+The fix is to sort the parameters in ``parametrize``. For example, instead of 
this:
 
 .. code-block:: python
 
@@ -456,7 +410,7 @@ The fix for that is to sort the parameters in 
``parametrize``. For example inste
    def test_method(): ...
 
 
-do that:
+do this:
 
 
 .. code-block:: python
@@ -464,8 +418,8 @@ do that:
    @pytest.mark.parametrize("status", sorted(ALL_STATES))
    def test_method(): ...
 
-Similarly if your parameters are defined as result of ``utcnow()`` or other 
dynamic method - you should
-avoid that, or assign unique IDs for those parametrized tests. Instead of this:
+Similarly, if your parameters are defined as a result of ``utcnow()`` or 
another dynamic method, you should
+avoid that or assign unique IDs for those parametrized tests. Instead of this:
 
 .. code-block:: python
 
@@ -515,17 +469,15 @@ Problems with Non-DB test collection
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Sometimes, even if the whole module is marked as ``@pytest.mark.db_test``, 
parsing the file and collecting
-tests will fail when ``--skip-db-tests`` is used because some of the imports 
or objects created in the
-module will read the database.
-
-Usually what helps is to move such initialization code to inside the tests or 
pytest fixtures (and pass
-objects needed by tests as fixtures rather than importing them from the 
module). Similarly you might
-use DB - bound objects (like Connection) in your ``parametrize`` specification 
- this will also fail pytest
-collection. Move creation of such objects to inside the tests:
+tests will fail when ``--skip-db-tests`` is used because some imports or 
objects created in the
+module read from the database.
 
-Moving object creation from top-level to inside tests. This code will break 
collection of tests even if
-the test is marked as DB test:
+Moving such initialization code to inside the tests or pytest fixtures (and 
passing objects needed by tests as fixtures rather than importing them from the 
module) usually helps. Similarly, you might
+use DB-bound objects (like Connection) in your ``parametrize`` specification; this will also fail pytest
+collection. Move the creation of such objects to inside the tests.
 
+Example: Moving object creation from top-level to inside tests. This code will 
break test collection even if
+the test is marked as a DB test:
 
 .. code-block:: python
 
@@ -570,9 +522,8 @@ the test is marked as DB test:
        def test_from_json(self, input, request_class): ...
 
 
-Instead - this will not break collection. The ``TaskInstance`` is not 
initialized when the module is parsed,
-it will only be initialized when the test gets executed because we moved 
initialization of it from
-top level / parametrize to inside the test:
+Instead, do this (which will not break collection). The ``TaskInstance`` is 
not initialized when the module is parsed;
+it will only be initialized when the test gets executed:
 
 .. code-block:: python
 
@@ -625,8 +576,8 @@ top level / parametrize to inside the test:
 
 Sometimes it is difficult to rewrite the tests, so you might add conditional 
handling and mock out some
 database-bound methods or objects to avoid hitting the database during test 
collection. The code below
-will hit the Database while parsing the tests, because this is what 
``Variable.setdefault`` does when
-parametrize specification is being parsed - even if test is marked as DB test.
+will hit the Database while parsing the tests because this is what ``Variable.setdefault`` does when
+the parametrize specification is parsed, even if the test is marked as a DB test.
 
 
 .. code-block:: python
@@ -666,7 +617,7 @@ parametrize specification is being parsed - even if test is 
marked as DB test.
     def test_rendered_task_detail_env_secret(patch_app, admin_client, request, 
env, expected): ...
 
 
-You can make the code conditional and mock out the ``Variable`` to avoid 
hitting the database.
+You can make the code conditional and mock out ``Variable`` to avoid hitting 
the database.
 
 
 .. code-block:: python
@@ -710,7 +661,7 @@ You can make the code conditional and mock out the 
``Variable`` to avoid hitting
     )
     def test_rendered_task_detail_env_secret(patch_app, admin_client, request, 
env, expected): ...
 
-You can also use fixture to create object that needs database just like this.
+You can also use a fixture to create an object that needs the database.
 
 
 .. code-block:: python
@@ -747,7 +698,7 @@ Running Unit tests
 Running Unit Tests from PyCharm IDE
 ...................................
 
-To run unit tests from the PyCharm IDE, create the `local virtualenv 
<../07_local_virtualenv.rst>`_,
+To run unit tests from the PyCharm IDE, create a `local virtualenv 
<../07_local_virtualenv.rst>`_,
 select it as the default project's environment, then configure your test 
runner:
 
 .. image:: images/pycharm/configure_test_runner.png
@@ -760,21 +711,21 @@ and run unit tests as follows:
     :align: center
     :alt: Running unit tests
 
-**NOTE:** You can run the unit tests in the standalone local virtualenv
+**NOTE:** You can run unit tests in the standalone local virtualenv
 (with no Breeze installed) if they do not have dependencies such as
 Postgres/MySQL/Hadoop/etc.
 
 Running Unit Tests from PyCharm IDE using Breeze
 ................................................
 
-Ideally, all unit tests should be run using the standardized Breeze 
environment.  While not
+Ideally, all unit tests should be run using the standardized Breeze 
environment. While not
 as convenient as the one-click "play button" in PyCharm, the IDE can be 
configured to do
 this in two clicks.
 
 1. Add Breeze as an "External Tool":
 
-   a. From the settings menu, navigate to ``Tools > External Tools``
-   b. Click the little plus symbol to open the ``Create Tool`` popup and fill 
it out:
+   a. From the settings menu, navigate to ``Tools > External Tools``.
+   b. Click the plus symbol to open the ``Create Tool`` popup and fill it out:
 
 .. image:: images/pycharm/pycharm_create_tool.png
     :align: center
@@ -782,34 +733,34 @@ this in two clicks.
 
 2. Add the tool to the context menu:
 
-   a. From the settings menu, navigate to ``Appearance & Behavior > Menus & 
Toolbars > Project View Popup Menu``
-   b. Click on the list of entries where you would like it to be added.  Right 
above or below ``Project View Popup Menu Run Group`` may be a good choice, you 
can drag and drop this list to rearrange the placement later as desired.
-   c. Click the little plus at the top of the popup window
-   d. Find your ``External Tool`` in the new ``Choose Actions to Add`` popup 
and click OK.  If you followed the image above, it will be at ``External Tools 
> External Tools > Breeze``
+   a. From the settings menu, navigate to ``Appearance & Behavior > Menus & 
Toolbars > Project View Popup Menu``.
+   b. Click on the list of entries where you would like it to be added. Right 
above or below ``Project View Popup Menu Run Group`` may be a good choice; you 
can drag and drop this list to rearrange the placement later.
+   c. Click the plus symbol at the top of the popup window.
+   d. Find your ``External Tool`` in the new ``Choose Actions to Add`` popup 
and click OK. If you followed the image above, it will be at ``External Tools > 
External Tools > Breeze``.
 
-**Note:** That only adds the option to that one menu.  If you would like to 
add it to the context menu
+**Note:** This only adds the option to that specific menu. If you would like 
to add it to the context menu
 when right-clicking on a tab at the top of the editor, for example, follow the 
steps above again
-and place it in the ``Editor Tab Popup Menu``
+and place it in the ``Editor Tab Popup Menu``.
 
 .. image:: images/pycharm/pycharm_add_to_context.png
     :align: center
     :alt: Installing Python extension
 
-3. To run tests in Breeze, right click on the file or directory in the 
``Project View`` and click Breeze.
+3. To run tests in Breeze, right-click on the file or directory in the 
``Project View`` and click Breeze.
 
 
 Running Unit Tests from Visual Studio Code
 ..........................................
 
-To run unit tests from the Visual Studio Code:
+To run unit tests from Visual Studio Code:
 
-1. Using the ``Extensions`` view install Python extension, reload if required
+1. Using the ``Extensions`` view, install the Python extension. Reload if 
required.
 
 .. image:: images/vscode_install_python_extension.png
     :align: center
     :alt: Installing Python extension
 
-2. Using the ``Testing`` view click on ``Configure Python Tests`` and select 
``pytest`` framework
+2. Using the ``Testing`` view, click on ``Configure Python Tests`` and select 
the ``pytest`` framework.
 
 .. image:: images/vscode_configure_python_tests.png
     :align: center
@@ -819,13 +770,13 @@ To run unit tests from the Visual Studio Code:
     :align: center
     :alt: Selecting pytest framework
 
-3. Open ``/.vscode/settings.json`` and add ``"python.testing.pytestArgs": 
["tests"]`` to enable tests discovery
+3. Open ``/.vscode/settings.json`` and add ``"python.testing.pytestArgs": 
["tests"]`` to enable test discovery.
 
 .. image:: images/vscode_add_pytest_settings.png
     :align: center
     :alt: Enabling tests discovery
 
-4. Now you are able to run and debug tests from both the ``Testing`` view and 
test files
+4. Now you are able to run and debug tests from both the ``Testing`` view and 
test files.
 
 .. image:: images/vscode_run_tests.png
     :align: center
@@ -834,12 +785,12 @@ To run unit tests from the Visual Studio Code:
 Running Unit Tests in local virtualenv
 ......................................
 
-To run unit, integration, and system tests from the Breeze and your
+To run unit, integration, and system tests from Breeze and your
 virtualenv, you can use the `pytest <http://doc.pytest.org/en/latest/>`_ 
framework.
 
-Custom ``pytest`` plugin runs ``airflow db init`` and ``airflow db reset`` the 
first
+The custom ``pytest`` plugin runs ``airflow db init`` and ``airflow db reset`` 
the first
 time you launch them. So, you can count on the database being initialized. 
Currently,
-when you run tests not supported **in the local virtualenv, they may either 
fail
+when you run tests that are not supported in the local virtualenv, **they may either fail
 or provide an error message**.
 
 There are many available options for selecting a specific test in ``pytest``. 
Details can be found
@@ -849,8 +800,8 @@ in the official documentation, but here are a few basic 
examples:
 
     pytest airflow-core/tests/unit/core -k "TestCore and not check"
 
-This runs the ``TestCore`` class but skips tests of this class that include 
'check' in their names.
-For better performance (due to a test collection), run:
+This runs the ``TestCore`` class but skips tests in this class that include 
'check' in their names.
+For better performance (due to test collection), run:
 
 .. code-block:: bash
 
@@ -874,7 +825,7 @@ To run the whole test class, enter:
 
     pytest airflow-core/tests/unit/core/test_core.py::TestCore
 
-You can use all available ``pytest`` flags. For example, to increase a log 
level
+You can use all available ``pytest`` flags. For example, to increase the log 
level
 for debugging purposes, enter:
 
 .. code-block:: bash
@@ -886,24 +837,24 @@ Running Tests using Breeze interactive shell
 ............................................
 
 You can run tests interactively using regular pytest commands inside the 
Breeze shell. This has the
-advantage, that Breeze container has all the dependencies installed that are 
needed to run the tests
-and it will ask you to rebuild the image if it is needed and some new 
dependencies should be installed.
+advantage that the Breeze container has all the dependencies installed that 
are needed to run the tests,
+and it will ask you to rebuild the image if needed (e.g., if new dependencies 
should be installed).
 
-By using interactive shell and iterating over the tests, you can iterate and 
re-run tests one-by-one
-or group by group right after you modified them.
+By using the interactive shell and iterating over tests, you can re-run tests 
one-by-one
+or group-by-group immediately after modifying them.
 
 Entering the shell is as easy as:
 
 .. code-block:: bash
 
-     breeze
+      breeze
 
 This should drop you into the container.
 
-You can also use other switches (like ``--backend`` for example) to configure 
the environment for your
-tests (and for example to switch to different database backend - see 
``--help`` for more details).
+You can also use other switches (like ``--backend``) to configure the 
environment for your
+tests (e.g., to switch to a different database backend - see ``--help`` for 
more details).
 
-Once you enter the container, you might run regular pytest commands. For 
example:
+Once inside the container, you can run regular pytest commands. For example:
 
 .. code-block:: bash
 
@@ -913,11 +864,11 @@ Once you enter the container, you might run regular 
pytest commands. For example
 Running Tests using Breeze from the Host
 ........................................
 
-If you wish to only run tests and not to drop into the shell, apply the 
``tests`` command.
+If you wish to only run tests and not drop into the shell, use the ``tests`` 
command.
 You can add extra targets and pytest flags after the ``tests`` command. Note 
that
-often you want to run the tests with a clean/reset db, so usually you want to 
add ``--db-reset`` flag
-to breeze command. The Breeze image usually will have all the dependencies 
needed and it
-will ask you to rebuild the image if it is needed and some new dependencies 
should be installed.
+often you want to run the tests with a clean/reset DB, so usually, you want to 
add the ``--db-reset`` flag
+to the breeze command. The Breeze image will usually have all the dependencies 
needed, and it
+will ask you to rebuild the image if needed.
 
 .. code-block:: bash
 
@@ -941,34 +892,34 @@ You can also specify individual tests or a group of tests:
 
     breeze testing core-tests --db-reset 
airflow-core/tests/unit/core/test_core.py::TestCore
 
-You can also limit the tests to execute to specific group of tests
+You can also limit the tests to execute to a specific group of tests:
 
 .. code-block:: bash
 
     breeze testing core-tests --test-type Other
 
-In case of Providers tests, you can run tests for all providers
+In the case of Providers tests, you can run tests for all providers:
 
 .. code-block:: bash
 
     breeze testing providers-tests --test-type Providers
 
-You can limit the set of providers you would like to run tests of
+You can limit the set of providers you would like to run tests for:
 
 .. code-block:: bash
 
     breeze testing providers-tests --test-type "Providers[airbyte,http]"
 
-You can also run all providers but exclude the providers you would like to skip
+You can also run all providers but exclude specific ones:
 
 .. code-block:: bash
 
     breeze testing providers-tests --test-type "Providers[-amazon,google]"
 
 
-Sometimes you need to inspect docker compose after tests command complete,
-for example when test environment could not be properly set due to
-failed health-checks. This can be achieved with ``--skip-docker-compose-down``
+Sometimes you need to inspect docker-compose after the tests command completes,
+for example, when the test environment could not be properly set due to
+failed health checks. This can be achieved with the 
``--skip-docker-compose-down``
 flag:
 
 .. code-block:: bash
@@ -993,14 +944,14 @@ This approach allows for a massive speedup in full test 
execution. On a machine
 
 .. note::
 
-  On MacOS you might have less CPUs and less memory available to run the tests 
than you have in the host,
-  simply because your Docker engine runs in a Linux Virtual Machine 
under-the-hood. If you want to make
-  use of the parallelism and memory usage for the CI tests you might want to 
increase the resources available
+  On MacOS, you might have fewer CPUs and less memory available to run tests 
than you have on the host,
+  simply because your Docker engine runs in a Linux Virtual Machine under the 
hood. If you want to make
+  use of parallelism and memory usage for CI tests, you might want to increase 
the resources available
   to your docker engine. See the `Resources 
<https://docs.docker.com/docker-for-mac/#resources>`_ chapter
   in the ``Docker for Mac`` documentation on how to do it.
 
-You can also limit the parallelism by specifying the maximum number of 
parallel jobs via
-``MAX_PARALLEL_TEST_JOBS`` variable. If you set it to "1", all the test types 
will be run sequentially.
+You can also limit the parallelism by specifying the maximum number of 
parallel jobs via the
+``MAX_PARALLEL_TEST_JOBS`` variable. If you set it to "1", all test types will 
be run sequentially.
 
 .. code-block:: bash
 
@@ -1008,10 +959,10 @@ You can also limit the parallelism by specifying the 
maximum number of parallel
 
 .. note::
 
-  In case you would like to cleanup after execution of such tests you might 
have to cleanup
-  some of the docker containers running in case you use ctrl-c to stop 
execution. You can easily do it by
-  running this command (it will kill all docker containers running so do not 
use it if you want to keep some
-  docker containers running):
+  In case you would like to cleanup after execution of such tests, you might 
have to cleanup
+  some of the docker containers running if you use ctrl-c to stop execution. 
You can easily do it by
+  running this command (it will kill all running docker containers, so do not 
use it if you want to keep some
+  containers running):
 
   .. code-block:: bash
 
@@ -1020,10 +971,10 @@ You can also limit the parallelism by specifying the 
maximum number of parallel
 Running Backend-Specific Tests
 ..............................
 
-Tests that are using a specific backend are marked with a custom pytest marker 
``pytest.mark.backend``.
-The marker has a single parameter - the name of a backend. It corresponds to 
the ``--backend`` switch of
+Tests that use a specific backend are marked with a custom pytest marker 
``pytest.mark.backend``.
+The marker has a single parameter - the name of the backend. It corresponds to 
the ``--backend`` switch of
 the Breeze environment (one of ``mysql``, ``sqlite``, or ``postgres``). 
Backend-specific tests only run when
-the Breeze environment is running with the right backend. If you specify more 
than one backend
+the Breeze environment is running with the correct backend. If you specify 
more than one backend
 in the marker, the test runs for all specified backends.
 
 Example of the ``postgres`` only test:
@@ -1034,7 +985,7 @@ Example of the ``postgres`` only test:
     def test_copy_expert(self): ...
 
 
-Example of the ``postgres,mysql`` test (they are skipped with the ``sqlite`` 
backend):
+Example of the ``postgres,mysql`` test (skipped with the ``sqlite`` backend):
 
 .. code-block:: python
 
@@ -1042,7 +993,7 @@ Example of the ``postgres,mysql`` test (they are skipped 
with the ``sqlite`` bac
     def test_celery_executor(self): ...
 
 
-You can use the custom ``--backend`` switch in pytest to only run tests 
specific for that backend.
+You can use the custom ``--backend`` switch in pytest to only run tests 
specific to that backend.
 Here is an example of running only postgres-specific backend tests:
 
 .. code-block:: bash
@@ -1052,20 +1003,19 @@ Here is an example of running only postgres-specific 
backend tests:
 Running Long-running tests
 ..........................
 
-Some of the tests rung for a long time. Such tests are marked with 
``@pytest.mark.long_running`` annotation.
-Those tests are skipped by default. You can enable them with 
``--include-long-running`` flag. You
-can also decide to only run tests with ``-m long-running`` flags to run only 
those tests.
+Some tests run for a long time. Such tests are marked with the 
``@pytest.mark.long_running`` annotation.
+These tests are skipped by default. You can enable them with the 
``--include-long-running`` flag. You
+can also decide to run *only* those tests with the ``-m long_running`` flag.
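
Marking a test is as simple as this (the test name is just a placeholder):

.. code-block:: python

    import pytest


    @pytest.mark.long_running
    def test_full_backfill_of_large_dag():
        ...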
 
 Running Quarantined tests
 .........................
 
-Some of our tests are quarantined. This means that this test will be run in 
isolation and that it will be
-re-run several times. Also when quarantined tests fail, the whole test suite 
will not fail. The quarantined
-tests are usually flaky tests that need some attention and fix.
+Some of our tests are quarantined. This means that the test will be run in 
isolation and re-run several times.
+Also, when quarantined tests fail, the whole test suite will not fail. 
Quarantined tests are usually flaky tests that need attention and fixing.
 
-Those tests are marked with ``@pytest.mark.quarantined`` annotation.
-Those tests are skipped by default. You can enable them with 
``--include-quarantined`` flag. You
-can also decide to only run tests with ``-m quarantined`` flag to run only 
those tests.
+These tests are marked with the ``@pytest.mark.quarantined`` annotation.
+They are skipped by default. You can enable them with the 
``--include-quarantined`` flag. You
+can also decide to run *only* those tests with the ``-m quarantined`` flag.
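
Analogously to the long-running marker above, a quarantined test looks like this
(the test name is just a placeholder):

.. code-block:: python

    import pytest


    @pytest.mark.quarantined
    def test_occasionally_flaky_behavior():
        ...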
 
 
 Compatibility Provider unit tests against older Airflow releases
@@ -1074,42 +1024,42 @@ Compatibility Provider unit tests against older Airflow 
releases
 Why we run provider compatibility tests
 .......................................
 
-Our CI runs provider tests for providers with previous compatible Airflow 
releases. This allows to check
-if the providers still work when installed for older Airflow versions.
+Our CI runs provider tests for providers with previous compatible Airflow 
releases. This allows checking
+if the providers still work when installed on older Airflow versions.
 
-The back-compatibility tests based on the configuration specified in the
+The back-compatibility tests are based on the configuration specified in the
 ``PROVIDERS_COMPATIBILITY_TESTS_MATRIX`` constant in the 
``./dev/breeze/src/airflow_breeze/global_constants.py``
-file - where we specify:
+file - which specifies:
 
 * Python version
 * Airflow version
-* which providers should be removed for the tests (exclusions)
-* whether to run tests for this Airflow/Python version
+* Which providers should be removed for the tests (exclusions)
+* Whether to run tests for this Airflow/Python version
 
-Those tests can be used to test compatibility of the providers with past (and 
future!) releases of airflow.
-For example it could be used to run latest provider versions with released or 
main
+These tests can be used to test the compatibility of providers with past (and 
future!) releases of Airflow.
+For example, they could be used to run the latest provider versions with 
released or main
Airflow 3 if they are developed independently.
 
-The tests use the current source version of ``tests`` folder and current 
``providers`` - so care should be
-taken that the tests implemented for providers in the sources allow to run it 
against previous versions
-of Airflow and against Airflow installed from PyPI package rather than from 
the sources.
+The tests use the current source version of the ``tests`` folder and current 
``providers``, so care should be
+taken that the tests implemented for providers in the sources allow running 
against previous versions
+of Airflow and against Airflow installed from the PyPI package rather than 
from sources.
 
 Running the compatibility tests locally
 .......................................
 
-Running tests can be easily done locally by running appropriate ``breeze`` 
command. In CI the command
-is slightly different as it is run using providers build using wheel packages, 
but it is faster
-to run it locally and easier to iterate if you need to fix provider using 
provider sources mounted
+Running tests locally is easy with the appropriate ``breeze`` command. In CI, 
the command
+is slightly different as it runs using providers built as wheel packages, but it is faster
+to run it locally and easier to iterate if you need to fix a provider using 
provider sources mounted
 directly to the container.
 
-1. Make sure to build latest Breeze ci image
+1. Make sure to build the latest Breeze CI image:
 
 .. code-block:: bash
 
    breeze ci-image build --python 3.9
 
-2. Enter breeze environment by selecting the appropriate Airflow version and 
choosing
-   ``providers-and-tests`` option for ``--mount-sources`` flag.
+2. Enter the breeze environment by selecting the appropriate Airflow version 
and choosing
+   the ``providers-and-tests`` option for the ``--mount-sources`` flag.
 
 .. code-block:: bash
 
@@ -1121,38 +1071,38 @@ directly to the container.
 
    pytest providers/<provider>/tests/.../test.py
 
-4. Iterate with the tests and providers. Both providers and tests are mounted 
from local sources so
-   changes you do locally in both - tests and provider sources are immediately 
reflected inside the
-   breeze container and you can re-run the tests inside ``breeze`` container 
without restarting the
-   container (which makes it faster to iterate).
+4. Iterate with the tests and providers. Both providers and tests are mounted 
from local sources, so
+   changes you make locally in both tests and provider sources are immediately 
reflected inside the
+   breeze container, and you can re-run the tests inside the ``breeze`` 
container without restarting it
+   (which makes it faster to iterate).
 
 .. note::
 
    Since providers are installed from sources rather than from packages, 
plugins from providers are not
-   recognised by ProvidersManager for Airflow < 2.10 and tests that expect 
plugins to work might not work.
-   In such case you should follow the ``CI`` way of running the tests (see 
below).
+   recognized by ProvidersManager for Airflow < 2.10, and tests that expect 
plugins to work might not work.
+   In such cases, you should follow the ``CI`` way of running the tests (see 
below).
 
 Implementing compatibility for provider tests for older Airflow versions
 ........................................................................
 
-When you implement tests for providers, you should make sure that they are 
compatible with older Airflow versions.
+When you implement tests for providers, you should ensure that they are 
compatible with older Airflow versions.
 
-Note that some of the tests, if written without taking care about the 
compatibility, might not work with older
-versions of Airflow - this is because of refactorings, renames, and tests 
relying on internals of Airflow that
-are not part of the public API. We deal with it in one of the following ways:
+Note that some tests, if written without compatibility in mind, might not work with older
+versions of Airflow. This is due to refactoring, renaming, and tests relying 
on Airflow internals that
+are not part of the public API. We deal with this in one of the following ways:
 
-1) If the whole provider is supposed to only work for later Airflow version, 
we remove the whole provider
-   by excluding it from compatibility test configuration (see below)
+1) If the whole provider is supposed to only work for a later Airflow version, 
we remove the whole provider
+   by excluding it from the compatibility test configuration (see below).
 
 2) Some compatibility shims are defined in 
``devel-common/src/tests_common/test_utils/compat.py`` - and
-   they can be used to make the tests compatible - for example importing 
``ParseImportError`` after the
-   exception has been renamed from ``ImportError`` and it would fail in 
Airflow 2.9, but we have a fallback
-   import in ``compat.py`` that falls back to old import automatically, so all 
tests testing / expecting
-   ``ParseImportError`` should import it from the 
``tests_common.tests_utils.compat`` module. There are few
-   other compatibility shims defined there and you can add more if needed in a 
similar way.
-
-3) If only some tests are not compatible and use features that are available 
only in newer Airflow version,
-   we can mark those tests with appropriate ``AIRFLOW_V_3_X_PLUS`` boolean 
constant defined
+   they can be used to make the tests compatible. For example, importing 
``ParseImportError`` after the
+   exception has been renamed from ``ImportError``. This would fail in Airflow 
2.9, but we have a fallback
+   import in ``compat.py`` that falls back to the old import automatically. 
So, all tests testing/expecting
+   ``ParseImportError`` should import it from the ``tests_common.test_utils.compat`` module. There are a few
+   other compatibility shims defined there, and you can add more if needed in 
a similar way.
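+
+   A minimal sketch of such a test module (the test name and body are placeholders) could
+   look like this:
+
+.. code-block:: python
+
+    # Import via the compatibility shim instead of the Airflow-internal location,
+    # so the same test works against both old and new Airflow versions.
+    from tests_common.test_utils.compat import ParseImportError
+
+
+    def test_broken_dag_records_parse_import_error():
+        # ... parse a broken DAG here and assert that a ParseImportError is recorded ...
+        ...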
+
+3) If only some tests are not compatible and use features that are available 
only in a newer Airflow version,
+   we can mark those tests with the appropriate ``AIRFLOW_V_3_X_PLUS`` boolean 
constant defined
    in ``version_compat.py``. For example:
 
 .. code-block:: python
@@ -1165,7 +1115,7 @@ are not part of the public API. We deal with it in one of 
the following ways:
       pass
 
 4) Sometimes, the tests should only be run when Airflow is installed from the 
sources in main.
-   In this case you can add conditional ``skipif`` markerfor 
``RUNNING_TESTS_AGAINST_AIRFLOW_PACKAGES``
+   In this case, you can add the conditional ``skipif`` marker for 
``RUNNING_TESTS_AGAINST_AIRFLOW_PACKAGES``
    to the test. For example:
 
 .. code-block:: python
@@ -1179,12 +1129,11 @@ are not part of the public API. We deal with it in one 
of the following ways:
   def test_plugin():
       pass
 
-5) Sometimes Pytest collection fails to work, when certain imports used by the 
tests either do not exist
-   or fail with RuntimeError about compatibility ("minimum Airflow version is 
required") or because they
-   raise AirflowOptionalProviderFeatureException. In such case you should wrap 
the imports in
+5) Sometimes Pytest collection fails because certain imports used by the tests 
either do not exist,
+   fail with a RuntimeError about compatibility ("minimum Airflow version is 
required"), or raise ``AirflowOptionalProviderFeatureException``. In such 
cases, you should wrap the imports in the
    ``ignore_provider_compatibility_error`` context manager adding the 
``__file__``
-   module name as parameter.  This will stop failing pytest collection and 
automatically skip the whole
-   module from unit.
+   module name as a parameter. This prevents the pytest collection failure and automatically skips the whole
+   module from unit tests.
 
    For example:
 
@@ -1193,27 +1142,26 @@ are not part of the public API. We deal with it in one 
of the following ways:
    with ignore_provider_compatibility_error("2.8.0", __file__):
        from airflow.providers.common.io.xcom.backend import 
XComObjectStorageBackend
 
-6) In some cases in order to enable collection of pytest on older Airflow 
version you might need to convert
-   top-level import into a local import, so that Pytest parser does not fail 
on collection.
+6) In some cases, to enable pytest collection on older Airflow versions, you 
might need to convert
+   a top-level import into a local import so that the Pytest parser does not 
fail on collection.
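+
+   For example (the module path and ``SomeNewOperator`` below are hypothetical placeholders
+   for a symbol that exists only in newer Airflow versions):
+
+.. code-block:: python
+
+    # A top-level import would break pytest collection on older Airflow:
+    #   from airflow.providers.my_provider.operators.new import SomeNewOperator
+
+
+    def test_some_new_operator():
+        # With a local import the module still collects on older Airflow;
+        # only this test fails (or is skipped via a version-based skipif marker).
+        from airflow.providers.my_provider.operators.new import SomeNewOperator
+
+        assert SomeNewOperator is not None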
 
 Running provider compatibility tests in CI
 ..........................................
 
-In CI those tests are run in a slightly more complex way because we want to 
run them against the build
-providers, rather than mounted from sources.
+In CI, these tests are run in a slightly more complex way because we want to 
run them against the built
+providers rather than those mounted from sources.
 
-In case of canary runs we add ``--clean-airflow-installation`` flag that 
removes all packages before
-installing older Airflow version, and then installs development dependencies
-from latest Airflow - in order to avoid case where a provider depends on a new 
dependency added in latest
-version of Airflow. This clean removal and re-installation takes quite some 
time though and in order to
-speed up the tests in regular PRs we only do that in the canary runs.
+In case of canary runs, we add the ``--clean-airflow-installation`` flag that 
removes all packages before
+installing the older Airflow version, and then installs development 
dependencies
+from the latest Airflow. This avoids cases where a provider depends on a new 
dependency added in the latest
+version of Airflow. This clean removal and re-installation takes quite some 
time, so to speed up the tests in regular PRs, we only do this in canary runs.
 
-The exact way CI tests are run can be reproduced locally building providers 
from selected tag/commit and
+The exact way CI tests are run can be reproduced locally by building providers 
from a selected tag/commit and
 using them to install and run tests against the selected Airflow version.
 
-Herr id how to reproduce it.
+Here is how to reproduce it:
 
-1. Make sure to build latest Breeze ci image
+1. Make sure to build the latest Breeze CI image:
 
 .. code-block:: bash
 
@@ -1227,24 +1175,24 @@ Herr id how to reproduce it.
    breeze release-management prepare-provider-distributions 
--include-not-ready-providers \
       --skip-tag-check --distribution-format wheel
 
-3. Prepare provider constraints
+3. Prepare provider constraints:
 
 .. code-block:: bash
 
    breeze release-management generate-constraints --airflow-constraints-mode 
constraints-source-providers --answer yes
 
-4. Remove providers that are not compatible with Airflow version installed by 
default. You can look up
+4. Remove providers that are not compatible with the Airflow version installed 
by default. You can look up
    the incompatible providers in the ``PROVIDERS_COMPATIBILITY_TESTS_MATRIX`` 
constant in the
    ``./dev/breeze/src/airflow_breeze/global_constants.py`` file.
 
-5. Enter breeze environment, installing selected Airflow version and the 
providers prepared from main
+5. Enter the breeze environment, installing the selected Airflow version and 
the providers prepared from main:
 
 .. code-block:: bash
 
   breeze shell --use-distributions-from-dist --distribution-format wheel 
--use-airflow-version 2.9.1  \
    --install-airflow-with-constraints --providers-skip-constraints 
--mount-sources tests
 
-In case you want to reproduce canary run, you need to add 
``--clean-airflow-installation`` flag:
+In case you want to reproduce a canary run, you need to add the 
``--clean-airflow-installation`` flag:
 
 .. code-block:: bash
 
@@ -1258,19 +1206,19 @@ In case you want to reproduce canary run, you need to 
add ``--clean-airflow-inst
 
    pytest providers/<provider>/tests/.../test.py
 
-7. Iterate with the tests
+7. Iterate with the tests.
 
 The tests are run using:
 
 * Airflow installed from PyPI
-* tests coming from the current Airflow sources (they are mounted inside the 
breeze image)
-* providers built from the current Airflow sources and placed in dist
+* Tests coming from the current Airflow sources (mounted inside the breeze 
image)
+* Providers built from the current Airflow sources and placed in dist
 
-This means that you can modify and run tests and re-run them because sources 
are mounted from the host,
-but if you want to modify provider code you need to exit breeze, rebuild the 
provider package and
+This means that you can modify and run tests and re-run them because sources 
are mounted from the host.
+However, if you want to modify provider code, you need to exit breeze, rebuild 
the provider package, and
 restart breeze using the command above.
 
-Rebuilding single provider package can be done using this command:
+Rebuilding a single provider package can be done using this command:
 
 .. code-block:: bash
 
@@ -1281,35 +1229,35 @@ Lowest direct dependency resolution tests
 -----------------------------------------
 
 We have special tests that run with the lowest direct resolution of 
dependencies for Airflow and providers.
-This is run in order to check whether we are not using a feature that is not 
available in an
+This is run to check whether we are using a feature that is not available in an
 older version of some dependencies.
 
 Tests with lowest-direct dependency resolution for Airflow
 ..........................................................
 
-You can test minimum dependencies that are installed by Airflow by running 
(for example to run "Core" tests):
+You can test minimum dependencies installed by Airflow by running (for 
example, to run "Core" tests):
 
 .. code-block:: bash
 
     breeze testing core-tests --force-lowest-dependencies --test-type "Core"
 
-You can also iterate on the tests and versions of the dependencies by entering 
breeze shell and
+You can also iterate on the tests and versions of the dependencies by entering 
the breeze shell and
 running the tests from there, after manually downgrading the dependencies:
 
 .. code-block:: bash
 
-    breeze shell   # enter the container
+    breeze shell    # enter the container
     cd airflow-core
     uv sync --resolution lowest-direct
 
-or run ``--force-lowest-dependencies`` switch directly from the breeze command 
line:
+or run the ``--force-lowest-dependencies`` switch directly from the breeze 
command line:
 
 .. code-block:: bash
 
     breeze shell --force-lowest-dependencies --test-type "Core"
 
 
-The way it works - after you enter breeze container, you run the uv-sync in 
the airflow-core
+The way it works: after you enter the breeze container, you run ``uv sync`` in the airflow-core
 folder to downgrade the dependencies to the lowest version that is compatible
 with the dependencies specified in airflow-core dependencies. You will see it 
in the output of the breeze
 command as a sequence of downgrades like this:
@@ -1325,19 +1273,19 @@ command as a sequence of downgrades like this:
 Tests with lowest-direct dependency resolution for a Provider
 .............................................................
 
-Similarly we can test if the provider tests are working for lowest 
dependencies of specific provider.
+Similarly, we can test if the provider tests are working for the lowest 
dependencies of a specific provider.
 
-Those tests can be easily run locally with breeze (replace PROVIDER_ID with id 
of the provider):
+These tests can be easily run locally with breeze (replace PROVIDER_ID with 
the id of the provider):
 
 .. code-block:: bash
 
     breeze testing providers-tests --force-lowest-dependencies --test-type 
"Providers[PROVIDER_ID]"
 
-If you find that the tests are failing for some dependencies, make sure to add 
minimum version for
-the dependency in the provider.yaml file of the appropriate provider and 
re-run it.
+If you find that the tests are failing for some dependencies, make sure to add 
the minimum version for
+the dependency in the ``provider.yaml`` file of the appropriate provider and 
re-run it.
 
-You can also iterate on the tests and versions of the dependencies by entering 
breeze shell and
-manually downgrading dependencies for the provider and running the tests after 
that:
+You can also iterate on the tests and versions of the dependencies by entering 
the breeze shell,
+manually downgrading dependencies for the provider, and running the tests 
after that:
 
 .. code-block:: bash
 
@@ -1346,17 +1294,17 @@ manually downgrading dependencies for the provider and 
running the tests after t
     uv sync --resolution lowest-direct
 
 
-or run ``--force-lowest-dependencies`` switch directly from the breeze command 
line:
+or run the ``--force-lowest-dependencies`` switch directly from the breeze 
command line:
 
 .. code-block:: bash
 
     breeze shell --force-lowest-dependencies --test-type "Providers[google]"
 
-Similarly as in case of "Core" tests, the dependencies will be downgraded to 
the lowest version that is
-compatible with the dependencies specified in the provider dependencies and 
you will see the list of
-downgrades in the output of the breeze command. Note that this will be 
combined downgrades of both
-Airflow and selected provider dependencies, so the list will be longer than in 
case of "Core" tests
-and longer than **just** dependencies of the provider. For example for a 
``google`` provider, part of the
+Similarly to "Core" tests, the dependencies will be downgraded to the lowest 
version that is
+compatible with the dependencies specified in the provider dependencies, and 
you will see the list of
+downgrades in the output of the breeze command. Note that this will include 
combined downgrades of both
+Airflow and selected provider dependencies, so the list will be longer than in 
"Core" tests
+and longer than **just** the dependencies of the provider. For example, for the ``google`` provider, part of the
 downgraded dependencies will contain both Airflow and Google Provider 
dependencies:
 
 .. code-block:: diff
@@ -1374,14 +1322,14 @@ downgraded dependencies will contain both Airflow and 
Google Provider dependenci
  - gcloud-aio-storage==9.2.0
 
 You can also (if your local virtualenv can install the dependencies for the 
provider)
-reproduce the same set of dependencies in your local virtual environment by:
+reproduce the same set of dependencies in your local virtual environment by 
running:
 
 .. code-block:: bash
 
     cd airflow-core
     uv sync --resolution lowest-direct
 
-for Airflow core, and
+for Airflow core, and:
 
 .. code-block:: bash
 
@@ -1393,19 +1341,19 @@ for the providers.
 How to fix failing lowest-direct dependency resolution tests
 ............................................................
 
-When your tests pass in regular test, but fail in "lowest-direct" dependency 
resolution tests, you need
+When your tests pass in regular test runs but fail in "lowest-direct" dependency resolution tests, you need
 to figure out one of the problems:
 
-* lower-bindings missing in the ``pyproject.toml`` file (in ``airflow-core`` 
or corresponding provider).
-  This is usually a very easy thing that takes a little bit of time to figure 
out especially if you
-  just added new feature from a library that you use, just check in the 
release notes what is the minimum
-  version of the library that you can use and set it as the ``>=VERSION`` in 
the ``pyproject.toml``.
+* Lower-bindings missing in the ``pyproject.toml`` file (in ``airflow-core`` 
or the corresponding provider).
+  This is usually easy and quick to figure out. If you just added a new feature from a library you use,
+  check the release notes for the minimum version of the library that supports it and set it as
+  ``>=VERSION`` in the ``pyproject.toml``.
 
-* figuring out if airflow-core or the provider needs additional providers or 
additional dependencies in dev
-  dependency group for the provider - sometimes tests need another provider to 
be installed that is not
-  normally needed as required dependencies of the provider being tested. Those 
dependencies
-  should be added after the ``# Additional devel dependencies`` comment in 
case of providers. Adding the
-  dependencies here means that when ``uv sync`` is run, the packages and its 
dependencies will be installed.
+* Figuring out if airflow-core or the provider needs additional providers or 
additional dependencies in the dev
+  dependency group for the provider. Sometimes tests need another provider to 
be installed that is not
+  normally needed as a required dependency of the provider being tested. Those 
dependencies
+  should be added after the ``# Additional devel dependencies`` comment in the 
case of providers. Adding the
+  dependencies here means that when ``uv sync`` is run, the packages and their 
dependencies will be installed.
 
 .. code-block:: toml
 
@@ -1421,34 +1369,33 @@ to figure out one of the problems:
         "apache-airflow-providers-microsoft-azure",
     ]
 
-Sometimes it might get a bit tricky to know what is the minimum version of the 
library you should be using
-but in this case you can easily find it by looking at the error and list of 
downgraded packages and
-guessing which one is the one that is causing the problem. You can then look 
at the release notes of the
-library and find the minimum version but also you can revert to technique 
known as bisecting which allows
+Sometimes it might get a bit tricky to know the minimum version of the library 
you should be using.
+In this case, you can easily find it by looking at the error and list of 
downgraded packages and
+guessing which one is causing the problem. You can then look at the release 
notes of the
+library and find the minimum version. Alternatively, you can use the technique 
known as bisecting, which allows
 you to quickly figure out the right version without knowing the root cause of 
the problem.
 
-Assume you suspect library "foo" that was downgraded from 1.0.0 to 0.1.0 is 
causing the problem. Bisecting
-technique looks like follows:
+Assume you suspect library "foo", which was downgraded from 1.0.0 to 0.1.0, is 
causing the problem. The bisecting
+technique looks like this:
 
-* Run ``uv sync --resolution lowest-direct``(the ``foo`` library is downgraded 
to 0.1.0). Your test should
+* Run ``uv sync --resolution lowest-direct`` (the ``foo`` library is 
downgraded to 0.1.0). Your test should
   fail.
-* make sure that just upgrading the ``foo`` library to 1.0.0 -> re-run failing 
test (with ``pytest <test>``)
+* Upgrade just the ``foo`` library to 1.0.0 and re-run the failing test (with ``pytest <test>``)
   and see that it passes.
-* downgrade the ``foo`` library to 0.1.0 -> re-run failing test (with ``pytest 
<test>``) and see that it
+* Downgrade the ``foo`` library to 0.1.0, re-run the failing test (with ``pytest <test>``), and see that it
   fails.
-* look at the list of versions available for the library between 0.1.0 and 
1.0.0 (for example via
-  `<https://pypi.org/project/foo/#history>`_ link - where ``foo`` is your 
library.
-* find a middle version between the 1.0.0 and 0.1.0 and upgrade the library to 
this version - see if the
-  test passes or fails - if it passes, continue with finding the middle 
version between the current version
-  and lower version, if it fails, continue with finding the middle version 
between the current version and
-  higher version.
-* continue that way until you find the version that is the lowest version that 
passes the test.
-* set this version in ``pyproject.toml`` file, run ``uv sync --resolution 
lowest-direct`` and see if the test
+* Look at the list of versions available for the library between 0.1.0 and 
1.0.0 (for example, via
+  the `<https://pypi.org/project/foo/#history>`_ page, where ``foo`` is your library).
+* Find a middle version between 1.0.0 and 0.1.0 and upgrade the library to 
this version - see if the
+  test passes or fails. If it passes, continue finding the middle version 
between the current version
+  and the lower version. If it fails, continue finding the middle version 
between the current version and
+  the higher version.
+* Continue this way until you find the lowest version that passes the test.
+* Set this version in the ``pyproject.toml`` file, run ``uv sync --resolution 
lowest-direct``, and see if the test
   passes. If it does, you are done. If it does not, repeat the process.
 
-You can also skip some of the tests to be run when force lowest dependencies 
are used when tests are run in
-breeze by adding the marker below. This is sometimes needed if your "core" or 
"provider" tests depend on
-all or many providers to be installed (for example tests loading multiple 
examples or connections):
+You can also skip some tests when running with forced lowest dependencies in breeze by adding the marker below. This is sometimes needed if your "core" or "provider" tests depend on
+all or many providers to be installed (for example, tests loading multiple 
examples or connections):
 
 .. code-block:: python
 
@@ -1459,8 +1406,8 @@ all or many providers to be installed (for example tests 
loading multiple exampl
     def test_my_test_that_should_be_skipped():
         assert 1 == 1
 
-And you can locally also set ``FORCE_LOWEST_DEPENDENCIES`` to ``true`` 
environment variable before
-running ``pytest`` to also skip the tests when running them locally.
+You can also locally set the ``FORCE_LOWEST_DEPENDENCIES`` environment 
variable to ``true`` before
+running ``pytest`` to skip these tests when running locally.
 
 Other Settings
 --------------
@@ -1468,11 +1415,11 @@ Other Settings
 Enable masking secrets in tests
 ...............................
 
-By default masking secrets in test disabled because it might have side effects
-into the other tests which intends to check ``logging/stdout/stderr`` values
+By default, masking secrets in tests is disabled because it might have side 
effects
+on other tests that intend to check ``logging/stdout/stderr`` values.
 
-If you need to test masking secrets in test cases
-you have to apply ``pytest.mark.enable_redact`` to the specific test case, 
class or module.
+If you need to test masking secrets in test cases,
+you have to apply ``pytest.mark.enable_redact`` to the specific test case, 
class, or module.
 
 
 .. code-block:: python
@@ -1487,19 +1434,19 @@ you have to apply ``pytest.mark.enable_redact`` to the 
specific test case, class
 Skip test on unsupported platform / environment
 ...............................................
 
-You can apply the marker ``pytest.mark.platform(name)`` to the specific test 
case, class or module
-for prevent to run on unsupported platform.
+You can apply the marker ``pytest.mark.platform(name)`` to a specific test 
case, class, or module
+to prevent it from running on an unsupported platform.
 
-- ``linux``: Run test only on linux platform
-- ``breeze``: Run test only inside of Breeze container, it might be useful in 
case of run
-  some potential dangerous things in tests or if it expects to use common 
Breeze things.
+- ``linux``: Run the test only on the Linux platform.
+- ``breeze``: Run the test only inside the Breeze container. This might be useful if the test does
+  potentially dangerous things or expects to use common Breeze utilities.
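+
+For example (the test names below are hypothetical placeholders):
+
+.. code-block:: python
+
+    import pytest
+
+
+    @pytest.mark.platform("linux")
+    def test_linux_only_behaviour():
+        ...
+
+
+    @pytest.mark.platform("breeze")
+    def test_needs_breeze_environment():
+        ...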
 
 Warnings capture system
 .......................
 
-By default, all warnings captured during the test runs are saved into the 
``tests/warnings.txt``.
+By default, all warnings captured during the test runs are saved into 
``tests/warnings.txt``.
 
-If required, you could change the path by providing ``--warning-output-path`` 
as pytest CLI arguments
+If required, you can change the path by providing ``--warning-output-path`` as 
a pytest CLI argument
 or by setting the environment variable ``CAPTURE_WARNINGS_OUTPUT``.
 
 .. code-block:: console
@@ -1517,9 +1464,9 @@ or by setting the environment variable 
``CAPTURE_WARNINGS_OUTPUT``.
 
     ================================= short test summary info 
=================================
 
-You might also disable capture warnings by providing 
``--disable-capture-warnings`` as pytest CLI arguments
-or by setting `global warnings filter 
<https://docs.python.org/3/library/warnings.html#the-warnings-filter>`__
-to **ignore**, e.g. set ``PYTHONWARNINGS`` environment variable to ``ignore``.
+You might also disable warning capture by providing 
``--disable-capture-warnings`` as a pytest CLI argument
+or by setting the `global warnings filter 
<https://docs.python.org/3/library/warnings.html#the-warnings-filter>`__
+to **ignore**, e.g., set the ``PYTHONWARNINGS`` environment variable to 
``ignore``.
 
 .. code-block:: bash
 
@@ -1528,14 +1475,14 @@ to **ignore**, e.g. set ``PYTHONWARNINGS`` environment 
variable to ``ignore``.
 Keep tests using environment variables
 ......................................
 
-By default, all environment variables related to Airflow (starting by 
``AIRFLOW__``) are all cleared before running tests
-to avoid potential side effect. However, in some scenarios you might want to 
disable this mechanism and keep the
+By default, all environment variables related to Airflow (starting with 
``AIRFLOW__``) are cleared before running tests
+to avoid potential side effects. However, in some scenarios, you might want to 
disable this mechanism and keep the
 environment variables you defined to configure your Airflow environment. For 
example, you might want to run tests
-against a specific database configured through the environment variable 
``AIRFLOW__DATABASE__SQL_ALCHEMY_CONN``.
-Or running tests using a specific executor to run tasks configured through 
``AIRFLOW__CORE__EXECUTOR``.
+against a specific database configured through the environment variable 
``AIRFLOW__DATABASE__SQL_ALCHEMY_CONN``,
+or run tests using a specific executor configured through 
``AIRFLOW__CORE__EXECUTOR``.
 
 To keep using environment variables you defined in your environment, you need 
to provide ``--keep-env-variables`` as
-pytest CLI argument.
+a pytest CLI argument.
 
 .. code-block:: bash
 
@@ -1550,13 +1497,13 @@ This parameter is also available in Breeze.
 Disable database cleanup before each test module
 ................................................
 
-By default, the database is cleared from all items before running tests. This 
is to avoid potential conflicts with
-existing resources in the database when running tests using the database. 
However, in some scenarios you might want to
+By default, the database is cleared of all items before running tests. This is 
to avoid potential conflicts with
+existing resources in the database when running tests using the database. 
However, in some scenarios, you might want to
 disable this mechanism and keep the database as is. For example, you might 
want to run tests in parallel against the
-same database. In that case, you need to disable the database cleanup, 
otherwise the tests are going to conflict with
+same database. In that case, you need to disable the database cleanup; otherwise, the tests will conflict with
 each other (one test will delete the resources that another one is creating).
 
-To disable the database cleanup, you need to provide ``--no-db-cleanup`` as 
pytest CLI argument.
+To disable the database cleanup, you need to provide ``--no-db-cleanup`` as a 
pytest CLI argument.
 
 .. code-block:: bash
 
@@ -1603,8 +1550,8 @@ c. After execution, run the following commands from the 
repository root
 .. note::
 
    You no longer need to start the Airflow web server to view the
-   coverage report.  The lightweight HTTP server above is sufficient and
-   avoids an extra service.  If port 25555 on the host is already in use,
+   coverage report. The lightweight HTTP server above is sufficient and
+   avoids an extra service. If port 25555 on the host is already in use,
    adjust the container-to-host mapping with
    ``BREEZE_PORTS_EXTRA="<host_port>:5555" breeze start-airflow``.
 
@@ -1625,15 +1572,15 @@ You can run tests with SQL statements tracking. To do 
this, use the ``--trace-sq
 columns to be displayed as an argument. Each query will be displayed on a 
separate line.
 Supported values:
 
-* ``num`` -  displays the query number;
-* ``time`` - displays the query execution time;
-* ``trace`` - displays the simplified (one-line) stack trace;
-* ``sql`` - displays the SQL statements;
-* ``parameters`` - display SQL statement parameters.
+* ``num`` - Displays the query number.
+* ``time`` - Displays the query execution time.
+* ``trace`` - Displays the simplified (one-line) stack trace.
+* ``sql`` - Displays the SQL statements.
+* ``parameters`` - Displays SQL statement parameters.
 
 If you only provide ``num``, then only the final number of queries will be 
displayed.
 
-By default, pytest does not display output for successful tests, if you still 
want to see them, you must
+By default, pytest does not display output for successful tests. If you still 
want to see them, you must
 pass the ``--capture=no`` option.
 
 If you run the following command:
@@ -1643,11 +1590,11 @@ If you run the following command:
     pytest --trace-sql=num,sql,parameters --capture=no \
       airflow-core/tests/unit/jobs/test_scheduler_job.py -k 
test_process_dags_queries_count_05
 
-On the screen you will see database queries for the given test.
+You will see database queries for the given test on the screen.
 
 SQL query tracking does not work properly if your test runs subprocesses. Only 
queries from the main process
 are tracked.
 
 -----
 
-For other kinds of tests look at `Testing document <../09_testing.rst>`__
+For other kinds of tests, look at the `Testing document <../09_testing.rst>`__

