Parkerhiphop commented on code in PR #58765:
URL: https://github.com/apache/airflow/pull/58765#discussion_r2567718093


##########
contributing-docs/testing/unit_tests.rst:
##########
@@ -1193,27 +1142,26 @@ are not part of the public API. We deal with it in one of the following ways:
     with ignore_provider_compatibility_error("2.8.0", __file__):
         from airflow.providers.common.io.xcom.backend import XComObjectStorageBackend
 
-6) In some cases in order to enable collection of pytest on older Airflow version you might need to convert
-   top-level import into a local import, so that Pytest parser does not fail on collection.
+6) In some cases, to enable pytest collection on older Airflow versions, you might need to convert
+   a top-level import into a local import so that the Pytest parser does not fail on collection.
 
 Running provider compatibility tests in CI
 ..........................................
 
-In CI those tests are run in a slightly more complex way because we want to run them against the build
-providers, rather than mounted from sources.
+In CI, these tests are run in a slightly more complex way because we want to run them against the built
+providers rather than those mounted from sources.
 
-In case of canary runs we add ``--clean-airflow-installation`` flag that removes all packages before
-installing older Airflow version, and then installs development dependencies
-from latest Airflow - in order to avoid case where a provider depends on a new dependency added in latest
-version of Airflow. This clean removal and re-installation takes quite some time though and in order to
-speed up the tests in regular PRs we only do that in the canary runs.
+In case of canary runs, we add the ``--clean-airflow-installation`` flag that removes all packages before
+installing the older Airflow version, and then installs development dependencies
+from the latest Airflow. This avoids cases where a provider depends on a new dependency added in the latest
+version of Airflow. This clean removal and re-installation takes quite some time, so to speed up the tests in regular PRs, we only do this in canary runs.
 
-The exact way CI tests are run can be reproduced locally building providers from selected tag/commit and
+The exact way CI tests are run can be reproduced locally by building providers from a selected tag/commit and
 using them to install and run tests against the selected Airflow version.
 
-Herr id how to reproduce it.
+Here is how to reproduce it:

Review Comment:
   Fixed typo: 'Herr id' -> 'Here is'.
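   For illustration, a minimal sketch of the pattern point 6 in this hunk describes — moving a top-level import into the test body so collection succeeds on older Airflow versions (the import is the one shown in the snippet above; the test name is hypothetical):

   .. code-block:: python

       # Module level: no import of the provider symbol here, so pytest can still
       # collect this file on an Airflow version where the symbol does not exist.


       def test_xcom_object_storage_backend_is_importable():
           # Local import: evaluated only when the test runs, not at collection time.
           from airflow.providers.common.io.xcom.backend import XComObjectStorageBackend

           assert XComObjectStorageBackend is not None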



##########
contributing-docs/testing/unit_tests.rst:
##########
@@ -20,48 +20,41 @@ Airflow Unit Tests
 
 All unit tests for Apache Airflow are run using `pytest <http://doc.pytest.org/en/latest/>`_.
 
-**The outline for this document in GitHub is available at top-right corner button (with 3-dots and 3 lines).**
+**The outline for this document in GitHub is available via the button in the top-right corner (icon with 3 dots and 3 lines).**
 
 Writing Unit Tests
 ------------------
 
-Follow the guidelines when writing unit tests:
+Follow these guidelines when writing unit tests:
 
-* For standard unit tests that do not require integrations with external systems, make sure to simulate all communications.
-* All Airflow tests are run with ``pytest``. Make sure to set your IDE/runners (see below) to use ``pytest`` by default.
-* For tests, use standard "asserts" of Python and ``pytest`` decorators/context managers for testing
-  rather than ``unittest`` ones. See `pytest docs <http://doc.pytest.org/en/latest/assert.html>`__ for details.
-* Use a ``pytest.mark.parametrize`` marker for tests that have variations in parameters.
-  See `pytest docs <https://docs.pytest.org/en/latest/how-to/parametrize.html>`__ for details.
-* Use with ``pytest.warn`` to capture warnings rather than ``recwarn`` fixture. We are aiming for 0-warning in our
-  tests, so we run Pytest with ``--disable-warnings`` but instead we have custom warning capture system.
+* For standard unit tests that do not require integration with external systems, ensure all communications are simulated (mocked).
+* All Airflow tests are run with ``pytest``. Ensure your IDE or runners (see below) are configured to use ``pytest`` by default.
+* For tests, use standard Python "asserts" and ``pytest`` decorators/context managers for testing rather than ``unittest`` ones. See `pytest docs <http://doc.pytest.org/en/latest/assert.html>`__ for details.
+* Use the ``pytest.mark.parametrize`` marker for tests that have variations in parameters. See `pytest docs <https://docs.pytest.org/en/latest/how-to/parametrize.html>`__ for details.
+* Use ``pytest.warns`` to capture warnings instead of the ``recwarn`` fixture. We aim for zero warnings in our tests; therefore, we run pytest with ``--disable-warnings`` and utilize a custom warning capture system.
 
 Handling warnings
 .................
 
-By default, in the new tests selected warnings are prohibited:
+By default, specific warnings are prohibited in new tests:
 
 * ``airflow.exceptions.AirflowProviderDeprecationWarning``
 
-That mean if one of this warning appear during test run and do not captured the test will failed.
+Any test triggering this warning without capturing it will fail.

Review Comment:
   Refactored this paragraph for clarity. The original sentence structure ('That mean if one of this warning appear...') was confusing and grammatically incorrect.
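   To illustrate the behavior this paragraph describes, a minimal sketch (``emit_deprecation`` is a hypothetical stand-in for any code path that raises the prohibited warning):

   .. code-block:: python

       import warnings

       import pytest

       from airflow.exceptions import AirflowProviderDeprecationWarning


       def emit_deprecation():
           # Hypothetical stand-in for provider code that emits the warning.
           warnings.warn("old_way is deprecated", AirflowProviderDeprecationWarning, stacklevel=2)


       def test_deprecated_path_warns():
           # The warning must be captured with pytest.warns; if it escaped
           # uncaptured, the warning capture system would fail the test.
           with pytest.warns(AirflowProviderDeprecationWarning, match="deprecated"):
               emit_deprecation()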



##########
contributing-docs/testing/unit_tests.rst:
##########
@@ -20,48 +20,41 @@ Airflow Unit Tests
 
 All unit tests for Apache Airflow are run using `pytest <http://doc.pytest.org/en/latest/>`_.
 
-**The outline for this document in GitHub is available at top-right corner button (with 3-dots and 3 lines).**
+**The outline for this document in GitHub is available via the button in the top-right corner (icon with 3 dots and 3 lines).**
 
 Writing Unit Tests
 ------------------
 
-Follow the guidelines when writing unit tests:
+Follow these guidelines when writing unit tests:
 
-* For standard unit tests that do not require integrations with external systems, make sure to simulate all communications.
-* All Airflow tests are run with ``pytest``. Make sure to set your IDE/runners (see below) to use ``pytest`` by default.
-* For tests, use standard "asserts" of Python and ``pytest`` decorators/context managers for testing
-  rather than ``unittest`` ones. See `pytest docs <http://doc.pytest.org/en/latest/assert.html>`__ for details.
-* Use a ``pytest.mark.parametrize`` marker for tests that have variations in parameters.
-  See `pytest docs <https://docs.pytest.org/en/latest/how-to/parametrize.html>`__ for details.
-* Use with ``pytest.warn`` to capture warnings rather than ``recwarn`` fixture. We are aiming for 0-warning in our
-  tests, so we run Pytest with ``--disable-warnings`` but instead we have custom warning capture system.
+* For standard unit tests that do not require integration with external systems, ensure all communications are simulated (mocked).
+* All Airflow tests are run with ``pytest``. Ensure your IDE or runners (see below) are configured to use ``pytest`` by default.
+* For tests, use standard Python "asserts" and ``pytest`` decorators/context managers for testing rather than ``unittest`` ones. See `pytest docs <http://doc.pytest.org/en/latest/assert.html>`__ for details.
+* Use the ``pytest.mark.parametrize`` marker for tests that have variations in parameters. See `pytest docs <https://docs.pytest.org/en/latest/how-to/parametrize.html>`__ for details.
+* Use ``pytest.warns`` to capture warnings instead of the ``recwarn`` fixture. We aim for zero warnings in our tests; therefore, we run pytest with ``--disable-warnings`` and utilize a custom warning capture system.

Review Comment:
   Fixed technical typo: pytest.warn is not a valid context manager in pytest. Changed to pytest.warns.
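   The same hunk also recommends ``pytest.mark.parametrize``; a minimal sketch of that marker (``parse_boolean`` is a hypothetical helper defined only for the example):

   .. code-block:: python

       import pytest


       def parse_boolean(value: str) -> bool:
           # Hypothetical helper whose behavior varies with its input.
           return value.strip().lower() in ("true", "1", "on")


       @pytest.mark.parametrize(
           "value, expected",
           [("true", True), ("ON", True), ("0", False)],
       )
       def test_parse_boolean(value, expected):
           assert parse_boolean(value) is expected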



##########
contributing-docs/testing/unit_tests.rst:
##########
@@ -122,188 +115,162 @@ If you want time to progress from a fixed starting point, you can set ``tick=Tru
 Airflow configuration for unit tests
 ------------------------------------
 
-Some of the unit tests require special configuration set as the ``default``. This is done automatically by
-adding ``AIRFLOW__CORE__UNIT_TEST_MODE=True`` to the environment variables in Pytest auto-used
-fixture. This in turn makes Airflow load test configuration from the file
+Some unit tests require special configuration set as the ``default``. This is done automatically by
+adding ``AIRFLOW__CORE__UNIT_TEST_MODE=True`` to the environment variables in a Pytest autouse
+fixture. This, in turn, makes Airflow load test configuration from the file
 ``airflow/config_templates/unit_tests.cfg``. Test configuration from there replaces the original
-defaults from ``airflow/config_templates/config.yml``. If you want to add some test-only configuration,
-as default for all tests you should add the value to this file.
+defaults from ``airflow/config_templates/config.yml``. If you want to add some test-only configuration
+as a default for all tests, you should add the value to this file.
 
-You can also - of course - override the values in individual test by patching environment variables following
-the usual ``AIRFLOW__SECTION__KEY`` pattern or ``conf_vars`` context manager.
+You can also override the values in individual tests by patching environment variables following
+the usual ``AIRFLOW__SECTION__KEY`` pattern or using the ``conf_vars`` context manager.
 
 Airflow unit test types
 -----------------------
 
 Airflow tests in the CI environment are split into several test types. You can narrow down which
 test types you want to use in various ``breeze testing`` sub-commands in three ways:
 
-* via specifying the ``--test-type`` when you run single test type in ``breeze testing core-tests``.
-  ``breeze testing providers-tests`` ``breeze testing integration-tests`` commands
-* via specifying space separating list of test types via ``--parallel-test-types`` or
-  ``--excluded-parallel-test-types`` options when you run tests in parallel (in several testing commands)
+* By specifying the ``--test-type`` when running a single test type in ``breeze testing core-tests``, ``breeze testing providers-tests``, or ``breeze testing integration-tests`` commands.
+* By specifying a space-separated list of test types via the ``--parallel-test-types`` or ``--excluded-parallel-test-types`` options when running tests in parallel.
 
-Those test types are defined:
+The defined test types are:
 
-* ``Always`` - those are tests that should be always executed (always sub-folder)
-* ``API`` - Tests for the Airflow API (api, api_internal, api_fastapi sub-folders)
-* ``CLI`` - Tests for the Airflow CLI (cli folder)
-* ``Core`` - for the core Airflow functionality (core, executors, jobs, models, ti_deps, utils sub-folders)
-* ``Operators`` - tests for the operators (operators folder)
-* ``WWW`` - Tests for the Airflow webserver (www folder)
-* ``Providers`` - Tests for all Providers of Airflow (providers folder)
-* ``Other`` - all other tests remaining after the above tests are selected
+* ``Always`` - Tests that should always be executed (always sub-folder).
+* ``API`` - Tests for the Airflow API (api, api_internal, api_fastapi sub-folders).
+* ``CLI`` - Tests for the Airflow CLI (cli folder).
+* ``Core`` - Tests for core Airflow functionality (core, executors, jobs, models, ti_deps, utils sub-folders).
+* ``Operators`` - Tests for operators (operators folder).
+* ``WWW`` - Tests for the Airflow webserver (www folder).
+* ``Providers`` - Tests for all Airflow Providers (providers folder).
+* ``Other`` - All other tests remaining after the above tests are selected.
 
-We have also tests that run "all" tests (so they do not look at the folder, but at the ``pytest`` markers
-the tests are marked with to run with some filters applied.
+We also have types that run "all" tests (ignoring folders, but looking at ``pytest`` markers with filters applied):
 
-* ``All-Postgres`` - tests that require Postgres database. They are only run when backend is Postgres (``backend("postgres")`` marker)
-* ``All-MySQL`` - tests that require MySQL database. They are only run when backend is MySQL (``backend("mysql")`` marker)
-* ``All-Quarantined`` - tests that are flaky and need to be fixed (``quarantined`` marker)
-* ``All`` - all tests are run (this is the default)
+* ``All-Postgres`` - Tests that require a Postgres database. Only run when the backend is Postgres (``backend("postgres")`` marker).
+* ``All-MySQL`` - Tests that require a MySQL database. Only run when the backend is MySQL (``backend("mysql")`` marker).
+* ``All-Quarantined`` - Tests that are flaky and need to be fixed (``quarantined`` marker).
+* ``All`` - All tests are run (this is the default).
 
+We also have ``Integration`` tests that run with external software via the ``--integration`` flag in the ``breeze`` environment (via ``breeze testing integration-tests``).
 
-We also have ``Integration`` tests that are running Integration tests with external software that is run
-via ``--integration`` flag in ``breeze`` environment - via ``breeze testing integration-tests``.
+* ``Integration`` - Tests that require external integration images running in docker-compose.
 
-* ``Integration`` - tests that require external integration images running in docker-compose
+This structure exists for two reasons:
 
-This is done for two reasons:
-
-1. in order to selectively run only subset of the test types for some PRs
-2. in order to allow efficient parallel test execution of the tests on Self-Hosted runners
-
-For case 2. We can utilize memory and CPUs available on both CI and local development machines to run
-test in parallel, but we cannot use pytest xdist plugin for that - we need to split the tests into test
-types and run each test type with their own instance of database and separate container where the tests
-in each type are run with exclusive access to their database and each test within test type runs sequentially.
-By the nature of those tests - they rely on shared databases - and they update/reset/cleanup data in the
-databases while they are executing.
+1. To allow selectively running only a subset of test types for some PRs.
+2. To allow efficient parallel execution of tests on Self-Hosted runners.
 
+For case 2: We can utilize the memory and CPUs available on both CI and local development machines to run
+tests in parallel. However, we cannot use the pytest xdist plugin for this. Instead, we split the tests into test
+types and run each type with its own database instance and separate container. The tests in each type run with exclusive access to their database, and tests within a type run sequentially.
+This is necessary because these tests rely on shared databases and update/reset/cleanup data during execution.
 
 DB and non-DB tests
 -------------------
 
-There are two kinds of unit tests in Airflow - DB and non-DB tests. This chapter describe the differences
-between those two types.
+There are two kinds of unit tests in Airflow: DB and non-DB tests. This chapter describes the differences
+between these two types.
 
 Airflow non-DB tests
 ....................
 
-For the Non-DB tests, they are run once for each tested Python version with ``none`` database backend (which
-causes any database access to fail. Those tests are run with ``pytest-xdist`` plugin in parallel which
-means that we can efficiently utilised multi-processor machines (including ``self-hosted`` runners with
-8 CPUS we have to run the tests with maximum parallelism).
+Non-DB tests are run once for each tested Python version with the ``none`` database backend (which
+causes any database access to fail). These tests are run with the ``pytest-xdist`` plugin in parallel, which
+means we can efficiently utilize multi-processor machines (including ``self-hosted`` runners with
+8 CPUs, where we run tests with maximum parallelism).
 
-It's usually straightforward to run those tests in local virtualenv because they do not require any
-setup or running database. They also run much faster than DB tests. You can run them with ``pytest`` command
-or with ``breeze`` that has all the dependencies needed to run all tests automatically installed. Of course
-you can also select just specific test or folder or module for the Pytest to collect/run tests from there,
-the example below shows how to run all tests, parallelizing them with ``pytest-xdist``
-(by specifying ``tests`` folder):
+It is usually straightforward to run these tests in a local virtualenv because they do not require any
+database setup. They also run much faster than DB tests. You can run them with the ``pytest`` command
+or with ``breeze`` (which has all dependencies automatically installed). You can also select specific tests, folders, or modules for Pytest to collect/run.
+The example below shows how to run all tests, parallelizing them with ``pytest-xdist`` (by specifying the ``tests`` folder):
 
 .. code-block:: bash
 
     pytest airflow-core/tests --skip-db-tests -n auto
 
-
 The ``--skip-db-tests`` flag will only run tests that are not marked as DB tests.
 
-
-You can also run ``breeze`` command to run all the tests (they will run in a separate container,
-the selected python version and without access to any database). Adding ``--use-xdist`` flag will run all
-tests in parallel using ``pytest-xdist`` plugin.
+You can also use the ``breeze`` command to run all the tests (they will run in a separate container,
+with the selected Python version and without access to any database). Adding the ``--use-xdist`` flag will run all
+tests in parallel using the ``pytest-xdist`` plugin.
 
 You can run parallel commands via ``breeze testing core-tests`` or ``breeze testing providers-tests``
-- by adding the parallel flags:
+by adding the parallel flags:
 
 .. code-block:: bash
 
    breeze testing core-tests --skip-db-tests --backend none --use-xdist
 
-You can pass ``--parallel-test-type`` list of test types to execute or ``--exclude--parallel-test-types``
-to exclude them from the default set:.
+You can pass a list of test types to execute via ``--parallel-test-types`` or exclude them via ``--exclude-parallel-test-types``:
 
 .. code-block:: bash
 
    breeze testing providers-tests --run-in-parallel --skip-db-tests --backend none --parallel-test-types "Providers[google] Providers[amazon]"
 
-Also you can enter interactive shell with ``breeze`` and run tests from there if you want to iterate
-with the tests. Source files in ``breeze`` are mounted as volumes so you can modify them locally and
-rerun in Breeze as you will (``-n auto`` will parallelize tests using ``pytest-xdist`` plugin):
+Additionally, you can enter an interactive shell with ``breeze`` and run tests from there to iterate. Source files in ``breeze`` are mounted as volumes, so you can modify them locally and
+rerun in Breeze as needed (``-n auto`` will parallelize tests using the ``pytest-xdist`` plugin):
 
 .. code-block:: bash
 
    breeze shell --backend none --python 3.10
    > pytest airflow-core/tests --skip-db-tests -n auto
 
-
 Airflow DB tests
 ................
 
-Some of the tests of Airflow require a database to connect to in order to run. Those tests store and read data
-from Airflow DB using Airflow's core code and it's crucial to run the tests against all real databases
-that Airflow supports in order to check if the SQLAlchemy queries are correct and if the database schema is
-correct.
+Some Airflow tests require a database connection. These tests store and read data
+from the Airflow DB using Airflow's core code. It is crucial to run these tests against all real databases
+that Airflow supports to check if SQLAlchemy queries and the database schema are correct.
 
-Those tests should be marked with ``@pytest.mark.db`` decorator on one of the levels:
+These tests should be marked with the ``@pytest.mark.db_test`` decorator at one of the following levels:

Review Comment:
   The documentation text mentioned @pytest.mark.db, but the examples below and the codebase use @pytest.mark.db_test. I've standardized the text to match the examples.
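   For illustration, a minimal sketch of the corrected marker at the module level and at the function level (test and class names are hypothetical):

   .. code-block:: python

       import pytest

       # Module level: marks every test in this file as a DB test.
       pytestmark = pytest.mark.db_test


       # Alternatively, mark a single test (or a test class) directly:
       @pytest.mark.db_test
       def test_store_and_read_from_db():
           ...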

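   And for the configuration-override paragraph earlier in this hunk, a minimal sketch of both override styles (the ``conf_vars`` import path is an assumption based on Airflow's shared test utilities; the config key is illustrative):

   .. code-block:: python

       import pytest

       # Import path is an assumption; conf_vars lives in the shared test
       # utilities of the Airflow sources (e.g. tests_common.test_utils.config).
       from tests_common.test_utils.config import conf_vars


       def test_with_env_override(monkeypatch):
           # Patching via the AIRFLOW__SECTION__KEY environment variable pattern.
           monkeypatch.setenv("AIRFLOW__CORE__PARALLELISM", "4")
           ...


       def test_with_conf_vars():
           # The same override via the conf_vars context manager.
           with conf_vars({("core", "parallelism"): "4"}):
               ...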


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
