potiuk commented on a change in pull request #11541:
URL: https://github.com/apache/airflow/pull/11541#discussion_r507187279
##########
File path: CI.rst
##########
@@ -629,6 +628,86 @@ delete old artifacts that are > 7 days old. It only runs for the 'apache/airflow
We also have a script that can help to clean-up the old artifacts:
`remove_artifacts.sh <dev/remove_artifacts.sh>`_
+Selective CI Checks
+===================
+
+In order to optimise our CI builds, we only run selected checks for some kinds of changes.
+The logic implemented reflects the internal architecture of the Airflow 2.0 packages,
+and it helps to keep down both the usage of jobs in GitHub Actions and the CI feedback
+time for contributors in the case of simpler changes.
+
+We have the following test types (separated by the packages in which they live):
+
+* Core - for the core Airflow functionality (core folder)
+* API - Tests for the API of Airflow (api and api_connexion folders)
+* CLI - Tests for the CLI of Airflow (cli folder)
* WWW - Tests for the webserver of Airflow (www folder, and www_rbac folder in 1.10)
+* Providers - Tests for all Providers of Airflow (providers folder)
* Other - all other tests (all other folders that are not part of any of the above)
Review comment:
Not at all. First of all, I think it was not the root cause (we would start
hitting it on weekdays anyway), but most of all, I have already changed the
approach a lot, learning from the "too many jobs" problem. We still separate the
tests, but rather than running them in separate jobs, we now run all test types
belonging to the same backend/python version in a loop - each test type in a
separate `docker run` - cleaning up the docker remnants between each type.
So if there is a decision to run, for example, Core, API and CLI, they will run
one after the other - each as a separate `docker run` - but on the same machine.
There is a cleanup between each type, and we also print the available resources
to see whether the previous type took up too much for the next one, for whatever
reason. It has worked like that since Wednesday, I think. This PR is mainly
about also limiting when a docker image build is needed at all - there are some
cases (like when you change only an .md file) where no CI image/PROD image is
needed. In that case we skip building and downloading the image and only run a
limited set of "static checks".
I think it is not a frequent case, but it happens often enough to bring the
build down from 20 minutes to 20 seconds sometimes.
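The approach described above - mapping changed files to test types, skipping the image entirely for docs-only changes, and running each selected type as a separate `docker run` with cleanup in between - could be sketched roughly like this. This is a minimal illustration only, not the actual Airflow CI scripts; all names here (`select_test_types`, `run_selected_tests`, the `airflow-ci` image, `run_tests.sh`, and the file-to-type mapping) are hypothetical.

```shell
#!/usr/bin/env bash
# Rough sketch of the selective-checks approach -- NOT the real Airflow CI
# scripts. All function/image/script names are hypothetical illustrations.
set -u

# Map changed files to the test types that need to run.
# Prints "NONE" for docs-only changes (no CI/PROD image needed at all).
select_test_types() {
  local types="" docs_only="true" file
  for file in "$@"; do
    case "$file" in
      *.md|*.rst) ;;                                        # docs only
      airflow/api*/*)      types="$types API"; docs_only="false" ;;
      airflow/cli/*)       types="$types CLI"; docs_only="false" ;;
      airflow/www*/*)      types="$types WWW"; docs_only="false" ;;
      airflow/providers/*) types="$types Providers"; docs_only="false" ;;
      *)                   types="$types Core"; docs_only="false" ;;
    esac
  done
  if [[ "$docs_only" == "true" ]]; then
    echo "NONE"
  else
    echo "$types" | tr ' ' '\n' | sort -u | tr '\n' ' '
  fi
}

# Run each selected test type as a separate `docker run` on the SAME
# machine, cleaning docker remnants and printing resources between types.
# (Commands are echoed rather than executed, to keep the sketch runnable.)
run_selected_tests() {
  local test_type
  for test_type in $(select_test_types "$@"); do
    if [[ "$test_type" == "NONE" ]]; then
      echo "Docs-only change: skipping image build, running static checks only"
      return 0
    fi
    echo "docker run --rm airflow-ci ./run_tests.sh --type ${test_type}"
    echo "docker system prune -f    # clean up remnants between test types"
    echo "df -h && free -m          # show resources left for the next type"
  done
}

run_selected_tests "airflow/cli/commands.py" "airflow/api/auth.py"
```

The point of the loop is that the per-type isolation comes from separate `docker run` invocations plus explicit cleanup, not from separate GitHub Actions jobs, so the job count stays low.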
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]