This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v3-0-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit b71c0f57930e579d21a9e46336993f8c4bd2bef9
Author: Ryan Hatter <[email protected]>
AuthorDate: Sat Apr 19 06:16:44 2025 -0400

    Edit airflow 3 migration guide (#49414)
    
    (cherry picked from commit 1b44508b342c4c94284d854b981316cf2022c6d9)
---
 .../docs/installation/upgrading_to_airflow3.rst    | 69 +++++++++++-----------
 1 file changed, 33 insertions(+), 36 deletions(-)

diff --git a/airflow-core/docs/installation/upgrading_to_airflow3.rst 
b/airflow-core/docs/installation/upgrading_to_airflow3.rst
index 208f4ba5cd4..e30a0a41039 100644
--- a/airflow-core/docs/installation/upgrading_to_airflow3.rst
+++ b/airflow-core/docs/installation/upgrading_to_airflow3.rst
@@ -18,52 +18,49 @@
 Upgrading to Airflow 3
 =======================
 
-Apache Airflow 3 is a major release. This guide walks you through the steps 
required to upgrade from Airflow 2.x to Airflow 3.0.
+Apache Airflow 3 is a major release and contains :ref:`breaking 
changes<breaking-changes>`. This guide walks you through the steps required to 
upgrade from Airflow 2.x to Airflow 3.0.
 
 Step 1: Take care of prerequisites
 ----------------------------------
 
 - Make sure that you are on Airflow 2.7 or later.
 - Make sure that your Python version is in the supported list. Airflow 3.0.0 
supports the following Python versions: Python 3.9, 3.10, 3.11 and 3.12.
-- Ensure that you are not using SubDAGs. These were deprecated in Airflow 2.0 
and removed in Airflow 3.
-- For a complete list of breaking changes, which you should note before the 
upgrade, please check the breaking changes section below.
+- Ensure that you are not using any features or functionality that have been 
:ref:`removed in Airflow 3<breaking-changes>`.
+
 
 Step 2: Clean and back up your existing Airflow Instance
 ---------------------------------------------------------
 
-- It is highly recommended to make a backup of your Airflow instance 
specifically including your Airflow metadata DB before starting the migration 
process.
-- If you do not have a "hot backup" capability for your DB, you should do it 
after shutting down your Airflow instances, so that the backup of your database 
will be consistent.
-- If you did not make a backup and your migration fails, you might end up in a 
half-migrated state and restoring DB from backup and repeating the migration
-  might be the only easy way out. This can for example be caused by a broken 
network connection between your CLI and the database while the migration 
happens, so taking a
-  backup is an important precaution to avoid problems like this.
-- A long running Airflow instance can accumulate a certain amount of silt, in 
the form of old database entries, which are no longer
-  required. This is typically in the form of old XCom data which is no longer 
required, and so on. As part of the Airflow 3 upgrade
-  process, there will be schema changes. Based on the size of the Airflow 
meta-database this can be somewhat time
-  consuming. For a faster, safer migration, we recommend that you clean up 
your Airflow meta-database before the upgrade.
-  You can use ``airflow db clean`` command for that.
+- It is highly recommended that you make a backup of your Airflow instance, specifically your Airflow metadata database, before starting the migration process.
+
+    - If you do not have a "hot backup" capability for your database, take the backup after shutting down your Airflow instances so that it is consistent. For example, if you do not shut down your Airflow instance first, the backup may miss recent TaskInstances or DagRuns.
+
+    - If you did not make a backup and your migration fails, you might end up 
in a half-migrated state. This can be caused by, for example, a broken network 
connection between your Airflow CLI and the database during the migration. 
Having a backup is an important precaution to avoid problems like this.
+
+- A long-running Airflow instance can accumulate a substantial amount of data that is no longer required (for example, old XCom data). Schema changes will be part of the Airflow 3
+  upgrade process, and these changes can take a long time if the database is large. For a faster, safer migration, we recommend that you clean up your Airflow meta-database before the upgrade.
+  You can use the ``airflow db clean`` :ref:`Airflow CLI command<cli-db-clean>` to trim your Airflow database.
+
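The cleanup step above can be sketched as follows. The cutoff timestamp is illustrative, and ``--dry-run`` previews what would be deleted without removing any rows:

```bash
# Hedged sketch: preview, then perform, a metadata cleanup before migrating.
# The cutoff date is illustrative; pick one that suits your retention needs.
if command -v airflow >/dev/null 2>&1; then
    airflow db clean --clean-before-timestamp "2024-01-01" --dry-run
    # Once the dry run looks right, drop --dry-run to actually delete:
    # airflow db clean --clean-before-timestamp "2024-01-01" --yes
else
    echo "airflow CLI not found; run this on your Airflow host"
fi
```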
 
 Step 3: DAG Authors - Check your Airflow DAGs for compatibility
 ----------------------------------------------------------------
 
-To minimize friction for users upgrading from prior versions of Airflow, we 
have created a DAG upgrade check utility using `Ruff 
<https://docs.astral.sh/ruff/>`_.
+To minimize friction for users upgrading from prior versions of Airflow, we 
have created a dag upgrade check utility using `Ruff 
<https://docs.astral.sh/ruff/>`_.
 
-Use the latest available ``ruff`` version to get updates to the rules but at 
the very least use ``0.11.6``:
+The latest available ``ruff`` version will have the most up-to-date rules, but be sure to use at least version ``0.11.6``. The example below demonstrates how to check
+for dag incompatibilities that must be fixed before your dags will work as expected on Airflow 3.
 
 .. code-block:: bash
 
     ruff check dag/ --select AIR301
 
-This command above shows you all the errors which need to be fixed before 
these DAGs can be used on Airflow 3.
-
-Some of these changes are automatically fixable and you can also rerun the 
command above with the auto-fix option as shown below.
-
-To preview the changes:
+To preview the recommended fixes, run the following command:
 
 .. code-block:: bash
 
     ruff check dag/ --select AIR301 --show-fixes
 
-To auto-fix:
+Some changes can be automatically fixed. To do so, run the following command:
 
 .. code-block:: bash
 
@@ -72,15 +69,16 @@ To auto-fix:
 Step 4: Install the Standard Providers
 --------------------------------------
 
-- Some of the commonly used Operators which were bundled as part of the Core 
Airflow OSS package such as the
-  Bash and Python Operators have now been split out into a separate package: 
``apache-airflow-providers-standard``.
-- For user convenience, this package can also be installed on Airflow 2.x 
versions, so that DAGs can be modified to reference these Operators from the 
Standard Provider package instead of Airflow Core.
+- Some commonly used Operators that were bundled as part of the ``airflow-core`` package (for example, ``BashOperator`` and ``PythonOperator``)
+  have now been split out into a separate package: ``apache-airflow-providers-standard``.
+- For convenience, this package can also be installed on Airflow 2.x versions, 
so that DAGs can be modified to reference these Operators from the standard 
provider
+  package instead of Airflow Core.
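A minimal sketch of that ahead-of-time switch on an Airflow 2.x instance (the fallback message is illustrative; the commented import lines show the provider's module paths):

```bash
# Hedged sketch: install the split-out standard provider on Airflow 2.x so
# dags can switch to the provider import paths before the upgrade.
python3 -m pip install apache-airflow-providers-standard \
    || echo "install failed; check network access and your Airflow version"
# Then update dag imports, for example:
#   from airflow.providers.standard.operators.bash import BashOperator
#   from airflow.providers.standard.operators.python import PythonOperator
```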
 
 
 Step 5: Deployment Managers - Upgrade your Airflow Instance
 ------------------------------------------------------------
 
-For an easier and safer upgrade process, we have also created a utility to 
upgrade your Airflow instance configuration as a deployment manager.
+For an easier and safer upgrade process, we have also created a utility to 
upgrade your Airflow instance configuration.
 
 The first step is to run this configuration check utility as shown below:
 
@@ -97,8 +95,7 @@ This configuration utility can also update your configuration 
to automatically b
     airflow config update --fix
 
 
-The biggest part of an Airflow upgrade is the database upgrade. The database 
upgrade process for Airflow 3 is the same as for Airflow 2.7 or later.
-
+The biggest part of an Airflow upgrade is the database upgrade. The database 
upgrade process for Airflow 3 is the same as for Airflow 2.7 or later:
 
 .. code-block:: bash
 
@@ -111,19 +108,21 @@ You should now be able to start up your Airflow 3 
instance.
 Step 6: Changes to your startup scripts
 ---------------------------------------
 
-- In Airflow 3, the Webserver has now become a generic API-server. The 
api-server can be started up using the following command:
+In Airflow 3, the Webserver has become a generic API server. The API server 
can be started up using the following command:
 
 .. code-block:: bash
 
     airflow api-server
 
-- The DAG processor must now be started independently, even for local or 
development setups.
+The dag processor must now be started independently, even for local or 
development setups:
 
 .. code-block:: bash
 
     airflow dag-processor
 
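Putting the startup changes together, a minimal local-development sketch (assumes the ``airflow`` CLI is on your path; the scheduler line is the usual companion to the two commands above):

```bash
# Hedged sketch: the minimal set of long-running processes for a local
# Airflow 3 setup; in practice, run each in its own terminal.
if command -v airflow >/dev/null 2>&1; then
    airflow api-server &      # serves the UI and REST API (replaces the webserver)
    airflow scheduler &       # schedules task instances
    airflow dag-processor &   # parses dag files; now a separate required process
    wait
else
    echo "airflow CLI not found; install Airflow 3 first"
fi
```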
 
+.. _breaking-changes:
+
 Breaking Changes
 ================
 
@@ -133,9 +132,8 @@ These include:
 - **SubDAGs**: Replaced by TaskGroups, Datasets, and Data Aware Scheduling.
 - **Sequential Executor**: Replaced by LocalExecutor, which can be used with 
SQLite for local development use cases.
 - **SLAs**: Deprecated and removed; Will be replaced by forthcoming `Deadline 
Alerts <https://cwiki.apache.org/confluence/x/tglIEw>`_.
-- **Subdir**: Used as an argument on many CLI commands (``--subdir`` or ``-S`` 
has been superseded by DAG bundles.
-- **Following keys are no longer available in task context. If not replaced, 
will cause DAG errors**:
-
+- **Subdir**: The ``--subdir`` (or ``-S``) argument used by many CLI commands has been superseded by :doc:`DAG bundles </administration-and-deployment/dag-bundles>`.
+- **Some Airflow context variables**: The following keys are no longer available in a :ref:`task instance's context <templates:variables>`. If not replaced, these keys will cause dag errors:
   - ``tomorrow_ds``
   - ``tomorrow_ds_nodash``
   - ``yesterday_ds``
@@ -148,10 +146,9 @@ These include:
   - ``next_ds_nodash``
   - ``next_ds``
   - ``execution_date``
-
-- ``catchup_by_default`` is now ``False`` by default.
-- ``create_cron_data_intervals`` is now ``False``. This means that the 
``CronTriggerTimetable`` will be used by default instead of the 
``CronDataIntervalTimetable``
-- **Simple Auth** is now default ``auth_manager``. To continue using FAB as 
the Auth Manager, please install the FAB provider and set ``auth_manager`` to
+- The ``catchup_by_default`` dag parameter is now ``False`` by default.
+- The ``create_cron_data_intervals`` configuration is now ``False`` by default. This means that the ``CronTriggerTimetable`` will be used by default instead of the ``CronDataIntervalTimetable``.
+- **Simple Auth** is now the default ``auth_manager``. To continue using FAB as the Auth Manager, install the FAB provider and set ``auth_manager`` to ``FabAuthManager``:
 
   .. code-block:: ini
 
