jedcunningham commented on code in PR #29078:
URL: https://github.com/apache/airflow/pull/29078#discussion_r1083318697
##########
docs/helm-chart/index.rst:
##########
@@ -140,3 +140,22 @@ will not start as the migrations will not be run:
This is so these CI/CD services can perform updates without issues and
preserve the immutability of Kubernetes Job manifests.
This also applies if you install the chart using ``--wait`` in your ``helm
install`` command.
+
+.. note::
+  While deploying this Helm chart with Argo, you might encounter issues with
+  database migrations not running automatically on upgrade.
+
+To ensure database migrations with Argo CD, you will need to add:
+
+.. code-block:: yaml
+
+ migrateDatabaseJob:
+ jobAnnotations:
+ "argocd.argoproj.io/hook": Sync
+
+and, equivalent configurations in case of Flux CD
+
+.. note::
+  This will ensure database migrations run when the Airflow Docker image is
+  upgraded. This approach has a limitation in that the database migrations will
+  run every time there is a ``Sync`` event in Argo. This is a trade-off for
+  automation at the cost of some computational loss.
+
+.. warning::
+  While deploying with Argo CD, you may encounter issues with (built-in)
+  Redis secrets (when you use the Celery(Kubernetes)Executor) not being rotated
+  properly after Airflow pods restart. It's recommended that you bring your own
+  Redis. See `Celery Backend <production-guide#celery-backend>`__
Review Comment:
And for the Argo CD case, instead of jumping straight to suggesting BYO
redis, it might be better to suggest folks set `redis.passwordSecretName` and
`data.brokerUrlSecretName`, or `redis.password` as a first option?
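
A sketch of what that first option could look like in a user's values file. The Secret names below are hypothetical; to my understanding, the chart expects the password Secret to carry the key `password` and the broker URL Secret the key `connection`:

```yaml
# Hypothetical values.yaml fragment: point the chart at pre-created
# Secrets so syncs don't regenerate the built-in Redis credentials.
redis:
  passwordSecretName: airflow-redis-password   # Secret with key "password"
data:
  brokerUrlSecretName: airflow-broker-url      # Secret with key "connection"

# Or, simpler but less secure, pin an inline password instead:
# redis:
#   password: change-me
```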
##########
docs/helm-chart/index.rst:
##########
@@ -140,3 +140,22 @@ will not start as the migrations will not be run:
This is so these CI/CD services can perform updates without issues and
preserve the immutability of Kubernetes Job manifests.
This also applies if you install the chart using ``--wait`` in your ``helm
install`` command.
+
+.. note::
+  While deploying this Helm chart with Argo, you might encounter issues with
+  ``DB Migrations`` not running automatically on upgrade.
+
+To ensure DB Migrations with Argo CD, you will need to:
+
+.. code-block:: yaml
+
+ migrateDatabaseJob:
+ jobAnnotations:
+ "argocd.argoproj.io/hook": Sync
+
+and, equivalent configurations in case of Flux CD
Review Comment:
I'd suggest we drop this line if we don't know what we need for flux.
##########
docs/helm-chart/index.rst:
##########
@@ -140,3 +140,22 @@ will not start as the migrations will not be run:
This is so these CI/CD services can perform updates without issues and
preserve the immutability of Kubernetes Job manifests.
This also applies if you install the chart using ``--wait`` in your ``helm
install`` command.
+
+.. note::
+  While deploying this Helm chart with Argo, you might encounter issues with
+  database migrations not running automatically on upgrade.
+
+To ensure database migrations with Argo CD, you will need to add:
+
+.. code-block:: yaml
+
+ migrateDatabaseJob:
+ jobAnnotations:
+ "argocd.argoproj.io/hook": Sync
+
+and, equivalent configurations in case of Flux CD
+
+.. note::
+  This will ensure database migrations run when the Airflow Docker image is
+  upgraded. This approach has a limitation in that the database migrations will
+  run every time there is a ``Sync`` event in Argo. This is a trade-off for
+  automation at the cost of some computational loss.
Review Comment:
```suggestion
This will ensure database migrations run when the Airflow Docker image is
upgraded. This approach has a limitation in that the database migrations will
run every time there is a ``Sync`` event in Argo. This is a trade-off for
automation at the cost of some computational loss.
```
Probably doesn't need to be a note, as it's just a continuation of the stuff
right above it.
Is this actually tied to image changes, or would any change kick it off (I'm
not very familiar with Argo CD)?
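
For reference, Argo CD's resource hooks also support a delete-policy annotation. A hedged sketch of the chart values with that refinement, assuming ``jobAnnotations`` is passed through to the Job metadata as documented:

```yaml
migrateDatabaseJob:
  jobAnnotations:
    "argocd.argoproj.io/hook": Sync
    # Delete the completed hook Job before the next sync re-creates it,
    # which also sidesteps the immutability of Job manifests.
    "argocd.argoproj.io/hook-delete-policy": BeforeHookCreation
```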
##########
docs/helm-chart/production-guide.rst:
##########
@@ -295,6 +295,10 @@ By default, the chart will deploy Redis. However, you can use any supported Cele
data:
brokerUrl: redis://redis-user:password@redis-host:6379/0
+.. note::
+  It is recommended that you bring your own Redis when using it in production.
+  The built-in Redis instance is more suitable (and intended) for
+  experimentation, enabling you to deploy Airflow quickly.
Review Comment:
Our usage of redis is so simple, I'm not sure this is actually the case. My
2c.
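
If the `redis.passwordSecretName` / `data.brokerUrlSecretName` route is taken instead of BYO Redis, the referenced Secrets might look like this. The names and password are illustrative; the keys `password` and `connection` follow the chart's conventions as I understand them:

```yaml
# Hypothetical Secret manifests for externally managed Redis credentials.
apiVersion: v1
kind: Secret
metadata:
  name: airflow-redis-password
stringData:
  password: change-me                # key the chart reads for Redis auth
---
apiVersion: v1
kind: Secret
metadata:
  name: airflow-broker-url
stringData:
  # Must embed the same password; points at the chart's Redis service.
  connection: redis://:change-me@airflow-redis:6379/0
```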
--
This is an automated message from the Apache Git Service.