An official Helm chart is something our community needs! Using your
chart as the official one makes a lot of sense to me because, as you
mentioned, it's battle-tested.

One question: what Airflow image do you use? Also, would you mind
sharing a link to the chart?

Tomek


On Tue, Mar 24, 2020 at 2:07 PM Greg Neiheisel
<g...@astronomer.io.invalid> wrote:
>
> Hey everyone,
>
> Over the past few years at Astronomer, we’ve created, managed, and hardened
> a production-ready Helm Chart for Airflow (
> https://github.com/astronomer/airflow-chart) that is being used by both our
> SaaS and Enterprise customers. This chart is battle-tested, running
> hundreds of Airflow deployments of varying sizes in a range of runtime
> environments. It has been built up to address the issues that Airflow
> users run into in the real world.
>
> While this chart was originally developed internally for our Astronomer
> Platform, we’ve recently decoupled the chart from the rest of our platform
> to make it usable by the greater Airflow community. With these changes in
> mind, we want to start a conversation about donating this chart to the
> Airflow community.
>
> Some of the main features of the chart are:
>
>    - It works out of the box. With zero configuration, a user will get a
>    postgres database, a default user and the KubernetesExecutor ready to run
>    DAGs.
>    - Support for Local, Celery (w/ optional KEDA autoscaling) and
>    Kubernetes executors.
>
>    - Support for optional pgbouncer. We use this to share a configurable
>    connection pool size per deployment. Useful for limiting connections
>    to the metadata database.
>
>    - Airflow migration support. A user can push a newer version of Airflow
>    into an existing release and migrations will automatically run cleanly.
>    - Prometheus support. Optionally install and configure a statsd-exporter
>    to ingest Airflow metrics and expose them to Prometheus automatically.
>    - Resource control. Optionally control the ResourceQuotas and
>    LimitRanges for each deployment so that no deployment can overload a
>    cluster.
>    - Simple optional Elasticsearch support.
>    - Optional namespace cleanup. Sometimes KubernetesExecutor and
>    KubernetesPodOperator pods fail for reasons other than the actual task.
>    This feature helps keep things clean in Kubernetes.
>    - Support for running locally in KIND (Kubernetes in Docker).
>    - Automatically tested across many Kubernetes versions with Helm 2 and 3
>    support.
>
> We’ve found that the cleanest and most reliable way to deploy DAGs to
> Kubernetes and manage them at scale is to package them into the actual
> docker image, so we have geared this chart towards that method of
> operation, though adding other methods should be straightforward.
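>
> For illustration, baking DAGs into the image as described might look
> like this (the base image tag and paths here are assumptions for the
> sketch, not taken from the chart):
>
>     FROM apache/airflow:1.10.10
>     # Copy the project's DAG definitions into the image so each build
>     # is an immutable, versioned deployment artifact.
>     COPY dags/ /opt/airflow/dags/
>
> Deploying a new DAG version then becomes a matter of building and
> pushing a new image tag and upgrading the Helm release to point at it.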
>
> We would love to hear the community's thoughts, and we hope this chart
> can help others get up and running on Kubernetes!
>
> --
> *Greg Neiheisel* / Chief Architect Astronomer.io
