+1. And it should be paired with the official image we have in progress. I
looked a lot at Astronomer's image while preparing my draft, and we can
make any adjustments needed to make it work with the Helm chart - I am
super happy to collaborate on that.

PR here: https://github.com/apache/airflow/pull/7832

J.


On Tue, Mar 24, 2020 at 3:15 PM Kaxil Naik <kaxiln...@gmail.com> wrote:

> @Tomasz Urbaszek <tomasz.urbas...@polidea.com> :
> Helm Chart Link: https://github.com/astronomer/airflow-chart
>
> On Tue, Mar 24, 2020 at 2:13 PM Tomasz Urbaszek <turbas...@apache.org>
> wrote:
>
> > An official Helm chart is something our community needs! Using your
> > chart as the official one makes a lot of sense to me because, as you
> > mentioned, it's battle-tested.
> >
> > One question: what Airflow image do you use? Also, would you mind
> > sharing a link to the chart?
> >
> > Tomek
> >
> >
> > On Tue, Mar 24, 2020 at 2:07 PM Greg Neiheisel
> > <g...@astronomer.io.invalid> wrote:
> > >
> > > Hey everyone,
> > >
> > > Over the past few years at Astronomer, we’ve created, managed, and
> > > hardened a production-ready Helm Chart for Airflow
> > > (https://github.com/astronomer/airflow-chart) that is being used by
> > > both our SaaS and Enterprise customers. This chart is battle-tested and
> > > running hundreds of Airflow deployments of varying sizes and runtime
> > > environments. It’s been built up to encapsulate the issues that Airflow
> > > users run into in the real world.
> > >
> > > While this chart was originally developed internally for our Astronomer
> > > Platform, we’ve recently decoupled the chart from the rest of our
> > > platform to make it usable by the greater Airflow community. With these
> > > changes in mind, we want to start a conversation about donating this
> > > chart to the Airflow community.
> > >
> > > Some of the main features of the chart are:
> > >
> > >    - It works out of the box. With zero configuration, a user will get
> > >    a postgres database, a default user and the KubernetesExecutor ready
> > >    to run DAGs.
> > >    - Support for Local, Celery (w/ optional KEDA autoscaling) and
> > >    Kubernetes executors (see the values sketch after this list).
> > >    - Support for optional pgbouncer. We use this to share a configurable
> > >    connection pool size per deployment. Useful for limiting connections
> > >    to the metadata database.
> > >    - Airflow migration support. A user can push a newer version of
> > >    Airflow into an existing release and migrations will automatically
> > >    run cleanly.
> > >    - Prometheus support. Optionally install and configure a
> > >    statsd-exporter to ingest Airflow metrics and expose them to
> > >    Prometheus automatically.
> > >    - Resource control. Optionally control the ResourceQuotas and
> > >    LimitRanges for each deployment so that no deployment can overload a
> > >    cluster.
> > >    - Simple optional Elasticsearch support.
> > >    - Optional namespace cleanup. Sometimes KubernetesExecutor and
> > >    KubernetesPodOperator pods fail for reasons other than the actual
> > >    task. This feature helps keep things clean in Kubernetes.
> > >    - Support for running locally in KIND (Kubernetes in Docker).
> > >    - Automatically tested across many Kubernetes versions with Helm 2
> > >    and 3 support.
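> > >
> > > To make the executor and pgbouncer options above concrete, here is a
> > > minimal sketch of a values override. The key names (executor,
> > > pgbouncer.enabled, pgbouncer.maxClientConnections) are illustrative
> > > assumptions based on common Helm chart conventions, not necessarily
> > > the chart's actual values schema:
> > >
> > >    # hypothetical values.yaml override - key names are illustrative
> > >    executor: CeleryExecutor      # or LocalExecutor / KubernetesExecutor
> > >
> > >    pgbouncer:
> > >      enabled: true               # optional pooler for the metadata DB
> > >      maxClientConnections: 100   # cap connections per deployment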
> > >
> > > We’ve found that the cleanest and most reliable way to deploy DAGs to
> > > Kubernetes and manage them at scale is to package them into the actual
> > > docker image, so we have geared this chart towards that method of
> > > operation, though adding other methods should be straightforward.
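> > >
> > > As an illustration of that workflow, rolling out new DAGs could be as
> > > simple as bumping the image reference in the release's values and
> > > running a helm upgrade. The key names below are assumptions made for
> > > the sake of the example, not necessarily what the chart exposes:
> > >
> > >    # hypothetical values override - image with DAGs baked in
> > >    images:
> > >      airflow:
> > >        repository: registry.example.com/my-airflow   # your DAG image
> > >        tag: "2020.03.24-1"            # bump this tag to deploy new DAGs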
> > >
> > > We would love thoughts from the community and would love to see this
> > > chart help others to get up and running on Kubernetes!
> > >
> > > --
> > > *Greg Neiheisel* / Chief Architect Astronomer.io
> >
>


-- 

Jarek Potiuk
Polidea <https://www.polidea.com/> | Principal Software Engineer

M: +48 660 796 129 <+48660796129>
