This is a different tack entirely, but the way we do it is to bake our DAGs etc.
into the images we use for our deploy, which we push to Google Container
Registry. We use Helm, a package manager that sits on top of Kubernetes, to
interpolate the image name into the deployment files on release. We basically
have a `make deploy` type target that:

1. builds the containers with the code from a given commit baked into them,
2. pushes them to GCR tagged with the environment and git hash, and
3. runs a helm upgrade with the name of that image interpolated in the right
   spots.

So this isn't helpful for git syncing specifically, but it's another deployment
tack to try that has been successful for us on GKE.
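
Roughly, and with the project ID, release name, chart path, and the
image.repository / image.tag values being placeholders rather than our exact
setup, the `make deploy` recipe boils down to something like this shell sketch:

#!/usr/bin/env bash
# Rough sketch of the "make deploy" flow described above; names are placeholders.
set -euo pipefail

ENV="${1:-staging}"
GIT_SHA="$(git rev-parse --short HEAD)"
IMAGE="gcr.io/my-project/airflow"   # placeholder GCR repository
TAG="${ENV}-${GIT_SHA}"             # environment + git hash

# 1. Build the container with the DAGs from this commit baked in
docker build -t "${IMAGE}:${TAG}" .

# 2. Push it to Google Container Registry
docker push "${IMAGE}:${TAG}"

# 3. Upgrade the release, interpolating the new image into the templates
helm upgrade --install airflow ./airflow-chart \
  --set image.repository="${IMAGE}" \
  --set image.tag="${TAG}"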

Laura

On Mon, Jan 15, 2018 at 2:53 PM, jordan.zuc...@gmail.com <
jordan.zuc...@gmail.com> wrote:

>
>
> On 2018-01-12 16:17, Anirudh Ramanathan <ramanath...@google.com.INVALID>
> wrote:
> > > Any good way to debug this?
> >
> > One way might be reading the events from "kubectl get events". That should
> > reveal some information about the pod removal event.
> > This brings up another question - should errored pods be persisted for
> > debugging?
> >
> > On Fri, Jan 12, 2018 at 3:07 PM, jordan.zuc...@gmail.com <
> > jordan.zuc...@gmail.com> wrote:
> >
> > > I'm trying to use Airflow and Kubernetes and having trouble using git sync
> > > to pull DAGs into workers.
> > >
> > > I use a git sync init container on the scheduler to pull in DAGs initially
> > > and that works. But when worker pods are spawned, the workers terminate
> > > almost immediately because they cannot find the DAGs. And since the workers
> > > terminate so quickly, I can't even inspect the file structure to see where
> > > the DAGs ended up during the workers' git sync init container.
> > >
> > > I noticed that the git sync init container for the workers is hard coded
> > > into /tmp/dags and there is a git_subpath config setting as well. But I
> > > can't understand how the git synced DAGs ever end up in
> > > /root/airflow/dags.
> > >
> > > I am successfully using a git sync init container for the scheduler, so I
> > > know my git credentials are valid. Any good way to debug this? Or an
> > > example of how to set this up correctly?
> > >
> >
> >
> >
> > --
> > Anirudh Ramanathan
> Anirudh, in my case, the pods are persisted because I set
> `delete_worker_pods` to False in the airflow configmap; however, I cannot
> exec into them because they terminated on an error and are no longer
> running.
>
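
For reference, a rough sketch of the pieces being discussed, with the
namespace, repo URL, and pod name as placeholders (the exact option names may
differ in your build of the kubernetes executor):

# airflow.cfg fragment carried in the airflow configmap (values are placeholders)
[kubernetes]
delete_worker_pods = False   # keep failed worker pods around for inspection
git_repo = https://github.com/example/dags.git
git_branch = master
git_subpath = dags

# Inspecting a worker pod that exited on error (exec won't work once the
# container has stopped, but events, describe, and logs usually still do):
kubectl get events -n airflow
kubectl describe pod <worker-pod> -n airflow
kubectl logs <worker-pod> -n airflow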
