Hello all, the meeting will be moved to the following zoom.us link: https://zoom.us/j/529334876
Sorry again for the confusion.

On Thu, Jul 27, 2017 at 11:07 AM Daniel Imberman <[email protected]> wrote:

Hi Guys, sorry about the technical difficulties. I will be switching this call to zoom.us and will send out a link shortly.

On Thu, Jul 27, 2017 at 11:07 AM Maxime Beauchemin <[email protected]> wrote:

I see the same thing, "The call is full".

On Thu, Jul 27, 2017 at 11:04 AM, Wilson Lian <[email protected]> wrote:

The call is full :(

On Thu, Jul 27, 2017 at 11:02 AM, Daniel Imberman <[email protected]> wrote:

Hello everyone,

The kubernetes discussion is beginning now. I am including a link below, but it can also be found in the calendar event itself.

https://hangouts.google.com/hangouts/_/calendar/ZGFuaWVsLmltYmVybWFuQGdtYWlsLmNvbQ.3jcahq63njusjbnc2l4po6t53a?authuser=0

On Thu, Jul 27, 2017 at 10:44 AM Marc Bollinger <[email protected]> wrote:

Awesome, thanks!

On Thu, Jul 27, 2017 at 10:38 AM, Daniel Imberman <[email protected]> wrote:

Hi Marc,

Sorry for the confusion. The meeting will take place over the google hangout link on the event (zoom was just an early suggestion before I realized we could just create the google hangout).

Best,
Daniel

On Thu, Jul 27, 2017 at 10:11 AM Marc Bollinger <[email protected]> wrote:

Is this still happening over a Zoom link to be distributed, or on the Google Hangouts link already associated with the invite?

On Sun, Jul 23, 2017 at 12:53 PM, Daniel Imberman <[email protected]> wrote:

Hi Victor,

This implementation is about allowing native kubernetes deployment of tasks on airflow. There are existing deployments/helm scripts with all the components needed to launch an airflow cluster, but these clusters basically consist of a set of static workers running with celery to communicate between pods.

The KubernetesExecutor will dynamically create pods for each task you're running. Your cluster will be significantly more elastic and it won't waste resources when jobs aren't running.

We have also created a k8s operator which will allow people to build pods by linking to images/settings. This will give users a lot more flexibility in creating custom operators that launch single pods or entire clusters to complete a task.

I think our implementation is similar to what you're proposing, but if there are any improvements you can see I'd be glad to speak further over DM or during the meet-up this Thursday :).

Here is the proposal for a high level overview and the WIP PR for reference:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71013666
https://github.com/apache/incubator-airflow/pull/2414

Cheers,

Daniel
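To make the pod-operator idea above a bit more concrete, here is a minimal sketch of what launching a task as its own pod could look like in a DAG. The operator name, import path, and arguments are illustrative assumptions rather than the actual API from the WIP PR:

from datetime import datetime

from airflow import DAG
# Hypothetical import path; the real module location depends on the final PR.
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

dag = DAG(
    dag_id="k8s_pod_example",
    start_date=datetime(2017, 7, 1),
    schedule_interval=None,
)

# Each task gets its own pod: the image and settings are declared per task,
# so the worker environment no longer has to match a static celery worker.
run_in_pod = KubernetesPodOperator(
    task_id="run_in_pod",
    name="run-in-pod",
    namespace="default",
    image="python:3.6-slim",
    cmds=["python", "-c"],
    arguments=["print('hello from a dynamically created pod')"],
    dag=dag,
)

The point of the sketch is that the pod lifecycle follows the task instance: resources are only held while a task is actually running, and nothing persists once it finishes.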
On Sat, Jul 22, 2017 at 2:24 PM Amit Kulkarni <[email protected]> wrote:

I would like to join as well.

Amit Kulkarni
Site Reliability Engineer
Mobile: (716)-352-3270

Payments partner to the platform economy

On Sat, Jul 22, 2017 at 9:07 AM, Victor Monteiro <[email protected]> wrote:

Hi everyone!

I'm new here and I've read some of the discussions about K8s + Airflow. And I would be very interested to join this meeting.

But I'm not quite sure what this integration is. Is it to make it easy to deploy Airflow to K8S and have the scheduler, webserver and other airflow components inside K8S pods, or is it to be able to run dags/tasks in isolated pod environments to allow temporary isolated environments that will be destroyed as soon as the task instance / dag run is over?

At the company I work at here in Brazil, we have deployed Airflow with K8S and we are thinking of a way to run dags/tasks in temporary isolated pods to allow, mainly, any powerful operator like PythonOperator and BashOperator. Something using this:
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#what-is-a-job

Ps: Sorry if this wasn't the right thread to discuss it. If that is true, message me and I'll take this message to the right place.
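As a rough sketch of the Job-based approach described above, the official kubernetes Python client can create a run-to-completion Job for a single task. The image, command, and names below are placeholder assumptions:

from kubernetes import client, config

# Assumes kubectl credentials are available locally; inside a cluster,
# config.load_incluster_config() would be used instead.
config.load_kube_config()

container = client.V1Container(
    name="airflow-task",
    image="python:3.6-slim",  # placeholder task image
    command=["python", "-c", "print('task done')"],
)

# restart_policy="Never" plus backoff_limit=0 means the pod runs exactly once
# and the Job is simply marked failed if the task exits non-zero.
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "airflow-task"}),
    spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="airflow-task-demo"),
    spec=client.V1JobSpec(template=template, backoff_limit=0),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

Because the Job runs to completion, the pod can be garbage-collected along with the Job as soon as the task instance is over, which matches the temporary isolated environment described above.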
On Fri, 21 Jul 2017 at 01:55 Maxime Beauchemin <[email protected]> wrote:

+1!

On Thu, Jul 20, 2017 at 5:53 PM, Feng Lu <[email protected]> wrote:

Would like to join, please kindly invite me.
Thanks!

On Thu, Jul 20, 2017 at 9:54 AM, Daniel Imberman <[email protected]> wrote:

Hello everyone,

Recently there's been a fair amount of discussion regarding the integration of airflow with kubernetes. If there is interest I would love to host an e-meeting to discuss this integration. I can go over the architecture as it stands right now and would love feedback on improvements/features/design. I could also attempt to get one or two members of google's kubernetes team to join to discuss best practices.

I'm currently thinking next Thursday at 11 AM PST over zoom.us, though if there are strong opinions otherwise I'd be glad to propose other times.

Cheers!

Daniel
