around this that I don't know about or am overlooking?
--
Kyle Hamlin
myself so don't trust my word on this)
>
> -ash
> > On 4 Oct 2018, at 16:38, Kyle Hamlin wrote:
> >
> > If I remove the Flask-AppBuilder pinning to 1.11.0 then it uncovers a
> > Jinja2 conflict, which is baffling because I don't see anywhere in the
> > graph that
>
>=0.6, installed: 1.5]
- six [required: >=1.9.0, installed: 1.11.0]
- thrift [required: >=0.9.2, installed: 0.11.0]
- six [required: >=1.7.2, installed: 1.11.0]
- tzlocal [required: >=1.4, installed: 1.5.1]
- pytz [required: Any, installed: 2018.5]
- unicodecsv [required: >=0.14.1, installed: 0.14.1]
b5ffd146a5a33820cfa7541e5ce09098f3d541a
>
>
> For installing in the meantime, pin `Flask-AppBuilder==1.11.0`
>
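(For illustration, not from the thread: with pipenv, which the Pipfile discussion later in this thread suggests is in use, that pin might look like the following in the [packages] section; the apache-airflow line is an assumption.)

    [packages]
    apache-airflow = "==1.10.0"
    flask-appbuilder = "==1.11.0"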
> > On 4 Oct 2018, at 00:41, Kyle Hamlin wrote:
> >
> > Hi,
> >
> > Today I was trying to upgrade Airflow to 1.10.0 and it appears that there
>
Whoops, remove the [[source]] at the end of the url =
"https://pypi.python.org/simple" line; that is a typo.
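(For reference, a well-formed pipenv source block, without that stray trailing [[source]], would look roughly like this; verify_ssl and name are standard Pipfile fields assumed here:)

    [[source]]
    url = "https://pypi.python.org/simple"
    verify_ssl = true
    name = "pypi"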
On Thu, Oct 4, 2018 at 11:26 AM Kyle Hamlin wrote:
> Thank you for the response Ash.
>
> Even with your suggestion, there appear to be version conflicts all over
> the place:
- six [required: >=1.9.0, installed: 1.11.0]
- thrift [required: >=0.9.2, installed: 0.11.0]
- six [required: >=1.7.2, installed: 1.11.0]
- tzlocal [required: >=1.4, installed: 1.5.1]
- pytz [required: Any, installed: 2018.5]
- unicodecsv [required: >=0.14.1, installed: 0.14.1]
- werkzeug [required: >=0.14.1,<0.15.0, installed: 0.14.1]
- zope.deprecation [required: >=4.0,<5.0, installed: 4.3.0]
- setuptools [required: Any, installed: 40.4.3]
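(Editor's aside, not from the thread: output in the style above is what pipdeptree prints. A minimal sketch that surfaces the same conflicts using only setuptools' pkg_resources, assuming it is installed:)

    import pkg_resources

    # Walk every installed distribution and flag declared requirements that
    # the currently installed versions do not satisfy.
    for dist in pkg_resources.working_set:
        for req in dist.requires():
            try:
                pkg_resources.require(str(req))
            except pkg_resources.VersionConflict as exc:
                print("%s -> %s" % (dist.project_name, exc.report()))
            except pkg_resources.DistributionNotFound:
                print("%s -> missing dependency %s" % (dist.project_name, req))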
--
Kyle Hamlin
also doesn't work.
> Can you please help me?
>
>
> Thanks,
>
> Bhavani
>
--
Kyle Hamlin
an
> > event stream of pods completing, and it keeps a checkpoint so it can
> > resubscribe when it comes back up.
> >
> > I forget if the worker pods update the db or if the scheduler is doing
> > that, but it should work out.
> >
> > On Thu, Aug 3
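(A hedged sketch of the mechanism described above: watching pod events and checkpointing the last resourceVersion so the watch can resume after a scheduler restart. This uses the kubernetes Python client; the namespace and the in-memory checkpoint are illustrative assumptions, not Airflow's actual code.)

    from kubernetes import client, config, watch

    config.load_incluster_config()  # or config.load_kube_config() outside the cluster
    v1 = client.CoreV1Api()

    checkpoint = None  # a real deployment would persist this durably
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace="airflow",
                          resource_version=checkpoint or ""):
        pod = event["object"]
        checkpoint = pod.metadata.resource_version  # advance the checkpoint
        if pod.status.phase in ("Succeeded", "Failed"):
            # this is where task state would be written back to the metadata db
            print(pod.metadata.name, pod.status.phase)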
gentle bump
On Wed, Aug 22, 2018 at 5:12 PM Kyle Hamlin wrote:
> I'm about to make the switch to Kubernetes with Airflow, but am wondering
> what happens when my CI/CD pipeline redeploys the webserver and scheduler
> and there are still long-running tasks (pods). My intuition is t
Will the tasks (pods) change/update state in the database while being
"headless"?
Will the UI/Scheduler still be aware of the tasks (pods) once they are live
again?
Is there anything else that might cause issues when deploying while tasks
(pods) are running that I'm not thinking of here?
Kyle Hamlin
> phone: +44 (0)20 7730 6000
> k.n...@reply.com
> www.reply.com
>
>
--
Kyle Hamlin
r to run these jobs.
>
> Thanks!
> --
> Frank Maritato
>
>
--
Kyle Hamlin
ster running your k8s cluster? Is there
> > > any reason you wouldn't just launch the Dask cluster for the job you're
> > > running and then tear it down? I feel like with k8s the elasticity is
> > > one of the main benefits.
> > >
> > >
KubernetesExecutor to launch tasks into the Dask cluster (these are
> > ML jobs with sklearn)? I feel like there is a bit of inception going on
> > here in my mind and I just want to make sure a setup like this makes
> > sense.
> > Thanks in advance for anyone's input!
> >
>
--
Kyle Hamlin
Hi all,
If I have a Kubernetes cluster running in DC/OS, and a Dask cluster running
in that same Kubernetes cluster, is it possible/does it make sense to use
the KubernetesExecutor to launch tasks into the Dask cluster (these are ML
jobs with sklearn)? I feel like there is a bit of inception going on here
in my mind and I just want to make sure a setup like this makes sense.
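(A hedged sketch of the "launch the Dask cluster for the job, then tear it down" suggestion quoted above, using the older dask-kubernetes API and sklearn's joblib Dask backend; worker-spec.yaml, the scale of 10, and the toy model are all assumptions:)

    from dask.distributed import Client
    from dask_kubernetes import KubeCluster
    import joblib
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    def run_training_job():
        # spin up an ephemeral Dask cluster inside the k8s cluster
        cluster = KubeCluster.from_yaml("worker-spec.yaml")  # hypothetical pod spec
        cluster.scale(10)
        dask_client = Client(cluster)
        try:
            X, y = make_classification(n_samples=10000)
            model = RandomForestClassifier(n_estimators=200, n_jobs=-1)
            with joblib.parallel_backend("dask"):  # route sklearn parallelism to Dask
                model.fit(X, y)
        finally:
            dask_client.close()
            cluster.close()  # tear the cluster down when the job finishes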
I'm a bit confused with how the scheduler catches up in relation to
start_date and schedule_interval. I have one dag that runs hourly:
dag = DAG(
    dag_id='hourly_dag',
    start_date=days_ago(1),
    schedule_interval='@hourly',
    default_args=ARGS)
When I start this DAG fresh it will catch
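(An illustrative aside, not from the thread: with this start_date and an @hourly interval, the scheduler backfills one run per past hour, and each run fires only after its interval closes, i.e. the 13:00 run executes around 14:00 with execution_date 13:00. A hedged sketch of opting out of the backfill via the DAG-level catchup flag; ARGS here is a stand-in for the poster's default_args:)

    from airflow import DAG
    from airflow.utils.dates import days_ago

    ARGS = {'owner': 'airflow'}  # stand-in for the poster's ARGS

    dag = DAG(
        dag_id='hourly_dag',
        start_date=days_ago(1),
        schedule_interval='@hourly',
        catchup=False,  # run only the most recent interval instead of backfilling
        default_args=ARGS)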
This morning I tried to upgrade to the newer version of the logging config
file, but I keep getting the following TypeError for my database session.
I know my credentials are correct, so I'm confused why this is happening now.
Has anyone experienced this? Note that I'm installing Airflow from
slie-Waksman
> > <geo...@cloverhealth.com.invalid> wrote:
> >
> > > It's presumably useful if you want to package your plugins for other
> > > people to use, but it seems like everyone just adds those directly to
> > > the Airflow codebase th
> However, executors and some other pieces are a little bit harder to deal
> with as non-plugins
>
>> On Thu, Mar 29, 2018 at 3:56 PM Kyle Hamlin <hamlin...@gmail.com> wrote:
>>
>> Hello,
>>
>> I just got done writing a few plugins, and the process ha
Hello,
I just got done writing a few plugins, and the process has left me
wondering what the real benefits are. As far as I can tell, it makes
testing more difficult since you cannot import from the created module;
you have to import directly from the plugin. Additionally, your code editor
isn't
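(A minimal plugin sketch for context, with illustrative names rather than the poster's code. In Airflow 1.x, operators registered this way surface under the dynamically created module airflow.operators.my_plugin, which is the import indirection being complained about above:)

    from airflow.models import BaseOperator
    from airflow.plugins_manager import AirflowPlugin

    class MyOperator(BaseOperator):
        def execute(self, context):
            self.log.info("running MyOperator")

    class MyPlugin(AirflowPlugin):
        name = "my_plugin"        # exposed as airflow.operators.my_plugin
        operators = [MyOperator]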
tion to the global
> context.
> > > > Like this:
> > > > def create_client_dag(client_id):
> > > >     # build dag here
> > > >
> > > > def get_client_ids_locally():
> > > >     # access the data that was pulled from the API
mply index % 60. This
> > spreads out the load so that the scheduler isn’t trying to run everything
> > at the exact same moment. I would suggest, if you do go this route, to also
> > stagger your hours if you can, because of how many you plan to run. Perhaps
> > your DA
Hello,
I'm currently using Airflow for some ETL tasks where I submit a Spark job
to a cluster and poll till it is complete. This workflow is nice because it
is typically a single DAG. I'm now starting to do more machine learning
tasks and need to build a model per client, which means 1000+ clients.
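(Pulling the two threads together: a hedged sketch of the per-client DAG factory with the index % 60 staggering suggested earlier. It reuses the thread's own names create_client_dag and get_client_ids_locally, but the function bodies, the DummyOperator, and the cron staggering are assumptions:)

    from airflow import DAG
    from airflow.operators.dummy_operator import DummyOperator
    from airflow.utils.dates import days_ago

    def get_client_ids_locally():
        # stand-in for reading the client ids previously pulled from the API
        return ['client_%04d' % i for i in range(1000)]

    def create_client_dag(client_id, index):
        # stagger minutes (index % 60) and hours so 1000+ DAGs don't all
        # fire at the exact same moment
        dag = DAG(
            dag_id='model_%s' % client_id,
            start_date=days_ago(1),
            schedule_interval='%d %d * * *' % (index % 60, (index // 60) % 24),
        )
        DummyOperator(task_id='train_model', dag=dag)
        return dag

    # add each generated DAG to the global context so the scheduler finds it
    for i, client_id in enumerate(get_client_ids_locally()):
        globals()['dag_%s' % client_id] = create_client_dag(client_id, i)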