I had a go once to introduce something similar, but never got it merged.
Maybe you can use it as an inspiration.
https://github.com/apache/incubator-airflow/pull/2412
Niels
On Wed 14 Nov 2018 16:43, Sai Phanindhra wrote: The above-mentioned PR addresses issues/bugs in the current functionality. I want
SSHOperator) case, Airflow would just be receiving stdout,
> right, as opposed to driver logs?
>
> @Kyle - If you can, then it would definitely be useful to have
> LivyOperators to Airflow.
>
> Regards,
> Kaxil
>
> On 22/06/2018, 13:34, "Niels Zeilemaker" wrote:
Hi Kaxil,
I would recommend using the SSHOperator to start the Spark Job on the
master node of the HDInsight cluster.
This avoids the problems associated with Livy, and doesn't require you to
open ports/copy the hadoop configuration to your airflow machine.
Niels
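To make the SSHOperator suggestion concrete: the operator's `command` parameter would carry a `spark-submit` invocation to run on the HDInsight master node. Below is a hedged sketch of a small helper that builds such a command string; the jar path, class name, and conf values are illustrative, not taken from the thread.

```python
# Hypothetical helper: builds the spark-submit command an SSHOperator could
# run on the HDInsight master node. All names/paths below are illustrative.
import shlex

def build_spark_submit_cmd(app_jar, main_class, extra_conf=None):
    """Return a spark-submit command string suitable for SSHOperator's `command`."""
    parts = [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", "cluster",
        "--class", main_class,
    ]
    for key, value in (extra_conf or {}).items():
        parts += ["--conf", "%s=%s" % (key, value)]
    parts.append(app_jar)
    # shlex.quote keeps the command safe to pass through a remote shell
    return " ".join(shlex.quote(p) for p in parts)

cmd = build_spark_submit_cmd(
    "/jobs/etl.jar",                 # illustrative jar path
    "com.example.EtlJob",            # illustrative main class
    extra_conf={"spark.executor.memory": "4g"},
)
print(cmd)
```

In a DAG you would then pass `command=cmd` to an SSHOperator whose SSH connection points at the cluster's master node, which is what avoids opening Livy ports or copying Hadoop configuration to the Airflow machine.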
2018-06-22 14:17 GMT+02:00 Naik
How would I access the logging from within a PythonOperator python callable?
That's a method that's defined in your dag, but doesn't have a reference to
the operator.
Niels
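One answer to the question above: the callable does not need a reference to the operator, because records sent through the stdlib `logging` module propagate up the logger hierarchy, where Airflow attaches the task's log handler. A plain-Python sketch (no Airflow needed; the handler setup below stands in for what Airflow does):

```python
import logging

# Inside a PythonOperator python_callable you don't need the operator:
# stdlib logging records propagate to handlers attached higher up the
# hierarchy, which is where Airflow's task log handler sits.
log = logging.getLogger(__name__)

def my_callable(**context):
    # `context` stands in for the kwargs Airflow passes with provide_context=True.
    log.info("running with execution_date=%s", context.get("execution_date"))
    return "done"

# Plain-Python demonstration: a root handler (as Airflow would attach one)
# receives the record emitted inside the callable.
logging.basicConfig(level=logging.INFO)
print(my_callable(execution_date="2017-10-31"))  # prints "done"
```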
Op 31 okt. 2017 20:56 schreef "Bolke de Bruin" :
> Where do you want those to end up? As they are
>> `@requires_authentication` but they... don't. Oh, because the default
> >> backend doesn't do any authentication or protection at all.
> >>
> >> I think this is CVE-worthy - using the User+Password auth for the web
> >> front end/using default
Hi All,
I've implemented HTTP Basic Authentication for the experimental API, see
https://github.com/apache/incubator-airflow/pull/2730. This seems to work fine.
However, while implementing this, I noticed, to my surprise, that the
experimental API was open even though we enabled Password
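For context, protecting the experimental API is configured separately from web UI authentication, via the `[api]` section of `airflow.cfg`. A sketch (the exact backend module path varies by Airflow version, so treat the value below as an assumption, not the one from the PR):

```ini
# airflow.cfg -- sketch only; the backend module path differs per version.
[api]
# Historically the default backend performed no authentication, which is
# why the experimental API could be open even with web UI passwords enabled.
auth_backend = airflow.contrib.auth.backends.password_auth
```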
thing I'd be worried about with your
> `trigger dagrun` approach is if the trigger dagrun operator fails for any
> reason you'll stop monitoring the external system, while with the scheduled
> approach you don't have to worry about the failure modes of retrying failed
> dags/etc.
>
> On Mon
Hi Guys,
I've created a Sensor which is monitoring the number of files in an
Azure Blobstore. If the number of files increases, then I would like
to trigger another dag. This is more or less similar to the
example_trigger_controller_dag.py and example_trigger_target_dag.py
setup.
However, after
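The sensor described above boils down to comparing the current blob count against the last count seen. Here is a hedged plain-Python sketch of that poke logic only; a real implementation would subclass `BaseSensorOperator` and use an Azure Blob client, for which `list_blobs` below is a stand-in:

```python
# Plain-Python sketch of the sensor's poke logic. `list_blobs` is a stand-in
# for an Azure Blobstore client call; no Airflow or Azure imports needed.
class BlobCountSensorLogic:
    """Succeed (return True from poke) when the blob count has increased."""

    def __init__(self, list_blobs, initial_count=0):
        self.list_blobs = list_blobs   # callable returning an iterable of blob names
        self.last_count = initial_count

    def poke(self):
        current = len(list(self.list_blobs()))
        increased = current > self.last_count
        self.last_count = current      # remember the new baseline
        return increased
```

In the example_trigger_controller_dag.py setup, a sensor built around this logic would sit upstream of a TriggerDagRunOperator, so the target dag fires only after the count grows.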
@bolke, this is probably the same bug as reported in
https://github.com/apache/incubator-airflow/pull/2578
2017-09-19 10:21 GMT+02:00 Bolke de Bruin :
> Can you report the stack trace please?
>
> Cheers
> Bolke
>
>> On 19 Sep 2017, at 08:55, Ruslan Dautkhanov
ailure_callback` and `on_retry_callback`.
>
> On Wed, Jul 5, 2017 at 7:03 AM, Niels Zeilemaker <ni...@zeilemaker.nl>
> wrote:
>
>> Hi All,
>>
>> I’ve opened a pull request
>> (https://github.com/apache/incubator-airflow/pull/2412) which
>> introduces
ature that would really be
useful to a lot of people. Have you considered setting up a "notifier
plugin" so that people can create custom notifiers? The API seems pretty
consistent so I don't think it would be too much work to add.
On Wed, Jul 5, 2017 at 11:40 AM Niels Zeilemaker <
nie
Hi All,
I've opened a pull request
(https://github.com/apache/incubator-airflow/pull/2412) which introduces the
concept of notifiers.
I've made this change as I have a requirement to push status changes of
failed/retried jobs to more than just email. E.g., I want to use Slack in
this case.
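The PR's actual API is not shown in these snippets, so the following is a hypothetical sketch of the notifier idea only: a base class with a single hook, a Slack-style implementation (stubbed to collect messages rather than call the Slack API), and a dispatch function standing in for whatever would fire on a task state change.

```python
# Hypothetical sketch of the "notifiers" concept from PR 2412 -- the real
# PR's API may differ. A notifier receives task state changes; subclasses
# decide where to push them (email, Slack, ...).
class BaseNotifier:
    def notify(self, task_id, state):
        raise NotImplementedError

class SlackNotifier(BaseNotifier):
    """Stub: collects messages instead of calling the Slack API."""
    def __init__(self):
        self.sent = []

    def notify(self, task_id, state):
        self.sent.append("Task %s changed state to %s" % (task_id, state))

def dispatch(notifiers, task_id, state):
    # Stand-in for the hook a scheduler/executor would run on a state change.
    for n in notifiers:
        n.notify(task_id, state)

slack = SlackNotifier()
dispatch([slack], "load_data", "failed")
print(slack.sent[0])  # -> Task load_data changed state to failed
```

The appeal of this shape, per the thread, is that the API stays consistent enough that a "notifier plugin" mechanism could let people register custom notifiers without patching Airflow itself.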