Since we are talking about the SLA implementation: the current SLA miss
implementation is part of the scheduler code, so in cases where the
scheduler maxes out its processes or is not running for some reason, we
will miss all the SLA alerts. It is worth decoupling SLA alerting from the
scheduler path and running a
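For illustration, a minimal sketch of what a decoupled checker could look like, assuming a standalone process with read access to the metadata DB; ALERT_SLAS and send_alert() are hypothetical stand-ins, not Airflow APIs:

```python
import time
from datetime import datetime, timedelta

from airflow import settings
from airflow.models import TaskInstance
from airflow.utils.state import State

# Hypothetical: which (dag_id, task_id) pairs to watch, and how long
# after the execution_date each one is allowed to stay unfinished.
ALERT_SLAS = {
    ("daily_ingest", "load_market_data"): timedelta(hours=9),
}

def send_alert(ti):
    # Stand-in for whatever alerting channel is in use.
    print("SLA missed: %s.%s @ %s" % (ti.dag_id, ti.task_id, ti.execution_date))

def check_once(session):
    now = datetime.utcnow()
    for (dag_id, task_id), sla in ALERT_SLAS.items():
        late = session.query(TaskInstance).filter(
            TaskInstance.dag_id == dag_id,
            TaskInstance.task_id == task_id,
            TaskInstance.state != State.SUCCESS,
            TaskInstance.execution_date < now - sla,
        ).all()
        for ti in late:
            send_alert(ti)

if __name__ == "__main__":
    session = settings.Session()
    while True:
        check_once(session)
        time.sleep(300)  # poll every five minutes
```

Because this runs outside the scheduler, the alerts still fire when the scheduler itself is maxed out or down.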
We use SLAs as well; they work great for some DAGs and are painful for others.
We rely on sensors to validate that the data is ready before we run, and each
DAG waits on sensors for a different amount of time (one DAG waits for 8 hours
since it expects data at the start of the day but tends to get it 8 hours
later). We also
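As a sketch of that pattern (DataReadySensor and the dag object are illustrative, not our actual code):

```python
import os

from airflow.operators.sensors import BaseSensorOperator
from airflow.utils.decorators import apply_defaults

class DataReadySensor(BaseSensorOperator):
    template_fields = ("path",)

    @apply_defaults
    def __init__(self, path, *args, **kwargs):
        super(DataReadySensor, self).__init__(*args, **kwargs)
        self.path = path

    def poke(self, context):
        # Stand-in readiness check; could be a partition, an API, etc.
        return os.path.exists(self.path)

wait_for_data = DataReadySensor(
    task_id="wait_for_data",
    path="/data/{{ ds }}/input.csv",
    poke_interval=300,       # re-check every 5 minutes
    timeout=8 * 60 * 60,     # tolerate data arriving up to 8 hours late
    dag=dag,
)
```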
Hi Jakob,
I have the feeling we are on different wavelengths and we are not getting
closer :-(.
Remarks inline.
> On 2 May 2018, at 22:56, Jakob Homan wrote:
>
> Hey Bolke-
> Stabilizing the tree has nothing to do with getting a release
> through IPMC. The IPMC doesn't test the code -
Hey Bolke-
Stabilizing the tree has nothing to do with getting a release
through IPMC. The IPMC doesn't test the code - it only verifies that
the required licenses and legal obligations are met, that the release
artifacts meet the requirements to be processed through ASF's
publishing infra, etc.
At Quantopian we use Airflow to produce artifacts based on the previous
day's stock market data. These artifacts are required for us to trade on
today's stock market. Therefore, I've been investing time in improving
Airflow notifications (such as writing PagerDuty and Slack integrations).
My attention
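As one example of the kind of thing I mean, a rough sketch of a failure callback posting to a Slack incoming webhook (the webhook URL is a placeholder):

```python
import json

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."  # placeholder

def notify_failure(context):
    # Airflow passes the task context dict to on_failure_callback.
    ti = context["task_instance"]
    text = "Task failed: %s.%s (execution_date=%s)" % (
        ti.dag_id, ti.task_id, context["execution_date"])
    requests.post(SLACK_WEBHOOK_URL, data=json.dumps({"text": text}))

default_args = {
    "on_failure_callback": notify_failure,
}
```

Putting the callback in default_args applies it to every task in the DAG.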
Hi Jakob,
This ‘release’ is effectively not an RC. We want to have the Kubernetes
executor stabilised, or at least passing its own tests, before we would like
to move to RC status. People also tend to rally to get some extra bugfixes or
extra features in when we announce “beta” status. Given the fact
I have seen two cases where the try_number can be greater than max_tries:
1. Tasks inside a SubDag: if the SubDagOperator has retries configured, then
the tasks inside the SubDag can be tried up to
number_of_retries_of_subdag * number_of_retries_of_task times (a sketch
follows below).
2. Manual retries: if you manually retry a task
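A rough, self-contained sketch of case 1, with hypothetical dag_ids and a deliberately failing task; the retry counts are chosen just to make the multiplication visible:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.operators.subdag_operator import SubDagOperator

default_args = {"start_date": datetime(2018, 1, 1)}
dag = DAG("parent", default_args=default_args, schedule_interval=None)
# SubDag convention: the child dag_id is "<parent_dag_id>.<task_id>"
subdag = DAG("parent.section", default_args=default_args, schedule_interval=None)

# Inner task: retries=2 means up to 3 tries per run of the subdag.
inner = BashOperator(task_id="inner", bash_command="exit 1",
                     retries=2, dag=subdag)

# Parent wraps the subdag with retries=1, so the whole subdag can run twice.
section = SubDagOperator(task_id="section", subdag=subdag,
                         retries=1, dag=dag)

# Worst case the inner task is attempted (1 + 1) * (1 + 2) = 6 times, so
# its try_number climbs well past its own max_tries.
```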
Hi,
I need to pass certain arguments to my custom operator at run time. It seems
that the Airflow CLI's trigger_dag command supports passing conf at run time,
which can be referenced in an operator's execute function through the
{{ dag_run.conf['name'] }} template.
But I am not able to read the "name" conf inside
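In case it helps anyone searching later, a minimal sketch of reading conf directly from the context in a custom operator instead of via templating (MyOperator is illustrative):

```python
from airflow.models import BaseOperator

class MyOperator(BaseOperator):
    def execute(self, context):
        dag_run = context.get("dag_run")
        # conf is only populated for triggered runs, so guard the lookup
        name = dag_run.conf.get("name") if dag_run and dag_run.conf else None
        self.log.info("name = %s", name)
```

Triggered with, e.g., airflow trigger_dag my_dag --conf '{"name": "foo"}'. Note that the {{ dag_run.conf['name'] }} template only renders inside fields listed in the operator's template_fields.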
Hi,
I am using Apache Airflow and I have a query regarding it.
I have specified the number of retries as 2 in my DAG, but I am confused by
the parameters shown in the Airflow console. These are as follows:
max_tries: 2
next_try_number: 4
try_number: 4
Can you please help me understand that