It is a protection against major upgrades that are not backwards compatible.
Please test and provide a PR if it is ok.
Cheers
Bolke
Sent from my iPad
> On 16 Jan 2018, at 22:36, Ruslan Dautkhanov wrote the
> following:
>
> We have tabulate 0.8.2; requirements demand tabulat
tkha/airflow/dags/discover/discover-ora-load-2.py
>> Traceback (most recent call last):
>> File
>> "/opt/airflow/airflow-20180116/src/apache-airflow/airflow/models.py", line
>> 290, in process_file
>> m = imp.load_source(mod_name, filepat
Ok indeed I can see the connection is getting closed in get_records!
Thanks
Alexis Rolland
+86 138 1602 1449
> On 17 Jan 2018, at 03:18, Bolke de Bruin wrote:
>
> Good point Chris. I overlooked that.
>
> B.
>
>> On 16 Jan 2018, at 20:09, Chris Palmer wrote:
>>
>> I'm not sure this is the
Seems like a reasonable PR to me.
On Tue, Jan 16, 2018 at 3:20 PM Ruslan Dautkhanov
wrote:
> Fix:
>
> vi "/opt/airflow/airflow-20180116/src/apache-airflow/airflow/models.py"
> :2951
> changed
> self.timezone = self.default_args['start_date'].tzinfo
Fix:
vi "/opt/airflow/airflow-20180116/src/apache-airflow/airflow/models.py"
:2951
changed
self.timezone = self.default_args['start_date'].tzinfo
to
if self.default_args['start_date']:
    self.timezone = self.default_args['start_date'].tzinfo
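The guard above can be sketched as a small, self-contained example. Note that `DagStub` is a hypothetical stand-in for illustration, not the actual class from Airflow's models.py; it only mirrors the attribute names from the snippet.

```python
# Minimal sketch of the guarded tzinfo lookup described in the fix above.
# DagStub is hypothetical; only the default_args/timezone names come from
# the thread. Uses .get() so a missing 'start_date' key is also tolerated.
from datetime import datetime, timezone

class DagStub:
    def __init__(self, default_args):
        self.default_args = default_args
        self.timezone = None
        # Guard against a missing or None 'start_date' before reading .tzinfo
        start_date = self.default_args.get('start_date')
        if start_date:
            self.timezone = start_date.tzinfo

# A tz-aware start_date yields its tzinfo; a missing one leaves timezone None
aware = DagStub({'start_date': datetime(2018, 1, 16, tzinfo=timezone.utc)})
naive = DagStub({})
print(aware.timezone)  # UTC
print(naive.timezone)  # None
```

Without the guard, a DAG whose default_args lack a usable start_date raises on the `.tzinfo` access, which matches the import failure in the traceback below.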
{models.py:293} ERROR - Failed to import:
> /home/rdautkha/airflow/dags/discover/discover-ora-load-2.py
> Traceback (most recent call last):
> File
> "/opt/airflow/airflow-20180116/src/apache-airflow/airflow/models.py", line
> 290, in process_file
> m = imp.load_sou
We have tabulate 0.8.2; requirements demand tabulate<0.8.0,>=0.7.5
Are there known issues with tabulate versions higher than 0.8.0?
$ airflow kerberos -D
>
> Traceback (most recent call last):
> File "/opt/cloudera/parcels/Anaconda/bin/airflow", line 4, in
>
> __import__('pkg_resources').r
Yes, that documentation appears to be inconsistent.
Would you mind opening an issue in jira:
https://issues.apache.org/jira/projects/AIRFLOW/issues
and submitting a PR to fix it?
Thanks,
George
On Wed, Jan 10, 2018 at 5:18 PM Kewei Shang wrote:
> Hi,
>
> May I ask if the following dag definiti
+1
On Tue, Jan 16, 2018 at 11:08 AM Joy Gao wrote:
> The new FAB UI does not modify the existing API (i.e. www2/api/ will be a
> copy of www/api/), and the endpoints are registered as blueprints to the
> flask app the same way as before, so it is fully backward compatible.
>
> Although FAB offer
I'll try to spend some time on this as well, but it will take some time to
get some understanding of the core code of Airflow.
On Tue, Jan 16, 2018 at 9:12 PM, Joy Gao wrote:
> @Milan, I agree this bug needs to be fixed, and the hack I provided
> previously isn't complete as it doesn't work f
@Milan, I agree this bug needs to be fixed, and the hack I provided
previously isn't complete as it doesn't work for any operator with an on_kill
method that requires transient task attributes (i.e. self.sp)...
Unfortunately I don't think the new UI will help get rid of the issue (as the
bug is on the
Good point Chris. I overlooked that.
B.
> On 16 Jan 2018, at 20:09, Chris Palmer wrote:
>
> I'm not sure this is the right solution. I haven't explored all the code
> but it seems to me that the actual database connections are managed in the
> hooks. Different databases will want to handle this
I'm not sure this is the right solution. I haven't explored all the code
but it seems to me that the actual database connections are managed in the
hooks. Different databases will want to handle this differently, and
modifying SqlSensor seems too heavy-handed to me.
If you look at the get_records
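The hook-owned connection handling being discussed can be sketched with a stand-in hook. `SqliteDemoHook` is hypothetical (sqlite3 substitutes for the Teradata hook in the thread, and the real Airflow hooks have their own `get_conn`/`get_records`); the point is only that `get_records` releases its connection even when the query raises:

```python
# Hypothetical sketch of a hook-style get_records that always closes its
# connection. sqlite3 stands in for the Teradata hook discussed above;
# this is not Airflow's actual hook code.
import sqlite3
from contextlib import closing

class SqliteDemoHook:
    def __init__(self, path=':memory:'):
        self.path = path

    def get_conn(self):
        # A fresh connection per call, as a DB hook typically provides
        return sqlite3.connect(self.path)

    def get_records(self, sql):
        # closing() guarantees both cursor and connection are released,
        # even if execute() raises mid-query.
        with closing(self.get_conn()) as conn:
            with closing(conn.cursor()) as cur:
                cur.execute(sql)
                return cur.fetchall()

hook = SqliteDemoHook()
print(hook.get_records('SELECT 1'))  # [(1,)]
```

Keeping this logic in the hook, rather than in SqlSensor, lets each database backend decide how its connections are acquired and released.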
The new FAB UI does not modify the existing API (i.e. www2/api/ will be a
copy of www/api/), and the endpoints are registered as blueprints to the
flask app the same way as before, so it is fully backward compatible.
Although FAB offers REST APIs on the models out-of-the-box, it currently is
still
Airflow 1.6 ??!!
I don’t think any of the maintainers are running that version anymore.
> On 16 Jan 2018, at 19:30, Michael Gong wrote:
>
> Hi, Airflow developers,
>
>
> On Airflow 1.6, we found sometimes a dag's status was set to "Failed" even
> when some tasks were still running.
>
>
> Th
Awesome, thanks to everybody who contributed their vote.
Now it should be a little bit easier to search and tag questions regarding
Airflow. :)
-Original Message-
From: Ash Berlin-Taylor [mailto:ash_airflowl...@firemirror.com]
Sent: Tuesday, January 16, 2018 7:51 PM
To: dev@airflow.incu
Thank you for doing this - been bugging me for a while :) It's been created now
too!
-ash
> On 16 Jan 2018, at 06:58, Baghino, Stefano(AWF) wrote:
>
> Hi everybody,
>
> on StackOverflow many people regularly come to get help about Airflow.
> Currently on the site there are both the `airflow`
Hi, Airflow developers,
On Airflow 1.6, we found sometimes a dag's status was set to "Failed" even when
some tasks were still running.
The steps to illustrate this behavior are: (suppose a dag contains 2 tasks, A
and B.)
1. Let tasks A and B start running. --> The dag status is "Running".
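The invariant being violated here can be modeled with a toy state function. This is not Airflow's actual scheduler logic, just an illustration of the expected rule: a dag should never be in a final state while any of its tasks is still running.

```python
# Toy model (not Airflow's real scheduler code) of deriving a dag state
# from its task states: a running task should keep the dag non-final.
def dag_state(task_states):
    states = task_states.values()
    if any(s == 'running' for s in states):
        return 'running'          # never finalize while work is in flight
    if any(s == 'failed' for s in states):
        return 'failed'
    return 'success'

# With task B still running, the dag should be "running", not "failed",
# even though task A has already failed.
print(dag_state({'A': 'failed', 'B': 'running'}))  # running
print(dag_state({'A': 'failed', 'B': 'success'}))  # failed
```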
Of course. PR welcome. It would be nice to know how to test for connection
leakage.
Sent from my iPhone
> On 16 Jan 2018, at 16:01, Alexis Rolland wrote:
>
> Hello everyone,
>
> I’m reaching out to discuss / suggest a small improvement in the class
> SqlSensor:
> https://pythonhosted.org/a
Hello everyone,
I'm reaching out to discuss / suggest a small improvement in the class
SqlSensor:
https://pythonhosted.org/airflow/_modules/airflow/operators/sensors.html
We are currently using SqlSensors on top of Teradata in several DAGs. When the
DAGs execute we receive the following error m
[Edit: resending e-mail now that my account is whitelisted]
Hello everyone,
I'm reaching out to discuss / suggest a small improvement in the class
SqlSensor:
https://pythonhosted.org/airflow/_modules/airflow/operators/sensors.html
We are currently using SqlSensors on top of Teradata in several
This is a different tack entirely, but the way we do it is to bake our DAGs
etc. into the images we use for our deploy, which we push to Google Cloud
Registry. We use Helm, an extra package on top of Kubernetes, to interpolate
the image name into the deployment files on release. We basically have a
`Make de
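The bake-DAGs-into-the-image approach can be sketched as a minimal Dockerfile. The base image, paths, and registry name below are illustrative assumptions, not details from the thread:

```dockerfile
# Hypothetical sketch: bake DAG files into the Airflow image at build time.
# Base image and paths are illustrative, not from the thread.
FROM apache/airflow:latest
COPY dags/ /usr/local/airflow/dags/
```

The image would then be built and pushed on release (e.g. with a tag like `gcr.io/my-project/airflow:v1`, a hypothetical name), and Helm interpolates that tag into the Kubernetes deployment manifests, as described above.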