ayushchauhan0811 commented on a change in pull request #13832:
URL: https://github.com/apache/airflow/pull/13832#discussion_r571154673
##########
File path: airflow/providers/amazon/aws/operators/batch.py
##########
@@ -177,29 +177,26 @@ def submit_job(self, context: Dict):  # pylint: disable=unused-argument
self.job_id = response["jobId"]
self.log.info("AWS Batch job (%s) started: %s", self.job_id, response)
-
except Exception as e:
self.log.error("AWS Batch job (%s) failed submission", self.job_id)
raise AirflowException(e)
def monitor_job(self, context: Dict): # pylint: disable=unused-argument
"""
Monitor an AWS Batch job
+ monitor_job can raise an exception or an AirflowTaskTimeout can be raised if execution_timeout
+ is given while creating the task. These exceptions should be handled in taskinstance.py
+ instead of here like it was previously done
Review comment:
@dstandish It's part of the base operator, but some hooks/operators have
also used this exception. The issue I am trying to solve in this PR is the
timeout exception raised when the task runs longer than the defined
interval, which is handled by taskinstance.
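To illustrate the point being made (this is a minimal sketch, not the actual Airflow code — the stand-in `AirflowTaskTimeout` class here is a placeholder for `airflow.exceptions.AirflowTaskTimeout`, and the two functions are hypothetical): a broad `except Exception` inside `monitor_job` would catch and re-wrap the timeout signal, so `taskinstance.py` could never handle it; letting it propagate fixes that.

```python
# Stand-in for airflow.exceptions.AirflowTaskTimeout, used only so
# this sketch runs without an Airflow installation.
class AirflowTaskTimeout(Exception):
    """Raised when a task exceeds its execution_timeout."""


def monitor_job_old_style() -> str:
    # Old pattern: a broad `except Exception` swallows the timeout,
    # so the caller (taskinstance.py in Airflow) never sees it.
    try:
        raise AirflowTaskTimeout("execution_timeout exceeded")
    except Exception as e:
        return f"swallowed: {e}"


def monitor_job_new_style() -> None:
    # New pattern: do not catch here; let the exception propagate
    # up to the caller, which owns timeout handling.
    raise AirflowTaskTimeout("execution_timeout exceeded")
```

With the old pattern the timeout is converted into a normal return/failure inside the operator; with the new pattern the caller can distinguish a timeout from an ordinary job failure.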
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]