pankajkoti commented on code in PR #30945:
URL: https://github.com/apache/airflow/pull/30945#discussion_r1183664933
##########
airflow/providers/amazon/aws/sensors/emr.py:
##########
@@ -292,10 +301,27 @@ def poke(self, context: Context) -> bool:
             return False
         return True
 
-    @cached_property
-    def hook(self) -> EmrContainerHook:
-        """Create and return an EmrContainerHook"""
-        return EmrContainerHook(self.aws_conn_id, virtual_cluster_id=self.virtual_cluster_id)
+    def execute(self, context: Context):
+        if not self.deferrable:
+            super().execute(context=context)
+        elif not self.poke(context):
+            self.defer(
+                timeout=self.execution_timeout,
+                trigger=EmrContainerSensorTrigger(
+                    virtual_cluster_id=self.virtual_cluster_id,
+                    job_id=self.job_id,
+                    max_tries=self.max_retries,
Review Comment:
If the user has not set this in their DAG, this will be `None`. I think we
should assign it a default value in the `__init__` method (maybe equal to what
we specify in the waiters JSON); see the sketch after the traceback below.
I got the error below on a similar operator I'm currently working on, when it
was sent as `None`:
```
[2023-05-03, 18:30:29 IST] {taskinstance.py:1697} ERROR - Trigger failed:
Traceback (most recent call last):
  File "/opt/***/***/jobs/triggerer_job_runner.py", line 536, in cleanup_finished_triggers
    result = details["task"].result()
  File "/opt/***/***/jobs/triggerer_job_runner.py", line 614, in run_trigger
    async for event in trigger.run():
  File "/opt/***/***/providers/amazon/aws/triggers/sagemaker.py", line 58, in run
    await waiter.wait(
  File "/usr/local/lib/python3.8/site-packages/aiobotocore/waiter.py", line 49, in wait
    await AIOWaiter.wait(self, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/aiobotocore/waiter.py", line 131, in wait
    if num_attempts >= max_attempts:
TypeError: '>=' not supported between instances of 'int' and 'NoneType'
```
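For reference, a minimal sketch of the suggested guard. This is not the PR's
actual code: the trimmed signature and the fallback value of `100` are
assumptions, and ideally the fallback should match the `maxAttempts` configured
in the corresponding waiter JSON.
```python
def __init__(
    self,
    *,
    virtual_cluster_id: str,
    job_id: str,
    max_retries: int | None = None,
    deferrable: bool = False,
    **kwargs,
) -> None:
    super().__init__(**kwargs)
    self.virtual_cluster_id = virtual_cluster_id
    self.job_id = job_id
    # Fall back to a concrete int (100 here is an assumed placeholder) so the
    # trigger never hands None to the async waiter, whose wait loop evaluates
    # num_attempts >= max_attempts.
    self.max_retries = max_retries if max_retries is not None else 100
    self.deferrable = deferrable
```
With a default like this in place, `max_tries=self.max_retries` in the
`defer()` call is always an `int`, and the aiobotocore wait loop terminates
normally instead of raising the `TypeError` shown above.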
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]