dstandish commented on code in PR #40468:
URL: https://github.com/apache/airflow/pull/40468#discussion_r1664462653


##########
airflow/executors/base_executor.py:
##########
@@ -149,7 +155,7 @@ def queue_command(
             self.log.info("Adding to queue: %s", command)
             self.queued_tasks[task_instance.key] = (command, priority, queue, task_instance)
         else:
-            self.log.error("could not queue task %s", task_instance.key)
+            self.task_context_logger.error("could not queue task %s", task_instance.key, ti=task_instance)

Review Comment:
   is this truly an exceptional circumstance?  i ask because we should be careful about adding too many usages of `task_context_logger`: in many cases, each call results in uploading a file to s3.  think about what happens if the scheduler has to upload a file for many TIs on every pass through the scheduler loop.
   
   ideally, people use a better logging system like elastic, where it's at least possible that "shipping logs" just means printing to stdout.  but i don't think that's true for the majority of users.
   
   perhaps this is another argument for sticking these events in a queue that is managed separately from the main scheduler loop.  just a thought; rough sketch of what i mean below.
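   
   a minimal sketch, assuming a hypothetical wrapper (the `DeferredTaskContextLogger` name and shape are illustrative, not an existing airflow api): the scheduler loop only pays for a queue put, and a background thread does the actual shipping by delegating to the real `task_context_logger.error(..., ti=...)` off the hot path.
   
   ```python
   import queue
   import threading
   
   class DeferredTaskContextLogger:
       """Hypothetical wrapper: buffer task-context log events and ship
       them from a background thread, so callers in the scheduler loop
       never block on a remote upload."""
   
       def __init__(self, delegate):
           # `delegate` is the real task_context_logger that does the
           # (potentially slow) remote write, e.g. an s3 upload
           self._delegate = delegate
           self._events: queue.Queue = queue.Queue()
           threading.Thread(target=self._drain, daemon=True).start()
   
       def error(self, msg, *args, ti=None):
           # cheap for the caller: enqueue and return immediately
           self._events.put((msg, args, ti))
   
       def _drain(self):
           while True:
               msg, args, ti = self._events.get()
               try:
                   # the slow part happens here, off the hot path
                   self._delegate.error(msg, *args, ti=ti)
               finally:
                   self._events.task_done()
   ```
   
   the call site in the diff would stay the same, just routed through the wrapper instead of blocking inline.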



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
