potiuk commented on code in PR #25161:
URL: https://github.com/apache/airflow/pull/25161#discussion_r936016581


##########
airflow/executors/local_executor.py:
##########
@@ -78,14 +79,17 @@ def execute_work(self, key: TaskInstanceKey, command: CommandType) -> None:
 
         self.log.info("%s running %s", self.__class__.__name__, command)
         setproctitle(f"airflow worker -- LocalExecutor: {command}")
-        if settings.EXECUTE_TASKS_NEW_PYTHON_INTERPRETER:
-            state = self._execute_work_in_subprocess(command)
-        else:
-            state = self._execute_work_in_fork(command)
-
-        self.result_queue.put((key, state))
-        # Remove the command since the worker is done executing the task
-        setproctitle("airflow worker -- LocalExecutor")
+        with _airflow_parsing_context_manager(
+            AirflowParsingContextType.TASK_EXECUTION, dag_id=key[0], task_id=key[1]
+        ):

Review Comment:
   This really comes from the inconsistent way our executors/runners are implemented (or maybe I do not understand it fully) - so this is more of a "safety net" - maybe not needed, but not harmful either.
   
   The problem is that the way the celery/local executors work is a bit inconsistent. While we have the context manager at the executor layer, you are saying that we should implement it in the Local Executor at the "runner" layer.
   
   That seems rather inconsistent if we are aiming for a consistent "executor"-level interface. But I believe (see the comment above by @pingzh: https://github.com/apache/airflow/pull/25161#pullrequestreview-1044262872) that we can get rid of this double "context" when we also get rid of the double "parsing" with AIP-45.
   
   Should we leave it in place in parallel for now, then, and remove it when we get rid of the double parsing as well?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
