o-nikolas commented on code in PR #42048:
URL: https://github.com/apache/airflow/pull/42048#discussion_r1765582120


##########
airflow/providers/edge/executors/edge_executor.py:
##########
@@ -0,0 +1,175 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from __future__ import annotations
+
+from datetime import datetime, timedelta
+from typing import TYPE_CHECKING, Any
+
+from sqlalchemy import delete
+
+from airflow.cli.cli_config import GroupCommand
+from airflow.configuration import conf
+from airflow.executors.base_executor import BaseExecutor
+from airflow.models.abstractoperator import DEFAULT_QUEUE
+from airflow.models.taskinstance import TaskInstanceState
+from airflow.providers.edge.models.edge_job import EdgeJobModel
+from airflow.providers.edge.models.edge_logs import EdgeLogsModel
+from airflow.providers.edge.models.edge_worker import EdgeWorkerModel
+from airflow.utils.db import DBLocks, create_global_lock
+from airflow.utils.session import NEW_SESSION, provide_session
+
+if TYPE_CHECKING:
+    import argparse
+
+    from sqlalchemy.orm import Session
+
+    from airflow.executors.base_executor import CommandType
+    from airflow.models.taskinstance import TaskInstance
+    from airflow.models.taskinstancekey import TaskInstanceKey
+
+PARALLELISM: int = conf.getint("core", "PARALLELISM")
+
+
+class EdgeExecutor(BaseExecutor):
+    """Implementation of the EdgeExecutor to distribute work to Edge Workers 
via HTTP."""
+
+    def __init__(self, parallelism: int = PARALLELISM):
+        super().__init__(parallelism=parallelism)
+        self.last_reported_state: dict[TaskInstanceKey, TaskInstanceState] = {}
+
+    @provide_session
+    def start(self, session: Session = NEW_SESSION):
+        """If EdgeExecutor provider is loaded first time, ensure table 
exists."""
+        with create_global_lock(session=session, lock=DBLocks.MIGRATIONS):
+            engine = session.get_bind().engine
+            EdgeJobModel.metadata.create_all(engine)
+            EdgeLogsModel.metadata.create_all(engine)
+            EdgeWorkerModel.metadata.create_all(engine)
+
+    @provide_session
+    def execute_async(
+        self,
+        key: TaskInstanceKey,
+        command: CommandType,
+        queue: str | None = None,
+        executor_config: Any | None = None,
+        session: Session = NEW_SESSION,
+    ) -> None:
+        """Execute asynchronously."""
+        self.validate_airflow_tasks_run_command(command)
+        session.add(

Review Comment:
   > I was thinking about this all day. Do you know a "simple" method to count the number of read/write operations in the DB? (Not transactions but statements?) Then I'd offer to make a measurement.
   
   I'm not sure, but either way I think the onus is on the author to test their code. Much time was spent during the hybrid executor project getting some kind of benchmarking in place (nothing was available off the shelf).
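   One possible answer to the statement-counting question above: since the executor already goes through a SQLAlchemy `Engine`, a per-statement counter can be attached with the `before_cursor_execute` event. This is only a rough sketch (the `attach_statement_counter` helper is hypothetical, not part of Airflow), but it should give a statement-level count without touching the code under test:

   ```python
   from collections import Counter

   from sqlalchemy import create_engine, event, text


   def attach_statement_counter(engine):
       """Hypothetical helper: count statements executed on an Engine,
       bucketed by the leading SQL keyword (SELECT, INSERT, UPDATE, ...)."""
       counts = Counter()

       @event.listens_for(engine, "before_cursor_execute")
       def _count(conn, cursor, statement, parameters, context, executemany):
           # Take the first word of the statement as the verb.
           verb = statement.lstrip().split(None, 1)[0].upper()
           counts[verb] += 1

       return counts


   # Demo against an in-memory SQLite engine; in a real measurement you
   # would attach the counter to the engine the scheduler/executor uses.
   engine = create_engine("sqlite://")
   counts = attach_statement_counter(engine)

   with engine.connect() as conn:
       conn.execute(text("CREATE TABLE job (id INTEGER PRIMARY KEY)"))
       conn.execute(text("INSERT INTO job (id) VALUES (1)"))
       conn.execute(text("SELECT * FROM job")).fetchall()

   print(dict(counts))  # e.g. {'CREATE': 1, 'INSERT': 1, 'SELECT': 1}
   ```

   With something like this attached for the duration of a test DAG run, comparing the EdgeExecutor's write counts against Celery's result-backend writes would settle the IOPS question empirically.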
   
   > I would assume that (1) this is an MVP implementation and is only for a small functional test. In the AIP I offered to add scalability later.
   
   That's fair, I just worry that once these things get in the door the tech debt isn't cleaned up. But you are very diligent and present, so I don't worry much in this case if you plan to come back and fix it later :smiley: 
   
   > From gut feeling, w/o measuring, I assume the write transactions are trivial and the IOPS are not much compared to other queries made by the scheduler. Also comparing to Celery - Celery also has a result backend table where each job adds a tuple into the "job"(?) table - I assume this is very comparable. I feel the overhead on the DB is quite small.
   
   Perhaps, but do note that this all adds up: if you run Celery alongside edge executors in a multiple-executor configuration (and we continue to allow other executors to use the DB), we could find ourselves in a situation with a _lot_ of traffic to the DB.
   
   At the end of the day, I'm no DB expert, so perhaps someone can weigh in. It just made my spidey-senses tingle when I saw this.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
