uranusjr commented on code in PR #39138:
URL: https://github.com/apache/airflow/pull/39138#discussion_r1580515385


##########
airflow/models/dagbag.py:
##########
@@ -727,3 +734,41 @@ def _sync_perm_for_dag(cls, dag: DAG, session: Session = NEW_SESSION):
 
         security_manager = ApplessAirflowSecurityManager(session=session)
         security_manager.sync_perm_for_dag(root_dag_id, dag.access_control)
+
+
+class DagPriorityParsingRequests(Base):
+    """Model to store the dag parsing requests that will be prioritized when 
parsing files."""
+
+    __tablename__ = "dag_priority_parsing_requests"
+
+    id = Column(String(40), primary_key=True)
+    # The location of the file containing the DAG object
+    # Note: Do not depend on fileloc pointing to a file; in the case of a
+    # packaged DAG, it will point to the subpath of the DAG within the
+    # associated zip.
+    fileloc = Column(String(2000), nullable=False)
+
+    def __init__(self, fileloc: str):
+        super().__init__()
+        self.fileloc = fileloc
+        # Adding a unique constraint to fileloc results in the creation of an index and we have a limitation
+        # on the size of the string we can use in the index for MySql DB. We also have to keep the fileloc
+        # size consistent with other tables. This is a workaround to enforce the unique constraint.
+        self.id = self._generate_md5_hash(fileloc)
+
+    def _generate_md5_hash(self, fileloc: str):
+        return hashlib.md5(fileloc.encode()).hexdigest()

Review Comment:
   It would be better to use `default` for this instead. Maybe also `onupdate`, but I don’t think we’re ever going to update things in this table anyway.
   
   https://docs.sqlalchemy.org/en/20/core/defaults.html#context-sensitive-default-functions
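   
   For reference, a minimal sketch of how a context-sensitive `default` could look here (the helper name `_compute_request_id` and the `airflow.models.base.Base` import are illustrative, not taken from the PR):
   
```python
import hashlib

from sqlalchemy import Column, String

from airflow.models.base import Base


def _compute_request_id(context):
    # Context-sensitive default: derive the primary key from the fileloc
    # value of the row being inserted, so callers never set `id` themselves.
    fileloc = context.get_current_parameters()["fileloc"]
    return hashlib.md5(fileloc.encode()).hexdigest()


class DagPriorityParsingRequests(Base):
    """Model to store the dag parsing requests that will be prioritized when parsing files."""

    __tablename__ = "dag_priority_parsing_requests"

    # `default` computes the hash at INSERT time from the statement's own parameters.
    id = Column(String(40), primary_key=True, default=_compute_request_id)
    fileloc = Column(String(2000), nullable=False)
```
   
   With that in place, the custom `__init__` and `_generate_md5_hash` could be dropped, and the hash is derived from the INSERT parameters instead of being set by the caller.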



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
