alexott commented on code in PR #27912:
URL: https://github.com/apache/airflow/pull/27912#discussion_r1032459844


##########
airflow/providers/databricks/hooks/databricks_sql.py:
##########
@@ -163,38 +163,43 @@ def run(
         :param return_last: Whether to return result for only last statement or for all after split
         :return: return only result of the LAST SQL expression if handler was provided.
         """
-        self.scalar_return_last = isinstance(sql, str) and return_last
+        self.descriptions = []
         if isinstance(sql, str):
             if split_statements:
-                sql = self.split_sql_string(sql)
+                sql_list = [self.strip_sql_string(s) for s in self.split_sql_string(sql)]
             else:
-                sql = [self.strip_sql_string(sql)]
+                sql_list = [self.strip_sql_string(sql)]
+        else:
+            sql_list = [self.strip_sql_string(s) for s in sql]
 
-        if sql:
-            self.log.debug("Executing following statements against Databricks DB: %s", list(sql))
+        if sql_list:
+            self.log.debug("Executing following statements against Databricks DB: %s", sql_list)
         else:
             raise ValueError("List of SQL statements is empty")
 
         results = []
-        for sql_statement in sql:
+        for sql_statement in sql_list:

Review Comment:
   Yes, this is done to support Azure Active Directory authentication: with long-running queries, the AAD token may expire, causing subsequent queries to fail. To mitigate this, the DBSQL hook checks whether the AAD token is about to expire and renews it if so. This is necessary right now because the DBSQL connector for Python doesn't have built-in AAD auth support; that will come together with the unified Python SDK next year.
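
   The renewal logic described above might be sketched roughly as follows. This is a minimal illustration, not the hook's actual implementation: the function name `aad_token_needs_renewal` and the refresh margin are assumptions, and the real hook ties this check into its connection handling.

   ```python
   import time

   # Hypothetical refresh margin: renew when the token has less than
   # two minutes of validity left, so a query started now is unlikely
   # to outlive the token. The real hook may use a different margin.
   TOKEN_REFRESH_MARGIN_SEC = 120

   def aad_token_needs_renewal(expires_at: float, now: float | None = None) -> bool:
       """Return True when the AAD token is close enough to expiry that it
       should be renewed before issuing the next SQL statement."""
       if now is None:
           now = time.time()
       return (expires_at - now) <= TOKEN_REFRESH_MARGIN_SEC
   ```

   A run loop would call a check like this before each statement and fetch a fresh token when it returns True, which is what keeps long sequences of queries from failing mid-run.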


