Joffreybvn commented on PR #32319:
URL: https://github.com/apache/airflow/pull/32319#issuecomment-1620793844

   I quickly wrapped up an update to this PR. It's not complete yet. Let me 
suggest this:
   
   > Actually - there is no need to make it backwards-compatible. We could make 
it a breaking change for ODBC provider and bump major version - and if you make 
it a named tuple with same attributes as Row, this will be "not too much 
breaking" change - it will mostly work for all the current users.
   >
   > Also, adding make_serializable in common.sql in this context is not needed, and it's actually very good, because otherwise you would have to add a dependency on a newer version of common.sql to make it work.
   
   I admit I have never tried to json.dumps a namedtuple (I will give it a try in Airflow tomorrow). If that works, could we **expand the scope** of this PR to Databricks - making the transformation to serializable objects at Hook level + solving the consequences of that change in the [DatabricksSqlOperator](https://github.com/apache/airflow/blob/ab2c861dd8a96f22b0fda692368ce9b103175322/airflow/providers/databricks/operators/databricks_sql.py#L128)? It's doable with a simple tuple + description, and it's even easier with a namedtuple.
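   For what it's worth, `json.dumps` does handle a namedtuple out of the box, because namedtuple is a tuple subclass - it gets serialized as a plain JSON array (the field names are dropped):

   ```python
   import json
   from collections import namedtuple

   Row = namedtuple("Row", ["id", "name"])
   row = Row(1, "alice")

   # A namedtuple is a tuple subclass, so json.dumps emits a JSON array.
   print(json.dumps(row))  # → [1, "alice"]
   ```

   So any downstream code that currently json-serializes plain tuples would keep working unchanged with namedtuples.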
   
   I'm suggesting that because I like the solution of adding a serialization method that other hooks can override. That way, the right tool is added at the right place, and it can be used for both ODBC and Databricks. Otherwise, doing this change without touching the common.sql hook feels like hacking and working around the run method (and it adds a new flavor of this method to the codebase).
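   A minimal sketch of what that overridable method could look like - all names here are illustrative (`FakeDriverRow` stands in for a driver row like pyodbc.Row, and the hook classes are stripped down to just the serialization path):

   ```python
   from collections import namedtuple
   from typing import Any


   class DbApiHook:
       """Illustrative base hook -- the real one lives in common.sql."""

       def run(self, sql: str) -> Any:
           raw = self._fetch(sql)              # driver-specific rows
           return self._make_serializable(raw)  # normalized before returning

       def _fetch(self, sql: str) -> Any:
           raise NotImplementedError

       @staticmethod
       def _make_serializable(result: Any) -> Any:
           # Default: pass rows through unchanged.
           return result


   class FakeDriverRow:
       """Hypothetical driver row: values plus a cursor description."""

       def __init__(self, description, values):
           self.cursor_description = description
           self._values = tuple(values)

       def __iter__(self):
           return iter(self._values)


   class OdbcHook(DbApiHook):
       def _fetch(self, sql: str):
           # Stand-in for a real ODBC query result.
           desc = [("id",), ("name",)]
           return [FakeDriverRow(desc, (1, "alice")),
                   FakeDriverRow(desc, (2, "bob"))]

       @staticmethod
       def _make_serializable(result):
           # Rebuild driver rows as namedtuples with the same attribute
           # names, so XCom / json.dumps can handle them downstream.
           if not result:
               return result
           fields = [col[0] for col in result[0].cursor_description]
           Row = namedtuple("Row", fields)
           return [Row(*row) for row in result]
   ```

   With that shape, `OdbcHook().run(...)` returns json-serializable namedtuples while hooks without special row types simply inherit the pass-through default.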
   
   What about a PR for ODBC and Databricks, with the serializable transformation behind a flag, which would depend on a newer version of the common.sql package? That way it's clean and doesn't break too many things.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
