grundprinzip commented on PR #47233:
URL: https://github.com/apache/spark/pull/47233#issuecomment-2272637534

   It would be great to have a better explanation in the PR description to give 
details on why the current path was chosen over alternatives. 
   
   My personal perspective is that it would be preferable to rely on query 
planning to determine whether the update is valid.
   
   `spark.table("name").where().write.update()...` can give you similar compile-time 
safety for what can be supplied as arguments. 
   
   The benefit of such an approach is that it retains the existing code path 
and familiarity of the API.
   
   Adding a completely new set of APIs and possible chains of calls doesn't seem 
right to me. The goal seems to be to map the SQL syntax onto an API, rather than 
to examine how the SQL maps onto the existing APIs and what fits and what doesn't. 
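
   To make the suggestion above concrete, here is a minimal self-contained mock 
(this is NOT Spark's real API; all class and method names below are hypothetical) 
sketching how an `update()` could hang off the existing read/filter/write chain, 
leaving validity checks to the planning step rather than a parallel API surface:

```python
# Hypothetical sketch of the chain spark.table("name").where(...).write.update(...).
# None of these classes exist in Spark; they only illustrate the shape of the API.

class MockDataFrame:
    def __init__(self, table, condition=None):
        self.table = table
        self.condition = condition

    def where(self, condition):
        # Like DataFrame.where: returns a new frame carrying the filter.
        return MockDataFrame(self.table, condition)

    @property
    def write(self):
        # The existing write path is the entry point; update() is just
        # another terminal operation on it.
        return MockWriter(self)


class MockWriter:
    def __init__(self, df):
        self.df = df

    def update(self, assignments):
        # In a real implementation this would build a logical plan and let
        # query planning/analysis decide whether the update is valid.
        set_clause = ", ".join(f"{k} = {v}" for k, v in assignments.items())
        sql = f"UPDATE {self.df.table} SET {set_clause}"
        if self.df.condition:
            sql += f" WHERE {self.df.condition}"
        return sql


class MockSession:
    def table(self, name):
        return MockDataFrame(name)


spark = MockSession()
stmt = spark.table("t").where("id = 1").write.update({"status": "'done'"})
print(stmt)  # UPDATE t SET status = 'done' WHERE id = 1
```

   The point of the sketch is that the familiar read/filter chain is reused 
unchanged, and only the terminal operation differs, rather than introducing a 
new builder hierarchy.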


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
