cloud-fan commented on PR #41683:
URL: https://github.com/apache/spark/pull/41683#issuecomment-1706680230

   Let's spend more time on the API design first, as different people may have 
different opinions and we should collect as much feedback as possible.
   
   Taking a step back, I think what we need is an SQL API for specifying 
per-scan options, like `spark.read.options(...)`. The SQL API should be 
general, as it's very likely that people will ask for something similar for 
`df.write.options` and `spark.readStream.options`.
   
   A TVF can only be used in the FROM clause, so a new SQL syntax may be a 
better fit here. Inspired by the [pgsql 
syntax](https://www.postgresql.org/docs/current/sql-createtable.html), we can 
add a WITH clause to Spark SQL:
   ```sql
   ... FROM tbl_name WITH (optionA = v1, optionB = v2, ...)
   INSERT INTO tbl_name WITH (optionA = v1, optionB = v2, ...) SELECT ...
   ```
   
   Streaming is orthogonal to this, and the new WITH clause won't conflict 
with it: e.g. we could probably support `... FROM STREAM tbl_name WITH (...)`. 
That's out of scope for this PR, though, as streaming SQL is a big topic.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

