wang-zhun opened a new pull request #34072:
URL: https://github.com/apache/spark/pull/34072
### What changes were proposed in this pull request?
Add a new hint `OPTIONS`
### Why are the changes needed?
Today a DataFrame API user can pass per-query options through the
`DataFrameReader.option` method, but Spark SQL users have no equivalent.
```
public interface SupportsRead extends Table {
  ScanBuilder newScanBuilder(CaseInsensitiveStringMap options);
}
```
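To make the intended semantics concrete, here is a minimal, hypothetical sketch (not Spark's actual implementation) of how per-query hint options could be layered over the options persisted with the table, using case-insensitive key matching like `CaseInsensitiveStringMap` does. The class and method names are illustrative only.

```
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: merge catalog-persisted table options with
// per-query OPTIONS hint values. Keys are matched case-insensitively,
// and the per-query values win, which is the behavior this PR proposes.
public class OptionMergeSketch {
    public static Map<String, String> merge(Map<String, String> tableOptions,
                                            Map<String, String> hintOptions) {
        // TreeMap with CASE_INSENSITIVE_ORDER gives case-insensitive keys.
        Map<String, String> merged = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        merged.putAll(tableOptions); // defaults persisted in the catalog
        merged.putAll(hintOptions);  // per-query overrides take precedence
        return merged;
    }
}
```

The merged map would then be what gets handed to `newScanBuilder` for that one query, leaving the persisted table options untouched.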
Table options are persisted in the catalog, and modifying them requires a
separate DDL statement such as "ALTER TABLE ...". But there are cases where
the user wants to override table options dynamically, just for one query:
- JDBCTable set fetchsize according to the actual situation of the table
- IcebergTable support time travel
```
spark.read
.option("snapshot-id", 10963874102873L)
.format("iceberg")
.load("path/to/table")
```
Setting such parameters is common and inherently ad-hoc; being able to set
them flexibly per query would improve the Spark SQL user experience,
especially now that custom catalogs are supported.
### Does this PR introduce _any_ user-facing change?
##### OPTIONS Hint
```
-- time travel
SELECT * FROM t /*+ OPTIONS('snapshot-id'='10963874102873L') */
```
### How was this patch tested?
Added Unit test.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]