gengliangwang commented on a change in pull request #24518: [SPARK-27627][SQL]
Make option "pathGlobFilter" as a general option for all file sources
URL: https://github.com/apache/spark/pull/24518#discussion_r280969925
##########
File path: docs/sql-migration-guide-upgrade.md
##########
@@ -126,6 +126,8 @@ license: |
- Since Spark 3.0, parquet logical type `TIMESTAMP_MICROS` is used by
default while saving `TIMESTAMP` columns. In Spark version 2.4 and earlier,
`TIMESTAMP` columns are saved as `INT96` in parquet files. Setting
`spark.sql.parquet.outputTimestampType` to `INT96` restores the previous behavior.
+ - Since Spark 3.0, a new data source option `pathGlobFilter` is introduced
for filtering files in `DataFrameReader` and `DataStreamReader`. For example,
`spark.read.option("pathGlobFilter", "*.orc").orc(path)` will read all the
files ending with `.orc` under the given `path`. Note that with this option the
query result still contains partition columns, if any; whereas with a glob pattern in the
path, e.g. `spark.read.orc("path/*/*/*/*.orc")`, the result does not contain
partition columns.
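As an aside, the filter applies shell-style glob matching to file names. A minimal Python sketch of that matching semantics, using the standard `fnmatch` module and hypothetical file paths (this only illustrates the glob pattern itself, not Spark's partition discovery):

```python
from fnmatch import fnmatch

# Hypothetical files under a partitioned directory layout.
files = [
    "data/part=1/a.orc",
    "data/part=1/b.json",
    "data/part=2/c.orc",
]

# A "*.orc" glob filter keeps only files whose name matches the pattern;
# partition directories like part=1 are untouched, so partition columns
# can still be discovered from the surviving paths.
selected = [f for f in files if fnmatch(f.rsplit("/", 1)[-1], "*.orc")]
print(selected)
```

Here `selected` keeps `a.orc` and `c.orc` from both partition directories, mirroring how `option("pathGlobFilter", "*.orc")` narrows the file set without dropping partition information.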
Review comment:
I will update
https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html.
Marking this PR as WIP for now.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.