tustvold commented on code in PR #5057:
URL: https://github.com/apache/arrow-datafusion/pull/5057#discussion_r1087016354
##########
datafusion/core/src/datasource/file_format/parquet.rs:
##########
@@ -539,6 +542,44 @@ async fn fetch_statistics(
Ok(statistics)
}
+async fn fetch_format_scan_metadata(
+    store: &dyn ObjectStore,
+    table_schema: SchemaRef,
+    file: &ObjectMeta,
+    metadata_size_hint: Option<usize>,
+    collect_statistics: bool,
+    collect_file_ranges: bool,
+) -> Result<FormatScanMetadata> {
+    let mut format_scan_metadata = FormatScanMetadata::default();
+
+    if !collect_statistics && !collect_file_ranges {
+        return Ok(format_scan_metadata);
+    }
+
+    let parquet_metadata =
+        fetch_parquet_metadata(store, file, metadata_size_hint).await?;
+
+    if collect_statistics {
+        format_scan_metadata = format_scan_metadata.with_statistics(
+            extract_statistics_from_metadata(&parquet_metadata, table_schema)?,
+        );
+    };
+
+    if collect_file_ranges {
+        let file_ranges = parquet_metadata
Review Comment:
FWIW, the way these ranges are applied in parquet is based on whether the row
group's midpoint lies within the given range, so there is no requirement that
the ranges exactly delimit row group boundaries.
For example, you could take a 2GB parquet file and blindly chop it into 4x
512MB slices. This assumes there are at least 4 row groups and that the row
groups are similarly sized, which in practice is probably fine. This is what
Spark does, and it avoids needing the file's metadata to do the optimisation.
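
To make the midpoint rule concrete, here is a small, self-contained Rust sketch. It is not the actual parquet crate or DataFusion API; `FileRange`, `chop_into_slices`, and `row_group_in_range` are hypothetical names used only for illustration. It blindly chops a 2GB file into 4x 512MB slices and shows that a row group straddling a slice boundary is still selected by exactly one slice, the one containing its midpoint:

```rust
/// Hypothetical byte range within a file (illustration only).
#[derive(Debug, Clone, Copy)]
struct FileRange {
    start: u64,
    end: u64,
}

/// Blindly split a file of `file_size` bytes into `n` roughly equal slices,
/// without reading any metadata (the Spark-style approach described above).
fn chop_into_slices(file_size: u64, n: u64) -> Vec<FileRange> {
    let slice = (file_size + n - 1) / n; // ceiling division
    (0..n)
        .map(|i| FileRange {
            start: i * slice,
            end: ((i + 1) * slice).min(file_size),
        })
        .collect()
}

/// A row group belongs to a range when its midpoint falls inside that range,
/// so inexact slice boundaries never select a row group twice (or zero times).
fn row_group_in_range(rg_start: u64, rg_len: u64, range: &FileRange) -> bool {
    let midpoint = rg_start + rg_len / 2;
    range.start <= midpoint && midpoint < range.end
}

fn main() {
    // A 2GB file chopped into 4x 512MB slices, as in the example above.
    let ranges = chop_into_slices(2 * 1024 * 1024 * 1024, 4);

    // A row group that straddles the first slice boundary is still picked by
    // exactly one slice: the one containing its midpoint.
    let (rg_start, rg_len) = (500 * 1024 * 1024, 100 * 1024 * 1024);
    for (i, range) in ranges.iter().enumerate() {
        println!("slice {i}: {}", row_group_in_range(rg_start, rg_len, range));
    }
}
```

Because each row group is assigned by its midpoint, the slices only need to cover the file; they do not need to line up with row group boundaries.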