nuno-faria opened a new pull request, #16971:
URL: https://github.com/apache/datafusion/pull/16971

   ## Which issue does this PR close?
   
   
   - Closes #15582.
   
   - Related issues:
     - #9964
     - #11719
     - #12592
     - #13456
     - #14481
     - https://github.com/apache/datafusion/issues/14608#issuecomment-3016555361
     - #15179
     - #16200
     - #16365
   
   ## Rationale for this change
   
   
   With large Parquet files, a non-negligible amount of time is spent reading the footer and page metadata. This overhead becomes more noticeable when the queries themselves are relatively cheap. For repeated scans over the same file, we can avoid this cost by caching the metadata, so it is only read once.
   
   In a benchmark using a large file (100M rows, 2 columns) and simple point lookups (`select where k = ...`), caching the Parquet metadata makes the reads more than 10x faster.
   
   <details>
   <summary>Simple benchmark</summary>
   
   ```rust
   use datafusion::common::Result;
   use datafusion::prelude::*;
   use tokio::time::Instant;
   
    // Write a 100M-row, sorted, two-column Parquet file with small row groups
    // and data pages, so the footer and page index are comparatively large.
    async fn create_file() -> Result<()> {
       let ctx = SessionContext::new();
       ctx.sql(
           "
           COPY (
               SELECT 'k-' || i as k, i as v
               FROM generate_series(1, 100000000) t(i)
               ORDER BY k
           )
           TO 't.parquet'
            OPTIONS (MAX_ROW_GROUP_SIZE 131072, DATA_PAGE_ROW_COUNT_LIMIT 8192, DICTIONARY_ENABLED false);
       ",
       )
       .await?
       .collect()
       .await?;
       Ok(())
   }
   
    // Run the same point query 1000 times and return the total elapsed seconds.
    async fn bench(cache_metadata: bool) -> Result<f64> {
       let config = SessionConfig::new().with_target_partitions(1);
       let ctx = SessionContext::new_with_config(config);
       let options = ParquetReadOptions::new().cache_metadata(cache_metadata);
       ctx.register_parquet("t", "t.parquet", options).await?;
   
       let t = Instant::now();
       for _ in 0..1000 {
           ctx.sql("SELECT v FROM t where k = 'k-12345'")
               .await?
               .collect()
               .await?;
       }
       Ok(t.elapsed().as_secs_f64())
   }
   
   #[tokio::main]
   async fn main() -> Result<()> {
       create_file().await?;
   
       let time_not_cached = bench(false).await?;
       println!("time_not_cached: {time_not_cached:.3}");
   
       let time_cached = bench(true).await?;
       println!("time_cached: {time_cached:.3}");
   
       println!("diff: {:.3}x faster", time_not_cached / time_cached);
   
       Ok(())
   }
   ```
   </details> 
   
   ```
   time_not_cached: 18.507
   time_cached: 1.485
   diff: 12.459x faster
   ```
   
   The metadata cache is disabled by default. It can be turned on for a specific Parquet file using `ParquetReadOptions`:
   ```rust
   let options = ParquetReadOptions::new().cache_metadata(true);
   ctx.register_parquet("t", "t.parquet", options).await?;
   ```
   
   It can also be enabled for all Parquet files via the SQL API:
   ```sql
   set datafusion.execution.parquet.cache_metadata = true;
   ```
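   
   The same global setting can also be applied programmatically when building the session; a minimal sketch using the existing `SessionConfig::set_bool` with the config key introduced by this PR:
   ```rust
   use datafusion::prelude::*;
   
   // Enable metadata caching for every Parquet file registered on this context.
   let config = SessionConfig::new()
       .set_bool("datafusion.execution.parquet.cache_metadata", true);
   let ctx = SessionContext::new_with_config(config);
   ```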
   
   The cache is automatically invalidated when the file changes.
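   
   A minimal sketch of the invalidation idea only (not the actual `DefaultFilesMetadataCache` code): each entry records the file's size and last-modified timestamp, and a lookup misses whenever they no longer match the file on disk.
   ```rust
   use std::collections::HashMap;
   use std::time::SystemTime;
   
   /// Hypothetical entry: decoded metadata plus the file state it came from.
   struct Entry<M> {
       size: u64,
       last_modified: SystemTime,
       metadata: M,
   }
   
   /// Path-keyed metadata cache with a staleness check on lookup.
   struct MetadataCache<M> {
       entries: HashMap<String, Entry<M>>,
   }
   
   impl<M: Clone> MetadataCache<M> {
       /// Returns the cached metadata only if the file's current size and
       /// modification time still match what was recorded at insert time.
       fn get(&self, path: &str, size: u64, last_modified: SystemTime) -> Option<M> {
           self.entries
               .get(path)
               .filter(|e| e.size == size && e.last_modified == last_modified)
               .map(|e| e.metadata.clone())
       }
   
       fn put(&mut self, path: String, size: u64, last_modified: SystemTime, metadata: M) {
           self.entries.insert(path, Entry { size, last_modified, metadata });
       }
   }
   ```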
   
   When the cache is enabled, the entire metadata is read, including the page index, unless encryption is used: https://github.com/apache/datafusion/blob/94e85488df31738b7c83d57015f51440e285feff/datafusion/datasource-parquet/src/opener.rs#L146-L147
   This means it may not be worth enabling the cache for single-file scans whose queries do not need the page index.
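   
   For reference, loading the complete metadata (footer plus page index) with the parquet crate looks roughly as follows; a standalone sketch over a local file, whereas the PR itself goes through the async reader linked above:
   ```rust
   use parquet::arrow::arrow_reader::{ArrowReaderMetadata, ArrowReaderOptions};
   use std::fs::File;
   
   fn load_full_metadata(path: &str) -> Result<ArrowReaderMetadata, Box<dyn std::error::Error>> {
       let file = File::open(path)?;
       // Request the page index along with the footer, so a cached entry can
       // later serve page-level pruning without touching the file again.
       let options = ArrowReaderOptions::new().with_page_index(true);
       Ok(ArrowReaderMetadata::load(&file, options)?)
   }
   ```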
   
   ## What changes are included in this PR?
   
   
   - Added `cache_metadata` to `ParquetOptions` (default = false).
   - Added `cache_metadata` to `ParquetReadOptions` (default = None).
   - Added `CachedParquetFileReaderFactory` and `CachedParquetFileReader`.
   - Added `file_metadata_cache` to `CacheManager` (default = `DefaultFilesMetadataCache`); see the sketch after this list.
   - Added `DefaultFilesMetadataCache`.
   - Updated the `ParquetFormat` to use the `CachedParquetFileReaderFactory` when caching is enabled.
   - Updated the `proto::ParquetOptions`.
   - Added Parquet sqllogictests.
   - Added a unit test to `cache/cache_unit.rs`.
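   
   For illustration only, wiring a metadata cache into the runtime could look like the sketch below. The `with_file_metadata_cache` builder and the `DefaultFilesMetadataCache` import are assumptions modeled on this PR's description and the existing `with_files_statistics_cache` pattern; check the PR diff for the actual API.
   ```rust
   use std::sync::Arc;
   use datafusion::common::Result;
   use datafusion::execution::cache::cache_manager::CacheManagerConfig;
   use datafusion::execution::runtime_env::RuntimeEnvBuilder;
   use datafusion::prelude::*;
   // use ...::DefaultFilesMetadataCache; // import path per this PR's diff
   
   fn session_with_metadata_cache() -> Result<SessionContext> {
       // Hypothetical: install the file metadata cache on the runtime's
       // CacheManager, by analogy with `with_files_statistics_cache`.
       let cache_config = CacheManagerConfig::default()
           .with_file_metadata_cache(Some(Arc::new(DefaultFilesMetadataCache::default())));
       let runtime = RuntimeEnvBuilder::new()
           .with_cache_manager(cache_config)
           .build_arc()?;
       Ok(SessionContext::new_with_config_rt(SessionConfig::new(), runtime))
   }
   ```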
   
   ## Are these changes tested?
   
   
   Yes.
   
   ## Are there any user-facing changes?
   
   
   Added a new configuration option (`datafusion.execution.parquet.cache_metadata`) and a corresponding `ParquetReadOptions` field; both are disabled by default.
   

