cashmand opened a new pull request, #52406:
URL: https://github.com/apache/spark/pull/52406

   
   ### What changes were proposed in this pull request?
   
   When writing Variant to Parquet, we want the shredding schema to adapt to 
the data being written on a per-file basis. This PR adds a new output writer 
that buffers the first few rows before starting the write, then uses the 
content of those rows to determine a shredding schema, and only then creates 
the Parquet writer with that schema.
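
   As a rough illustration of this buffer-then-infer flow, here is a minimal sketch in Scala; the class name, the `RowWriter` trait, and the helper methods (`inferShreddingSchema`, `createParquetWriter`) are hypothetical stand-ins, not the PR's actual API:

   ```scala
   import scala.collection.mutable.ArrayBuffer
   import org.apache.spark.sql.catalyst.InternalRow
   import org.apache.spark.sql.types.StructType

   // Hypothetical minimal writer interface for this sketch.
   trait RowWriter {
     def write(row: InternalRow): Unit
     def close(): Unit
   }

   abstract class BufferingShreddingWriter(maxBufferedRows: Int) {
     // Hypothetical hooks: schema inference over the sampled rows, and
     // creation of the real Parquet writer once the schema is known.
     protected def inferShreddingSchema(rows: Seq[InternalRow]): StructType
     protected def createParquetWriter(schema: StructType): RowWriter

     private val buffer = ArrayBuffer.empty[InternalRow]
     private var underlying: Option[RowWriter] = None

     def write(row: InternalRow): Unit = underlying match {
       case Some(w) => w.write(row)
       case None =>
         buffer += row.copy() // copy: Spark may reuse InternalRow instances
         if (buffer.length >= maxBufferedRows) startWriting()
     }

     // Infer the schema from the sample, open the real writer, then
     // replay the buffered rows into it.
     private def startWriting(): Unit = {
       val writer = createParquetWriter(inferShreddingSchema(buffer.toSeq))
       buffer.foreach(writer.write)
       buffer.clear()
       underlying = Some(writer)
     }

     def close(): Unit = {
       if (underlying.isEmpty) startWriting() // file had fewer rows than the sample
       underlying.foreach(_.close())
     }
   }
   ```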
   
   The heuristics for determining the shredding schema are currently fairly 
simple: if a field consistently appears with a single type, we create both 
`value` and `typed_value`; if it appears with inconsistent types, we create 
only `value`. We drop fields that occur in fewer than 10% of sampled rows, and 
cap the schema at 300 total fields (counting `value` and `typed_value` 
separately) to avoid creating excessively wide Parquet schemas, which can cause 
performance issues.
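
   To make the heuristic concrete, here is a hypothetical sketch of the selection logic. The 10% threshold and 300-column budget come from the description above; the names and the frequency-based ordering used when the budget is exceeded are assumptions for illustration:

   ```scala
   import org.apache.spark.sql.types.DataType

   // Per-field statistics gathered while sampling the buffered rows.
   case class FieldStats(count: Int, types: Set[DataType])

   // Decide which shredded columns each field gets. Ordering fields by
   // observed frequency when the budget runs out is an assumption here.
   def chooseShredding(
       stats: Map[String, FieldStats],
       sampledRows: Int,
       maxTotalColumns: Int = 300,   // cap from the PR description
       minFrequency: Double = 0.1    // drop fields seen in < 10% of rows
   ): Map[String, Seq[String]] = {
     var budget = maxTotalColumns
     stats.toSeq
       .filter { case (_, s) => s.count.toDouble / sampledRows >= minFrequency }
       .sortBy { case (_, s) => -s.count }
       .flatMap { case (name, s) =>
         // One observed type: shred into `value` and `typed_value`;
         // mixed types: fall back to `value` only.
         val cols =
           if (s.types.size == 1) Seq("value", "typed_value") else Seq("value")
         if (cols.size <= budget) { budget -= cols.size; Some(name -> cols) }
         else None
       }
       .toMap
   }
   ```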
   
   ### Why are the changes needed?
   
   Allows Spark to make use of the [Variant shredding 
spec](https://github.com/apache/parquet-format/blob/master/VariantShredding.md) 
without requiring the user to manually set a shredding schema.
   
   ### Does this PR introduce _any_ user-facing change?
   
   Only if `spark.sql.variant.inferShreddingSchema` and 
`spark.sql.variant.writeShredding.enabled` are both set to true. Both are 
currently false by default.
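
   For illustration, enabling the feature might look like the following; the config names come from this PR, and `parse_json` is Spark's existing function for producing a Variant column:

   ```scala
   // Hypothetical usage: both flags are false by default per this PR.
   spark.conf.set("spark.sql.variant.writeShredding.enabled", "true")
   spark.conf.set("spark.sql.variant.inferShreddingSchema", "true")

   // Write a Variant column; the shredding schema is then inferred
   // per file from the sampled rows.
   spark.sql("""SELECT parse_json('{"a": 1, "b": "hello"}') AS v""")
     .write.parquet("/tmp/variant_shredded")
   ```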
   
   ### How was this patch tested?
   
   Unit tests in PR.
   
   ### Was this patch authored or co-authored using generative AI tooling?
   
   No.

