Zouxxyy commented on PR #48779:
URL: https://github.com/apache/spark/pull/48779#issuecomment-2464263643

   Exciting to see shredding being pushed forward! If I understand 
correctly, the shredding write chain may look like this:
   Derive the expected shredded schema (a DataType) in some way (sampling, or 
is it user-defined?) -> the Parquet writer accepts the variant plus the shredded 
schema -> cast the variant to a shredded InternalRow -> write to the Parquet 
file (the actual column type is the group type corresponding to the shredded 
DataType).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

