voonhous commented on code in PR #17833:
URL: https://github.com/apache/hudi/pull/17833#discussion_r2735729393
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/io/storage/row/HoodieRowParquetWriteSupport.java:
##########
@@ -281,6 +282,23 @@ private ValueWriter makeWriter(HoodieSchema schema,
DataType dataType) {
} else if (dataType == DataTypes.BinaryType) {
return (row, ordinal) -> recordConsumer.addBinary(
Binary.fromReusedByteArray(row.getBinary(ordinal)));
+    } else if (SparkAdapterSupport$.MODULE$.sparkAdapter().isVariantType(dataType)) {
+      // Maps VariantType to a group containing 'metadata' and 'value' fields.
+      // This ensures Spark 4.0 compatibility and supports both Shredded and Unshredded schemas.
+      // Note: We intentionally omit 'typed_value' for shredded variants, as this writer only accesses raw binary blobs.
+      final byte[][] variantBytes = new byte[2][]; // [0] = value, [1] = metadata
Review Comment:
I've made the code here leaner: instead of storing the bytes in a buffer and reading them back, I now pass the Parquet-writing logic directly into the consumers.
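To illustrate the refactor described above, here is a minimal, self-contained sketch of the pattern. All names (`VariantWriteSketch`, `writeBuffered`, `writeDirect`, `addBinary`) are hypothetical stand-ins, not the actual PR code: `addBinary` plays the role of `recordConsumer.addBinary`, and the point is that the "direct" path hands each field's bytes straight to a consumer instead of staging them in a `byte[2][]`.

```java
import java.util.function.BiConsumer;

// Hypothetical sketch of the "leaner" refactor: buffer-then-read-back
// vs. passing the write logic directly into a consumer.
public class VariantWriteSketch {
  // Stand-in for the Parquet recordConsumer; records what was written.
  static final StringBuilder written = new StringBuilder();

  // Before: stage the variant's bytes in an array, then read them back.
  static void writeBuffered(byte[] value, byte[] metadata) {
    final byte[][] variantBytes = new byte[2][]; // [0] = value, [1] = metadata
    variantBytes[0] = value;
    variantBytes[1] = metadata;
    addBinary("value", variantBytes[0]);
    addBinary("metadata", variantBytes[1]);
  }

  // After: no intermediate buffer; the write logic is supplied as a consumer.
  static void writeDirect(byte[] value, byte[] metadata,
                          BiConsumer<String, byte[]> writer) {
    writer.accept("value", value);
    writer.accept("metadata", metadata);
  }

  // Mock of recordConsumer.addBinary: logs the field name and byte count.
  static void addBinary(String field, byte[] bytes) {
    written.append(field).append('=').append(bytes.length).append(';');
  }

  public static void main(String[] args) {
    byte[] value = {1, 2, 3};
    byte[] metadata = {9};
    writeBuffered(value, metadata);
    String before = written.toString();
    written.setLength(0);
    writeDirect(value, metadata, VariantWriteSketch::addBinary);
    // Both paths produce identical writes; the direct path avoids the buffer.
    System.out.println(before.equals(written.toString())); // prints "true"
  }
}
```

The two paths write the same fields in the same order, so the change is behavior-preserving while removing the temporary `byte[][]` allocation per record.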
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]