ZiyaZa commented on code in PR #52557:
URL: https://github.com/apache/spark/pull/52557#discussion_r2441976052
##########
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedParquetRecordReader.java:
##########
@@ -265,7 +265,15 @@ private void initBatch(
MemoryMode memMode,
StructType partitionColumns,
InternalRow partitionValues) {
-    StructType batchSchema = new StructType(sparkSchema.fields());
+    boolean returnNullStructIfAllFieldsMissing = configuration.getBoolean(
+      SQLConf$.MODULE$.LEGACY_PARQUET_RETURN_NULL_STRUCT_IF_ALL_FIELDS_MISSING().key(),
+      (boolean) SQLConf$.MODULE$.LEGACY_PARQUET_RETURN_NULL_STRUCT_IF_ALL_FIELDS_MISSING()
+        .defaultValue().get());
+    StructType batchSchema = returnNullStructIfAllFieldsMissing
+      ? new StructType(sparkSchema.fields())
+      // Truncate to match requested schema to make sure extra struct field that we read for
+      // nullability is not included in columnarBatch and exposed outside.
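For context on the hunk above: the new branch reads the legacy flag from the reader's Hadoop `Configuration`, falling back to the SQLConf entry's default when the key is unset. A minimal standalone sketch of that lookup-with-fallback pattern (plain Java, with a `Map` standing in for the Hadoop `Configuration`; the key name and default below are illustrative assumptions, not the real SQLConf values):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfLookupSketch {
  // Hypothetical key; the real reader obtains it from
  // SQLConf.LEGACY_PARQUET_RETURN_NULL_STRUCT_IF_ALL_FIELDS_MISSING.key().
  static final String KEY =
      "spark.sql.legacy.parquet.returnNullStructIfAllFieldsMissing";
  // Assumed default, for illustration only.
  static final boolean DEFAULT = true;

  // Mirrors Configuration.getBoolean(key, default): use the configured
  // value when present, otherwise fall back to the supplied default.
  static boolean getBoolean(Map<String, String> conf, String key, boolean dflt) {
    String v = conf.get(key);
    return v == null ? dflt : Boolean.parseBoolean(v);
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    // Key unset: the default wins.
    System.out.println(getBoolean(conf, KEY, DEFAULT));
    // Explicit override takes precedence over the default.
    conf.put(KEY, "false");
    System.out.println(getBoolean(conf, KEY, DEFAULT));
  }
}
```

The point of the fallback is that the reader behaves identically whether the session propagated the flag into the Hadoop configuration or not.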
Review Comment:
Apparently we don't; let me add a test.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]