voonhous commented on PR #9625:
URL: https://github.com/apache/hudi/pull/9625#issuecomment-1717020387
The affected test
`TestSparkConsistentBucketClustering#testClusteringColumnSort` assumed the
default config below:
```
hoodie.datasource.write.row.writer.enable=false
```
Since I changed this default to `true` to align it with the global row-writer
config, the test started failing. As such, I have fixed the test by
overriding the config back to `false` in the test.
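For reference, a minimal sketch of how the row-writer path can be pinned back to `false` when writing through the Hudi Spark datasource (this is illustrative only, not the actual test change; the table name, record key/precombine fields, path, and sample data below are assumptions):
```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .master("local[2]")
  .appName("row-writer-override-sketch")
  .getOrCreate()

import spark.implicits._

// Tiny illustrative dataset
val df = Seq((1, "a", 1000L), (2, "b", 2000L)).toDF("id", "name", "ts")

df.write.format("hudi")
  .option("hoodie.table.name", "row_writer_sketch")               // illustrative
  .option("hoodie.datasource.write.recordkey.field", "id")        // illustrative
  .option("hoodie.datasource.write.precombine.field", "ts")       // illustrative
  .option("hoodie.datasource.write.row.writer.enable", "false")   // override back to the old default
  .mode(SaveMode.Overwrite)
  .save("/tmp/row_writer_sketch")                                 // illustrative path
```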
Will open a separate PR to fix sorting for native row writers when
performing clustering for **ConsistentBucketClustering**.
```
Caused by: java.lang.UnsupportedOperationException: org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
  at org.apache.parquet.io.api.PrimitiveConverter.addLong(PrimitiveConverter.java:105)
  at org.apache.parquet.column.impl.ColumnReaderBase$2$4.writeValue(ColumnReaderBase.java:325)
  at org.apache.parquet.column.impl.ColumnReaderBase.writeCurrentValueToConverter(ColumnReaderBase.java:440)
  at org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:30)
  at org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
  at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:234)

Error: Errors:
Error: Can not read value at 1 in block 0 in file file:/tmp/junit2525135472431698271/dataset/2016/03/15/398f4e47-ded4-46b7-90d4-3da6e4a1485a-0_2-116-257_20230906061353657.parquet
Error: Can not read value at 1 in block 0 in file file:/tmp/junit13763144683442925835/dataset/2016/03/15/fca915de-8b3b-42fd-b2b5-a151558f64ec-0_1-105-225_20230906061411593.parquet
[INFO]
Error: Tests run: 199, Failures: 0, Errors: 2, Skipped: 1
```