rdblue commented on a change in pull request #3774:
URL: https://github.com/apache/iceberg/pull/3774#discussion_r790076249
##########
File path:
spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/data/TestSparkParquetReader.java
##########
@@ -169,6 +178,60 @@ public void testInt96TimestampProducedBySparkIsReadCorrectly() throws IOException
}
}
+
+  @Test
+  public void testWriteReadAvroBinary() throws IOException {
+    // Avro record with a nullable array-of-bytes field and a required bytes field
+    String schema = "{" +
+        "\"type\":\"record\"," +
+        "\"name\":\"DbRecord\"," +
+        "\"namespace\":\"com.iceberg\"," +
+        "\"fields\":[" +
+        "{\"name\":\"arraybytes\", \"type\":[\"null\", {\"type\":\"array\", \"items\":\"bytes\"}], \"default\":null}," +
+        "{\"name\":\"topbytes\", \"type\":\"bytes\"}" +
+        "]" +
+        "}";
+
+    org.apache.avro.Schema.Parser parser = new org.apache.avro.Schema.Parser();
+    org.apache.avro.Schema avroSchema = parser.parse(schema);
+    AvroSchemaConverter converter = new AvroSchemaConverter();
+    MessageType parquetSchema = converter.convert(avroSchema);
+    Schema icebergSchema = ParquetSchemaUtil.convert(parquetSchema);
+
+    File testFile = temp.newFile();
+    Assert.assertTrue(testFile.delete());
+
+    // write with parquet-avro, using the old two-level list structure
+    ParquetWriter<GenericRecord> writer = AvroParquetWriter.<GenericRecord>builder(new Path(testFile.toURI()))
+        .withDataModel(GenericData.get())
+        .withSchema(avroSchema)
+        .config("parquet.avro.add-list-element-records", "true")
+        .config("parquet.avro.write-old-list-structure", "true")
+        .build();
+
+    GenericRecordBuilder recordBuilder = new GenericRecordBuilder(avroSchema);
+    List<ByteBuffer> expectedByteList = Lists.newArrayList();
+    byte[] expectedByte = {0x00, 0x01};
+    expectedByteList.add(ByteBuffer.wrap(expectedByte));
+
+    recordBuilder.set("arraybytes", expectedByteList);
+    recordBuilder.set("topbytes", ByteBuffer.wrap(expectedByte));
+    GenericData.Record record = recordBuilder.build();
+    writer.write(record);
+    writer.close();
+
+    // read back through the Spark Parquet readers
+    List<InternalRow> rows;
+    try (CloseableIterable<InternalRow> reader =
+        Parquet.read(Files.localInput(testFile))
+            .project(icebergSchema)
+            .createReaderFunc(type -> SparkParquetReaders.buildReader(icebergSchema, type))
+            .build()) {
+      rows = Lists.newArrayList(reader);
+    }
+
+    InternalRow row = rows.get(0);
Review comment:
I don't see a reason for this test to be in Spark. This isn't related to
Spark; it is testing parquet-avro files. Can you move this to the Parquet tests
instead?
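
For reference, the read side would only need to swap the Spark readers for the generic ones. A rough, untested sketch, assuming the moved test builds the same file and icebergSchema, and imports Record from org.apache.iceberg.data and GenericParquetReaders from org.apache.iceberg.data.parquet:

    // read the same file with the generic readers from iceberg-data
    // instead of SparkParquetReaders
    List<Record> records;
    try (CloseableIterable<Record> reader =
        Parquet.read(Files.localInput(testFile))
            .project(icebergSchema)
            .createReaderFunc(type -> GenericParquetReaders.buildReader(icebergSchema, type))
            .build()) {
      records = Lists.newArrayList(reader);
    }

    Record row = records.get(0);  // assert on the generic Record instead of InternalRow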
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]