SinghAsDev commented on a change in pull request #3774:
URL: https://github.com/apache/iceberg/pull/3774#discussion_r792110439



##########
File path: 
spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/actions/TestCreateActions.java
##########
@@ -610,7 +623,71 @@ public void testStructOfThreeLevelLists() throws Exception {
     structOfThreeLevelLists(false);
   }
 
-  public void threeLevelList(boolean useLegacyMode) throws Exception {
+  @Test
+  public void testTwoLevelList() throws IOException {
+    spark.conf().set("spark.sql.parquet.writeLegacyFormat", true);
+
+    String tableName = sourceName("testTwoLevelList");
+    File location = temp.newFolder();
+
+    StructType sparkSchema =
+        new StructType(
+            new StructField[] {
+                new StructField(
+                    "col1",
+                    new ArrayType(
+                        new StructType(
+                            new StructField[] {
+                                new StructField(
+                                    "col2",
+                                    DataTypes.IntegerType,
+                                    false,
+                                    Metadata.empty())
+                            }),
+                        false),
+                    true,
+                    Metadata.empty())
+            });
+    String expectedParquetSchema =
+        "message spark_schema {\n" +
+            "  optional group col1 (LIST) {\n" +
+            "    repeated group array {\n" +
+            "      required int32 col2;\n" +
+            "    }\n" +
+            "  }\n" +
+            "}\n";
+
+    // generate parquet file with required schema
+    List<String> testData = Collections.singletonList("{\"col1\": [{\"col2\": 1}]}");
+    spark.read().schema(sparkSchema)
+        .json(JavaSparkContext.fromSparkContext(spark.sparkContext()).parallelize(testData))
+        .coalesce(1).write().format("parquet").mode(SaveMode.Append).save(location.getPath());
+
+    File parquetFile = Arrays.stream(Objects.requireNonNull(
+            location.listFiles((dir, name) -> name.endsWith("parquet"))))
+        .findAny().get();
+
+    // verify generated parquet file has expected schema
+    ParquetFileReader pqReader = ParquetFileReader.open(
+        HadoopInputFile.fromPath(new Path(parquetFile.getPath()), new Configuration()));
+    MessageType schema = pqReader.getFooter().getFileMetaData().getSchema();
+    Assert.assertEquals(expectedParquetSchema, schema.toString());

Review comment:
       Got it. The string comparison in tests really does help with understanding the differences, though. Updated to compare `MessageType` instances instead.
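       For context, a minimal sketch of what the `MessageType`-based comparison can look like, assuming `parquet-column` and JUnit 4 on the classpath. The class name is illustrative, and `actual` is re-parsed from a string here only to stand in for the schema the test would read from the file footer:

```java
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;
import org.junit.Assert;

public class TwoLevelListSchemaCheck {
  public static void main(String[] args) {
    // Parse the expected schema string into a MessageType so the assertion
    // compares structured schemas rather than raw strings.
    MessageType expected = MessageTypeParser.parseMessageType(
        "message spark_schema {\n"
            + "  optional group col1 (LIST) {\n"
            + "    repeated group array {\n"
            + "      required int32 col2;\n"
            + "    }\n"
            + "  }\n"
            + "}\n");

    // In the real test, `actual` would come from
    // pqReader.getFooter().getFileMetaData().getSchema();
    // here we re-parse the same string to illustrate the equality check.
    MessageType actual = MessageTypeParser.parseMessageType(expected.toString());

    // MessageType inherits a structural equals(), so this compares field
    // names, repetition, and types rather than string formatting.
    Assert.assertEquals(expected, actual);
  }
}
```

       One trade-off of this approach is that a failed `assertEquals` on `MessageType` is less readable than a string diff; `MessageType.toString()` can still be logged on failure to recover that readability.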




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


