szehon-ho commented on code in PR #5063:
URL: https://github.com/apache/iceberg/pull/5063#discussion_r923675812


##########
spark/v3.2/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestMetadataTables.java:
##########
@@ -319,6 +320,52 @@ public void testAllFilesPartitioned() throws Exception {
     TestHelpers.assertEqualsSafe(filesTableSchema.asStruct(), expectedFiles, actualFiles);
   }
 
+  @Test
+  public void testMetadataLogMetatable() throws Exception {
+    // Create table and insert data
+    sql("CREATE TABLE %s (id bigint, data string) " +
+        "USING iceberg " +
+        "PARTITIONED BY (data)", tableName);
+
+    List<SimpleRecord> recordsA = Lists.newArrayList(
+        new SimpleRecord(1, "a"),
+        new SimpleRecord(2, "a")
+    );
+    spark.createDataset(recordsA, Encoders.bean(SimpleRecord.class))
+        .writeTo(tableName)
+        .append();
+
+    List<SimpleRecord> recordsB = Lists.newArrayList(
+        new SimpleRecord(1, "b"),
+        new SimpleRecord(2, "b")
+    );
+    spark.createDataset(recordsB, Encoders.bean(SimpleRecord.class))
+        .writeTo(tableName)
+        .append();
+
+    Table table = Spark3Util.loadIcebergTable(spark, tableName);
+    Long currentSnapshotId = table.currentSnapshot().snapshotId();
+
+    // Check metadataLog table
+    List<Object[]> metadataLogs = sql("SELECT * FROM %s.metadata_log", tableName);
+    Assert.assertEquals("metadataLog table should return 3 rows", 3, metadataLogs.size());

Review Comment:
   Would it be hard to keep a handle to all three snapshots/metadata files and build a list of expected records, so we can check that all the historical values are correct (using TestHelpers.assertEquals)?
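   For reference, a rough sketch of how that could look. Reaching the log through HasTableOperations / TableMetadata.previousFiles() and the `file`/`timestamp` column names are assumptions here, not a claim about how the final test should be written:

   // Sketch only: derive the expected metadata file locations from TableMetadata
   // and compare them against the metadata_log rows instead of only counting them.
   // Assumes org.apache.iceberg.HasTableOperations and org.apache.iceberg.TableMetadata
   // are importable from the test, and that the table exposes `file`/`timestamp` columns.
   TableMetadata metadata = ((HasTableOperations) table).operations().current();

   List<String> expectedFiles = Lists.newArrayList();
   for (TableMetadata.MetadataLogEntry entry : metadata.previousFiles()) {
     expectedFiles.add(entry.file());                    // the older metadata files
   }
   expectedFiles.add(metadata.metadataFileLocation());   // plus the current one

   List<Object[]> actualRows =
       sql("SELECT file FROM %s.metadata_log ORDER BY timestamp", tableName);

   Assert.assertEquals("metadata_log should have one row per metadata file",
       expectedFiles.size(), actualRows.size());
   for (int i = 0; i < expectedFiles.size(); i++) {
     Assert.assertEquals("metadata file should match",
         expectedFiles.get(i), actualRows.get(i)[0]);
   }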



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

