szehon-ho commented on code in PR #6980:
URL: https://github.com/apache/iceberg/pull/6980#discussion_r1122500658
##########
spark/v3.3/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestMetadataTables.java:
##########
@@ -488,6 +489,54 @@ public void testMetadataLogEntries() throws Exception {
metadataLogWithProjection);
}
+  @Test
+  public void testFilesVersionAsOf() throws Exception {
+    // Create table and insert data
+    sql(
+        "CREATE TABLE %s (id bigint, data string) "
+            + "USING iceberg "
+            + "PARTITIONED BY (data) "
+            + "TBLPROPERTIES"
+            + "('format-version'='2', 'write.delete.mode'='merge-on-read')",
+        tableName);
+
+    List<SimpleRecord> recordsA =
+        Lists.newArrayList(new SimpleRecord(1, "a"), new SimpleRecord(2, "a"));
+    spark
+        .createDataset(recordsA, Encoders.bean(SimpleRecord.class))
+        .coalesce(1)
+        .writeTo(tableName)
+        .append();
+
+    Table table = Spark3Util.loadIcebergTable(spark, tableName);
+    Long olderSnapshotId = table.currentSnapshot().snapshotId();
+
+    sql("ALTER TABLE %s ADD COLUMNS (data2 string)", tableName);
+
+    List<SimpleExtraColumnRecord> recordsB =
+        Lists.newArrayList(
+            new SimpleExtraColumnRecord(1, "b", "c"), new SimpleExtraColumnRecord(2, "b", "c"));
+    spark
+        .createDataset(recordsB, Encoders.bean(SimpleExtraColumnRecord.class))
+        .coalesce(1)
+        .writeTo(tableName)
+        .append();
+
+    List<Object[]> res1 = sql("SELECT * from %s.files VERSION AS OF %s", tableName, olderSnapshotId);
+
+    Dataset<Row> ds =
+        spark.read().format("iceberg").option("snapshot-id", olderSnapshotId).load(tableName + ".files");
+    List<Row> res2 = ds.collectAsList();
+
+    Long currentSnapshotId = table.currentSnapshot().snapshotId();
+
+    List<Object[]> res3 = sql("SELECT * from %s.files VERSION AS OF %s", tableName, currentSnapshotId);
+
+    Dataset<Row> ds2 =
+        spark.read().format("iceberg").option("snapshot-id", currentSnapshotId).load(tableName + ".files");
Review Comment:
Or, on the topic of using TestIcebergSourceTablesBase::testSnapshotReadAfterAddAndDropColumn's mechanism, maybe an easier way is to use RowFactory to construct the expected results?
For example, select a few columns you want to check, like:
```
Dataset<Row> actual =
    spark.sql(
        String.format(
            "SELECT file_path, file_format, record_count FROM %s.files VERSION AS OF %s",
            tableName, currentSnapshotId));
```
and construct the expected values with RowFactory, using data you get from:
```
TestHelpers.dataFiles(table).stream()
    .map(df -> RowFactory.create(df.path(), df.format(), df.recordCount()))
    .collect(Collectors.toList())
```
Up to you, if that is easier.
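If it helps, here is a rough, untested sketch of how those two pieces could be put together for the older snapshot. It reuses the spark / tableName / table / olderSnapshotId fields from the test above, assumes TestHelpers.dataFiles(table) returns the data files of the snapshot being checked, converts path/format with toString() so they line up with the files table's string columns, and row ordering may still need to be normalized before comparing:
```
// Expected rows built straight from the table's data files (assumes TestHelpers.dataFiles exists as used above)
List<Row> expected =
    TestHelpers.dataFiles(table).stream()
        .map(df -> RowFactory.create(df.path().toString(), df.format().toString(), df.recordCount()))
        .collect(Collectors.toList());

// Actual rows from the files metadata table as of the older snapshot
Dataset<Row> actual =
    spark.sql(
        String.format(
            "SELECT file_path, file_format, record_count FROM %s.files VERSION AS OF %s",
            tableName, olderSnapshotId));

Assert.assertEquals("files table should match the snapshot's data files", expected, actual.collectAsList());
```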