amogh-jahagirdar commented on code in PR #13555:
URL: https://github.com/apache/iceberg/pull/13555#discussion_r2212548250
##########
spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/SparkCachedTableCatalog.java:
##########
@@ -63,27 +64,26 @@ public Identifier[] listTables(String[] namespace) {
   @Override
   public SparkTable loadTable(Identifier ident) throws NoSuchTableException {
-    Pair<Table, Long> table = load(ident);
-    return new SparkTable(table.first(), table.second(), false /* refresh eagerly */);
+    return load(ident);
   }

   @Override
   public SparkTable loadTable(Identifier ident, String version) throws NoSuchTableException {
-    Pair<Table, Long> table = load(ident);
+    SparkTable table = load(ident);
     Preconditions.checkArgument(
-        table.second() == null, "Cannot time travel based on both table identifier and AS OF");
-    return new SparkTable(table.first(), Long.parseLong(version), false /* refresh eagerly */);
+        table.snapshotId() == null, "Cannot time travel based on both table identifier and AS OF");
+    return table.copyWithSnapshotId(Long.parseLong(version));
   }

   @Override
   public SparkTable loadTable(Identifier ident, long timestampMicros) throws NoSuchTableException {
-    Pair<Table, Long> table = load(ident);
+    SparkTable table = load(ident);
     Preconditions.checkArgument(
-        table.second() == null, "Cannot time travel based on both table identifier and AS OF");
+        table.snapshotId() == null, "Cannot time travel based on both table identifier and AS OF");
     // Spark passes microseconds but Iceberg uses milliseconds for snapshots
     long timestampMillis = TimeUnit.MICROSECONDS.toMillis(timestampMicros);
-    long snapshotId = SnapshotUtil.snapshotIdAsOfTime(table.first(), timestampMillis);
-    return new SparkTable(table.first(), snapshotId, false /* refresh eagerly */);
+    long snapshotId = SnapshotUtil.snapshotIdAsOfTime(table.table(), timestampMillis);
+    return table.copyWithSnapshotId(snapshotId);
Review Comment:
I refactored this so that `load(ident)` returns a `SparkTable` instead of a `Pair<Table, Long>`.
Since this approach adds a prefix for when it's a rewrite, a `Pair<Table, Long>` isn't expressive enough, since it won't ever be a time travel. We need to preserve the validations, so we validate against the first loaded `SparkTable`, and if that passes we copy it with the time travel snapshot set correctly.
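The validate-then-copy pattern described above can be sketched roughly as follows. Note that `CachedTable`, `loadAtVersion`, and the field names here are hypothetical stand-ins for illustration only, not the actual Iceberg `SparkTable` API:

```java
// Minimal sketch of the copy-with-snapshot refactor: the loaded table
// carries an optional snapshot id, time travel is rejected if one is
// already set, and pinning produces a new immutable copy.
final class CachedTable {
    private final String name;
    private final Long snapshotId; // null means "no time travel requested"

    CachedTable(String name, Long snapshotId) {
        this.name = name;
        this.snapshotId = snapshotId;
    }

    Long snapshotId() {
        return snapshotId;
    }

    // Returns a new instance pinned to the given snapshot; the original
    // cached instance is left unchanged.
    CachedTable copyWithSnapshotId(long newSnapshotId) {
        return new CachedTable(name, newSnapshotId);
    }

    // Mirrors the Preconditions check in loadTable(ident, version):
    // refuse AS OF time travel when the identifier already pinned a snapshot.
    CachedTable loadAtVersion(String version) {
        if (snapshotId != null) {
            throw new IllegalArgumentException(
                "Cannot time travel based on both table identifier and AS OF");
        }
        return copyWithSnapshotId(Long.parseLong(version));
    }
}
```

A call like `new CachedTable("db.tbl", null).loadAtVersion("42")` yields a copy pinned to snapshot 42, while a table that already carries a snapshot fails the check, matching the validation order in the diff.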
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]