wypoon commented on a change in pull request #1508:
URL: https://github.com/apache/iceberg/pull/1508#discussion_r553767658



##########
File path: spark3/src/main/java/org/apache/iceberg/spark/source/IcebergSource.java
##########
@@ -60,10 +61,13 @@ public SparkTable getTable(StructType schema, Transform[] partitioning, Map<Stri
     // Get Iceberg table from options
     Configuration conf = SparkSession.active().sessionState().newHadoopConf();
     Table icebergTable = getTableAndResolveHadoopConfiguration(options, conf);
+    CaseInsensitiveStringMap cIOptions = new CaseInsensitiveStringMap(options);
+    Long snapshotId = Spark3Util.propertyAsLong(cIOptions, "snapshot-id", null);
+    Long asOfTimestamp = Spark3Util.propertyAsLong(cIOptions, "as-of-timestamp", null);
 
     // Build Spark table based on Iceberg table, and return it
     // Eagerly refresh the table before reading to ensure views containing this table show up-to-date data
-    return new SparkTable(icebergTable, schema, true);
+    return new SparkTable(icebergTable, schema, snapshotId, asOfTimestamp, true);

Review comment:
       Q: Is it the case that in master, after #1783, there is no path to `new SparkTable(...)` where the `requestedSchema` argument is not null?
   
   Anyway, one idea that comes to mind is to add a method to `SparkTable` that sets the snapshotId and asOfTimestamp after an instance has been created. Then, in `IcebergSource#getTable`, once we have loaded a `Table` from the catalog, we can check whether it is a `SparkTable` and, if so, set the snapshotId and asOfTimestamp on it.
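   To make the idea concrete, here is a minimal, self-contained sketch of the "set after construction" pattern being proposed. The class and method names (`SparkTableSketch`, `setTimeTravel`, `GetTableSketch.getTable`) are stand-ins, not actual Iceberg APIs; the real `SparkTable` and `IcebergSource` carry much more state.

   ```java
   // Stand-in for SparkTable: time-travel fields are mutable so they can be
   // applied after the table has been loaded from the catalog.
   class SparkTableSketch {
     private final String name;
     private Long snapshotId;      // set lazily, after construction
     private Long asOfTimestamp;   // set lazily, after construction

     SparkTableSketch(String name) {
       this.name = name;
     }

     // The proposed mutator: apply the time-travel read options in one call.
     void setTimeTravel(Long snapshotId, Long asOfTimestamp) {
       this.snapshotId = snapshotId;
       this.asOfTimestamp = asOfTimestamp;
     }

     Long snapshotId() { return snapshotId; }
     Long asOfTimestamp() { return asOfTimestamp; }
   }

   // Mirrors the proposed IcebergSource#getTable flow: after loading a table,
   // check whether it is a SparkTable and, if so, push the options onto it.
   class GetTableSketch {
     static SparkTableSketch getTable(Object loaded, Long snapshotId, Long asOfTimestamp) {
       if (loaded instanceof SparkTableSketch) {
         SparkTableSketch table = (SparkTableSketch) loaded;
         table.setTimeTravel(snapshotId, asOfTimestamp);
         return table;
       }
       return null;  // a non-SparkTable would be handled by the existing path
     }
   }
   ```

   The trade-off versus the constructor-argument approach in the diff above is that the table becomes mutable after construction, but callers that never pass time-travel options are untouched.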




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
