amogh-jahagirdar commented on code in PR #6651:
URL: https://github.com/apache/iceberg/pull/6651#discussion_r1115135509
##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##########
@@ -250,7 +251,9 @@ public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
   @Override
   public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
     Preconditions.checkArgument(
-        snapshotId == null, "Cannot write to table at a specific snapshot: %s", snapshotId);
+        snapshotId == null || info.options().get(SparkWriteOptions.BRANCH) != null,
Review Comment:
@rdblue @aokolnychyi @namrathamyske This goes back to the thread
[above](https://github.com/apache/iceberg/pull/6651/files#r1105162329). After
merging #6717, the SparkDataWrite tests fail at this validation because we now
set a snapshot ID to determine the schema. My solution here is to check
`info.options().get(SparkWriteOptions.BRANCH) != null` on the
`LogicalWriteInfo`, since the presence of a branch option indicates that this
is a branch write, where a snapshot ID is expected to be set, rather than a
time travel operation.
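
To make the intended semantics concrete, here is a minimal self-contained
sketch of the validation. The helper class, the plain `String` option map,
and the literal `"branch"` key are illustrative assumptions, not the actual
`SparkWriteOptions` code:

```java
import java.util.Map;

// Illustrative sketch only: mirrors the precondition in the diff above,
// but uses a standalone helper instead of SparkTable internals.
public class BranchWriteValidation {

  // Stands in for SparkWriteOptions.BRANCH; the real constant's value is
  // an assumption here.
  static final String BRANCH = "branch";

  /**
   * A write is allowed when no snapshot is pinned (a normal write), or when
   * a branch option is present, meaning the snapshot ID was set only to
   * resolve the branch's schema, not to time travel.
   */
  static void validateWritable(Long snapshotId, Map<String, String> options) {
    if (snapshotId != null && options.get(BRANCH) == null) {
      throw new IllegalArgumentException(
          "Cannot write to table at a specific snapshot: " + snapshotId);
    }
  }
}
```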