szehon-ho commented on a change in pull request #2925:
URL: https://github.com/apache/iceberg/pull/2925#discussion_r809397910



##########
File path: core/src/main/java/org/apache/iceberg/util/PartitionSet.java
##########
@@ -183,6 +184,26 @@ public boolean removeAll(Collection<?> objects) {
     return changed;
   }
 
+  @Override
+  public String toString() {

Review comment:
      This should show up in the expected error messages in the new TestReplacePartition tests.
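
      For reference, a minimal standalone sketch of what a readable toString() along these lines could look like; the internal map, the "spec=" prefix, and the class name below are assumptions for illustration, not PartitionSet's actual implementation.

      import java.util.LinkedHashMap;
      import java.util.Map;
      import java.util.Set;
      import java.util.StringJoiner;
      import java.util.TreeSet;

      // Hypothetical stand-in for PartitionSet, only to show a toString() whose
      // output tests could match against in expected error messages.
      public class PartitionSetToStringSketch {
        // assumed shape: spec id -> human-readable partition tuples in that spec
        private final Map<Integer, Set<String>> partitionsBySpec = new LinkedHashMap<>();

        public void add(int specId, String partition) {
          partitionsBySpec.computeIfAbsent(specId, id -> new TreeSet<>()).add(partition);
        }

        @Override
        public String toString() {
          StringJoiner joiner = new StringJoiner(", ", "[", "]");
          partitionsBySpec.forEach((specId, partitions) ->
              partitions.forEach(partition -> joiner.add("spec=" + specId + "/" + partition)));
          return joiner.toString();
        }

        public static void main(String[] args) {
          PartitionSetToStringSketch set = new PartitionSetToStringSketch();
          set.add(0, "data_bucket=0");
          set.add(0, "data_bucket=1");
          System.out.println(set);  // [spec=0/data_bucket=0, spec=0/data_bucket=1]
        }
      }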

##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkWriteOptions.java
##########
@@ -68,4 +68,10 @@ private SparkWriteOptions() {
   // Controls whether to take into account the table distribution and sort order during a write operation
   public static final String USE_TABLE_DISTRIBUTION_AND_ORDERING = "use-table-distribution-and-ordering";
   public static final boolean USE_TABLE_DISTRIBUTION_AND_ORDERING_DEFAULT = true;
+
+  // Identifies snapshot from which to start validating conflicting changes
+  public static final String VALIDATE_FROM_SNAPSHOT_ID = "validate-from-snapshot";

Review comment:
       Done
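
      For context, a hedged usage sketch of how a caller might set this per-write option on a Spark 3.2 write; the table names and snapshot id below are placeholders, and only the option key comes from the diff above.

      import org.apache.spark.sql.Dataset;
      import org.apache.spark.sql.Row;
      import org.apache.spark.sql.SparkSession;

      // Illustrative only: passes the new write option through DataFrameWriterV2.
      public class ValidateFromSnapshotUsageSketch {
        public static void main(String[] args) throws Exception {
          SparkSession spark = SparkSession.builder().appName("example").getOrCreate();
          Dataset<Row> df = spark.table("db.source");  // placeholder source table

          df.writeTo("db.target")                               // placeholder Iceberg table
              .option("validate-from-snapshot", "1234567890")   // placeholder snapshot id
              .overwritePartitions();                           // dynamic partition overwrite
        }
      }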

##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkWriteOptions.java
##########
@@ -68,4 +68,10 @@ private SparkWriteOptions() {
   // Controls whether to take into account the table distribution and sort order during a write operation
   public static final String USE_TABLE_DISTRIBUTION_AND_ORDERING = "use-table-distribution-and-ordering";
   public static final boolean USE_TABLE_DISTRIBUTION_AND_ORDERING_DEFAULT = true;
+
+  // Identifies snapshot from which to start validating conflicting changes
+  public static final String VALIDATE_FROM_SNAPSHOT_ID = "validate-from-snapshot";
+
+  public static final String DYNAMIC_OVERWRITE_ISOLATION_LEVEL = "write.dynamic.overwrite.isolation-level";

Review comment:
       Good point, updated
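
      On the isolation-level option itself: whatever the key is renamed to in the update, the string value would presumably be mapped onto an isolation level before the dynamic overwrite validates conflicts. A standalone sketch of that mapping, using a local enum as a stand-in rather than Iceberg's own IsolationLevel type:

      import java.util.Locale;

      // Standalone illustration of parsing an isolation-level write option value.
      // The enum is a stand-in; it is not Iceberg's IsolationLevel.
      public class IsolationLevelParseSketch {
        enum Level { SERIALIZABLE, SNAPSHOT }

        static Level parse(String value, Level defaultLevel) {
          if (value == null) {
            return defaultLevel;  // option not set, fall back to the default behavior
          }
          // accept case-insensitive values such as "serializable" or "snapshot"
          return Level.valueOf(value.trim().toUpperCase(Locale.ROOT));
        }

        public static void main(String[] args) {
          System.out.println(parse("serializable", Level.SNAPSHOT));  // SERIALIZABLE
          System.out.println(parse(null, Level.SNAPSHOT));            // SNAPSHOT
        }
      }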

##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkWriteConf.java
##########
@@ -236,4 +237,19 @@ public boolean useTableDistributionAndOrdering() {
         .defaultValue(SparkWriteOptions.USE_TABLE_DISTRIBUTION_AND_ORDERING_DEFAULT)
         .parse();
   }
+
+  public long validateFromSnapshotId() {
+    return confParser.longConf()
+        .option(SparkWriteOptions.VALIDATE_FROM_SNAPSHOT_ID)
+        .defaultValue(0)

Review comment:
       Done
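
      Since 0 is not a meaningful snapshot id, the follow-up presumably drops the hard-coded default in favor of treating the option as absent. A self-contained sketch of that shape, reading from a plain options map rather than Iceberg's actual conf parser:

      import java.util.Map;

      // Illustrates the review point: return null (rather than a magic 0) when the
      // caller did not set the option. Only the option key matches the diff above.
      public class ValidateFromSnapshotConfSketch {
        static final String VALIDATE_FROM_SNAPSHOT_ID = "validate-from-snapshot";

        static Long validateFromSnapshotId(Map<String, String> writeOptions) {
          String value = writeOptions.get(VALIDATE_FROM_SNAPSHOT_ID);
          return value != null ? Long.parseLong(value) : null;  // null means "not set"
        }

        public static void main(String[] args) {
          System.out.println(validateFromSnapshotId(Map.of(VALIDATE_FROM_SNAPSHOT_ID, "42")));  // 42
          System.out.println(validateFromSnapshotId(Map.of()));                                 // null
        }
      }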




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
