ajantha-bhat commented on code in PR #5707:
URL: https://github.com/apache/iceberg/pull/5707#discussion_r979715217


##########
spark/v3.2/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestAlterTablePartitionFields.java:
##########
@@ -421,6 +421,31 @@ public void testSparkTableAddDropPartitions() throws Exception {
         "spark table partition should be empty", 0, sparkTable().partitioning().length);
   }
 
+  @Test
+  public void testDropColumnOfOldPartitionFieldV1() {
+    // default table created in v1 format
+    sql(
+        "CREATE TABLE %s (id bigint NOT NULL, ts timestamp, day_of_ts date) 
USING iceberg PARTITIONED BY (day_of_ts)",
+        tableName);
+
+    sql("ALTER TABLE %s REPLACE PARTITION FIELD day_of_ts WITH days(ts)", 
tableName);
+
+    sql("ALTER TABLE %s DROP COLUMN day_of_ts", tableName);

Review Comment:
   Hmm, interesting. This is what I assumed would happen, which is why I went with a spec change (a disruptive change).
   
   @marton-bod: I have tagged you in one of the Slack discussions where I proposed a spec change to handle this.
   If Fokko doesn't find an easy way to fix this, we can discuss my approach after his experiment.
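
   For reference, the sequence the new test exercises can be reproduced outside the test harness with plain Spark SQL. Below is a minimal sketch, not part of the PR: the catalog name `local`, namespace `db`, table name `tbl`, warehouse path, and local master are assumptions for illustration only.
   
   ```java
   import org.apache.spark.sql.SparkSession;
   
   public class DropOldPartitionColumnRepro {
     public static void main(String[] args) {
       SparkSession spark =
           SparkSession.builder()
               .appName("drop-old-partition-column")
               .master("local[*]")
               // Iceberg SQL extensions are needed for REPLACE PARTITION FIELD.
               .config(
                   "spark.sql.extensions",
                   "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
               // Hypothetical Hadoop catalog; adjust catalog name and warehouse path.
               .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
               .config("spark.sql.catalog.local.type", "hadoop")
               .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
               .getOrCreate();
   
       // Create a table partitioned by day_of_ts; v1 is the default format version.
       spark.sql(
           "CREATE TABLE local.db.tbl (id bigint NOT NULL, ts timestamp, day_of_ts date) "
               + "USING iceberg PARTITIONED BY (day_of_ts)");
   
       // Evolve the partition spec. In v1 the retired field is kept as a void
       // transform rather than removed, so day_of_ts stays referenced by the spec.
       spark.sql("ALTER TABLE local.db.tbl REPLACE PARTITION FIELD day_of_ts WITH days(ts)");
   
       // Dropping the column that the retired partition field still references is
       // the behavior under discussion in this thread.
       spark.sql("ALTER TABLE local.db.tbl DROP COLUMN day_of_ts");
   
       spark.stop();
     }
   }
   ```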



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.


