marton-bod commented on code in PR #5707:
URL: https://github.com/apache/iceberg/pull/5707#discussion_r978526114


##########
spark/v3.2/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestAlterTablePartitionFields.java:
##########
@@ -421,6 +421,31 @@ public void testSparkTableAddDropPartitions() throws Exception {
         "spark table partition should be empty", 0, sparkTable().partitioning().length);
   }
 
+  @Test
+  public void testDropColumnOfOldPartitionFieldV1() {
+    // default table created in v1 format
+    sql(
+        "CREATE TABLE %s (id bigint NOT NULL, ts timestamp, day_of_ts date) 
USING iceberg PARTITIONED BY (day_of_ts)",
+        tableName);
+
+    sql("ALTER TABLE %s REPLACE PARTITION FIELD day_of_ts WITH days(ts)", 
tableName);
+
+    sql("ALTER TABLE %s DROP COLUMN day_of_ts", tableName);

Review Comment:
   Can we also include a SQL query to read back the data from the table after the drop? We have noticed the same issue in Trino, but for us the drop column still succeeds; it's the subsequent read operations that start failing.
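
   For illustration, a minimal sketch of the kind of read-back that could be added at the end of the new test method. The inserted values, the assertion message, and the reliance on the test base's sql() helper returning the result rows (as used elsewhere in this file) are illustrative assumptions, not part of the PR:

   ```java
   // Hypothetical follow-up to the DROP COLUMN above (illustrative only): write a
   // row and read it back, so the test fails if scans break once the old
   // partition source column has been dropped.
   sql("INSERT INTO %s VALUES (1, CAST('2022-09-01 10:00:00' AS timestamp))", tableName);

   assertEquals(
       "Should read back the inserted row after dropping day_of_ts",
       1,
       sql("SELECT id, ts FROM %s", tableName).size());
   ```

   In Trino the failure only surfaced at read time, so asserting on the SELECT result rather than only on the DDL succeeding is what would catch this case.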


