RussellSpitzer commented on code in PR #15614:
URL: https://github.com/apache/iceberg/pull/15614#discussion_r2957146525
##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/SparkCatalog.java:
##########
@@ -393,6 +421,14 @@ public boolean purgeTable(Identifier ident) {
}
}
+ private boolean dropTableWithPurging(Identifier ident) {
+ if (isPathIdentifier(ident)) {
Review Comment:
This is a bit of a tricky one: we shouldn't be able to go down this path
unless we are a Hadoop Catalog, so our property is a bit off (it has a "rest"
prefix), and we don't really have a catalog to delegate to since we are
basically always deleting. I would probably just remove this branch entirely,
have `isPathIdentifier` tables always take the "noCatalogPurge" route, and let
Spark do the cleanup.
I think the code becomes a lot clearer if you rename the functions to
`catalogDropOnly` and `catalogDropWithPurge` (or use a single combined method).
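A minimal sketch of the suggested shape, with stub types standing in for the real Spark `Identifier` and catalog internals. All names here (`SparkCatalogSketch`, `purgeRoute`, the path heuristic in `isPathIdentifier`) are illustrative assumptions, not the actual Iceberg implementation:

```java
// Hypothetical sketch of the suggested refactor: path-identifier tables
// never delegate purging to the catalog and always take the
// "noCatalogPurge" route, leaving file cleanup to Spark.
public class SparkCatalogSketch {

  // Stub standing in for org.apache.spark.sql.connector.catalog.Identifier.
  record Identifier(String[] namespace, String name) {}

  // Assumption for this sketch: a path identifier encodes a filesystem
  // location as its name.
  boolean isPathIdentifier(Identifier ident) {
    return ident.name().contains("/");
  }

  // Drop the table from the catalog only; Spark handles data cleanup.
  boolean catalogDropOnly(Identifier ident) {
    return true;
  }

  // Drop the table and ask the catalog to purge the underlying data.
  boolean catalogDropWithPurge(Identifier ident) {
    return true;
  }

  // Helper exposing which route an identifier would take.
  String purgeRoute(Identifier ident) {
    return isPathIdentifier(ident) ? "catalogDropOnly" : "catalogDropWithPurge";
  }

  public boolean purgeTable(Identifier ident) {
    // Per the review: no catalog branch for path identifiers.
    if (isPathIdentifier(ident)) {
      return catalogDropOnly(ident);
    }
    return catalogDropWithPurge(ident);
  }

  public static void main(String[] args) {
    SparkCatalogSketch catalog = new SparkCatalogSketch();
    Identifier pathTable = new Identifier(new String[] {"db"}, "/warehouse/tbl");
    Identifier namedTable = new Identifier(new String[] {"db"}, "tbl");
    System.out.println(catalog.purgeRoute(pathTable));
    System.out.println(catalog.purgeRoute(namedTable));
  }
}
```

The point of the rename is that each method's name states whether the catalog participates in the purge, so the caller's branch reads as policy rather than mechanism.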
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]