sshkvar commented on a change in pull request #2854:
URL: https://github.com/apache/iceberg/pull/2854#discussion_r678873441
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/SparkCatalog.java
##########
@@ -81,13 +81,17 @@
public class SparkCatalog extends BaseCatalog {
private static final Set<String> DEFAULT_NS_KEYS =
ImmutableSet.of(TableCatalog.PROP_OWNER);
+  public static final String PURGE_DATA_AND_METADATA = "purge-data-and-metadata";
Review comment:
> I don't think `SparkCatalog` is the right place to set an Iceberg
catalog property - i.e. what about Trino and Hive compute engines? Would they
have a different behavior or not have the property at all?
I'm not sure I understand why `SparkCatalog` is not the right place; please
take a look at the current implementation of [dropTable in
SparkCatalog](https://github.com/apache/iceberg/blob/master/spark3/src/main/java/org/apache/iceberg/spark/SparkCatalog.java#L237).
As you can see, we call the [dropTable method from Catalog.java
](https://github.com/apache/iceberg/blob/master/api/src/main/java/org/apache/iceberg/catalog/Catalog.java#L284)
where we always pass `true` for the purge flag. So I just added the ability to
make that configurable via a property passed to the catalog.
Currently Trino does not drop any data or metadata; it just calls drop table
on the Hive metastore.
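To illustrate the idea being discussed, here is a minimal, self-contained sketch of how a catalog property could gate the purge flag, defaulting to `true` to preserve the current behavior. The helper name `purgeEnabled` and the standalone class are hypothetical; only the property name `purge-data-and-metadata` comes from the PR diff above.

```java
import java.util.Map;

public class PurgeFlagExample {
    // Property name proposed in this PR
    public static final String PURGE_DATA_AND_METADATA = "purge-data-and-metadata";

    // Hypothetical helper: reads the purge flag from catalog properties,
    // defaulting to true so existing behavior (always purging) is unchanged.
    static boolean purgeEnabled(Map<String, String> catalogProperties) {
        return Boolean.parseBoolean(
            catalogProperties.getOrDefault(PURGE_DATA_AND_METADATA, "true"));
    }

    public static void main(String[] args) {
        // No property set: falls back to the current always-purge behavior
        System.out.println(purgeEnabled(Map.of()));                              // true
        // Property set to false: dropTable would be called with purge = false
        System.out.println(purgeEnabled(Map.of(PURGE_DATA_AND_METADATA, "false"))); // false
    }
}
```

A real `SparkCatalog.dropTable` implementation would evaluate this once from the catalog's initialization options and pass the result as the `purge` argument to `Catalog.dropTable`.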
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]