RussellSpitzer commented on code in PR #7228:
URL: https://github.com/apache/iceberg/pull/7228#discussion_r1151145734


##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/SparkSessionCatalog.java:
##########
@@ -136,7 +136,7 @@ public Identifier[] listTables(String[] namespace) throws NoSuchNamespaceExcepti
   public Table loadTable(Identifier ident) throws NoSuchTableException {
     try {
       return icebergCatalog.loadTable(ident);
-    } catch (NoSuchTableException e) {
+    } catch (NoSuchTableException | org.apache.iceberg.exceptions.NotFoundException e) {

Review Comment:
   Why do we only do this in SparkSessionCatalog? Shouldn't the same issue exist with SparkCatalog? I also wonder if we should have a session parameter or something to enable this behavior. Basically, in this case we are saying that drop table should drop any table that has a catalog entry, even if we can't determine that it is an Iceberg table.
   
   I think we can probably make this a little safer by checking whether the table has the metadata_location property set in the catalog. I'm open to other suggestions as well.
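   The safety check suggested above could be sketched roughly as follows. This is a minimal, hypothetical illustration: it assumes the catalog entry's properties are available as a plain `Map<String, String>` and uses a made-up helper name; the real SparkSessionCatalog / Hive Metastore plumbing differs.
   
   ```java
   import java.util.Map;
   
   public class MetadataLocationCheck {
     // Hypothetical guard: only treat a catalog entry as an Iceberg table
     // when its properties carry a metadata_location pointer. Entries
     // without it would be left alone rather than blindly dropped.
     static boolean looksLikeIcebergTable(Map<String, String> tableProperties) {
       String location = tableProperties.get("metadata_location");
       return location != null && !location.isEmpty();
     }
   
     public static void main(String[] args) {
       // An entry that points at Iceberg metadata passes the check ...
       System.out.println(looksLikeIcebergTable(
           Map.of("metadata_location", "s3://bucket/db/tbl/metadata/v3.metadata.json")));
       // ... while an unrelated (e.g. plain Parquet) entry does not.
       System.out.println(looksLikeIcebergTable(Map.of("provider", "parquet")));
     }
   }
   ```
   
   The point of gating on the property rather than on the exception type alone is that a NotFoundException only tells us the metadata file is unreachable, not that the entry was ever an Iceberg table.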



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

