stevenzwu commented on code in PR #4673:
URL: https://github.com/apache/iceberg/pull/4673#discussion_r862282353
##########
core/src/main/java/org/apache/iceberg/SnapshotProducer.java:
##########
@@ -343,11 +343,15 @@ public void commit() {
        LOG.warn("Failed to load committed snapshot, skipping manifest clean-up");
      }
-   } catch (RuntimeException e) {
-     LOG.warn("Failed to load committed table metadata, skipping manifest clean-up", e);
+   } catch (Throwable e) {
Review Comment:
@RussellSpitzer thanks a lot for the detailed explanation. I didn't realize it
was the Spark abort flow that deleted the data files.
I am uneasy with swallowing fatal errors (like `OutOfMemoryError`), though.
Should Spark catch only `CommitFailedException` and perform the abort for that
specific exception alone? I assume Spark doesn't perform an abort for
`CommitStateUnknownException`.
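
To illustrate the concern: the sketch below (with hypothetical local stand-in exception classes, not Iceberg's actual `org.apache.iceberg.exceptions` types) shows a caller that aborts only on a definite commit failure and treats an unknown commit state separately. Because it never catches `Throwable`, fatal errors such as `OutOfMemoryError` still propagate instead of being swallowed:

```java
// Sketch only: stand-in exceptions modeling the two commit outcomes discussed.
public class CommitAbortSketch {
  static class CommitFailedException extends RuntimeException {}
  static class CommitStateUnknownException extends RuntimeException {}

  static String commitWithAbort(Runnable commit) {
    try {
      commit.run();
      return "committed";
    } catch (CommitFailedException e) {
      // Commit definitely failed, so it is safe to abort and clean up data files.
      return "aborted";
    } catch (CommitStateUnknownException e) {
      // Commit outcome is unknown: do NOT delete data files.
      return "state-unknown";
    }
    // No catch (Throwable): OutOfMemoryError and other Errors propagate.
  }

  public static void main(String[] args) {
    System.out.println(commitWithAbort(() -> {}));
    System.out.println(commitWithAbort(() -> { throw new CommitFailedException(); }));
    System.out.println(commitWithAbort(() -> { throw new CommitStateUnknownException(); }));
  }
}
```

The point of the narrow catch clauses is that each recovery action (abort vs. leave files in place) is tied to one well-defined failure mode, rather than one broad handler guessing at the cause.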
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]