mridulm commented on PR #38467: URL: https://github.com/apache/spark/pull/38467#issuecomment-1321508245
Agree with @Ngone51, there are two issues here:

a) When we have locked for read/write, we expect the lock to be released and exceptions to be handled gracefully. In this case, `removeBlockInternal` should ensure the lock is released. Instead of catching `Exception`, I would suggest moving `removeBlock` into a `finally` block, and everything above it in `removeBlockInternal` into the corresponding `try` block. A quick look indicated that the other uses of `lockForWriting` should be fine, but that is perhaps something we should audit in the future @Ngone51!

b) Ensure we do not recreate a directory when exiting (this is not limited to `removeBlockInternal` in this PR).

In addition to (a), I do believe we should do what is in this PR @Ngone51. Thoughts?
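The restructuring suggested in (a) can be sketched as below. This is a minimal, self-contained stand-in, not Spark's actual `BlockManager` code: the lock bookkeeping, method signatures, and the `failDiskRemoval` flag are simplified assumptions used only to show why `removeBlock` belongs in `finally`.

```scala
import scala.collection.mutable

object RemoveBlockSketch {
  // Hypothetical stand-in for BlockInfoManager's write-lock tracking.
  private val writeLocks = mutable.Set[String]()

  def lockForWriting(blockId: String): Unit = writeLocks += blockId

  // Releases the write lock and drops tracking state for the block.
  def removeBlock(blockId: String): Unit = writeLocks -= blockId

  def removeBlockInternal(blockId: String, failDiskRemoval: Boolean): Unit = {
    lockForWriting(blockId)
    try {
      // ... remove the block from the memory store, disk store, etc.
      // Any of these steps may throw; `failDiskRemoval` simulates that here.
      if (failDiskRemoval) throw new java.io.IOException("disk removal failed")
    } finally {
      // Runs even when the body above throws, so the lock is never leaked.
      removeBlock(blockId)
    }
  }

  def isLocked(blockId: String): Boolean = writeLocks.contains(blockId)
}
```

With this shape, a failure during the removal steps still propagates to the caller, but the lock is released on the way out, which is the graceful-unlock behavior described above.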
