squito commented on issue #23614: [SPARK-26689][CORE] Support blacklisting bad disk directory and retry in DiskBlockManager
URL: https://github.com/apache/spark/pull/23614#issuecomment-458654349

Oh I see, this is handling a bad disk on the driver, not the executors. A bad disk on the executors should be handled by blacklisting. In general, Spark's fault tolerance for the driver is extremely poor; it's a single point of failure for lots of reasons. Still, this may be a small fix that improves things somewhat, without any real guarantees.

There are a lot of cases here to think through carefully. E.g., as noted in the comments, this has to be kept in sync with ExternalShuffleBlockResolver#getFile. Even if you put the same logic in both places, it's still possible they'll end up with different views of which directories are actually bad. You probably also want to update getAllFiles() to respect the badDirs. And you need to think about what happens when a dir with lots of data in it is suddenly marked bad: all the other internal state should be updated reasonably (maybe not immediately, but in a way that makes sense when it does get updated).

I'm just thinking aloud about this change ... still on the fence about whether it's good or not.
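To make the sync concern concrete, here is a minimal, self-contained sketch (not this PR's actual code) of the hash-based directory selection that both `DiskBlockManager.getFile` and `ExternalShuffleBlockResolver#getFile` rely on, plus a hypothetical `badDirs` filter. The names `resolveFile`, `resolveFileSkippingBadDirs`, and `badDirs` are illustrative assumptions, not real Spark APIs. The point is that if the executor filters bad dirs before hashing but the external shuffle service hashes over the original dir list (or has a different view of which dirs are bad), the two sides resolve different paths for the same block:

```scala
// Simplified sketch of block-to-directory placement, assuming the usual
// "hash the filename, pick a local dir and a sub dir" scheme.
object DirSelectionSketch {

  // Roughly the shape of a non-negative hash helper.
  def nonNegativeHash(s: String): Int = {
    val h = s.hashCode
    if (h == Int.MinValue) 0 else math.abs(h)
  }

  // filename -> "<localDir>/<subDir>/<filename>" via the hash.
  def resolveFile(
      filename: String,
      localDirs: Array[String],
      subDirsPerLocalDir: Int): String = {
    val hash = nonNegativeHash(filename)
    val dirId = hash % localDirs.length
    val subDirId = (hash / localDirs.length) % subDirsPerLocalDir
    s"${localDirs(dirId)}/${"%02x".format(subDirId)}/$filename"
  }

  // Hypothetical blacklisting variant: drop dirs marked bad before hashing.
  // If only one of the two resolution sites does this, or they disagree on
  // which dirs are bad, they compute different paths for the same block.
  def resolveFileSkippingBadDirs(
      filename: String,
      localDirs: Array[String],
      badDirs: Set[String],
      subDirsPerLocalDir: Int): String = {
    val goodDirs = localDirs.filterNot(badDirs.contains)
    resolveFile(filename, goodDirs, subDirsPerLocalDir)
  }

  def main(args: Array[String]): Unit = {
    val dirs = Array("/data1/spark", "/data2/spark", "/data3/spark")
    val block = "shuffle_0_12_0.data"
    // Same block, same dirs: consistent path.
    println(resolveFile(block, dirs, 64))
    // Once /data2 is blacklisted on one side only, the paths can diverge.
    println(resolveFileSkippingBadDirs(block, dirs, Set("/data2/spark"), 64))
  }
}
```

The same divergence shows up with getAllFiles(): a listing that walks all configured dirs will include files under a dir that the placement logic now considers bad, so any caller that cross-checks the two views has to handle the mismatch.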
