yaooqinn commented on code in PR #38024:
URL: https://github.com/apache/spark/pull/38024#discussion_r983005641
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/FilePartitionReader.scala:
##########
@@ -36,8 +36,15 @@ class FilePartitionReader[T](
private def ignoreMissingFiles = options.ignoreMissingFiles
private def ignoreCorruptFiles = options.ignoreCorruptFiles
+  private def ignoreCorruptFilesAfterRetries =
+    options.ignoreCorruptFilesAfterRetries
override def next(): Boolean = {
+
+ def shouldSkipCorruptFiles(): Boolean = {
Review Comment:
In the PR description, I have shown a case where the same query reads the same
copy of data at the same time: one read hits a 'corrupt' file while the other
succeeds. This means that some errors surfaced as IOException are not genuine
data/file corruption. Ignoring them is, IMHO, a correctness issue. And the
config `ignoreCorruptFiles` says that it ignores corrupt files, not arbitrary
errors. For the case where the data really does contain plenty of corrupt files,
i.e. the error is unrecoverable, I add a RETRIES conf, separate from
`spark.task.maxFailures`, to limit the number of retries.
BTW, I have checked the ORC read path; it seems to wrap all errors in IOException.
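
The intent above can be sketched roughly as follows. This is a minimal,
hypothetical illustration, not the actual Spark implementation: the names
`attemptNumber` and `maxRetries` are placeholders standing in for however the
reader learns its task attempt count and the proposed RETRIES conf value. The
point is that with the after-retries variant, an IOException only leads to
skipping the file once earlier attempts (which would catch transient errors)
have been exhausted:

```scala
// Hypothetical sketch of the skip decision discussed in this review.
// attemptNumber / maxRetries are illustrative parameters, not Spark APIs.
object CorruptFileSkipPolicy {
  def shouldSkipCorruptFiles(
      ignoreCorruptFiles: Boolean,
      ignoreCorruptFilesAfterRetries: Boolean,
      attemptNumber: Int,
      maxRetries: Int): Boolean = {
    // Plain ignoreCorruptFiles: skip immediately on any IOException.
    // After-retries variant: only skip once the retry budget is used up,
    // so a transient IO error still gets retried instead of being
    // silently treated as corruption (a correctness hazard).
    ignoreCorruptFiles ||
      (ignoreCorruptFilesAfterRetries && attemptNumber >= maxRetries)
  }
}
```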
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
For additional commands, e-mail: [email protected]