[
https://issues.apache.org/jira/browse/SPARK-23308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354784#comment-16354784
]
Steve Loughran edited comment on SPARK-23308 at 2/7/18 12:26 AM:
-----------------------------------------------------------------
bq. I have not heard this come up before as an issue in another implementation.
S3A's input stream handles any IOE other than EOF by incrementing its metrics,
closing the stream, and retrying once; that generally recovers from the error.
If it doesn't, you are into the unrecoverable-network-problems kind of problem,
except for the special case of "you are recycling the pool of HTTP connections
and should abort that TCP connection before trying anything else". I think
there are opportunities to improve S3A there by aborting the connection before
retrying.
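For illustration, the recover-by-reopen pattern looks roughly like the sketch
below. This is not the actual S3AInputStream code; {{openAt}} is a hypothetical
factory that re-opens the object at a given byte offset.
{code}
import java.io.{EOFException, IOException, InputStream}

// Sketch of the recover-by-reopen pattern described above; not the actual
// S3AInputStream code. `openAt` is a hypothetical factory that re-opens
// the object at a given byte offset.
class RetryOnceStream(openAt: Long => InputStream) {
  private var pos: Long = 0L
  private var in: InputStream = openAt(pos)

  def read(): Int =
    try advance(in.read())
    catch {
      case _: EOFException => -1   // EOF is the end of the data, not a failure
      case _: IOException =>
        // Recovery path: drop the (possibly broken) connection outright,
        // re-open at the current offset, and retry exactly once.
        try in.close() catch { case _: IOException => () }
        in = openAt(pos)
        advance(in.read())         // a second failure propagates to the caller
    }

  private def advance(b: Int): Int = { if (b >= 0) pos += 1; b }
}
{code}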
I don't think Spark is in a position to be clever about retries: what is
retryable vs. not is too low-level a question for it to answer. It would need a
policy for every possible exception from every known FS client, splitting them
into "we can recover" vs. "no, fail fast".
Trying to come up with a good policy is (a) something the FS clients should be
doing and (b) really hard to get right in the absence of frequent failures; it
usually evolves through bug reports. For example,
[S3ARetryPolicy|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java#L87]
is very much a WIP (HADOOP-14531).
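The shape of that policy: Hadoop's {{RetryPolicies.retryByException}} takes a
default policy plus a per-exception override map, and the hard,
bug-report-driven work is deciding what goes into the map. A sketch with
illustrative entries, not the real S3A table:
{code}
import java.util.concurrent.TimeUnit
import org.apache.hadoop.io.retry.{RetryPolicies, RetryPolicy}

// Sketch of an S3ARetryPolicy-shaped policy; the entries are illustrative.
val retrying: RetryPolicy =
  RetryPolicies.retryUpToMaximumCountWithFixedSleep(3, 500, TimeUnit.MILLISECONDS)

val overrides = new java.util.HashMap[Class[_ <: Exception], RetryPolicy]()
overrides.put(classOf[java.net.SocketTimeoutException], retrying)
overrides.put(classOf[java.io.FileNotFoundException],
  RetryPolicies.TRY_ONCE_THEN_FAIL)  // missing objects don't come back on retry

val policy: RetryPolicy = RetryPolicies.retryByException(retrying, overrides)
{code}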
Marcio: I'm surprised you are getting so many socket timeouts. If this is
happening in EC2 it's *potentially* throttling related; overloaded connection
pools raise ConnectionPoolTimeoutException, apparently.
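If the pool really is the bottleneck, its size is at least tunable via
{{fs.s3a.connection.maximum}}; a sketch, with an illustrative value:
{code}
import org.apache.hadoop.conf.Configuration

// fs.s3a.connection.maximum bounds the S3A HTTP connection pool;
// the value below is illustrative, not a recommendation.
val conf = new Configuration()
conf.setInt("fs.s3a.connection.maximum", 96)
{code}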
> ignoreCorruptFiles should not ignore retryable IOException
> ----------------------------------------------------------
>
> Key: SPARK-23308
> URL: https://issues.apache.org/jira/browse/SPARK-23308
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.2.1
> Reporter: Márcio Furlani Carmona
> Priority: Minor
>
> When `spark.sql.files.ignoreCorruptFiles` is set, Spark ignores every kind
> of RuntimeException or IOException, but some IOExceptions can occur even
> when the file is not corrupted.
> One example is SocketTimeoutException, which can be retried, possibly
> fetching the data successfully; it does not mean the data is corrupted.
>
> See:
> https://github.com/apache/spark/blob/e30e2698a2193f0bbdcd4edb884710819ab6397c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala#L163
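For reference, a paraphrase of the swallowing behaviour the report describes;
this is a sketch, not the actual FileScanRDD source:
{code}
import java.io.IOException

// Sketch (not the actual FileScanRDD source): with ignoreCorruptFiles set,
// any IOException while reading, including a retryable SocketTimeoutException,
// silently ends the file's iterator.
def readFile(rows: Iterator[String], ignoreCorruptFiles: Boolean): Iterator[String] =
  if (!ignoreCorruptFiles) rows
  else new Iterator[String] {
    def hasNext: Boolean =
      try rows.hasNext
      catch { case _: IOException => false }  // file treated as exhausted
    def next(): String = rows.next()
  }
{code}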