[ 
https://issues.apache.org/jira/browse/SPARK-4195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193827#comment-14193827
 ] 

Apache Spark commented on SPARK-4195:
-------------------------------------

User 'lianhuiwang' has created a pull request for this issue:
https://github.com/apache/spark/pull/3061

> retry to fetch blocks's result when fetchfailed's reason is connection timeout
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-4195
>                 URL: https://issues.apache.org/jira/browse/SPARK-4195
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Lianhui Wang
>
> When an application has many executors (for example, 1000), connection 
> timeouts often occur. The exception is:
> WARN nio.SendingConnection: Error finishing connection 
> java.net.ConnectException: Connection timed out
>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>         at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
>         at 
> org.apache.spark.network.nio.SendingConnection.finishConnect(Connection.scala:342)
>         at 
> org.apache.spark.network.nio.ConnectionManager$$anon$11.run(ConnectionManager.scala:273)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> That makes the driver treat these executors as lost, even though they are 
> in fact still alive. Adding a retry mechanism reduces the probability of 
> this problem occurring.
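The idea behind the proposal can be sketched as a bounded retry around the fetch call, retrying only when the failure is a connection timeout. This is a hypothetical illustration (the helper name and structure are assumptions, not the actual code in the linked PR):

```scala
import scala.util.{Try, Success, Failure}

object FetchRetry {
  // Hypothetical helper: retry a block fetch a bounded number of times when
  // the failure is a connection timeout, instead of immediately reporting a
  // FetchFailed that would make the driver mark the executor as lost.
  def fetchWithRetry[T](maxRetries: Int)(fetch: => T): T = {
    var attempt = 0
    var result: Option[T] = None
    var lastError: Throwable = null
    while (result.isEmpty && attempt <= maxRetries) {
      Try(fetch) match {
        case Success(v) => result = Some(v)
        case Failure(e: java.net.ConnectException) =>
          // Connection timed out: the remote executor may still be alive,
          // so retry rather than treating the executor as dead.
          attempt += 1
          lastError = e
        case Failure(e) =>
          // Other failures are not caused by transient timeouts; rethrow.
          throw e
      }
    }
    result.getOrElse(throw lastError)
  }
}
```

A transient timeout that clears up within the retry budget then succeeds instead of surfacing a FetchFailed, while non-timeout errors still fail fast.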



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
