Lianhui Wang created SPARK-4195:
-----------------------------------

             Summary: Retry to fetch a block's result when the fetch failure's
reason is a connection timeout
                 Key: SPARK-4195
                 URL: https://issues.apache.org/jira/browse/SPARK-4195
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
            Reporter: Lianhui Wang


When there are many executors in an application (for example, 1000), connection
timeouts occur frequently. The exception is:
WARN nio.SendingConnection: Error finishing connection 
java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.spark.network.nio.SendingConnection.finishConnect(Connection.scala:342)
        at org.apache.spark.network.nio.ConnectionManager$$anon$11.run(ConnectionManager.scala:273)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
This makes the driver treat these executors as lost, even though they are in
fact still alive. Adding a retry mechanism for the block fetch would reduce the
probability of this problem occurring.
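
A minimal sketch, in Scala, of what such a retry wrapper could look like. Everything
below (the FetchRetry object, the fetchWithRetry helper, and the maxRetries and
waitMs parameters) is a hypothetical illustration of the proposed mechanism, not
the actual SPARK-4195 change:

import java.net.ConnectException

object FetchRetry {
  // Runs `fetch`, retrying up to `maxRetries` more times when it fails with a
  // ConnectException (e.g. "Connection timed out"), sleeping `waitMs`
  // milliseconds between attempts. Any other exception propagates immediately.
  def fetchWithRetry[T](maxRetries: Int, waitMs: Long)(fetch: => T): T =
    try {
      fetch
    } catch {
      case _: ConnectException if maxRetries > 0 =>
        // The remote executor may still be alive, so back off and retry
        // instead of immediately reporting the block fetch as failed.
        Thread.sleep(waitMs)
        fetchWithRetry(maxRetries - 1, waitMs)(fetch)
    }
}

object FetchRetryExample extends App {
  // Hypothetical fetch that times out on the first two attempts.
  var calls = 0
  val data = FetchRetry.fetchWithRetry(maxRetries = 3, waitMs = 100) {
    calls += 1
    if (calls < 3) throw new ConnectException("Connection timed out")
    "block-data"
  }
  println(s"fetched '$data' after $calls attempts")
}

The key point is that only ConnectException triggers a retry; other fetch
failures still surface immediately, so the driver can react to executors that
really are lost.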




