[ https://issues.apache.org/jira/browse/SPARK-22272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16214569#comment-16214569 ]

Saisai Shao commented on SPARK-22272:
-------------------------------------

Setting "spark.file.transferTo" to "false" will potentially affect the 
performance, that's why we enabled this by default and leave an undocumented 
configurations if users has some issues. The original JIRA is SPARK-3948, which 
is a kernel issue. I don't think we should disable this by default, since it is 
JDK/Kernel specific. If you encountered such problems, you can disable it in 
your cluster, but generally we don't want to disable this to hurt the 
performance.
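
For reference, a minimal sketch of how the workaround can be applied, either 
cluster-wide or per application (the app name below is just a placeholder; 
"spark.file.transferTo" is the undocumented property discussed in this ticket):

import org.apache.spark.{SparkConf, SparkContext}

// Cluster-wide alternative: add "spark.file.transferTo false" to spark-defaults.conf.
// Per-application, via SparkConf:
val conf = new SparkConf()
  .setAppName("example-app")
  .set("spark.file.transferTo", "false") // copy shuffle files with plain streams instead of NIO transferTo
val sc = new SparkContext(conf)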

> killing a task may cause the executor process to hang because of a JVM bug
> ------------------------------------------------------------------------
>
>                 Key: SPARK-22272
>                 URL: https://issues.apache.org/jira/browse/SPARK-22272
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.2
>         Environment: java version "1.7.0_75"
> hadoop version 2.5.0
>            Reporter: roncenzhao
>         Attachments: 26883.jstack, screenshot-1.png, screenshot-2.png
>
>
> JVM bug: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8132693
> We kill the task using 'Thread.interrupt()', and the ShuffleMapTask uses NIO 
> to merge all partition files when 'spark.file.transferTo' is true (the 
> default), so it may trigger the JVM bug.
> When the driver sends a task to this bad executor, the task never runs, and 
> as a result the job hangs forever.
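
For illustration only, a minimal Scala sketch of the NIO merge pattern 
described above (the helper name and file handling are hypothetical, not 
Spark's actual shuffle code): a thread interrupted while blocked in 
FileChannel.transferTo is where the JVM bug linked above can bite.

import java.io.{File, FileInputStream, FileOutputStream}

// Concatenate partition files into one output file via FileChannel.transferTo,
// the code path taken when spark.file.transferTo=true. Per this ticket,
// interrupting a thread blocked in transferTo can leave the executor hung.
def mergeWithTransferTo(parts: Seq[File], out: File): Unit = {
  val outChannel = new FileOutputStream(out, true).getChannel
  try {
    for (part <- parts) {
      val inChannel = new FileInputStream(part).getChannel
      try {
        var pos = 0L
        val size = inChannel.size()
        while (pos < size) {
          pos += inChannel.transferTo(pos, size - pos, outChannel)
        }
      } finally {
        inChannel.close()
      }
    }
  } finally {
    outChannel.close()
  }
}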



