[ 
https://issues.apache.org/jira/browse/SPARK-40235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Rosen updated SPARK-40235:
-------------------------------
    Description: 
This patch modifies the synchronization in {{Executor.updateDependencies()}} in 
order to allow tasks to be interrupted while they are blocked and waiting on 
other tasks to finish downloading dependencies.

This synchronization was added years ago in 
[mesos/spark@{{{}7b9e96c{}}}|https://github.com/mesos/spark/commit/7b9e96c99206c0679d9925e0161fde738a5c7c3a]
 in order to prevent concurrently-launching tasks from performing concurrent 
dependency updates. If one task is downloading dependencies, all other 
newly-launched tasks will block until the original dependency download is 
complete.

Let's say that a Spark task launches, becomes blocked on a 
{{updateDependencies()}} call, then is cancelled while it is blocked. Although 
Spark sends a {{Thread.interrupt()}} to the cancelled task, the task continues 
to wait: a thread blocked on entering a {{synchronized}} block does not throw 
an {{InterruptedException}} in response to the interrupt. As a result, the 
blocked thread keeps waiting until the other thread exits the synchronized 
block.

In the wild, we saw a case where this happened and the thread remained blocked 
for over 1 minute, causing the TaskReaper to kick in and self-destruct the 
executor.

This PR aims to fix this problem by replacing the {{synchronized}} block with 
a {{ReentrantLock}}, which provides a {{lockInterruptibly()}} method.
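The difference can be sketched in plain Java. This is an illustrative standalone demo, not Spark's actual code: {{updateDependencies()}} and the surrounding threads are stand-ins for the executor's dependency-update path and for a long-running download holding the lock.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // The fixed pattern: lockInterruptibly() throws InterruptedException if the
    // thread is interrupted while waiting for the lock. A thread parked on a
    // `synchronized` monitor would instead ignore the interrupt and keep waiting.
    static void updateDependencies() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // ... download files and jars (elided) ...
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch holderReady = new CountDownLatch(1);

        // Simulates the task whose long dependency download holds the lock.
        Thread holder = new Thread(() -> {
            lock.lock();
            holderReady.countDown();
            try {
                Thread.sleep(10_000);
            } catch (InterruptedException ignored) {
            }
            lock.unlock();
        });
        holder.start();
        holderReady.await();

        // Simulates a newly launched task that blocks in updateDependencies().
        Thread waiter = new Thread(() -> {
            try {
                updateDependencies();
                System.out.println("acquired");
            } catch (InterruptedException e) {
                System.out.println("interrupted while waiting");
            }
        });
        waiter.start();
        Thread.sleep(200);      // let the waiter block on the lock
        waiter.interrupt();     // simulate task cancellation (Thread.interrupt)
        waiter.join();          // prints "interrupted while waiting"

        holder.interrupt();
        holder.join();
    }
}
```

With the old {{synchronized}} version, the waiter would stay parked until the holder finished; with {{lockInterruptibly()}} it unblocks as soon as the cancellation interrupt arrives, which is what keeps the TaskReaper from escalating.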

  was:
This patch modifies the synchronization in {{Executor.updateDependencies()}} in 
order to allow tasks to be interrupted while they are blocked and waiting on 
other tasks to finish downloading dependencies.

This synchronization was added years ago in 
[mesos/spark@{{{}7b9e96c{}}}|https://github.com/mesos/spark/commit/7b9e96c99206c0679d9925e0161fde738a5c7c3a]
 in order to prevent concurrently-launching tasks from performing concurrent 
dependency updates (file downloads, and, later, library installation). If one 
task is downloading dependencies, all other newly-launched tasks will block 
until the original dependency download is complete.

Let's say that a Spark task launches, becomes blocked on a 
{{updateDependencies()}} call, then is cancelled while it is blocked. Although 
Spark sends a {{Thread.interrupt()}} to the cancelled task, the task continues 
to wait: a thread blocked on entering a {{synchronized}} block does not throw 
an {{InterruptedException}} in response to the interrupt. As a result, the 
blocked thread keeps waiting until the other thread exits the synchronized 
block.

In the wild, we saw a case where this happened and the thread remained blocked 
for over 1 minute, causing the TaskReaper to kick in and self-destruct the 
executor.

This PR aims to fix this problem by replacing the {{synchronized}} block with 
a {{ReentrantLock}}, which provides a {{lockInterruptibly()}} method.


> Use interruptible lock instead of synchronized in 
> Executor.updateDependencies()
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-40235
>                 URL: https://issues.apache.org/jira/browse/SPARK-40235
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.4.0
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>            Priority: Major
>
> This patch modifies the synchronization in {{Executor.updateDependencies()}} 
> in order to allow tasks to be interrupted while they are blocked and waiting 
> on other tasks to finish downloading dependencies.
> This synchronization was added years ago in 
> [mesos/spark@{{{}7b9e96c{}}}|https://github.com/mesos/spark/commit/7b9e96c99206c0679d9925e0161fde738a5c7c3a]
>  in order to prevent concurrently-launching tasks from performing concurrent 
> dependency updates. If one task is downloading dependencies, all other 
> newly-launched tasks will block until the original dependency download is 
> complete.
> Let's say that a Spark task launches, becomes blocked on a 
> {{updateDependencies()}} call, then is cancelled while it is blocked. 
> Although Spark sends a {{Thread.interrupt()}} to the cancelled task, the 
> task continues to wait: a thread blocked on entering a {{synchronized}} 
> block does not throw an {{InterruptedException}} in response to the 
> interrupt. As a result, the blocked thread keeps waiting until the other 
> thread exits the synchronized block. 
> In the wild, we saw a case where this happened and the thread remained 
> blocked for over 1 minute, causing the TaskReaper to kick in and 
> self-destruct the executor.
> This PR aims to fix this problem by replacing the {{synchronized}} block 
> with a {{ReentrantLock}}, which provides a {{lockInterruptibly()}} method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
