[ https://issues.apache.org/jira/browse/SPARK-25822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16662580#comment-16662580 ]
Apache Spark commented on SPARK-25822:
--------------------------------------

User 'zsxwing' has created a pull request for this issue:
https://github.com/apache/spark/pull/22816

> Fix a race condition when releasing a Python worker
> ---------------------------------------------------
>
>                 Key: SPARK-25822
>                 URL: https://issues.apache.org/jira/browse/SPARK-25822
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.3.2
>            Reporter: Shixiong Zhu
>            Assignee: Shixiong Zhu
>            Priority: Major
>
> There is a race condition when releasing a Python worker. If
> "ReaderIterator.handleEndOfDataSection" is not running in the task thread,
> then when a task is terminated early (for example by "take(N)"), the task
> completion listener may close the worker while "handleEndOfDataSection" can
> still put the worker back into the worker pool for reuse.
> https://github.com/zsxwing/spark/commit/0e07b483d2e7c68f3b5c3c118d0bf58c501041b7
> is a patch that reproduces this issue.
> I also found a user report of this on the mailing list:
> http://mail-archives.apache.org/mod_mbox/spark-user/201610.mbox/%3CCAAUq=h+yluepd23nwvq13ms5hostkhx3ao4f4zqv6sgo5zm...@mail.gmail.com%3E

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
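
The race described above can be sketched in a few lines. This is an illustrative Python model, not Spark's actual code (which is Scala); the `Worker` and `WorkerPool` names are hypothetical. One thread plays the task completion listener closing the worker on early termination, the other plays `handleEndOfDataSection` returning the same worker to the pool; a shared lock plus a "released" check ensures a closed worker is never left in the pool, whichever thread wins the race.

```python
import threading

class Worker:
    """Stand-in for a Python worker process (hypothetical, for illustration)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

class WorkerPool:
    """Guards against re-pooling a worker that has already been closed."""
    def __init__(self):
        self._lock = threading.Lock()
        self._idle = []            # workers available for reuse
        self._released = set()     # ids of workers already closed or returned

    def release(self, worker):
        # Plays the role of handleEndOfDataSection: return worker for reuse.
        with self._lock:
            if worker.closed or id(worker) in self._released:
                return  # lost the race: the listener closed this worker first
            self._released.add(id(worker))
            self._idle.append(worker)

    def close_worker(self, worker):
        # Plays the role of the task completion listener on early termination.
        with self._lock:
            self._released.add(id(worker))
            worker.close()
            if worker in self._idle:
                self._idle.remove(worker)  # undo a racing release()

pool = WorkerPool()
w = Worker()

# Simulate the two racing callbacks firing concurrently.
t1 = threading.Thread(target=pool.close_worker, args=(w,))
t2 = threading.Thread(target=pool.release, args=(w,))
t1.start(); t2.start()
t1.join(); t2.join()

# Regardless of interleaving, no closed worker remains in the idle pool.
assert all(not x.closed for x in pool._idle)
```

Without the lock and the `_released` bookkeeping, the interleaving "close, then release" would put a dead worker back into the pool, and a later task could be handed a closed worker — which is the failure mode this issue reports.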