[GitHub] [spark] HyukjinKwon commented on pull request #22480: [SPARK-25473][PYTHON][SS][TEST] ForeachWriter tests failed on Python 3.6 and macOS High Sierra
HyukjinKwon commented on pull request #22480: URL: https://github.com/apache/spark/pull/22480#issuecomment-639338265 @pquentin, there's no thread running in the parent `daemon.py`, and the forked workers are usually reused once they are up, so this might be less serious in Spark's case. Also, does the thread in the article refer to Python threads under the GIL, or to OS-level threads? It seems to me to refer to the latter. Would you mind elaborating on how the cases you pointed out lead to a deadlock? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org
[GitHub] [spark] HyukjinKwon commented on pull request #22480: [SPARK-25473][PYTHON][SS][TEST] ForeachWriter tests failed on Python 3.6 and macOS High Sierra
HyukjinKwon commented on pull request #22480: URL: https://github.com/apache/spark/pull/22480#issuecomment-639212453 @pquentin, yes, it's kind of difficult to avoid on the PySpark side for now. The problem isn't solely because we use `fork()`; it also depends on other conditions. I didn't take a very close look at the time, but the error was thrown when a particular instance was pickled.
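The specific instance that failed to pickle isn't named in the thread. As a generic illustration only (the `Writer` class below is hypothetical, not the `ForeachWriter` from this PR), objects that hold OS-level resources such as thread locks cannot be pickled with the default protocol, which is one common way this kind of error surfaces:

```python
import pickle
import threading

class Writer:
    # Hypothetical example: an object whose __dict__ happens to contain
    # an unpicklable OS-level resource (a thread lock).
    def __init__(self):
        self.lock = threading.Lock()

try:
    pickle.dumps(Writer())
except TypeError as exc:
    # Default pickling serializes __dict__, and _thread.lock objects
    # cannot be pickled, so this raises TypeError.
    print("pickle failed:", exc)
```

Classes that need to survive pickling across a fork or to a worker typically implement `__getstate__`/`__setstate__` to drop and recreate such resources.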