[
https://issues.apache.org/jira/browse/SPARK-20920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-20920:
------------------------------
Comment: was deleted
(was: [~mrares] I have a fix in the attached pull request. I don't suppose you
have an easy way to test it, since you already have code that reproduces the
issue? I might struggle to construct a test that verifies this.)
> ForkJoinPool pools are leaked when writing hive tables with many partitions
> ---------------------------------------------------------------------------
>
> Key: SPARK-20920
> URL: https://issues.apache.org/jira/browse/SPARK-20920
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.1.1
> Reporter: Rares Mirica
>
> This bug is loosely related to SPARK-17396.
> In this case it happens when writing to a Hive table with many, many
> partitions (my table is partitioned by hour and stores data it gets from
> Kafka in a Spark Streaming application):
> df.repartition()
> .write
> .format("orc")
> .option("path", s"$tablesStoragePath/$tableName")
> .mode(SaveMode.Append)
> .partitionBy("dt", "hh")
> .saveAsTable(tableName)
> As this table grows beyond a certain size, ForkJoinPool pools start leaking.
> Upon examination (with a debugger) I found that the caller is
> AlterTableRecoverPartitionsCommand and the problem happens when
> `evalTaskSupport` is used (line 555). I have tried setting a very large
> threshold via `spark.rdd.parallelListingThreshold` and the problem went away.
> My assumption is that the problem happens in this case and not in
> SPARK-17396 because AlterTableRecoverPartitionsCommand is a case class,
> while UnionRDD is an object: only a single instance of the latter is
> possible, so it cannot leak pools.
> Regards,
> Rares
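A minimal sketch, for illustration only, of the failure mode described above: a pool created per command instance leaks unless it is shut down, and a large `spark.rdd.parallelListingThreshold` avoids the parallel path entirely. `PoolLeakSketch` and `listPartitions` are hypothetical names, not Spark's API; only the configuration key comes from the report.

```scala
import java.util.concurrent.{Callable, ForkJoinPool}

object PoolLeakSketch {
  // Hypothetical stand-in for AlterTableRecoverPartitionsCommand's partition
  // listing, which builds a fresh pool per invocation via `evalTaskSupport`.
  def listPartitions(dirs: Seq[String], threshold: Int): Int = {
    if (dirs.length <= threshold) {
      // Serial path: no pool is created, which is what setting a very large
      // spark.rdd.parallelListingThreshold forces (the reported workaround).
      dirs.length
    } else {
      val pool = new ForkJoinPool(8)
      val task: Callable[Int] = () => dirs.length // parallel "listing" work
      try pool.submit(task).join()
      finally pool.shutdown() // without this, every saveAsTable leaks a pool
    }
  }

  def main(args: Array[String]): Unit = {
    val hourly = (0 until 24).map(h => f"dt=2017-05-30/hh=$h%02d")
    println(listPartitions(hourly, threshold = 1000)) // serial path
    println(listPartitions(hourly, threshold = 10))   // parallel path
  }
}
```

The `try`/`finally` around `shutdown()` is the shape of fix one would expect here; whether the attached pull request shuts the pool down or shares a single instance is not stated in this message.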
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]