[
https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16531998#comment-16531998
]
Sahil Takiar commented on HIVE-19937:
-------------------------------------
[[email protected]] I've attached a jxray report I created by running TPC-DS
query3 against HoS with {{spark.executor.cores = 40}} (which means 40 threads
per JVM and thus 40 copies of {{JobConf}}). Most of the memory goes towards
byte buffers for Netty and Parquet, but 13.5% of memory is wasted due to string
duplication.
I'll look further into how to implement the changes for
{{CopyOnFirstWriteProperties}}; a rough sketch of the idea follows.
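For context, a minimal sketch of the copy-on-first-write idea (a simplified,
hypothetical version, not Hive's actual class; a complete implementation would
have to override every mutating method inherited from
{{Properties}}/{{Hashtable}}):
{code:java}
import java.util.Properties;

/**
 * Sketch of copy-on-first-write Properties: reads are served from a shared,
 * logically immutable Properties instance; the first mutation copies its
 * contents into this instance so the shared one is never modified.
 */
public class CopyOnFirstWriteProperties extends Properties {

  /** Shared backing store; set to null once this instance owns a copy. */
  private Properties shared;

  public CopyOnFirstWriteProperties(Properties shared) {
    this.shared = shared;
  }

  @Override
  public String getProperty(String key) {
    // Reads never trigger a copy.
    return (shared != null) ? shared.getProperty(key) : super.getProperty(key);
  }

  @Override
  public synchronized Object setProperty(String key, String value) {
    copyBeforeWrite();
    return super.setProperty(key, value);
  }

  @Override
  public synchronized Object remove(Object key) {
    copyBeforeWrite();
    return super.remove(key);
  }

  /** Detach from the shared store before the first mutation. */
  private synchronized void copyBeforeWrite() {
    if (shared != null) {
      super.putAll(shared);
      shared = null;
    }
  }
}
{code}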
> Intern JobConf objects in Spark tasks
> -------------------------------------
>
> Key: HIVE-19937
> URL: https://issues.apache.org/jira/browse/HIVE-19937
> Project: Hive
> Issue Type: Improvement
> Components: Spark
> Reporter: Sahil Takiar
> Assignee: Sahil Takiar
> Priority: Major
> Attachments: HIVE-19937.1.patch, report.html
>
>
> When fixing HIVE-16395, we decided that each new Spark task should clone the
> {{JobConf}} object to prevent any {{ConcurrentModificationException}} from
> being thrown. However, cloning comes at the cost of storing a duplicate
> {{JobConf}} object for each Spark task. These objects can take up a
> significant amount of memory; we should intern them so that Spark tasks
> running in the same JVM don't store duplicate copies (see the sketch below).
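To make the interning idea concrete, a hypothetical helper is shown below; the
{{JobConfInterner}} name and the Guava weak-interner approach are illustrative
assumptions, not the attached patch:
{code:java}
import java.util.Map;

import com.google.common.collect.Interner;
import com.google.common.collect.Interners;
import org.apache.hadoop.mapred.JobConf;

/**
 * Hypothetical helper: after a JobConf is cloned for a task, re-point its
 * string values at a JVM-wide weak interner so tasks running in the same
 * executor JVM share a single copy of each distinct value.
 */
public final class JobConfInterner {

  /** Weak interner: a value becomes collectable once no JobConf holds it. */
  private static final Interner<String> STRINGS = Interners.newWeakInterner();

  private JobConfInterner() {}

  public static void internStrings(JobConf conf) {
    // Configuration's iterator returns a snapshot of the entries, so it is
    // safe to call set() while iterating.
    for (Map.Entry<String, String> entry : conf) {
      conf.set(entry.getKey(), STRINGS.intern(entry.getValue()));
    }
  }
}
{code}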