[ 
https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537362#comment-16537362
 ] 

Misha Dmitriev commented on HIVE-19937:
---------------------------------------

The analysis above looks good. I remember seeing similar issues with Kryo (and, 
in general, with other kinds of serialization) in the past, and dealing with 
them in a similar way.

The new patch looks good to me. I assume that you analyzed a heap dump before 
and after this change and verified that the problems that you wanted to address 
indeed went away. A few small comments/nits:
 * MapOperator.java, 'contexts = new LinkedHashMap<Operator<?>, MapOpCtx>()' - 
if this code can estimate the expected size of the LinkedHashMap, you may want 
to create it with the appropriate capacity (see the first sketch after this 
list). This is especially relevant when such maps are small - the default 
capacity of 16 then makes them waste a lot of memory.
 * MapWork.java, 'if (includedBuckets != null) { this.includedBuckets = ...' - 
you can make this code a bit shorter using the conditional operator, i.e. 
'this.includedBuckets = (includedBuckets != null) ? ... : null', where '...' 
is the same expression as in the if branch (see the second sketch after this 
list). The same applies in several other methods.
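
A sketch of the capacity suggestion, using the types from the snippet above; 
the source of 'expectedOps' is hypothetical, not the actual MapOperator code:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// With the default load factor of 0.75, an initial capacity of
// (n / 0.75) + 1 holds n entries without any resizing, and for small n it
// avoids allocating the default 16-bucket table.
int expectedOps = children.size();  // hypothetical: however many entries we expect
int capacity = (int) (expectedOps / 0.75f) + 1;
Map<Operator<?>, MapOpCtx> contexts = new LinkedHashMap<>(capacity);
{code}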
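
And a sketch of the conditional-operator rewrite; the clone() call is a 
hypothetical stand-in for the conversion elided as '...' above:
{code:java}
// Before: explicit null check.
if (includedBuckets != null) {
  this.includedBuckets = includedBuckets.clone();  // stand-in for the elided code
} else {
  this.includedBuckets = null;
}

// After: one line with the conditional operator.
this.includedBuckets = (includedBuckets != null) ? includedBuckets.clone() : null;
{code}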

> Use BeanSerializer for MapWork to carry calls to String.intern
> --------------------------------------------------------------
>
>                 Key: HIVE-19937
>                 URL: https://issues.apache.org/jira/browse/HIVE-19937
>             Project: Hive
>          Issue Type: Improvement
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>            Priority: Major
>         Attachments: HIVE-19937.1.patch, HIVE-19937.2.patch, report.html
>
>
> When fixing HIVE-16395, we decided that each new Spark task should clone the 
> {{JobConf}} object to prevent any {{ConcurrentModificationException}} from 
> being thrown. However, cloning comes at the cost of storing a duplicate 
> {{JobConf}} object for each Spark task. These objects can take up a 
> significant amount of memory, so we should intern them so that Spark tasks 
> running in the same JVM don't store duplicate copies.
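>
> The approach in the title, sketched under assumptions: Kryo's 
> {{BeanSerializer}} deserializes a bean through its getters and setters, so a 
> setter that calls {{String.intern()}} runs on every read and deduplicates 
> strings across tasks in one JVM. The {{Work}} class, {{tableName}} field, 
> and registration call below are illustrative, not the actual MapWork code.
> {code:java}
> import com.esotericsoftware.kryo.Kryo;
> import com.esotericsoftware.kryo.serializers.BeanSerializer;
>
> public class Work {
>   private String tableName;
>
>   public String getTableName() { return tableName; }
>
>   // Interning setter: BeanSerializer invokes this during deserialization,
>   // so tasks in the same JVM share one canonical String instance.
>   public void setTableName(String tableName) {
>     this.tableName = (tableName == null) ? null : tableName.intern();
>   }
> }
>
> // Hypothetical wiring:
> // kryo.register(Work.class, new BeanSerializer(kryo, Work.class));
> {code}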



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)