[ https://issues.apache.org/jira/browse/HIVE-13531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15244443#comment-15244443 ]
Gopal V commented on HIVE-13531:
--------------------------------
The concurrent access might be due to local fetch-task conversions.
{{set hive.fetch.task.conversion=none;}}
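For reference, the workaround is applied per session before the affected query runs. A hedged sketch follows; the table ({{events}}) and JSON column ({{json_col}}) are hypothetical:

{code:sql}
-- Disable fetch-task conversion so the query runs as a regular job
-- instead of being executed directly inside the HiveServer2 process.
set hive.fetch.task.conversion=none;

-- Hypothetical json_tuple query of the kind affected by the cache.
SELECT t.json_col, j.name, j.age
FROM events t
LATERAL VIEW json_tuple(t.json_col, 'name', 'age') j AS name, age;
{code}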
> Cache in json_tuple UDF grows larger than it should
> ---------------------------------------------------
>
> Key: HIVE-13531
> URL: https://issues.apache.org/jira/browse/HIVE-13531
> Project: Hive
> Issue Type: Bug
> Components: UDF
> Affects Versions: 1.1.0
> Environment: CDH 5.5.0 with Java 1.8.0_45
> Reporter: Jürgen Thomann
> Assignee: Jason Dere
> Priority: Minor
>
> According to the code in
> ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFJSONTuple.java,
> the HashCache should never grow larger than 16 entries. In the last OOM of
> HiveServer2, however, I found this HashCache holding over 1 million
> java.util.LinkedHashMap$Entry objects.
> The code looks correct and behaves as expected single-threaded when I tested
> it in isolation. The only cause I can imagine, with my limited knowledge of
> the Hive source code, is that the cache is accessed concurrently and the
> cleanup via removeEldestEntry stops working in that case.
> I hit this problem with Hive 1.1.0, but the HashCache implementation in
> current master looks the same.
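To make the suspected failure mode concrete, below is a minimal, self-contained Java sketch of the pattern described above: a LinkedHashMap bounded through removeEldestEntry, which is only reliable under single-threaded access. Class names, sizes, and the driver code are illustrative, not the exact Hive source.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class HashCacheSketch {

    // Illustrative stand-in for the HashCache in GenericUDTFJSONTuple:
    // a LinkedHashMap that evicts its eldest entry above a fixed cap.
    static class HashCache<K, V> extends LinkedHashMap<K, V> {
        private static final int CACHE_SIZE = 16;

        HashCache() {
            // accessOrder = true gives LRU eviction order.
            super(CACHE_SIZE, 0.75f, true);
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Caps the map at CACHE_SIZE entries, but only under
            // single-threaded (or externally synchronized) access.
            return size() > CACHE_SIZE;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final Map<Integer, Integer> cache = new HashCache<>();

        // Unsynchronized concurrent puts: LinkedHashMap is not
        // thread-safe, so its internal entry list can be corrupted,
        // eviction can silently stop working, and the map can grow
        // far past its nominal 16-entry cap.
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++) {
            final int offset = t * 1_000_000;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    cache.put(offset + i, i);
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) {
            t.join();
        }

        // Single-threaded this would print 16; with racing writers it
        // often prints a much larger number, matching the reported
        // symptom (the map may also be corrupted in other ways).
        System.out.println("cache size = " + cache.size());
    }
}
{code}

This would also fit Gopal's observation: fetch-task conversion executes simple queries directly in the HiveServer2 JVM, so if the cache is shared within that JVM (for example via a static field), concurrent queries can race on it, whereas disabling the conversion pushes the work into separate task JVMs.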