[ https://issues.apache.org/jira/browse/HIVE-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15811045#comment-15811045 ]
Prasanth Jayachandran commented on HIVE-15565:
----------------------------------------------
For LLAP, the GBY hashtable memory should be shared by all executors. The problem
is with tracking memory usage per thread; there is already an open JIRA for
this: HIVE-15508.
Ideally, we want to track memory usage per executor and flush based on the ratio
of that usage to the maximum memory available per executor.
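A minimal sketch of that idea, assuming a hypothetical per-executor tracker (the names usedMemoryForExecutor/maxMemoryPerExecutor and the simple Xmx/numExecutors split are assumptions for illustration, not the actual GroupByOperator or LLAP API):
{code:java}
// Illustrative sketch only -- not the actual Hive code.
// Assumption: each executor can observe its own hashtable memory usage.
public class PerExecutorFlushCheck {

  private final long maxMemoryPerExecutor; // e.g. Xmx / numExecutors
  private final double flushRatio;         // e.g. hive.map.aggr.hash.percentmemory

  public PerExecutorFlushCheck(long jvmMaxHeap, int numExecutors, double flushRatio) {
    this.maxMemoryPerExecutor = jvmMaxHeap / numExecutors;
    this.flushRatio = flushRatio;
  }

  /**
   * Flush when this executor's own usage crosses its share of the heap,
   * rather than comparing whole-JVM usage against a per-container limit.
   */
  public boolean shouldFlush(long usedMemoryForExecutor) {
    return (double) usedMemoryForExecutor / maxMemoryPerExecutor > flushRatio;
  }
}
{code}
With Xmx128G, 12 executors and a 0.5 ratio, each executor would flush at roughly 5.3GB of its own usage, independent of what the other 11 executors are holding.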
> LLAP: GroupByOperator flushes hash table too frequently
> -------------------------------------------------------
>
> Key: HIVE-15565
> URL: https://issues.apache.org/jira/browse/HIVE-15565
> Project: Hive
> Issue Type: Bug
> Components: llap
> Reporter: Rajesh Balamohan
> Assignee: Rajesh Balamohan
> Priority: Minor
> Attachments: HIVE-15565.1.patch
>
>
> {{GroupByOperator::isTez}} would be true in LLAP mode, so the current memory
> computations can go wrong with the {{isTez}} checks in {{GroupByOperator}}.
> For example, in an LLAP instance with Xmx128G and 12 executors, it would
> start flushing the hash table for every record once memory usage reaches
> around 42GB (hive.tez.container.size=7100,
> hive.map.aggr.hash.percentmemory=0.5); see the sketch after the log below.
> {noformat}
> 2017-01-08T23:40:21,339 INFO [TezTaskRunner (1480722417364_1922_7_03_000004_1)] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 0
> 2017-01-08T23:40:21,339 INFO [TezTaskRunner (1480722417364_1922_7_03_000012_1)] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 0
> 2017-01-08T23:40:21,339 INFO [TezTaskRunner (1480722417364_1922_7_03_000004_1)] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1
> 2017-01-08T23:40:21,339 INFO [TezTaskRunner (1480722417364_1922_7_03_000012_1)] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1
> {noformat}
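To make the mismatch in the quoted description concrete, here is a minimal sketch, assuming the flush limit is derived from hive.tez.container.size and hive.map.aggr.hash.percentmemory while the usage compared against it is the whole JVM heap shared by all executors (the exact computation inside GroupByOperator may differ; the numbers are only illustrative):
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Illustrative sketch only -- not the actual GroupByOperator code.
public class FlushMismatchSketch {
  public static void main(String[] args) {
    long containerSizeBytes = 7100L * 1024 * 1024;  // hive.tez.container.size=7100 (MB)
    double percentMemory = 0.5;                     // hive.map.aggr.hash.percentmemory
    long perContainerLimit = (long) (containerSizeBytes * percentMemory); // ~3.5GB

    // Usage is read for the whole JVM: with Xmx128G shared by 12 executors this
    // can be tens of GB even though each individual hash table is small.
    MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
    long jvmHeapUsed = memoryMXBean.getHeapMemoryUsage().getUsed();

    // Once the shared heap usage stays above the per-container limit, this
    // condition holds for every record, so the table flushes immediately after
    // each insert ("Hash Table flushed: new size = 0" in the log above).
    boolean flush = jvmHeapUsed > perContainerLimit;
    System.out.println("limit=" + perContainerLimit
        + " used=" + jvmHeapUsed + " flush=" + flush);
  }
}
{code}
Whatever the exact formula, the shape of the problem is the same: a measurement taken over the shared JVM heap is compared against a limit meant for a single container/executor, so flushing starts long before any one executor's hashtable is actually full.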
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)