[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16627012#comment-16627012 ]

KaiXu commented on HIVE-17684:
------------------------------

Hi [~stakiar] and [~mi...@cloudera.com], I recently encountered a similar issue to 
this Jira with Hive 2.1 on Spark 2.2. The issue seems to occur randomly under high 
concurrency and heavy load. The stack trace is below; I am not sure whether it is 
the same issue. Do you have any suggestions for a workaround?

 

18/09/24 14:30:42 ERROR spark.SparkMapRecordHandler: Error processing row: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"i_item_sk":118975}
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"i_item_sk":118975}
 at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
 at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:136)
 at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
 at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
 at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
 at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
 at scala.collection.Iterator$class.foreach(Iterator.scala:893)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
 at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
 at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
 at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2182)
 at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2182)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
 at org.apache.spark.scheduler.Task.run(Task.scala:109)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionException: 2018-09-24 14:30:42 Processing rows: 200000 Hashtable size: 199999 Memory usage: 5920999680 percentage: 0.551
 at org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionHandler.checkMemoryStatus(MapJoinMemoryExhaustionHandler.java:99)
 at org.apache.hadoop.hive.ql.exec.HashTableSinkOperator.process(HashTableSinkOperator.java:259)
 at org.apache.hadoop.hive.ql.exec.SparkHashTableSinkOperator.process(SparkHashTableSinkOperator.java:85)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
 at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
 at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
 at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:147)
 at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:487)
 ... 17 more

 

 

> HoS memory issues with MapJoinMemoryExhaustionHandler
> -----------------------------------------------------
>
>                 Key: HIVE-17684
>                 URL: https://issues.apache.org/jira/browse/HIVE-17684
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Misha Dmitriev
>            Priority: Major
>         Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, 
> HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, 
> HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, 
> HIVE-17684.09.patch, HIVE-17684.10.patch, HIVE-17684.11.patch
>
>
> We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of 
> the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect 
> scenarios where the small table is taking up too much space in memory, in which 
> case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
> The handler uses the {{MemoryMXBean}} and the following ratio to estimate how 
> much memory the {{HashMap}} is consuming: 
> {{MemoryMXBean#getHeapMemoryUsage().getUsed() / 
> MemoryMXBean#getHeapMemoryUsage().getMax()}}
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be 
> inaccurate. The value returned by this method includes all reachable and 
> unreachable memory on the heap, so it may count a lot of garbage that the JVM 
> simply hasn't taken the time to reclaim yet. This can lead to intermittent 
> failures of this check even though a simple GC would have reclaimed enough 
> space for the process to continue working.
> We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense because every Hive task ran in a 
> dedicated container, so a Hive task could assume it had created most of the 
> data on the heap. However, in Hive-on-Spark there can be multiple Hive tasks 
> running in a single executor, each doing different things.
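
For context on the two thresholds listed in the quoted description, here is a minimal sketch, assuming they are tuned programmatically through {{HiveConf}}; the same properties can equally be set in hive-site.xml or with SET statements in a session. The class name is illustrative and the values shown are just the defaults quoted above:

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

// Illustrative only: tune the map-join memory thresholds described in the issue.
public class MapJoinThresholdConfig {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Fraction of the heap the local map-join hash table may occupy before the
    // handler aborts the build (default 0.90).
    conf.setFloat("hive.mapjoin.localtask.max.memory.usage", 0.90f);
    // Stricter limit used when the map join is followed by a group-by (default 0.55).
    conf.setFloat("hive.mapjoin.followby.gby.localtask.max.memory.usage", 0.55f);

    System.out.println(conf.get("hive.mapjoin.localtask.max.memory.usage"));
  }
}
{code}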

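The check itself boils down to comparing that used/max ratio against the configured threshold. The following is a simplified sketch of that logic, not the actual Hive implementation in {{MapJoinMemoryExhaustionHandler.checkMemoryStatus}}; class and exception names here are placeholders:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Simplified sketch of the heap-usage check described in the issue.
public class HeapUsageCheckSketch {
  private final MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
  private final double maxMemoryUsage; // e.g. 0.55 or 0.90 from the configs above

  public HeapUsageCheckSketch(double maxMemoryUsage) {
    this.maxMemoryUsage = maxMemoryUsage;
  }

  public void checkMemoryStatus(long numRows) {
    long used = memoryMXBean.getHeapMemoryUsage().getUsed();
    long max = memoryMXBean.getHeapMemoryUsage().getMax();
    double percentage = (double) used / max;
    if (percentage > maxMemoryUsage) {
      // The real handler throws a MapJoinMemoryExhaustionError with a similar message.
      throw new RuntimeException(String.format(
          "Processing rows: %d Memory usage: %d percentage: %.3f",
          numRows, used, percentage));
    }
  }
}
{code}

This appears consistent with the stack trace in the comment above, where the check fired at a reported percentage of 0.551, just over the 0.55 default.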

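The inaccuracy described above is easy to observe in isolation: {{getUsed()}} counts unreachable objects until the JVM happens to collect them, so the used/max ratio can look alarming even though a GC would free most of it. A small standalone demo, assuming a default heap large enough for the ~128 MB allocated below:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.util.ArrayList;
import java.util.List;

// Demonstrates that MemoryMXBean#getHeapMemoryUsage().getUsed() includes garbage
// that has not been collected yet.
public class HeapUsedVsGcDemo {
  public static void main(String[] args) {
    MemoryMXBean bean = ManagementFactory.getMemoryMXBean();

    // Allocate ~128 MB of objects and immediately drop the references.
    List<byte[]> garbage = new ArrayList<>();
    for (int i = 0; i < 2_000; i++) {
      garbage.add(new byte[64 * 1024]);
    }
    garbage = null; // everything above is now unreachable, but still counted as "used"

    double before = ratio(bean);
    System.gc(); // advisory, but usually sufficient for this demo
    double after = ratio(bean);

    System.out.printf("used/max before GC: %.3f, after GC: %.3f%n", before, after);
  }

  private static double ratio(MemoryMXBean bean) {
    return (double) bean.getHeapMemoryUsage().getUsed()
        / bean.getHeapMemoryUsage().getMax();
  }
}
{code}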

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
