[
https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16612936#comment-16612936
]
Hive QA commented on HIVE-17684:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939352/HIVE-17684.07.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 14936 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_without_localtask] (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_convert_join] (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251)
{noformat}
Test results:
https://builds.apache.org/job/PreCommit-HIVE-Build/13746/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13746/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13746/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12939352 - PreCommit-HIVE-Build
> HoS memory issues with MapJoinMemoryExhaustionHandler
> -----------------------------------------------------
>
> Key: HIVE-17684
> URL: https://issues.apache.org/jira/browse/HIVE-17684
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Reporter: Sahil Takiar
> Assignee: Misha Dmitriev
> Priority: Major
> Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch,
> HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch,
> HIVE-17684.06.patch, HIVE-17684.07.patch
>
>
> We have seen a number of memory issues due to the {{HashSinkOperator}}'s use
> of the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect
> scenarios where the small table is taking up too much space in memory, in
> which case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
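> For illustration, both thresholds can be overridden per-session (the values
> below are arbitrary examples, not recommendations):
> {noformat}
> SET hive.mapjoin.localtask.max.memory.usage=0.80;
> SET hive.mapjoin.followby.gby.localtask.max.memory.usage=0.50;
> {noformat}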
> The handler uses the {{MemoryMXBean}} and the following ratio to estimate
> how much memory the {{HashMap}} is consuming:
> {{MemoryMXBean#getHeapMemoryUsage().getUsed() /
> MemoryMXBean#getHeapMemoryUsage().getMax()}}
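> The check described above can be sketched as follows (a minimal standalone
> illustration, not the actual Hive handler; the class name and 0.90 threshold
> are assumptions mirroring the default of
> {{hive.mapjoin.localtask.max.memory.usage}}):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Illustrative sketch of the described check, not Hive's implementation.
public class MemoryCheckSketch {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        double used = bean.getHeapMemoryUsage().getUsed();
        double max = bean.getHeapMemoryUsage().getMax();
        // Ratio of used heap to max heap, as computed by the handler.
        double percentage = used / max;
        // Assumed threshold; mirrors hive.mapjoin.localtask.max.memory.usage.
        double maxMemoryUsage = 0.90;
        if (percentage > maxMemoryUsage) {
            // The real handler throws MapJoinMemoryExhaustionError here.
            throw new Error("hash table memory usage exceeded: " + percentage);
        }
        System.out.println("heap usage ratio = " + percentage);
    }
}
```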
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be
> inaccurate: it counts both reachable and unreachable objects on the heap, so
> the reading may include a large amount of garbage that the JVM simply hasn't
> taken the time to reclaim yet. This can cause the check to fail
> intermittently, even though a single GC would have reclaimed enough space
> for the process to continue working.
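> The effect is easy to reproduce outside Hive (an illustrative sketch, not
> Hive code): allocate short-lived garbage, read {{getUsed()}}, then compare
> the reading after an explicit GC:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Demonstrates that getUsed() includes unreachable (garbage) objects,
// so the reading can drop sharply after a collection.
public class HeapUsedVsGc {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        // Allocate ~100 MB of arrays and immediately drop the references.
        for (int i = 0; i < 100; i++) {
            byte[] garbage = new byte[1024 * 1024];
        }
        long before = bean.getHeapMemoryUsage().getUsed();
        System.gc(); // a hint only, but most JVMs honor it in practice
        long after = bean.getHeapMemoryUsage().getUsed();
        System.out.println("used before GC: " + before + ", after GC: " + after);
        // Typically after <= before: the pre-GC reading overstated live data,
        // which is exactly the intermittent-failure scenario described above.
    }
}
```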
> We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS.
> In Hive-on-MR this check probably made sense, because every Hive task ran in
> a dedicated container, so a Hive task could assume it had created most of
> the data on the heap. In Hive-on-Spark, however, multiple Hive tasks can run
> in a single executor, each doing different things.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)