[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299373#comment-16299373 ]
Misha Dmitriev commented on HIVE-17684:
---------------------------------------
[~stakiar] How do I run these {{TestSparkCliDriver}} tests? Note the "Spark" part: they appear to be different from the {{TestCliDriver}} tests that I know how to run. Also, in the test report all of these test names look the same.
In the meantime, I've tried the first two of the failed {{TestCliDriver}} tests locally. The first one passed for me. The second one keeps failing with the error below. This is confusing: the error message mentions "exhausted memory", yet I cannot find any exception stack traces in the hive.log file. Please advise.
{code}
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.hive.cli.TestCliDriver
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 102.789 s <<< FAILURE! - in org.apache.hadoop.hive.cli.TestCliDriver
[ERROR] testCliDriver[auto_join_without_localtask](org.apache.hadoop.hive.cli.TestCliDriver)  Time elapsed: 21.46 s <<< FAILURE!
java.lang.AssertionError:
Client Execution succeeded but contained differences (error code = 1) after executing auto_join_without_localtask.q
1047a1048,1053
> Hive Runtime Error: Map local work exhausted memory
> FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> Hive Runtime Error: Map local work exhausted memory
> FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
1054c1060
< RUN: Stage-9:MAPRED
---
> RUN: Stage-1:MAPRED
1057c1063
< RUN: Stage-6:MAPRED
---
> RUN: Stage-2:MAPRED
at org.junit.Assert.fail(Assert.java:88)
at org.apache.hadoop.hive.ql.QTestUtil.failedDiff(QTestUtil.java:2244)
at org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:183)
at org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104)
at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver(TestCliDriver.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:92)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.apache.hadoop.hive.cli.control.CliAdapter$1$1.evaluate(CliAdapter.java:73)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestCliDriver.testCliDriver:59 Client Execution succeeded but contained differences (error code = 1) after executing auto_join_without_localtask.q
1047a1048,1053
> Hive Runtime Error: Map local work exhausted memory
> FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> Hive Runtime Error: Map local work exhausted memory
> FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
1054c1060
< RUN: Stage-9:MAPRED
---
> RUN: Stage-1:MAPRED
1057c1063
< RUN: Stage-6:MAPRED
---
> RUN: Stage-2:MAPRED
[INFO]
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
{code}
> HoS memory issues with MapJoinMemoryExhaustionHandler
> -----------------------------------------------------
>
> Key: HIVE-17684
> URL: https://issues.apache.org/jira/browse/HIVE-17684
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Reporter: Sahil Takiar
> Assignee: Misha Dmitriev
> Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch
>
>
> We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of the {{MapJoinMemoryExhaustionHandler}}.
> This handler is meant to detect scenarios where the small table is taking up too much space in memory, in which case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
> The handler uses the {{MemoryMXBean}} and the following ratio to estimate how much memory the {{HashMap}} is consuming (a sketch of this check follows below the quoted description):
> {{MemoryMXBean#getHeapMemoryUsage().getUsed() / MemoryMXBean#getHeapMemoryUsage().getMax()}}
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be inaccurate: it counts both reachable and unreachable objects on the heap, so it may include a large amount of garbage that the JVM simply has not reclaimed yet.
> This can cause the check to fail intermittently even though a single GC would have reclaimed enough space for the process to keep working.
> We should re-think the use of {{MapJoinMemoryExhaustionHandler}} for HoS.
> In Hive-on-MR this approach probably made sense, because every Hive task ran in a dedicated container, so a Hive task could assume it had created most of the data on the heap.
> However, in Hive-on-Spark multiple Hive tasks can run in a single executor, each doing different things.
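To make the quoted description concrete, here is a minimal sketch of the kind of logic it describes: compare {{getHeapMemoryUsage().getUsed() / getMax()}} against a threshold, and observe how the ratio drops after a GC because {{getUsed()}} also counts unreachable garbage. The class name, the hard-coded 0.90 threshold, and the plain {{Error}} are illustrative assumptions; this is not the actual {{MapJoinMemoryExhaustionHandler}} implementation.
{code}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Illustrative sketch only; not the real MapJoinMemoryExhaustionHandler.
public class MemoryExhaustionCheckSketch {

  // Stands in for hive.mapjoin.localtask.max.memory.usage (default 0.90);
  // hard-coded here because this sketch does not read HiveConf.
  private static final double MAX_MEMORY_USAGE = 0.90;

  private static final MemoryMXBean MEMORY_MX_BEAN = ManagementFactory.getMemoryMXBean();

  // The used/max heap ratio cited in the description above.
  static double heapUsageFraction() {
    MemoryUsage heap = MEMORY_MX_BEAN.getHeapMemoryUsage();
    return (double) heap.getUsed() / heap.getMax();
  }

  // Fails when the ratio exceeds the threshold, mimicking the handler's behavior
  // (the real handler throws MapJoinMemoryExhaustionError).
  static void checkMemoryStatus() {
    double fraction = heapUsageFraction();
    if (fraction > MAX_MEMORY_USAGE) {
      throw new Error(String.format(
          "Map local work exhausted memory: %.2f > %.2f", fraction, MAX_MEMORY_USAGE));
    }
  }

  public static void main(String[] args) {
    // Allocate a lot of short-lived objects: they become unreachable immediately,
    // but getUsed() keeps counting them until the JVM actually collects them.
    for (int i = 0; i < 1_000_000; i++) {
      byte[] garbage = new byte[1024];
    }
    System.out.printf("Used/max before GC: %.2f%n", heapUsageFraction());

    // A single GC can reclaim the unreachable objects and drop the ratio sharply,
    // which is why the check can trip intermittently even when memory is available.
    System.gc();
    System.out.printf("Used/max after GC:  %.2f%n", heapUsageFraction());

    checkMemoryStatus();
  }
}
{code}
Depending on heap size and GC activity, the before/after readings differ from run to run, which is the non-determinism the description points to as the cause of the intermittent check failures.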