[
https://issues.apache.org/jira/browse/HIVE-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076200#comment-16076200
]
Hive QA commented on HIVE-17010:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12875874/HIVE-17010.2.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10832 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=143)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=140)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=145)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[infer_bucket_sort_reducers_power_two] (batchId=167)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=232)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema (batchId=177)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema (batchId=177)
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation (batchId=177)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation (batchId=226)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/5906/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/5906/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-5906/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12875874 - PreCommit-HIVE-Build
> Fix the overflow problem of Long type in SetSparkReducerParallelism
> -------------------------------------------------------------------
>
> Key: HIVE-17010
> URL: https://issues.apache.org/jira/browse/HIVE-17010
> Project: Hive
> Issue Type: Bug
> Reporter: liyunzhang_intel
> Assignee: liyunzhang_intel
> Attachments: HIVE-17010.1.patch, HIVE-17010.2.patch
>
>
> We use
> [numberOfBytes|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java#L129]
> to accumulate the numberOfBytes of the siblings of the specified RS. We use the Long type,
> and the sum overflows when the data is too big. When that happens, the parallelism is decided by
> [sparkMemoryAndCores.getSecond()|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java#L184].
> If spark.dynamic.allocation.enabled is true, sparkMemoryAndCores.getSecond
> is a dynamic value decided by the Spark runtime. For example, the value of
> sparkMemoryAndCores.getSecond may be 5 or 15 at random, and it can even be 1.
> The main problem here is the overflow of the addition of Long values.
> You can reproduce the overflow problem with the following code:
> {code}
> import java.math.BigInteger;
>
> public class LongOverflowRepro {
>   public static void main(String[] args) {
>     long a1 = 9223372036854775807L;   // Long.MAX_VALUE
>     long a2 = 1022672;
>     long res = a1 + a2;               // long addition silently wraps around
>     System.out.println(res);          // -9223372036853753137
>     BigInteger b1 = BigInteger.valueOf(a1);
>     BigInteger b2 = BigInteger.valueOf(a2);
>     BigInteger bigRes = b1.add(b2);   // BigInteger addition does not overflow
>     System.out.println(bigRes);       // 9223372036855798479
>   }
> }
> {code}
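> As an illustration only (a minimal sketch, not necessarily the approach taken in the attached patches), one way to avoid the wrap-around is to saturate the running sum at Long.MAX_VALUE using Math.addExact, which throws on overflow instead of wrapping:
> {code}
> // Sketch: saturatingAdd is a hypothetical helper, not part of the current code.
> private static long saturatingAdd(long total, long delta) {
>   try {
>     return Math.addExact(total, delta); // throws ArithmeticException on overflow
>   } catch (ArithmeticException e) {
>     return Long.MAX_VALUE;              // cap instead of wrapping to a negative value
>   }
> }
> {code}
> With the sum capped this way, numberOfBytes can never go negative, so the reducer estimate derived from bytes per reducer stays meaningful even for very large inputs.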
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)