[
https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657987#comment-16657987
]
Hive QA commented on HIVE-20512:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12944872/HIVE-20512.2.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15111 tests
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_aggregate]
(batchId=162)
org.apache.hive.jdbc.miniHS2.TestMiniHS2.testConfInSession (batchId=261)
{noformat}
Test results:
https://builds.apache.org/job/PreCommit-HIVE-Build/14598/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14598/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14598/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12944872 - PreCommit-HIVE-Build
> Improve record and memory usage logging in SparkRecordHandler
> -------------------------------------------------------------
>
> Key: HIVE-20512
> URL: https://issues.apache.org/jira/browse/HIVE-20512
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Sahil Takiar
> Assignee: Bharathkrishna Guruvayoor Murali
> Priority: Major
> Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch
>
>
> We currently log memory usage and the number of records processed in Spark
> tasks, but we should improve how frequently this info is logged. Currently we
> use the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
>   // A very simple counter to keep track of number of rows processed by the
>   // reducer. It dumps every 1 million times, and quickly before that
>   if (currentThreshold >= 1000000) {
>     return currentThreshold + 1000000;
>   }
>   return 10 * currentThreshold;
> }
> {code}
> The issue is that, after a while, the 10x growth factor means you have to
> process a huge number of records before the next log statement is triggered.
> A better approach would be to log this info at a fixed time interval. This
> would help in debugging tasks that appear to be hung.
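The interval-based idea above could look roughly like the following sketch. This is only an illustration under stated assumptions, not the actual SparkRecordHandler implementation; the class and method names here are hypothetical, and the caller is assumed to pass in the current wall-clock time so the logic stays testable.

```java
// Minimal sketch of interval-based logging: emit a log line whenever a fixed
// wall-clock interval has elapsed, instead of at exponentially growing record
// thresholds. Names are illustrative, not the real Hive API.
public class IntervalLogger {
  private final long intervalMs;   // how often to log, in milliseconds
  private long lastLogTimeMs;      // timestamp of the last emitted log line
  private long recordCount = 0;    // total records processed so far

  public IntervalLogger(long intervalMs, long startTimeMs) {
    this.intervalMs = intervalMs;
    this.lastLogTimeMs = startTimeMs;
  }

  // Called once per record; returns true when a log line should be emitted.
  // In real code nowMs would come from System.currentTimeMillis().
  public boolean processRecord(long nowMs) {
    recordCount++;
    if (nowMs - lastLogTimeMs >= intervalMs) {
      lastLogTimeMs = nowMs;
      return true;
    }
    return false;
  }

  public long getRecordCount() {
    return recordCount;
  }
}
```

With this shape, a seemingly hung task still produces a log line every interval (with an unchanged record count), whereas the threshold-doubling scheme goes silent once the next threshold is far away.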
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)