[jira] [Commented] (HIVE-17718) HS2 Logs print unnecessary stack trace when HoS query is cancelled

2017-10-06 Thread Hive QA (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-17718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195487#comment-16195487 ]

Hive QA commented on HIVE-17718:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12890772/HIVE-17718.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 11190 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=231)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=231)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=239)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7167/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7167/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7167/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12890772 - PreCommit-HIVE-Build

> HS2 Logs print unnecessary stack trace when HoS query is cancelled
> --
>
> Key: HIVE-17718
> URL: https://issues.apache.org/jira/browse/HIVE-17718
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-17718.1.patch, HIVE-17718.2.patch, HIVE-17718.3.patch
>
>
> Example:
> {code}
> 2017-10-05 17:47:11,881 ERROR org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with exception 'java.lang.InterruptedException(sleep interrupted)'
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124)
>   at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
>   at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280)
>   at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236)
>   at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89)
>   at org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>   at org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-10-05 17:47:11,881 WARN  org.apache.hadoop.hive.ql.Driver: [HiveServer2-Handler-Pool: Thread-105]: Shutting down task : Stage-2:MAPRED
> 2017-10-05 17:47:11,882 ERROR org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with exception 'java.lang.InterruptedException(sleep interrupted)'
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
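
For context, a minimal sketch of one common way to keep a cancelled query from producing the ERROR-level stack trace quoted above. This is not the attached patch; the class, field, and method names below are hypothetical. The idea is that the cancellation path records its intent before interrupting the monitor thread, so the InterruptedException handler can tell an expected cancel (log one concise line) from an unexpected interrupt (keep the full trace).

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch only -- not the HIVE-17718 patch.
public class JobMonitorSketch {
  private final AtomicBoolean cancelRequested = new AtomicBoolean(false);

  // Called by the cancellation path before the monitor thread is interrupted.
  public void requestCancel() {
    cancelRequested.set(true);
  }

  // Polls job state until completion; returns false if monitoring was aborted.
  public boolean monitor() {
    while (!jobFinished()) {
      try {
        Thread.sleep(1000L); // poll interval, mirrors the sleep in the trace above
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
        if (cancelRequested.get()) {
          // Expected interrupt from a user cancel: one concise line, no stack trace.
          System.out.println("Job monitor interrupted: query was cancelled");
        } else {
          // Unexpected interrupt: keep the full stack trace for debugging.
          ie.printStackTrace();
        }
        return false;
      }
    }
    return true;
  }

  private boolean jobFinished() {
    // Placeholder: a real monitor would query the remote job status here.
    return false;
  }
}
{code}

The actual fix may take a different shape; the point is only that an interrupt raised by a deliberate cancellation is an expected condition and need not be logged at ERROR level with a stack trace.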

[jira] [Commented] (HIVE-17718) HS2 Logs print unnecessary stack trace when HoS query is cancelled

2017-10-06 Thread Hive QA (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-17718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195448#comment-16195448 ]

Hive QA commented on HIVE-17718:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12890772/HIVE-17718.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 11190 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=231)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=231)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=239)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7166/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7166/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7166/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12890772 - PreCommit-HIVE-Build
