[ https://issues.apache.org/jira/browse/HIVE-16422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965364#comment-15965364 ]

Hive QA commented on HIVE-16422:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12862962/HIVE-16422.000.txt

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10570 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.hbase.TestHBaseMetastoreSql.partitionedTable (batchId=201)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4650/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4650/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4650/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12862962 - PreCommit-HIVE-Build

> Should kill running Spark Jobs when a query is cancelled.
> ---------------------------------------------------------
>
>                 Key: HIVE-16422
>                 URL: https://issues.apache.org/jira/browse/HIVE-16422
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 2.1.0
>            Reporter: zhihai xu
>            Assignee: zhihai xu
>         Attachments: HIVE-16422.000.txt
>
>
> Running Spark jobs should be killed when a query is cancelled. When a query 
> is cancelled, Driver.close calls Driver.releaseDriverContext, which calls 
> DriverContext.shutdown, which in turn calls shutdown on every running task:
> {code}
>   public synchronized void shutdown() {
>     LOG.debug("Shutting down query " + ctx.getCmd());
>     shutdown = true;
>     for (TaskRunner runner : running) {
>       if (runner.isRunning()) {
>         Task<?> task = runner.getTask();
>         LOG.warn("Shutting down task : " + task);
>         try {
>           task.shutdown();
>         } catch (Exception e) {
>           console.printError("Exception on shutting down task " + task.getId() + ": " + e);
>         }
>         Thread thread = runner.getRunner();
>         if (thread != null) {
>           thread.interrupt();
>         }
>       }
>     }
>     running.clear();
>   }
> {code}
> Since SparkTask does not override the shutdown method to kill the running 
> Spark job, the Spark job may still be running after the query is cancelled. 
> It would therefore be good to kill the Spark job in SparkTask.shutdown to 
> save cluster resources, as sketched below.
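> For illustration, one possible shape for such an override (a minimal 
> sketch, not the actual patch: it assumes SparkTask keeps the SparkJobRef 
> returned when the Spark job was submitted, and the jobRef field name and 
> error handling here are illustrative):
> {code}
>   // Hypothetical override in SparkTask; assumes a jobRef field holding
>   // the SparkJobRef returned when the Spark job was submitted.
>   @Override
>   public void shutdown() {
>     super.shutdown();
>     if (jobRef != null) {
>       try {
>         // Ask Spark to cancel the running job so executors stop doing
>         // work for a query that has already been cancelled.
>         jobRef.cancelJob();
>       } catch (Exception e) {
>         LOG.warn("Failed to cancel Spark job during shutdown", e);
>       }
>     }
>   }
> {code}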



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
