Well... if the query created an MR job on your cluster then there's always:

1. Use the JobTracker to find your job ID.
2. Use hadoop job -kill <job_id> to nuke it.
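For reference, the same kill can be issued programmatically through the old org.apache.hadoop.mapred JobClient API (the "Job Client" Christian mentions further down in the thread). This is a minimal sketch, assuming an MR1/CDH4-era cluster; the JobTracker address and the job ID are placeholders, not values from this thread:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class KillHiveJob {
    public static void main(String[] args) throws Exception {
        // Point the client at the JobTracker (placeholder host/port).
        JobConf conf = new JobConf();
        conf.set("mapred.job.tracker", "jobtracker.example.com:8021");

        JobClient client = new JobClient(conf);
        // Job ID as shown in the JobTracker UI, e.g. "job_201306251022_0001" (made up).
        RunningJob job = client.getJob(JobID.forName(args[0]));
        if (job != null && !job.isComplete()) {
            job.killJob(); // same effect as: hadoop job -kill <job_id>
        }
        client.close();
    }
}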
You're saying there's no way to interrupt/kill the query from the client? That very well may be the case.

On Tue, Jun 25, 2013 at 10:22 AM, Christian Schneider <cschneiderpub...@gmail.com> wrote:

> I figured out that there are two implementations of the Hive JDBC driver
> in the hive-jdbc-0.10-cdh4.2.0 jar:
>
> 1. org.apache.hadoop.hive.jdbc.HiveStatement
> 2. org.apache.hive.jdbc.HiveStatement
>
> The first implements .close() and .cancel(), but calling them does not
> kill the running jobs on the cluster anyway.
>
> Any suggestions?
>
>
> 2013/6/25 Christian Schneider <cschneiderpub...@gmail.com>
>
>> Hi,
>> is it possible to kill a running query (including all the Hadoop jobs
>> behind it)?
>>
>> I think it's not, because the Hive JDBC driver doesn't implement
>> .close() and .cancel() on the (prepared) statement.
>>
>> The attached code shows the problem.
>>
>> Before the statement gets executed, it spawns a thread that tries to
>> stop the execution of the query after 10 seconds.
>>
>> Are there any other ways to stop the job on the cluster?
>>
>> I could do it via the JobClient, but for that I need the job ID.
>>
>> Thanks a lot.
>>
>>
>> Best regards,
>>
>> Christian.
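The attached code isn't preserved in the archive. A minimal sketch of the test described above might look like the following; the connection URL, credentials, table name, and query are placeholder assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CancelHiveQueryTest {
    public static void main(String[] args) throws Exception {
        // The newer (HiveServer2) driver from the jar; the older one is
        // org.apache.hadoop.hive.jdbc.HiveDriver with a jdbc:hive:// URL.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hiveserver.example.com:10000/default", "user", "");
        final Statement stmt = conn.createStatement();

        // Spawn a thread that tries to stop the query 10 seconds in.
        new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(10 * 1000L);
                    // Per the thread, in hive-jdbc-0.10-cdh4.2.0 this either
                    // isn't implemented or doesn't kill the MR jobs on the cluster.
                    stmt.cancel();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();

        // Placeholder long-running query that spawns MR jobs.
        stmt.execute("SELECT COUNT(*) FROM some_big_table");
        stmt.close();
        conn.close();
    }
}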