Hi Stephen, thanks for the answer.

Identifying the query by its JobId is not that easy; I also thought about this.
Our application now adds a unique prefix to all queries. With this we can
identify the job.

Something like this:

-- UUID: 3242-414-124-14...
SELECT * FROM foobar;

Now, our application can filter for job names starting with "-- UUID:
3242-414-124-14..." to kill the query.
But I think this is more a workaround than a reliable solution, isn't it?
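A rough sketch of that filter-and-kill step in Java. The class and helper names here are mine, and the commented-out loop assumes the old `org.apache.hadoop.mapred` JobClient API (as shipped with CDH4); it needs a live JobTracker, so only the name-matching part is shown as running code:

```java
public class KillByPrefix {

    /** True if a Hive MR job's name carries our "-- UUID: ..." comment prefix. */
    static boolean matchesUuid(String jobName, String uuid) {
        return jobName != null && jobName.contains("-- UUID: " + uuid);
    }

    // With the old mapred API, the kill loop would then look roughly like this
    // (hypothetical sketch, requires a live JobTracker and the Hadoop jars):
    //
    //   JobClient jc = new JobClient(new JobConf());
    //   for (JobStatus s : jc.getAllJobs()) {
    //       RunningJob job = jc.getJob(s.getJobID());
    //       if (job != null && matchesUuid(job.getJobName(), uuid)) {
    //           job.killJob();   // same effect as `hadoop job -kill <job_id>`
    //       }
    //   }
}
```

This relies on Hive putting the leading comment of the query into the MR job name, which is exactly the premise of the workaround above.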

Best Regards,
Christian.


2013/6/25 Stephen Sprague <sprag...@gmail.com>

> Well... if the query created a MR job on your cluster then there's always:
>
> 1. use jobtracker to find your job id.
> 2. use hadoop job -kill <job_id>  to nuke it.
>
> you're saying there's no way to interrupt/kill the query from the client?
> That very well may be the case.
>
>
> On Tue, Jun 25, 2013 at 10:22 AM, Christian Schneider <
> cschneiderpub...@gmail.com> wrote:
>
>> I figured out that there are two implementations of the Hive JDBC driver
>> in the hive-jdbc-0.10-cdh4.2.0 jar.
>>
>> 1. org.apache.hadoop.hive.jdbc.HiveStatement
>> 2. org.apache.hive.jdbc.HiveStatement
>>
> >> The first one implements .close() and .cancel(), but even so it does not
> >> kill the running jobs on the cluster.
>>
>> Any suggestions?
>>
>>
>> 2013/6/25 Christian Schneider <cschneiderpub...@gmail.com>
>>
>>> Hi,
>>> is it possible to kill a running query (including all the hadoop jobs
>>> behind)?
>>>
>>> I think it's not, because the Hive JDBC Driver doesn't implement
>>> .close() and .cancel() on the (prepared) statement.
>>>
> >>> The attached code shows the problem.
>>>
> >>> Before the statement gets executed, it spawns a thread that tries to
> >>> stop the execution of the query after 10 seconds.
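[The timeout thread described here can be sketched against plain java.sql. A minimal sketch, with names of my choosing; whether the cancel actually reaches the cluster depends on the driver, which is the whole question of this thread:]

```java
import java.sql.SQLException;
import java.sql.Statement;

public class QueryWatchdog {

    /** Spawn a daemon thread that calls stmt.cancel() after timeoutMillis. */
    static Thread cancelAfter(final Statement stmt, final long timeoutMillis) {
        Thread watchdog = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(timeoutMillis);
                    // A no-op in org.apache.hadoop.hive.jdbc as observed above;
                    // a real driver would interrupt the running query here.
                    stmt.cancel();
                } catch (InterruptedException e) {
                    // query finished first; nothing to cancel
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        });
        watchdog.setDaemon(true);
        watchdog.start();
        return watchdog;
    }

    // usage sketch:
    //   Thread w = cancelAfter(stmt, 10000);
    //   stmt.executeQuery(hql);
    //   w.interrupt();  // query returned in time, disarm the watchdog
}
```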
>>>
>>> Are there any other ways to stop the job on the cluster?
>>>
> >>> I could do it via the JobClient, but for that I need the JobId.
>>>
>>> Thanks a lot.
>>>
>>>
>>> Best Regards,
>>>
>>> Christian.
>>>
>>
>>
>
