Thank you, Shreepadma. I don't see a stack trace. Below is the full execution
log:

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201301081859_0002, Tracking URL =
http://host:50030/jobdetails.jsp?jobid=job_201301081859_0002
Kill Command = /usr/lib/hadoop/bin/hadoop job
 -Dmapred.job.tracker=host:8021 -kill job_201301081859_0002
Hadoop job information for Stage-1: number of mappers: 1; number of
reducers: 1
2013-01-09 00:28:52,147 Stage-1 map = 0%,  reduce = 0%
2013-01-09 00:28:56,190 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
1.29 sec
2013-01-09 00:28:57,204 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
1.29 sec
2013-01-09 00:28:58,214 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
1.29 sec
2013-01-09 00:28:59,224 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
1.29 sec
2013-01-09 00:29:00,233 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
1.29 sec
2013-01-09 00:29:01,243 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
1.29 sec
2013-01-09 00:29:02,253 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
1.29 sec
2013-01-09 00:29:03,262 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
1.29 sec
2013-01-09 00:29:04,276 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
3.2 sec
2013-01-09 00:29:05,288 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
3.2 sec
2013-01-09 00:29:06,299 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
3.2 sec
2013-01-09 00:29:07,311 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
3.2 sec
MapReduce Total cumulative CPU time: 3 seconds 200 msec
Ended Job = job_201301081859_0002
Loading data to table TAB_INDEX
rmr: DEPRECATED: Please use 'rm -r' instead.
Moved: 'hdfs://host:8020/hive/optqc5d.db/TAB_INDEX' to trash at:
hdfs://host:8020/user/saachhra/.Trash/Current
Invalid alter operation: Unable to alter index.
Table TAB_INDEX stats: [num_partitions: 0, num_files: 1, num_rows: 0,
total_size: 5475, raw_data_size: 0]
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
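
The console output above is all I have; as a sketch of what I tried, assuming the console output was saved to a file named execution.log (the file name is hypothetical), I pulled out the failure lines like this. The Hive CLI's own log file, which normally holds the full stack trace, defaults to /tmp/$USER/hive.log (controlled by hive.log.dir), but I see nothing more there either.

```shell
# Recreate the saved console output (hypothetical file name: execution.log).
cat > execution.log <<'EOF'
Invalid alter operation: Unable to alter index.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
EOF

# Show the failure line with its line number.
grep -n "FAILED" execution.log
```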

Good wishes, always!
Santosh


On Wed, Jan 9, 2013 at 6:16 AM, Shreepadma Venugopalan <
shreepa...@cloudera.com> wrote:

> Hi Santosh,
>
> The execution log will contain the stack trace of the exception that
> caused the task to fail. It would help to look into the execution log and
> attach it to the email.
>
> Thanks.
> Shreepadma
>
>
> On Tue, Jan 8, 2013 at 7:40 AM, Santosh Achhra <santoshach...@gmail.com> wrote:
>
>> Hello Hive Users,
>>
>> After I execute the ALTER statement below, I get the error shown below.
>> Even though the log shows that the index file was moved to the trash, I
>> still see the index file present, and I am not able to understand why I
>> am getting this error.
>>
>> ALTER INDEX TAB_INDEX ON TABLE REBUILD;
>>
>> 2013-01-08 15:30:46,602 Stage-1 map = 100%,  reduce = 100%, Cumulative
>> CPU 3.41 sec
>> 2013-01-08 15:30:47,612 Stage-1 map = 100%,  reduce = 100%, Cumulative
>> CPU 3.41 sec
>> MapReduce Total cumulative CPU time: 3 seconds 410 msec
>> Ended Job = job_201301072354_0115
>> Loading data to table TABLE_index_table
>> rmr: DEPRECATED: Please use 'rm -r' instead.
>> Moved: 'hdfs://host:8020/hive/optqc5d.db/TABLE_index_table' to trash
>> at: hdfs://host:8020/user/saachhra/.Trash/Current
>> Invalid alter operation: Unable to alter index.
>> FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.DDLTask
>>
>> Good wishes,always !
>> Santosh
>>
>
>
