Also check the Hive Metastore log files.

In general, if the table has a large number of partitions, incremental stats
will have very large overhead in terms of metadata.

I would recommend running "compute stats bi_full" and then manually setting
the row count for newly added partitions whenever possible, along these lines:
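A minimal sketch of that workflow, assuming the table is bi_full, reusing the
partition spec from your example, and assuming your Impala version supports
setting row counts via table properties (the numRows value is a placeholder
to replace with your own count):

  compute stats bi_full;

  -- manually set the row count for a newly added partition
  alter table bi_full partition (param1=0, day='2017-10-04', hour=00, host_platform='EU')
    set tblproperties ('numRows'='1234567', 'STATS_GENERATED_VIA_STATS_TASK'='true');

This avoids rerunning incremental stats across all partitions while still
keeping per-partition row counts available to the planner.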

On Wed, Jan 3, 2018 at 10:36 AM, Alexander Behm <[email protected]>
wrote:

> Thanks for the report. I have not seen this issue. Looks like the alter
> RPC is rejected by the Hive Metastore. Maybe looking into the
> Hive/Metastore logs would help.
>
> The SHOW CREATE TABLE output might also help us debug.
>
> On Wed, Jan 3, 2018 at 10:28 AM, Piyush Narang <[email protected]>
> wrote:
>
>> Hi folks,
>>
>>
>>
>> I’m running into some issues when I try to compute incremental stats in
>> Impala that I was hoping someone would be able to help with. I’m able to
>> ‘compute stats’ in Impala on my smaller tables just fine. When I try
>> computing stats incrementally for one of my larger tables, I seem to be
>> running into this error:
>>
>> > compute incremental stats bi_full partition
>> (param1=0,day='2017-10-04',hour=00,host_platform='EU');
>>
>> Query: compute incremental stats bi_full partition
>> (param1=0,day='2017-10-04',hour=00,host_platform='EU')
>>
>> WARNINGS: ImpalaRuntimeException: Error making 'alter_partitions' RPC to
>> Hive Metastore:
>>
>> CAUSED BY: InvalidOperationException: alter is not possible
>>
>>
>>
>> Looking at impalad.INFO and catalogd.INFO I don’t see any additional
>> details. I verified that I’m the owner of the tables in HDFS.
>>
>>
>>
>> Has anyone run into this issue in the past? Any workarounds?
>>
>>
>>
>> Thanks,
>>
>>
>>
>> -- Piyush
>>
>>
>>
>
>
