Thanks, James! Is there a JIRA ref for the fix?

On Nov 26, 2016 11:50 AM, "James Taylor" <jamestay...@apache.org> wrote:

> I believe that issue has been fixed. The 4.4 release is 1 1/2 years old
> and we've had five releases since then that have fixed hundreds of bugs.
> Please encourage your vendor to provide a more recent release.
>
> Thanks,
> James
>
> On Sat, Nov 26, 2016 at 10:23 AM Neelesh <neele...@gmail.com> wrote:
>
>> Hi All,
>>   we are using Phoenix 4.4 with HBase 1.1.2 (Hortonworks distribution).
>> We're struggling with the following error on pretty much all of our region
>> servers. The indexes are global, and the data table has more than 100B rows:
>>
>> 2016-11-26 12:15:41,250 INFO  [RW.default.writeRpcServer.handler=40,queue=6,port=16020]
>> util.IndexManagementUtil: Rethrowing org.apache.hadoop.hbase.DoNotRetryIOException:
>> ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.
>> key=7015231383024113337
>> region=<table>,<keyprefix>-056946674,1477336770695.07d70ebd63f737a62e24387cf0912af5.
>> Index update failed
>>
>> I looked at https://issues.apache.org/jira/browse/PHOENIX-1718 and bumped
>> up the settings mentioned there to 1 hour:
>>
>> <property>
>>   <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
>>   <value>3600000</value>
>> </property>
>> <property>
>>   <name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs</name>
>>   <value>3600000</value>
>> </property>
>>
>> but to no avail.
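>>
>> One more thing I'm planning to try, in case the cache entry expires while a
>> large commit is still in flight, is shrinking the client-side commit batch
>> size so each batch completes well within the TTL. The property below is the
>> one listed on the Phoenix tuning page; I'm not certain of the exact name or
>> default in 4.4, so treat this as a sketch rather than a known fix:
>>
>> <property>
>>   <name>phoenix.mutate.batchSize</name>
>>   <value>500</value>
>> </property>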
>>
>> Any help is appreciated!
>>
>> Thanks!
>>
>>
