So it looks like I have some tables created with a previous version of
Phoenix, before the migration to the Apache project.
The metadata on those tables has coprocessors defined like this:

coprocessor$5 =>
'|com.salesforce.hbase.index.Indexer|1073741823|com.salesforce.hbase.index.codec.class=com.salesforce.phoenix.index.PhoenixIndexCodec,index.builder=com.salesforce.phoenix.index.PhoenixIndexBuilder',
coprocessor$4 =>
'|com.salesforce.phoenix.coprocessor.ServerCachingEndpointImpl|1|',
coprocessor$3 =>
'|com.salesforce.phoenix.coprocessor.GroupedAggregateRegionObserver|1|',
coprocessor$2 =>
'|com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|',
coprocessor$1 => '|com.salesforce.phoenix.coprocessor.ScanRegionObserver|1|'
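For context, each coprocessor attribute value is a string of '|'-separated fields: jar path, class name, priority, and optional key=value arguments, where an empty jar path means the class is expected on the region server classpath. A small sketch of that parsing (my own illustration, not HBase's actual parser) makes the format explicit:

```python
def parse_coprocessor_spec(spec):
    # Coprocessor attribute format:
    #   <jar path>|<class name>|<priority>|<key=value,... args (optional)>
    parts = spec.split('|')
    if len(parts) < 3:
        raise ValueError("invalid coprocessor specification: %r" % spec)
    path = parts[0]            # empty string => load from the classpath
    cls = parts[1]
    priority = int(parts[2])
    args = {}
    if len(parts) > 3 and parts[3]:
        args = dict(kv.split('=', 1) for kv in parts[3].split(','))
    return path, cls, priority, args

path, cls, priority, args = parse_coprocessor_spec(
    '|com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|')
# path is '' here, so HBase can only load the class if the old
# com.salesforce jar is still on the region server classpath.
```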


Clearly the metadata still references the old package names, which won't
work with the latest Phoenix.

What do I need to do to be able to run the latest Phoenix without losing
data?
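For what it's worth, HBase does let you unset and re-set table attributes like these from the HBase shell, so one conceivable approach (a sketch only, assuming a table named EXAMPLE_TABLE and the renamed org.apache.phoenix classes; I have not verified this is the supported upgrade path or that it preserves everything) would be:

```
disable 'EXAMPLE_TABLE'
alter 'EXAMPLE_TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
# ... repeat for coprocessor$2 through coprocessor$5 ...
alter 'EXAMPLE_TABLE',
  'coprocessor' => '|org.apache.phoenix.coprocessor.ScanRegionObserver|1|'
# ... re-add the remaining coprocessors with their org.apache names ...
enable 'EXAMPLE_TABLE'
```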

Thanks

Sean






On Thu, Feb 6, 2014 at 11:50 AM, Sean Huo <[email protected]> wrote:

> I pushed the latest Phoenix jar to the region servers and restarted them.
> There are tons of exceptions pertaining to the coprocessors, like:
>
> 2014-02-06 11:39:00,570 DEBUG
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Loading coprocessor
> class com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver
> with path null and priority 1
>
> 2014-02-06 11:39:00,571 WARN
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost: attribute
> 'coprocessor$2' has invalid coprocessor specification
> '|com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|'
>
> 2014-02-06 11:39:00,571 WARN
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost:
> java.io.IOException: No jar path specified for
> com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver
>
> at
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:183)
>
>
> I understand that the new code is now under Apache and the package name
> has been changed to org.apache.phoenix, so the error is understandable.
>
> Are there any migrations that have to be undertaken to get rid of the
> errors?
>
>
> Thanks
>
> Sean
>
