[
https://issues.apache.org/jira/browse/PHOENIX-1294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150427#comment-14150427
]
James Taylor commented on PHOENIX-1294:
---------------------------------------
This is *not* an upgrade issue at all. 4.2 has not been released yet. If you
want to try the pre-release version, it's best to try it on a test cluster. You
may be able to snapshot your SYSTEM.CATALOG table and roll back to the earlier
one, but there's no guarantee, as we may need to make metadata changes on
existing tables to add coprocessors. In general, it's nearly impossible to
allow a rollback to the previous release, even among released versions, as
HBase does not version metadata changes.
[~lhofhansl]
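For reference, the snapshot approach looks roughly like this from the HBase shell, assuming snapshots are enabled on the cluster (the snapshot name here is illustrative, and restoring requires the table to be disabled first):
{code}
hbase> snapshot 'SYSTEM.CATALOG', 'catalog-pre-4.2'
... try the pre-release ...
hbase> disable 'SYSTEM.CATALOG'
hbase> restore_snapshot 'catalog-pre-4.2'
hbase> enable 'SYSTEM.CATALOG'
{code}
Again, no guarantee: if the pre-release altered other tables' descriptors, restoring SYSTEM.CATALOG alone won't undo those changes.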
> Provide rollback capability
> ---------------------------
>
> Key: PHOENIX-1294
> URL: https://issues.apache.org/jira/browse/PHOENIX-1294
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.2
> Reporter: Carter Shanklin
>
> I was having trouble with 4.2 causing my regionservers to abort (something to
> do with the stats table), so I decided to revert to 4.1. This proved difficult
> since 4.1 did not have certain classes that were required by the system
> tables Phoenix creates:
> {code}
> 2014-09-25 11:30:44,147 ERROR [RS_OPEN_REGION-sandbox:60020-1] coprocessor.CoprocessorHost: The coprocessor org.apache.phoenix.schema.stat.StatisticsCollector threw an unexpected exception
> java.io.IOException: No jar path specified for org.apache.phoenix.schema.stat.StatisticsCollector
> 	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
> 	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
> 	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
> 	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
> 	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
> 	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> {code}
> This caused the regionservers to abort, so I couldn't drop or modify the
> table.
> So I then removed any and all Phoenix JARs from the HBase classpath and got
> this:
> {code}
> 2014-09-25 09:12:33,791 ERROR [RS_OPEN_REGION-sandbox:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.phoenix.coprocessor.MetaDataRegionObserver threw an unexpected exception
> java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.MetaDataRegionObserver
> 	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
> 	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
> 	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
> 	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
> 	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
> 	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> {code}
> At this point I'm planning to delete all my HBase data to work around the
> issue. No loss; there wasn't any real data in there.
> But it would be good if rolling back were easier.
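A possible workaround short of wiping all HBase data: the "No jar path specified" error means the coprocessor class names are still recorded in the table descriptors, so removing the JARs from the classpath cannot help by itself. One may be able to unset those attributes from the HBase shell so the regions open without the Phoenix classes. This is a sketch only; the attribute name coprocessor$1 is illustrative, so check the describe output for the actual keys, and repeat for each affected table:
{code}
hbase> describe 'SYSTEM.CATALOG'
hbase> disable 'SYSTEM.CATALOG'
hbase> alter 'SYSTEM.CATALOG', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
hbase> enable 'SYSTEM.CATALOG'
{code}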
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)