[
https://issues.apache.org/jira/browse/PHOENIX-1294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150399#comment-14150399
]
Lars Hofhansl commented on PHOENIX-1294:
----------------------------------------
Generally this is actually a lot to ask. We do not do that in HBase either...
You only get roll-forward.
On the other hand, between minor versions (such as 4.1 and 4.2) we should not
break tables in such a way.
We still need to have the discussion about what exactly we want to allow in
major/minor/patch versions.
[~giacomotaylor], [~jesse_yates], FYI.
> Provide rollback capability
> ---------------------------
>
> Key: PHOENIX-1294
> URL: https://issues.apache.org/jira/browse/PHOENIX-1294
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.2
> Reporter: Carter Shanklin
>
> I was having trouble with 4.2 causing my regionservers to abort (something to
> do with the stats table), so I decided to revert to 4.1. This proved difficult
> because 4.1 does not have certain classes required by the system tables
> Phoenix creates:
> {code}
> 2014-09-25 11:30:44,147 ERROR [RS_OPEN_REGION-sandbox:60020-1] coprocessor.CoprocessorHost: The coprocessor org.apache.phoenix.schema.stat.StatisticsCollector threw an unexpected exception
> java.io.IOException: No jar path specified for org.apache.phoenix.schema.stat.StatisticsCollector
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>     at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>     at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>     at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>     at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> {code}
> This caused the regionservers to abort, so I couldn't drop or modify the
> table.
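> A possible workaround, sketched here only under assumptions and not verified:
> if the regionservers can be kept alive despite the load failure (for example
> by setting hbase.coprocessor.abortonerror to false in hbase-site.xml), the
> stale coprocessor entry could be stripped from the table descriptor with the
> plain HBase client API. The table name below is illustrative.
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
>
> // Sketch only: removes the Phoenix coprocessor entry that points at a class
> // no longer on the classpath, so the region can open again without it.
> public class StripStaleCoprocessor {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = HBaseConfiguration.create();
>         HBaseAdmin admin = new HBaseAdmin(conf);
>         try {
>             TableName table = TableName.valueOf("MY_TABLE"); // illustrative name
>             HTableDescriptor desc = admin.getTableDescriptor(table);
>             // Drop the coprocessor entry pointing at the missing class.
>             desc.removeCoprocessor("org.apache.phoenix.schema.stat.StatisticsCollector");
>             admin.disableTable(table);
>             admin.modifyTable(table, desc);
>             admin.enableTable(table);
>         } finally {
>             admin.close();
>         }
>     }
> }
> {code}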
> So then I removed any and all Phoenix JARs from the HBase classpath and got
> this:
> {code}
> 2014-09-25 09:12:33,791 ERROR [RS_OPEN_REGION-sandbox:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.phoenix.coprocessor.MetaDataRegionObserver threw an unexpected exception
> java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.MetaDataRegionObserver
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>     at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>     at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>     at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>     at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> {code}
> At this point I'm planning to delete all my HBase data to work around the
> issue. No loss; there wasn't any real data in there.
> But it would be good if rolling back were easier.
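> A narrower alternative to wiping all HBase data, again only a sketch under
> the assumption that the coprocessor load failures are tolerated (e.g.
> hbase.coprocessor.abortonerror=false): disable and drop just the Phoenix
> system tables so a client of the target version can recreate them. Note this
> discards Phoenix metadata, and the exact set of SYSTEM.* table names here is
> an assumption.
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
>
> // Sketch only: drops the Phoenix system tables (names assumed) so that a
> // Phoenix client of the desired version can recreate them on next connect.
> public class DropPhoenixSystemTables {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = HBaseConfiguration.create();
>         HBaseAdmin admin = new HBaseAdmin(conf);
>         try {
>             for (String name : new String[] {"SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS"}) {
>                 TableName table = TableName.valueOf(name);
>                 if (!admin.tableExists(table)) {
>                     continue; // skip tables that were never created
>                 }
>                 if (admin.isTableEnabled(table)) {
>                     admin.disableTable(table);
>                 }
>                 admin.deleteTable(table);
>             }
>         } finally {
>             admin.close();
>         }
>     }
> }
> {code}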
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)