And what about: hbase 0.90 export && distcp hftp://hdfs0.20/ hdfs://hdfs1.0/ && hbase 0.92 import?
Then switch the clients (a REST interface), and recover the last few updates with the same approach, limiting the export with a start time: http://hadoop.apache.org/docs/hdfs/current/hftp.html
Could this be safe, with minimal downtime?
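Roughly, the sequence would be something like this (untested sketch; table name, namenode hosts, ports and paths are placeholders):

  # On the 0.90 cluster: full export of one table to HDFS
  hbase org.apache.hadoop.hbase.mapreduce.Export mytable /export/mytable

  # From the new cluster: pull the files over hftp (read-only, works across HDFS versions)
  hadoop distcp hftp://old-namenode:50070/export/mytable hdfs://new-namenode:8020/export/mytable

  # On the 0.92 cluster: import into a pre-created table with the same column families
  hbase org.apache.hadoop.hbase.mapreduce.Import mytable /export/mytable

  # After switching the clients: catch-up export of recent updates only.
  # Export usage is <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]],
  # so 1 version, starting from an epoch-ms timestamp (placeholder value):
  hbase org.apache.hadoop.hbase.mapreduce.Export mytable /export/mytable-delta 1 1348819200000

The delta then takes the same distcp + Import path as the full export.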
Cheers,

2012/9/28 n keywal <[email protected]>

> Depending on what you're doing with the data, I guess you might have some
> corner cases, especially after a major compaction. That may be a
> non-trivial piece of code to write (again, it depends on how you use
> HBase; maybe it is actually trivial).
> And, if you're pessimistic, the regression in 0.92 can be one of those
> that corrupts the data, so you would need manual data fixes as well
> during the rollback.
>
> It may be simpler to secure the migration by investing more in the
> testing process (dry/parallel runs). As well, if you find bugs while a
> release is in progress, it increases your chances of getting your bugs
> fixed...
>
> Nicolas
>
> On Thu, Sep 27, 2012 at 10:37 AM, Damien Hardy <[email protected]> wrote:
>
> > Actually, I have an old cluster on prod with 0.90.3 installed manually,
> > and I am working on a new CDH4 cluster deployed fully automatically
> > with Puppet.
> > Since the migration is not reversible (according to the pointer given
> > by Jean-Daniel), I would like to keep the old cluster safe on the side,
> > to be able to revert the operation.
> > Switching from an old vanilla version to a Cloudera one is another risk
> > introduced by migrating the actual cluster, and I'm not feeling
> > comfortable with that.
> > My idea is to copy the data from old to new, switch the clients to the
> > new cluster, and I am looking for the best strategy to manage that.
> >
> > A scanner based on timestamps should be enough to get the last updates
> > after switching (but trying to keep it short).
> >
> > Cheers,
> >
> > --
> > Damien
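For the timestamp-based catch-up scan mentioned above, a minimal, untested sketch against the 0.90 client API; the table name and the replay step are placeholders, and note that a time-range scan catches puts but not deletes:

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;

  public class CatchUpScan {
    // Replay every cell written at or after sinceMs (when the bulk copy started).
    public static void replaySince(long sinceMs) throws IOException {
      HTable table = new HTable(HBaseConfiguration.create(), "mytable"); // placeholder table name
      try {
        Scan scan = new Scan();
        scan.setTimeRange(sinceMs, Long.MAX_VALUE); // server-side filter on cell timestamps
        ResultScanner scanner = table.getScanner(scan);
        try {
          for (Result row : scanner) {
            // push 'row' to the 0.92 cluster here (second client, REST gateway, ...)
          }
        } finally {
          scanner.close();
        }
      } finally {
        table.close();
      }
    }
  }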
> >
> > 2012/9/27 n keywal <[email protected]>
> >
> > > You don't have to migrate the data when you upgrade; it's done on the
> > > fly. But it seems you want to do something more complex? A kind of
> > > realtime replication between two clusters in two different versions?
> > >
> > > On Thu, Sep 27, 2012 at 9:56 AM, Damien Hardy <[email protected]> wrote:
> > >
> > > > Hello,
> > > >
> > > > Corollary: what is the best way to migrate data from a 0.90 cluster
> > > > to a 0.92 cluster?
> > > >
> > > > HBase 0.90 => client 0.90 => stdout | stdin => client 0.92 => HBase 0.92
> > > >
> > > > All the data must transit through the single host where the two
> > > > clients run.
> > > >
> > > > It could maybe be parallelized with multiple instances working on
> > > > different range scanners, but that is not so easy.
> > > >
> > > > Is there a CopyTable version that can read from 0.90 and write to
> > > > 0.92, as a MapReduce job?
> > > >
> > > > Maybe there is some sort of namespace available for Java classes, so
> > > > that we could use two versions of the same package and go for a
> > > > MapReduce job?
> > > >
> > > > Cheers,
> > > >
> > > > --
> > > > Damien
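For reference, CopyTable (as of 0.92) can write to a remote cluster via --peer.adr, but it runs as a single client, so both clusters must speak that client's RPC version; that is exactly what breaks between 0.90 and 0.92. A sketch of the same-version usage (ZooKeeper quorum, start time and table name are placeholders):

  # Copy a table to a second cluster of the same version, updates since a start time only
  hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --starttime=1348819200000 \
    --peer.adr=new-zk1,new-zk2,new-zk3:2181:/hbase \
    mytable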
> > > >
> > > > 2012/9/25 Jean-Daniel Cryans <[email protected]>
> > > >
> > > > > It's not compatible. Like the guide says[1]:
> > > > >
> > > > > "replace your hbase 0.90.x with hbase 0.92.0 binaries (be sure you
> > > > > clear out all 0.90.x instances) and restart (You cannot do a rolling
> > > > > restart from 0.90.x to 0.92.x -- you must restart)"
> > > > >
> > > > > This includes the client.
> > > > >
> > > > > J-D
> > > > >
> > > > > 1. http://hbase.apache.org/book.html#upgrade0.92
> > > > >
> > > > > On Tue, Sep 25, 2012 at 11:16 AM, Agarwal, Saurabh
> > > > > <[email protected]> wrote:
> > > > > > Hi,
> > > > > >
> > > > > > We recently upgraded HBase 0.90.4 to HBase 0.92. Our HBase app
> > > > > > worked fine on 0.90.4.
> > > > > >
> > > > > > Our new setup has an HBase 0.92 server and an HBase 0.90.4
> > > > > > client, and it throws the following exception when the client
> > > > > > tries to connect to the server.
> > > > > >
> > > > > > Is anyone running an HBase 0.92 server with an HBase 0.90.4
> > > > > > client? Let me know.
> > > > > >
> > > > > > Thanks,
> > > > > > Saurabh.
> > > > > >
> > > > > > 12/09/24 14:58:31 INFO zookeeper.ClientCnxn: Session establishment
> > > > > > complete on server vm-3733-969C.nam.nsroot.net/10.49.217.56:2181,
> > > > > > sessionid = 0x139f61977650034, negotiated timeout = 60000
> > > > > >
> > > > > > java.lang.IllegalArgumentException: Not a host:port pair: ?
> > > > > >   at org.apache.hadoop.hbase.HServerAddress.<init>(HServerAddress.java:60)
> > > > > >   at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
> > > > > >   at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
> > > > > >   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:786)
> > > > > >   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:766)
> > > > > >   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:895)
> > > > > >   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:797)
> > > > > >   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:766)
> > > > > >   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:895)
> > > > > >   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:801)
> > > > > >   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:766)
> > > > > >   at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:179)
> > > > > >   at org.apache.hadoop.hbase.HBaseTestingUtility.truncateTable(HBaseTestingUtility.java:609)
> > > > > >   at com.citi.sponge.flume.sink.ELFHbaseSinkTest.testAppend2(ELFHbaseSinkTest.java:221)
> > > > > >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> > > > > >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> > > > > >   at java.lang.reflect.Method.invoke(Unknown Source)
> > > > > >   at junit.framework.TestCase.runTest(TestCase.java:168)
> > > > > >   at junit.framework.TestCase.runBare(TestCase.java:134)
> > > > > >   at junit.framework.TestResult$1.protect(TestResult.java:110)
> > > > > >   at junit.framework.TestResult.runProtected(TestResult.java:128)
> > > > > >   at junit.framework.TestResult.run(TestResult.java:113)
> > > > > >   at junit.framework.TestCase.run(TestCase.java:124)
> > > > > >   at junit.framework.TestSuite.runTest(TestSuite.java:232)
> > > > > >   at junit.framework.TestSuite.run(TestSuite.java:227)
> > > > > >   at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:81)
> > > > > >   at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> > > > > >   at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> > > > > >   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> > > > > >   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> > > > > >   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> > > > > >   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> >
> > --
> > Damien HARDY
> > IT Infrastructure Architect
> > Viadeo - 30 rue de la Victoire - 75009 Paris - France
> > T : +33 1 80 48 39 73 – F : +33 1 42 93 22 56

--
Damien HARDY
IT Infrastructure Architect
Viadeo - 30 rue de la Victoire - 75009 Paris - France
T : +33 1 80 48 39 73 – F : +33 1 42 93 22 56
