Is the assumption that hbase:meta would not split? In another thread, Francis Liu was proposing support for a splittable hbase:meta in the 2.0 release.
Cheers

> On Dec 6, 2016, at 2:20 AM, 张铎 <palomino...@gmail.com> wrote:
>
> Could this be solved by hosting meta only on master?
>
> BTW, MetaCellComparator is introduced in HBASE-10800.
>
> Thanks.
>
> 2016-12-06 17:44 GMT+08:00 Ted Yu <yuzhih...@gmail.com>:
>
>> Hi,
>> When I restarted a cluster with 1.1, I found that the hbase:meta region
>> (written to by the previously deployed 2.0) couldn't be opened:
>>
>> Caused by: java.io.IOException:
>> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading
>> HFile Trailer from file hdfs://yz1.xx.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/599fc8a37311414e876803312009a986
>>   at org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:579)
>>   at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:534)
>>   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:275)
>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5150)
>>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:912)
>>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:909)
>>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>   ... 3 more
>> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem
>> reading HFile Trailer from file hdfs://yz1.xx.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/599fc8a37311414e876803312009a986
>>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:483)
>>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:511)
>>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1128)
>>   at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:267)
>>   at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:409)
>>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:517)
>>   at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:687)
>>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:130)
>>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:554)
>>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:551)
>>   ... 6 more
>> Caused by: java.io.IOException: java.lang.ClassNotFoundException:
>> org.apache.hadoop.hbase.CellComparator$MetaCellComparator
>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581)
>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300)
>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242)
>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407)
>>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:468)
>>   ... 15 more
>> Caused by: java.lang.ClassNotFoundException:
>> org.apache.hadoop.hbase.CellComparator$MetaCellComparator
>>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>   at java.lang.Class.forName0(Native Method)
>>   at java.lang.Class.forName(Class.java:264)
>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:579)
>>
>> When a user does a rolling upgrade from 1.1 to 2.0, the above may cause a
>> problem if the hbase:meta region is updated by a server running 2.0 but is
>> later assigned to a region server which still runs 1.1 (due to a crash of
>> the server running 2.0, for example).
>>
>> I want to get community feedback on the severity of this issue.
>>
>> Thanks
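For anyone following along: the innermost `Caused by` in the quoted trace shows the mechanism. The HFile trailer records the comparator's fully qualified class name, and `FixedFileTrailer.getComparatorClass` resolves it reflectively when the file is opened; a 1.1 classpath simply has no `CellComparator$MetaCellComparator`, so the lookup throws and the open fails as a `CorruptHFileException`. A minimal standalone sketch of that failure mode (the `TrailerComparatorDemo` class and `resolves` helper are hypothetical names for illustration, not HBase code):

```java
// Sketch of the reflective comparator lookup that HFile open performs.
// Assumption: run on a classpath without HBase 2.0 jars, mimicking a 1.1
// region server trying to read a 2.0-written hbase:meta store file.
public class TrailerComparatorDemo {

    /** Returns true if the named comparator class is resolvable on this classpath. */
    static boolean resolves(String comparatorClassName) {
        try {
            Class.forName(comparatorClassName);
            return true;
        } catch (ClassNotFoundException e) {
            // This is the exception the 1.1 reader wraps in an IOException,
            // which then surfaces as CorruptHFileException higher up.
            return false;
        }
    }

    public static void main(String[] args) {
        // Prints "false" on any classpath lacking the 2.0-only nested class.
        System.out.println(
            resolves("org.apache.hadoop.hbase.CellComparator$MetaCellComparator"));
    }
}
```

This is why the failure looks like file corruption even though the bytes on disk are fine: the trailer deserializes correctly, but the class name it carries is meaningless to the older reader.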