[ 
https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569064#comment-16569064
 ] 

Chia-Ping Tsai commented on HBASE-21008:
----------------------------------------

{quote}What is preferred here?
{quote}
Here are my two cents.
 # Since we have already released 2.0 and 2.1, backporting only the "read" part 
to 1.x is necessary. 
 # If all 1.x deployments should work with hfiles generated by 2.x, we can file 
another Jira to change the serialization from protobuf back to the previous 
format. (Of course, hfiles impacted by HBASE-18754 still can't work with 
1.x... :()

> HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
> ------------------------------------------------------------
>
>                 Key: HBASE-21008
>                 URL: https://issues.apache.org/jira/browse/HBASE-21008
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.1.0, 1.4.6
>            Reporter: Jerry He
>            Priority: Major
>
> It looks like HBase 1.x still cannot open hfiles written by HBase 2.
> I tested the latest HBase 1.4.6 and 2.1.0.  1.4.6 tried to read and open 
> regions written by 2.1.0.
> {code}
> 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] 
> regionserver.StoreFile: Error reading timestamp range data from meta -- 
> proceeding without
> java.lang.IllegalArgumentException: Timestamp cannot be negative. 
> minStamp:5783278630776778969, maxStamp:-4698050386518222402
>         at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112)
>         at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100)
>         at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214)
>         at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521)
>         at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679)
>         at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122)
>         at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538)
>         at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> {code}
> Or:
> {code}
> 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] 
> handler.OpenRegionHandler: Failed open of 
> region=janusgraph,,1532630557542.b0fa777715cb0bf1b0bf740997b7056c., starting 
> to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.EOFException
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834)
>         at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364)
>         at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131)
>         at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.EOFException
>         at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564)
>         at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518)
>         at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         ... 3 more
> Caused by: java.io.EOFException
>         at java.io.DataInputStream.readFully(DataInputStream.java:197)
>         at java.io.DataInputStream.readLong(DataInputStream.java:416)
>         at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.readFields(TimeRangeTracker.java:170)
>         at 
> org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:161)
>         at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRangeTracker(TimeRangeTracker.java:187)
>         at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:197)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521)
>         at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
