[ https://issues.apache.org/jira/browse/HBASE-1073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12658346#action_12658346 ]

stack commented on HBASE-1073:
------------------------------

Here is how the first stacktrace set ends (it's missing from the post above):

{code}
....
2008-12-17 16:21:59,162 WARN org.apache.hadoop.hdfs.DFSClient: NotReplicatedYetException sleeping /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data retries left 1
2008-12-17 16:22:02,376 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data File does not exist. [Lease.  Holder: DFSClient_-425857031, pendingcreates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1331)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1322)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1250)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:892)
        at org.apache.hadoop.ipc.Client.call(Client.java:696)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at $Proxy1.addBlock(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2815)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2697)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
2008-12-17 16:22:02,376 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-2981913121263453157_39681 bad datanode[0] nodes == null
2008-12-17 16:22:02,376 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Aborting...
2008-12-17 16:22:02,425 ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction/Split failed for region streamitems,^...@^@^...@^@^...@�^d;,1229470182375
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data File does not exist. [Lease.  Holder: DFSClient_-425857031, pendingcreates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1331)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1322)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1250)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:892)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:95)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.checkIOException(RemoteExceptionHandler.java:66)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:98)
2008-12-17 16:22:02,427 INFO org.apache.hadoop.hbase.regionserver.HRegion: starting compaction on region streamitems,^...@^@^...@^@^C^WU7,1229374746426
...
{code}

All is normal again except for this region.
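
Worth spelling out what the two traces say together: the compaction output under compaction.dir never made it onto HDFS (the namenode reports no lease and no such file), yet the store comes out of the failure still holding a reference to a mapfile whose data file is not on the filesystem, which is what the later openScanner failure in the description trips over. Below is a minimal sketch of the kind of defensive re-sync that could keep the region servable while the real bug is chased down. It is illustrative only, not existing HBase code; the class and method names (StoreFileSanityCheck, pruneMissing) are made up for the example:

{code}
// Illustrative sketch only -- not existing HBase code.  Idea: before wiring
// store files into a scanner, drop any mapfile whose data file has gone
// missing on HDFS (and log it), rather than letting the whole HStoreScanner
// construction fail.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoreFileSanityCheck {
  /**
   * Returns only the mapfile directories whose "data" file actually exists,
   * so the store's in-memory view can be re-synced with the filesystem.
   */
  public static List<Path> pruneMissing(FileSystem fs, List<Path> mapfiles)
      throws IOException {
    List<Path> present = new ArrayList<Path>();
    for (Path mapfile : mapfiles) {
      // A MapFile is a directory holding "data" and "index" files.
      Path data = new Path(mapfile, "data");
      if (fs.exists(data)) {
        present.add(mapfile);
      } else {
        // A real implementation would use the store's logger here.
        System.err.println("WARN store file missing, skipping: " + data);
      }
    }
    return present;
  }
}
{code}

A check like this only masks the symptom, of course; the compaction path still needs to leave HStore's file list matching what is actually on HDFS.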

> HStore out of sync with what is on filesystem
> ---------------------------------------------
>
>                 Key: HBASE-1073
>                 URL: https://issues.apache.org/jira/browse/HBASE-1073
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: stack
>             Fix For: 0.19.0
>
>
> On the streamy cluster (TRUNK), I saw this when we went to run a major compaction:
> {code}
> 2008-12-17 16:21:48,725 DEBUG org.apache.hadoop.hbase.regionserver.HStore: Major compaction triggered on store: 657002096/content. Time since last major compaction: 89517 seconds
> 2008-12-17 16:21:48,842 DEBUG org.apache.hadoop.hbase.regionserver.HStore: Started compaction of 1 file(s)  into /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172
> 2008-12-17 16:21:56,354 INFO org.apache.hadoop.hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data File does not exist. [Lease.  Holder: DFSClient_-425857031, pendingcreates: 1]
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1331)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1322)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1250)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
>         at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:892)
>         at org.apache.hadoop.ipc.Client.call(Client.java:696)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>         at $Proxy1.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy1.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2815)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2697)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2008-12-17 16:21:56,354 WARN org.apache.hadoop.hdfs.DFSClient: NotReplicatedYetException sleeping /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data retries left 4
> 2008-12-17 16:21:56,757 INFO org.apache.hadoop.hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data File does not exist. [Lease.  Holder: DFSClient_-425857031, pendingcreates: 1]
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1331)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1322)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1250)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
>         at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:892)
>         at org.apache.hadoop.ipc.Client.call(Client.java:696)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>         at $Proxy1.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy1.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2815)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2697)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2008-12-17 16:21:56,757 WARN org.apache.hadoop.hdfs.DFSClient: NotReplicatedYetException sleeping /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data retries left 3
> 2008-12-17 16:21:57,561 INFO org.apache.hadoop.hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data File does not exist. [Lease.  Holder: DFSClient_-425857031, pendingcreates: 1]
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1331)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1322)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1250)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
>         at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:892)
>         at org.apache.hadoop.ipc.Client.call(Client.java:696)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>         at $Proxy1.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy1.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2815)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2697)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2008-12-17 16:21:57,561 WARN org.apache.hadoop.hdfs.DFSClient: NotReplicatedYetException sleeping /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data retries left 2
> 2008-12-17 16:21:59,162 INFO org.apache.hadoop.hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/streamitems/compaction.dir/657002096/content/mapfiles/362900034457176172/data File does not exist. [Lease.  Holder: DFSClient_-425857031, pendingcreates: 1]
> ...
> {code}
> Thereafter, an attempt to open a scanner on the region fails...
> {code}
> 2008-12-18 00:41:03,964 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region streamitems,^...@^@^...@^@^Bv�^K,1229373844647 in 0sec
> 2008-12-18 01:08:50,274 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> java.io.IOException: HStoreScanner failed construction
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.<init>(StoreFileScanner.java:70)
>         at org.apache.hadoop.hbase.regionserver.HStoreScanner.<init>(HStoreScanner.java:84)
>         at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2120)
>         at org.apache.hadoop.hbase.regionserver.HRegion$HScanner.<init>(HRegion.java:1978)
>         at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1160)
>         at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:1581)
>         at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:632)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:892)
> Caused by: java.io.FileNotFoundException: File does not exist: hdfs://mb0:9000/hbase/streamitems/657002096/content/mapfiles/2509136481693763805/data
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:394)
>         at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:679)
>         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1417)
>         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1412)
>         at org.apache.hadoop.io.MapFile$Reader.createDataFileReader(MapFile.java:302)
>         at org.apache.hadoop.hbase.io.HBaseMapFile$HBaseReader.createDataFileReader(HBaseMapFile.java:98)
>         at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:284)
>         at org.apache.hadoop.hbase.io.HBaseMapFile$HBaseReader.<init>(HBaseMapFile.java:81)
>         at org.apache.hadoop.hbase.io.BloomFilterMapFile$Reader.<init>(BloomFilterMapFile.java:66)
>         at org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:443)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.openReaders(StoreFileScanner.java:96)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.<init>(StoreFileScanner.java:67)
>         ... 10 more
> ...
> {code}
> The first error seems to mess up our view of what's on the filesystem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
