[
https://issues.apache.org/jira/browse/HBASE-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12652014#action_12652014
]
Andrew Purtell commented on HBASE-1039:
---------------------------------------
One crucial detail, it seems, is that the bloomfilter-related exception happens
even when no bloomfilters are enabled in the schema. There are also DFS-related
exceptions.
From Thibaut:
I created all the tables from scratch and didn't change them at run time. The
schema for all the tables right now is as follows (data is a byte array of a
serialized Google protocol buffer object):
{NAME => 'entries', IS_ROOT => 'false', IS_META => 'false', FAMILIES =>
[{NAME => 'data', BLOOMFILTER => 'false', COMPRESSION => 'NONE', VERSIONS =>
'3', LENGTH => '2147483647', TTL => '-1', IN_MEMORY => 'false', BLOCKCACHE =>
'false'}]}
I reran everything from scratch with the new table schema and got the same
exception again, just on a different table this time (disabling the
bloomfilter, compression, and the blockcache doesn't seem to have any effect):
2008-11-30 23:22:20,774 ERROR
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction failed for
region entries,,1228075277421
java.lang.IllegalArgumentException: maxValue must be > 0
at org.onelab.filter.HashFunction.<init>(HashFunction.java:84)
at org.onelab.filter.Filter.<init>(Filter.java:97)
at org.onelab.filter.BloomFilter.<init>(BloomFilter.java:102)
at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Writer.<init>(HStoreFile.java:829)
at org.apache.hadoop.hbase.regionserver.HStoreFile.getWriter(HStoreFile.java:436)
at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:889)
at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:902)
at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:860)
at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:83)
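For reference, the trace bottoms out in the onelab filter code bundled with
Hadoop: BloomFilter's constructor chains up to Filter's, which builds a
HashFunction from the filter's vector size, and that constructor rejects a
non-positive value. A minimal sketch that reproduces the exact exception
(assuming the two-argument BloomFilter(vectorSize, nbHash) constructor of the
bundled org.onelab.filter classes; this says nothing about where the zero
actually comes from in HStoreFile):

{code}
import org.onelab.filter.BloomFilter;

public class BloomFilterRepro {
  public static void main(String[] args) {
    // A filter built with vector size 0 -- e.g. one sized from an
    // estimated row count of 0 -- fails exactly as in the compaction
    // log above:
    //   java.lang.IllegalArgumentException: maxValue must be > 0
    //     at org.onelab.filter.HashFunction.<init>(...)
    //     at org.onelab.filter.Filter.<init>(...)
    //     at org.onelab.filter.BloomFilter.<init>(...)
    new BloomFilter(0, 4); // vectorSize = 0, nbHash = 4
  }
}
{code}

If that is the path, the open question is why
HStoreFile$BloomFilterMapFile$Writer constructs a BloomFilter at all when the
schema says BLOOMFILTER => 'false'.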
The log file is also full of these kinds of errors (before and after):
2008-11-30 23:22:44,500 INFO org.apache.hadoop.ipc.Server: IPC Server handler
16 on 60020, call next(8976385860586379110) from x.x.x.203:52747: error:
org.apache.hadoop.hbase.UnknownScannerException: Name: 8976385860586379110
at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1077)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.hbase.ipc.HbaseRPC$Server.call(HbaseRPC.java:554)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
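As an aside, UnknownScannerException is usually the server telling the client
that a scanner id is no longer registered, typically because its lease expired
between next() calls; a stalled regionserver (for example, one fighting the
failing compaction above) would make that more likely. A generic sketch of the
lease pattern with hypothetical names, not the actual HRegionServer code:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: open scanners are registered by id; a lease
// expiry removes the entry, and a later next(id) call from the client
// then fails with an "unknown scanner" error.
public class ScannerLeases {
  private final Map<Long, Object> scanners =
      new ConcurrentHashMap<Long, Object>();

  public void open(long id, Object scanner) {
    scanners.put(id, scanner);
  }

  public void leaseExpired(long id) {
    scanners.remove(id);
  }

  public Object next(long id) {
    Object scanner = scanners.get(id);
    if (scanner == null) {
      // Corresponds to: org.apache.hadoop.hbase.UnknownScannerException: Name: <id>
      throw new IllegalStateException("Unknown scanner: " + id);
    }
    return scanner;
  }
}
{code}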
Shortly afterwards I got the DFS error on the regionserver again, though on a
different table (this might be completely unrelated and unimportant?):
2008-11-30 23:26:55,885 WARN org.apache.hadoop.dfs.DFSClient: Exception while
reading from blk_-9066140877711029349_706715 of
/hbase/webrequestscache/2091560474/data/mapfiles/1510543474646532027/data from
x.x.x.204:50010: java.io.IOException: Premeture EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:102)
at org.apache.hadoop.dfs.DFSClient$BlockReader.readChunk(DFSClient.java:996)
at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:236)
at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:191)
at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:159)
at org.apache.hadoop.dfs.DFSClient$BlockReader.read(DFSClient.java:858)
at org.apache.hadoop.dfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:1384)
at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:1420)
at java.io.DataInputStream.readFully(DataInputStream.java:176)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:64)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:102)
at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1933)
at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1833)
at org.apache.hadoop.io.MapFile$Reader.seekInternal(MapFile.java:463)
at org.apache.hadoop.io.MapFile$Reader.getClosest(MapFile.java:558)
at org.apache.hadoop.io.MapFile$Reader.getClosest(MapFile.java:541)
at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Reader.getClosest(HStoreFile.java:761)
at org.apache.hadoop.hbase.regionserver.HStore.get(HStore.java:1291)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:1154)
at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1020)
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.hbase.ipc.HbaseRPC$Server.call(HbaseRPC.java:554)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
Datanode entries related to that block:
08/11/30 22:44:19 INFO dfs.DataNode: Receiving block
blk_-9066140877711029349_706715 src: /x.x.x.204:44313 dest: /x.x.x.204:50010
08/11/30 22:44:41 INFO dfs.DataNode: Received block
blk_-9066140877711029349_706715 of size 33554432 from /x.x.x.204
08/11/30 22:44:41 INFO dfs.DataNode: PacketResponder 3 for block
blk_-9066140877711029349_706715 terminating
08/11/30 22:53:18 WARN dfs.DataNode: DatanodeRegistration(x.x.x.204:50010,
storageID=DS-364968361-x.x.x.204-50010-1220223683238, infoPort=50075,
ipcPort=50020):Got exception while serving blk_-9066140877711029349_706715 to
/x.x.x.204: java.net.SocketTimeoutException: 480000 millis timeout while
waiting for channel to be ready for write. ch
:java.nio.channels.SocketChannel[connected local=/x.x.x.204:50010
remote=/x.x.x.204:46220]
at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:185)
at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
at org.apache.hadoop.dfs.DataNode$BlockSender.sendChunks(DataNode.java:1873)
at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:1967)
at org.apache.hadoop.dfs.DataNode$DataXceiver.readBlock(DataNode.java:1109)
at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:1037)
at java.lang.Thread.run(Thread.java:595)
> Compaction fails if bloomfilters are enabled
> --------------------------------------------
>
> Key: HBASE-1039
> URL: https://issues.apache.org/jira/browse/HBASE-1039
> Project: Hadoop HBase
> Issue Type: Bug
> Components: regionserver
> Affects Versions: 0.18.1
> Reporter: Andrew Purtell
>
> From Thibaut up on the list.
> As soon as HBase tries to compact the table, the following exception appears
> in the logfile (other compactions work fine without any errors):
> 2008-11-30 00:55:57,769 ERROR
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction failed
> for region mytable,,1228002541526
> java.lang.IllegalArgumentException: maxValue must be > 0
> at org.onelab.filter.HashFunction.<init>(HashFunction.java:84)
> at org.onelab.filter.Filter.<init>(Filter.java:97)
> at org.onelab.filter.BloomFilter.<init>(BloomFilter.java:102)
> at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Writer.<init>(HStoreFile.java:829)
> at org.apache.hadoop.hbase.regionserver.HStoreFile.getWriter(HStoreFile.java:436)
> at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:889)
> at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:902)
> at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:860)
> at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:83)
> Because the region cannot compact and/or split, it is soon dead after
> (re)assignment.