[ https://issues.apache.org/jira/browse/HBASE-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381804#comment-14381804 ]

Ruben Aguiar commented on HBASE-13329:
--------------------------------------

After further testing and tweaking, I'm starting to doubt this is related to 
the data being written itself. At the moment I have 4 different servers 
sending random data, each one writing to a metric picked at random from its 
own range of 10 possible ones. This means neither the values nor the metrics 
repeat constantly, and even the tags alternate between two possibilities. 
However, after deleting the whole database and starting the cluster again 
from a clean slate, a similar problem arises within a few minutes. This time 
it fails a bit further along in the code, with a NegativeArraySizeException:
2015-03-26 11:59:42,597 INFO  [MemStoreFlusher.1] regionserver.HStore: Added hdfs://10.2.0.74:8020/hbase/data/default/tsdb/59f6a2c16cd9c51677efa96ddf038fe8/t/d999a9dee66c4948862440ba1459a826, entries=806262, sequenceid=16116, filesize=6.8 M
2015-03-26 11:59:42,599 INFO  [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~129.72 MB/136019416, currentsize=1.75 MB/1837224 for region tsdb,,1427368142097.59f6a2c16cd9c51677efa96ddf038fe8. in 717ms, sequenceid=16116, compaction requested=false
2015-03-26 12:02:15,957 INFO  [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for tsdb,,1427368142097.59f6a2c16cd9c51677efa96ddf038fe8., current region memstore size 128.06 MB
2015-03-26 12:02:16,077 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: ABORTING region server 10.2.0.73,16020,1427368055017: Replay of WAL required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region: tsdb,,1427368142097.59f6a2c16cd9c51677efa96ddf038fe8.
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1999)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1770)
        at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1702)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:445)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:407)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:69)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:225)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NegativeArraySizeException
        at org.apache.hadoop.hbase.CellComparator.getMinimumMidpointArray(CellComparator.java:490)
        at org.apache.hadoop.hbase.CellComparator.getMidpoint(CellComparator.java:448)
        at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishBlock(HFileWriterV2.java:165)
        at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.checkBlockBoundary(HFileWriterV2.java:146)
        at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:263)
        at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
        at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:932)
        at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:121)
        at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:71)
        at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:879)
        at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2128)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1953)
        ... 7 more
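
To illustrate the suspected overflow, here is a minimal, self-contained Java 
sketch. This is my own simplification, not the actual CellComparator source: 
the class name, method name, and key lengths are made up for the 
demonstration. It shows how a short index walking the common prefix of two 
keys wraps negative once more than 32767 leading bytes are identical, which 
can surface either as a negative array index (the original report) or as a 
negative array size (the trace above), depending on where the wrapped value 
is first used:

public class ShortIndexOverflowDemo {

    // Simplified stand-in for the midpoint computation: walk the common
    // prefix of two byte arrays with a short index, as the report suggests
    // the real getMinimumMidpointArray does.
    static byte[] midpointWithShortIndex(byte[] left, byte[] right) {
        short diffIdx = 0; // suspected bug: wraps to -32768 after 32767 equal bytes
        int minLength = Math.min(left.length, right.length);
        while (diffIdx < minLength && left[diffIdx] == right[diffIdx]) {
            diffIdx++; // short arithmetic silently overflows here
        }
        // Once diffIdx has wrapped, the array access in the loop condition
        // throws ArrayIndexOutOfBoundsException with a negative index; a
        // code path that instead allocates from the wrapped value would
        // throw NegativeArraySizeException, as in the trace above.
        byte[] mid = new byte[diffIdx + 1];
        System.arraycopy(right, 0, mid, 0, diffIdx + 1);
        return mid;
    }

    public static void main(String[] args) {
        // Two keys sharing a prefix longer than Short.MAX_VALUE bytes, the
        // kind of block boundary that writing the same value over and over
        // can produce.
        byte[] left = new byte[40000];
        byte[] right = new byte[40000];
        right[39999] = 1; // the keys differ only in their last byte
        try {
            midpointWithShortIndex(left, right);
        } catch (RuntimeException e) {
            System.out.println(e); // java.lang.ArrayIndexOutOfBoundsException: -32768
        }
    }
}

Declaring diffIdx as an int, the change proposed in the original report, 
lets the same input complete normally.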

After some investigation, I ran an hdfs fsck and it reported the filesystem 
as healthy. However, with the -openforwrite flag (which includes files opened 
for write), it reports missing blocks in the WALs on the other slave 
machines, probably because the region server crashed unexpectedly. The fsck 
output:
/hbase/WALs/10.2.0.171,16020,1427368055792/10.2.0.171%2C16020%2C1427368055792.default.1427368062105 83 bytes, 1 block(s), OPENFORWRITE:
/hbase/WALs/10.2.0.171,16020,1427368055792/10.2.0.171%2C16020%2C1427368055792.default.1427368062105: MISSING 1 blocks of total size 83 B.
/hbase/WALs/10.2.0.224,16020,1427368062011-splitting/10.2.0.224%2C16020%2C1427368062011.default.1427368068287 83 bytes, 1 block(s), OPENFORWRITE:
/hbase/WALs/10.2.0.224,16020,1427368062011-splitting/10.2.0.224%2C16020%2C1427368062011.default.1427368068287: MISSING 1 blocks of total size 83 B.
Status: CORRUPT
 Total size:    105297956 B
 Total dirs:    54
 Total files:   39
 Total symlinks:                0
 Total blocks (validated):      33 (avg. block size 3190847 B)
  ********************************
  CORRUPT FILES:        2
  MISSING BLOCKS:       2
  MISSING SIZE:         166 B
  ********************************
 Minimally replicated blocks:   31 (93.93939 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.8181818
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Thu Mar 26 12:22:51 UTC 2015 in 21 milliseconds


The filesystem under path '/' is CORRUPT

Again, any help or guidance is very much appreciated.




> Memstore flush fails if data has always the same value, breaking the region
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-13329
>                 URL: https://issues.apache.org/jira/browse/HBASE-13329
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>    Affects Versions: 1.0.1
>         Environment: linux-debian-jessie
> ec2 - t2.micro instances
>            Reporter: Ruben Aguiar
>            Priority: Critical
>             Fix For: 2.0.0, 1.1.0
>
>
> While trying to benchmark my OpenTSDB cluster, I created a script that 
> always sends the same value to HBase (in this case 1). After a few minutes, 
> the whole region server crashes and the region itself becomes impossible to 
> open again (it cannot be assigned or unassigned). From the logs, what I saw 
> is that when a memstore flush is triggered on a large region (128 MB), the 
> flush errors out, killing the region server. On restart, replaying the 
> edits produces the same error, making the region unavailable. I tried to 
> manually unassign, assign, or close_region it; that didn't work because the 
> code that reads/replays the edits crashes.
> From my investigation this looks like an overflow issue. The logs show that 
> the function getMinimumMidpointArray tried to access index -32743 of an 
> array, extremely close to the minimum short value in Java. Looking at the 
> source code, the index is declared as a short and is incremented for as 
> long as the two byte arrays are equal, so it can overflow on long arrays of 
> identical data. Widening it to an int should solve the problem.
> Below are the Hadoop logs from when the region server went down. Any help 
> is appreciated; if you need any other information, please let me know:
> 2015-03-24 18:00:56,187 INFO  [regionserver//10.2.0.73:16020.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/10.2.0.73,16020,1427216382590/10.2.0.73%2C16020%2C1427216382590.default.1427220018516 with entries=143, filesize=134.70 MB; new WAL /hbase/WALs/10.2.0.73,16020,1427216382590/10.2.0.73%2C16020%2C1427216382590.default.1427220056140
> 2015-03-24 18:00:56,188 INFO  [regionserver//10.2.0.73:16020.logRoller] wal.FSHLog: Archiving hdfs://10.2.0.74:8020/hbase/WALs/10.2.0.73,16020,1427216382590/10.2.0.73%2C16020%2C1427216382590.default.1427219987709 to hdfs://10.2.0.74:8020/hbase/oldWALs/10.2.0.73%2C16020%2C1427216382590.default.1427219987709
> 2015-03-24 18:04:35,722 INFO  [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for tsdb,,1427133969325.52bc1994da0fea97563a4a656a58bec2., current region memstore size 128.04 MB
> 2015-03-24 18:04:36,154 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: ABORTING region server 10.2.0.73,16020,1427216382590: Replay of WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: tsdb,,1427133969325.52bc1994da0fea97563a4a656a58bec2.
>       at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1999)
>       at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1770)
>       at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1702)
>       at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:445)
>       at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:407)
>       at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:69)
>       at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:225)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: -32743
>       at org.apache.hadoop.hbase.CellComparator.getMinimumMidpointArray(CellComparator.java:478)
>       at org.apache.hadoop.hbase.CellComparator.getMidpoint(CellComparator.java:448)
>       at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishBlock(HFileWriterV2.java:165)
>       at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.checkBlockBoundary(HFileWriterV2.java:146)
>       at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:263)
>       at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
>       at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:932)
>       at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:121)
>       at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:71)
>       at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:879)
>       at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2128)
>       at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1953)
>       ... 7 more
> 2015-03-24 18:04:36,156 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
