[
https://issues.apache.org/jira/browse/HBASE-13260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Enis Soztutar updated HBASE-13260:
----------------------------------
Attachment: hbase-13260_bench.patch
I ran a very simple mini benchmark on my MBP to understand the differences
between the WAL-based store and the region-based store. Attaching a simple patch
for the test.
The test runs numThreads threads, each inserting a dummy procedure and then
deleting that procedure from the store (which is the mini cluster's store); a
rough sketch of the per-thread loop is shown after the table below. Below is the
output. I was not able to insert 1M procedures into the WALProcStore; inserting
10K takes around 120s with 50 threads.
|| store || num_procs || 5 threads || 10 threads || 30 threads || 50 threads ||
| region proc store | 1M | 78s | ~68s | ~200s | ~300s |
| wal proc store | 10K | ? | 7s | ~94s | ~120s |
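For reference, the per-thread work in the attached test is roughly the following. This is only a minimal sketch of my reading of the attached patch; the class and field names (DummyProcedure, procIds, etc.) are illustrative, and the real code is in ProcedureStoreTest in hbase-13260_bench.patch.
{code}
import java.util.concurrent.atomic.AtomicLong;
import org.apache.hadoop.hbase.procedure2.Procedure;
import org.apache.hadoop.hbase.procedure2.store.ProcedureStore;

// Sketch of the benchmark worker: each thread inserts a dummy procedure into
// the store under test and immediately deletes it, procsPerThread times.
class Worker implements Runnable {
  private final ProcedureStore store;   // WAL-based or region-based store
  private final AtomicLong procIds;     // shared procedure-id generator
  private final int procsPerThread;

  Worker(ProcedureStore store, AtomicLong procIds, int procsPerThread) {
    this.store = store;
    this.procIds = procIds;
    this.procsPerThread = procsPerThread;
  }

  @Override
  public void run() {
    for (int i = 0; i < procsPerThread; i++) {
      long procId = procIds.incrementAndGet();
      Procedure proc = new DummyProcedure(procId); // trivial no-op procedure (illustrative)
      store.insert(proc, null);  // persist the new procedure (no sub-procedures)
      store.delete(procId);      // then delete it again
    }
  }
}
{code}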
I have also observed the following exceptions with the WAL store at 50 threads:
{code}
2015-04-21 15:31:48,294 DEBUG
[localhost,50957,1429655505598_splitLogManager__ChoreService_1]
zookeeper.ZKSplitLog(184): Garbage collecting all recovering region znodes
java.lang.ArrayIndexOutOfBoundsException: -1
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker$BitSetNode.updateState(ProcedureStoreTracker.java:325)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker$BitSetNode.update(ProcedureStoreTracker.java:101)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.insert(ProcedureStoreTracker.java:357)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.insert(ProcedureStoreTracker.java:343)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.insert(WALProcedureStore.java:301)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:107)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 15:32:46,450 DEBUG [localhost,50957,1429655505598_ChoreService_1]
compactions.PressureAwareCompactionThroughputController(148):
compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2015-04-21 15:32:46,450 DEBUG [localhost,50959,1429655505784_ChoreService_1]
compactions.PressureAwareCompactionThroughputController(148):
compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
java.lang.ArrayIndexOutOfBoundsException: -1
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker$BitSetNode.updateState(ProcedureStoreTracker.java:325)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker$BitSetNode.update(ProcedureStoreTracker.java:101)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.insert(ProcedureStoreTracker.java:357)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.insert(ProcedureStoreTracker.java:343)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.insert(WALProcedureStore.java:301)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:107)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.ArrayIndexOutOfBoundsException: -1
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker$BitSetNode.updateState(ProcedureStoreTracker.java:325)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker$BitSetNode.update(ProcedureStoreTracker.java:101)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.insert(ProcedureStoreTracker.java:357)
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.insert(ProcedureStoreTracker.java:343)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.insert(WALProcedureStore.java:301)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:107)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
...
2015-04-21 15:33:46,449 DEBUG [localhost,50957,1429655505598_ChoreService_1]
compactions.PressureAwareCompactionThroughputController(148):
compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2015-04-21 15:33:46,449 DEBUG [localhost,50959,1429655505784_ChoreService_1]
compactions.PressureAwareCompactionThroughputController(148):
compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
Wrote 10000 procedures in 166299 ms
{code}
When run with fewer than 10 threads, I see a different behavior: the WAL gets
rolled very frequently.
{code}
2015-04-21 15:42:21,976 INFO [IPC Server handler 5 on 51193]
blockmanagement.BlockManager(1074): BLOCK* addToInvalidates:
blk_1073741835_1011 127.0.0.1:51194
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 15:42:21,989 INFO [IPC Server handler 9 on 51193]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:51194 is added to
blk_1073741839_1015{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-5bb1bfab-5e56-49e0-ab98-e4d497d85c83:NORMAL|RBW]]}
size 385
2015-04-21 15:42:21,989 INFO [pool-57-thread-1] wal.WALProcedureStore(549):
Roll new state log: 3
2015-04-21 15:42:21,990 INFO [pool-57-thread-1] wal.WALProcedureStore(571):
Remove all state logs with ID less then 2
2015-04-21 15:42:21,990 DEBUG [pool-57-thread-1] wal.WALProcedureStore(584):
remove log:
hdfs://localhost:51193/user/enis/test-data/de5f1aa9-4a7d-4871-88b5-7bb10b58c159/MasterProcWALs/state-00000000000000000002.log
2015-04-21 15:42:21,991 INFO [IPC Server handler 0 on 51193]
blockmanagement.BlockManager(1074): BLOCK* addToInvalidates:
blk_1073741839_1015 127.0.0.1:51194
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at
org.apache.hadoop.hbase.procedure2.store.ProcedureStoreTracker.delete(ProcedureStoreTracker.java:373)
at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.delete(WALProcedureStore.java:369)
at
org.apache.hadoop.hbase.procedure2.ProcedureStoreTest$Worker.run(ProcedureStoreTest.java:109)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 15:42:22,012 DEBUG
[localhost,51201,1429656139456_splitLogManager__ChoreService_1]
zookeeper.ZKSplitLog(184): Garbage collecting all recovering region znodes
2015-04-21 15:42:22,384 INFO [IPC Server handler 6 on 51193]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:51194 is added to blk_1073741840_1016{blockUCState=COMMITTED,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-ddc64396-2e84-4e38-b3c0-fa10da1a1c44:NORMAL|RBW]]}
size 36342
2015-04-21 15:42:22,787 INFO [pool-57-thread-6] wal.WALProcedureStore(549):
Roll new state log: 4
2015-04-21 15:42:22,788 INFO [pool-57-thread-6] wal.WALProcedureStore(571):
Remove all state logs with ID less then 3
2015-04-21 15:42:22,788 DEBUG [pool-57-thread-6] wal.WALProcedureStore(584):
remove log:
hdfs://localhost:51193/user/enis/test-data/de5f1aa9-4a7d-4871-88b5-7bb10b58c159/MasterProcWALs/state-00000000000000000003.log
...
2015-04-21 15:42:41,724 INFO [IPC Server handler 4 on 51193]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:51194 is added to
blk_1073741888_1064{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-ddc64396-2e84-4e38-b3c0-fa10da1a1c44:NORMAL|RBW]]}
size 385
2015-04-21 15:42:41,724 INFO [pool-57-thread-6] wal.WALProcedureStore(549):
Roll new state log: 52
2015-04-21 15:42:41,724 INFO [pool-57-thread-6] wal.WALProcedureStore(571):
Remove all state logs with ID less then 51
2015-04-21 15:42:41,725 DEBUG [pool-57-thread-6] wal.WALProcedureStore(584):
remove log:
hdfs://localhost:51193/user/enis/test-data/de5f1aa9-4a7d-4871-88b5-7bb10b58c159/MasterProcWALs/state-00000000000000000051.log
2015-04-21 15:42:41,725 INFO [IPC Server handler 0 on 51193]
blockmanagement.BlockManager(1074): BLOCK* addToInvalidates:
blk_1073741888_1064 127.0.0.1:51194
{code}
[~mbertozzi] do you want to take a look at the above exceptions?
I have also seen an unexpected state coming from FSHLog with 50 threads
appending. [[email protected]] I think you are most familiar with this area;
do you mind taking a look? The ring buffer queue is filling up (maybe because of
the 50 appending threads?). If the queue being full is a valid condition,
shouldn't we handle it gracefully? (See the sketch after the log below.)
{code}
2015-04-21 14:29:11,395 DEBUG [pool-57-thread-7] regionserver.HRegion(3609):
rollbackMemstore rolled back 1
2015-04-21 14:29:11,396 DEBUG [pool-57-thread-10] regionserver.HRegion(3609):
rollbackMemstore rolled back 1
2015-04-21 14:29:11,395 DEBUG [pool-57-thread-2] regionserver.HRegion(3609):
rollbackMemstore rolled back 1
2015-04-21 14:29:11,395 DEBUG [pool-57-thread-41] regionserver.HRegion(3609):
rollbackMemstore rolled back 1
2015-04-21 14:29:11,395 DEBUG [pool-57-thread-39] regionserver.HRegion(3609):
rollbackMemstore rolled back 1
2015-04-21 14:29:11,396 WARN [sync.2] wal.FSHLog$SyncRunner(1360): UNEXPECTED,
continuing
java.lang.IllegalStateException
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.releaseSyncFuture(FSHLog.java:1261)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.releaseSyncFutures(FSHLog.java:1276)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1350)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 14:29:11,396 ERROR
[localhost:49706.activeMasterManager.append-pool1-t1]
wal.FSHLog$RingBufferEventHandler(1979): UNEXPECTED!!! syncFutures.length=5
java.lang.IllegalStateException: Queue full
at java.util.AbstractQueue.add(AbstractQueue.java:98)
at java.util.AbstractQueue.addAll(AbstractQueue.java:187)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.offer(FSHLog.java:1249)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1971)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
at
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 14:29:11,398 WARN [sync.2] wal.FSHLog$SyncRunner(1360): UNEXPECTED,
continuing
java.lang.IllegalStateException
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.releaseSyncFuture(FSHLog.java:1261)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1348)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 14:29:11,399 WARN [sync.2] wal.FSHLog$SyncRunner(1360): UNEXPECTED,
continuing
java.lang.IllegalStateException
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.releaseSyncFuture(FSHLog.java:1261)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1348)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 14:29:11,399 WARN [sync.2] wal.FSHLog$SyncRunner(1360): UNEXPECTED,
continuing
java.lang.IllegalStateException
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.releaseSyncFuture(FSHLog.java:1261)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1348)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 14:29:11,400 WARN [sync.2] wal.FSHLog$SyncRunner(1360): UNEXPECTED,
continuing
java.lang.IllegalStateException
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.releaseSyncFuture(FSHLog.java:1261)
at
org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1348)
at java.lang.Thread.run(Thread.java:745)
2015-04-21 14:29:11,401 DEBUG [pool-57-thread-10]
region.EmbeddedDatabase$EmbeddedTable(647): Received java.io.IOException:
java.lang.IllegalStateException: Queue full retrying, attempts:0/350
2015-04-21 14:29:11,401 DEBUG [pool-57-thread-41]
region.EmbeddedDatabase$EmbeddedTable(647): Received java.io.IOException:
java.lang.IllegalStateException: Queue full retrying, attempts:0/350
2015-04-21 14:29:11,401 DEBUG [pool-57-thread-2]
region.EmbeddedDatabase$EmbeddedTable(647): Received java.io.IOException:
java.lang.IllegalStateException: Queue full retrying, attempts:0/350
2015-04-21 14:29:11,401 DEBUG [pool-57-thread-39]
region.EmbeddedDatabase$EmbeddedTable(647): Received java.io.IOException:
java.lang.IllegalStateException: Queue full retrying, attempts:0/350
2015-04-21 14:29:11,402 DEBUG [pool-57-thread-7]
region.EmbeddedDatabase$EmbeddedTable(647): Received java.io.IOException:
java.lang.IllegalStateException: Queue full retrying, attempts:0/350
Wrote 89000 procedures in 27001 ms
{code}
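If a full queue is indeed a legitimate state here, one way to handle it gracefully (just a sketch of the general idea under that assumption, not the actual FSHLog code) would be a bounded-wait offer() with retries instead of add()/addAll(), which throw IllegalStateException when a bounded queue is full:
{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch only: generic "handle queue-full gracefully" pattern. offer() waits
// briefly for space instead of throwing like add()/addAll() do on a full queue.
final class BoundedEnqueue {
  static <T> boolean enqueueWithRetry(BlockingQueue<T> queue, T item, int maxAttempts)
      throws InterruptedException {
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      if (queue.offer(item, 10, TimeUnit.MILLISECONDS)) {
        return true; // got a slot
      }
    }
    return false; // caller decides: fail the sync or keep backing off
  }
}
{code}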
> Bootstrap Tables for fun and profit
> ------------------------------------
>
> Key: HBASE-13260
> URL: https://issues.apache.org/jira/browse/HBASE-13260
> Project: HBase
> Issue Type: Bug
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.1.0
>
> Attachments: hbase-13260_bench.patch, hbase-13260_prototype.patch
>
>
> Over at the ProcV2 discussions (HBASE-12439) and elsewhere I was mentioning an
> idea where we may want to use regular old regions to store/persist some data
> needed for the HBase master to operate.
> We regularly use system tables for storing system data. acl, meta, namespace,
> quota are some examples. We also store the table state in meta now. Some data
> is persisted in zk only (replication peers and replication state, etc). We
> are moving away from zk as permanent storage. As any self-respecting
> database does, we should store almost all of our data in HBase itself.
> However, we have an "availability" dependency between different kinds of
> data. For example, all system tables need meta to be assigned first, and all
> master operations need the ns table to be assigned, etc.
> For at least two types of data, (1) procedure v2 states, (2) RS groups in
> HBASE-6721 we cannot depend on meta being assigned since "assignment" itself
> will depend on accessing this data. The solution in (1) is to implement a
> custom WAL format, with custom lease recovery and WAL recovery. The solution in
> (2) is to have a table to store this data, but also cache it in zk for
> bootstrapping initial assignments.
> For solving both of the above (and possible future use cases if any), I
> propose we add a "bootstrap table" concept, which is:
> - A set of predefined tables hosted in a separate dir in HDFS.
> - A table is only 1 region, not splittable
> - Not assigned through regular assignment
> - Hosted only on 1 server (typically master)
> - Has a dedicated WAL.
> - A service does WAL recovery + fencing for these tables.
> This has the benefit of using a region to keep the data while freeing us to
> re-implement caching, and we can reuse the same battle-tested WAL / Memstore /
> Recovery mechanisms.
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)