[ https://issues.apache.org/jira/browse/HBASE-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12846623#action_12846623 ]

Todd Lipcon commented on HBASE-2340:
------------------------------------

A reasonably simple test would also be the following:

Write a simple client that does something like:

{code}
Random r = new Random(myClientId);  // seeded so verification can replay the same sequence
long i = 0;
while (true) {
  long toWrite = r.nextLong();
  writeKey(toWrite);
  i++;
  writeToLocalDiskAndFsync(i);  // durably record how many writes have been issued
}
{code}

Then run N of these clients concurrently while injecting various kinds of churn 
into the cluster (kill -9, kill -STOP, pulled network jacks, etc.).

Success criterion 1 is that all of the clients proceed without throwing 
exceptions. Success criterion 2 is to start the clients again in verification 
mode, run the same number of iterations as recorded on the local disk, and 
check that all of the data is there.
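The write/verify protocol above can be sketched in a single process. This is a hedged illustration, not the actual test harness: an in-memory set stands in for the HBase table (a real client would issue HTable puts), and a temp file plays the role of the fsync'd local progress record; `writeKey` and `writeToLocalDiskAndFsync` follow the pseudocode's names.

{code}
import java.io.File;
import java.io.RandomAccessFile;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class SyncTestSketch {
    // Stand-in for the HBase table; the real test would do HTable puts here.
    static final Set<Long> table = new HashSet<Long>();

    static void writeKey(long key) {
        table.add(key);
    }

    // Durably record how many writes have been issued so far.
    static void writeToLocalDiskAndFsync(RandomAccessFile f, long i) throws Exception {
        f.seek(0);
        f.writeLong(i);
        f.getFD().sync();  // fsync so the count survives a client crash
    }

    public static void main(String[] args) throws Exception {
        long clientId = 42;  // hypothetical myClientId
        File tmp = File.createTempFile("progress-" + clientId, ".bin");
        tmp.deleteOnExit();
        RandomAccessFile progress = new RandomAccessFile(tmp, "rw");

        // Write phase: deterministic keys from a PRNG seeded with the client id.
        Random r = new Random(clientId);
        for (long i = 1; i <= 1000; i++) {
            writeKey(r.nextLong());
            writeToLocalDiskAndFsync(progress, i);
        }

        // Verification phase: reseed, replay the same sequence, and check
        // that every key recorded as written is actually present.
        progress.seek(0);
        long recorded = progress.readLong();
        Random verify = new Random(clientId);
        for (long i = 1; i <= recorded; i++) {
            if (!table.contains(verify.nextLong())) {
                throw new AssertionError("missing key at iteration " + i);
            }
        }
        System.out.println("verified " + recorded + " writes");
        progress.close();
    }
}
{code}

The seeded Random is what makes verification possible without shipping the written keys anywhere: reseeding with the same client id regenerates the exact key sequence, so only the fsync'd iteration count needs to survive the crash.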

> Add end-to-end test of sync/flush
> ---------------------------------
>
>                 Key: HBASE-2340
>                 URL: https://issues.apache.org/jira/browse/HBASE-2340
>             Project: Hadoop HBase
>          Issue Type: Task
>            Reporter: stack
>            Assignee: stack
>            Priority: Blocker
>             Fix For: 0.20.4, 0.21.0
>
>
> Add a test to do the following:
> {code}
> + Start an HBase/HDFS cluster (local node is fine).
> + Use top-level (HTable) APIs to put items.
> + Try single-column puts, as well as puts that span multiple 
> columns/multiple column families, etc.
> + Then kill one region server.
> + Wait for recovery to happen.
> + And then check the rows exist.
> {code}
> Assigning myself.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
