[ https://issues.apache.org/jira/browse/HBASE-16980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yu Li updated HBASE-16980:
--------------------------
    Assignee: Yu Li

Ok, here comes the analysis.

I could reproduce the failure in my local environment, but not consistently. 
{{testMultipleRows}} fails more frequently than {{testReadModifyWrite}}, and 
each time it fails I see a {{RetriesExhaustedException}} caused by 
{{CallQueueTooBigException}}, like below:
{noformat}
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=2, exceptions:
Tue Nov 01 14:53:14 CST 2016, RpcRetryingCaller{globalStartTime=1477983194439, pause=100, retries=2}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on localhost,59045,1477983189616, too many items queued ?
Tue Nov 01 14:53:14 CST 2016, RpcRetryingCaller{globalStartTime=1477983194439, pause=100, retries=2}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on localhost,59045,1477983189616, too many items queued ?

        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:157)
        at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:108)
        at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73)
        ... 6 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on localhost,59045,1477983189616, too many items queued ?
        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:34118)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1631)
        at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104)
        at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:1)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
        ... 8 more
com.google.protobuf.ServiceException: Error calling method RowProcessorService.Process
        at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:75)
        at org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$RowProcessorService$BlockingStub.process(RowProcessorProtos.java:1631)
        at org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint.swapRows(TestRowProcessorEndpoint.java:272)
        at org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint.access$3(TestRowProcessorEndpoint.java:265)
        at org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint$SwapRowsRunner.run(TestRowProcessorEndpoint.java:258)
        at org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint$1.run(TestRowProcessorEndpoint.java:225)
        at java.lang.Thread.run(Thread.java:745)
{noformat}

When such an exception occurs, the tests are not designed to stay correct. 
Let's look at them one by one:

For {{testMultipleRows}}, it launches 100 threads to swap two rows in 
parallel; since the thread count is even, the two rows should end up 
un-swapped at the end, but that only holds if *all operations succeed* or 
*an even number of operations fail* (e.g. if 3 of the 100 swaps fail, 97 
succeed, an odd number, so the rows end up swapped and the check fails).

For {{testReadModifyWrite}} the reason is similar: if any operation fails with 
{{RetriesExhaustedException}}, the final check {{assertEquals(numThreads + 1, 
finalCounter)}} will fail.

There is already a {{failures}} counter, but both {{IncrementRunner}} and 
{{SwapRowsRunner}} catch {{Throwable}} without ever incrementing it...

To harden the UT cases, we should (see the sketch after this list):
1) not assert the failures count to be zero;
2) count the failures for {{testReadModifyWrite}};
3) take the {{swapped}} flag into account when asserting the result of 
{{testMultipleRows}}.
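
As a rough illustration, here is a minimal sketch of what the hardened checks 
could look like; the names ({{failures}}, {{numThreads}}) mirror the test, but 
the helpers themselves are hypothetical, not the actual patch:
{code}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, not the actual patch: derive the expected values from
// the number of operations that actually succeeded.
public class HardenedChecks {
  static final int numThreads = 100;
  static final AtomicInteger failures = new AtomicInteger(0);

  // In each runner, count the failure instead of swallowing it silently.
  static void runGuarded(Runnable op) {
    try {
      op.run();
    } catch (Throwable t) {
      failures.incrementAndGet(); // today: caught but never counted
    }
  }

  // testReadModifyWrite: every failed increment lowers the expectation by one,
  // so assertEquals(numThreads + 1, finalCounter) becomes:
  static int expectedCounter() {
    return numThreads + 1 - failures.get();
  }

  // testMultipleRows: the rows end up swapped iff an odd number of the swap
  // operations actually succeeded.
  static boolean expectSwapped() {
    return (numThreads - failures.get()) % 2 == 1;
  }
}
{code}
The idea is simply that the assertions follow from how many operations 
actually succeeded, instead of assuming none fail.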

Regarding why HBASE-16195 makes the case fail more frequently, I don't have 
much of a clue... The change below seems relevant:
{code}
-        this.chunkQueue.add(c);
+        if (chunkQueue != null && !this.closed && !this.chunkQueue.offer(c)) {
+          if (LOG.isTraceEnabled()) {
+            LOG.trace("Chunk queue is full, won't reuse this new chunk. Current queue size: "
+                + chunkQueue.size());
+          }
+        }
{code}
After HBASE-16195 the chunk is no longer added to {{chunkQueue}} 
unconditionally, so is the {{chunkQueue != null}} check plus {{offer}} somehow 
more expensive than the old {{this.chunkQueue.add(c)}}? Unlikely in theory 
though, right?...
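
For what it's worth, the only behavioral difference between the two calls on a 
bounded queue is that {{add}} throws when the queue is full while {{offer}} 
returns {{false}}; a standalone illustration of the plain 
{{java.util.concurrent}} semantics (not HBase code):
{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Plain JDK semantics, shown on a capacity-1 queue for brevity.
public class AddVsOffer {
  public static void main(String[] args) {
    BlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
    System.out.println(q.offer(1)); // true, queue is now full
    System.out.println(q.offer(2)); // false, rejected without throwing
    try {
      q.add(3);                     // add() throws when the queue is full
    } catch (IllegalStateException e) {
      System.out.println("add() threw: " + e.getMessage());
    }
  }
}
{code}
If anything, {{offer}} on a full queue avoids constructing an exception, which 
suggests the new code path shouldn't be meaningfully more expensive.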

Anyway, I believe this is a UT design issue rather than something directly 
caused by the HBASE-16195 change. Will upload a patch soon to harden the UT.

[~apurtell] and [~busbey] please let me know your thoughts. Thanks.

Assigning the issue to myself, btw.

> TestRowProcessorEndpoint failing consistently
> ---------------------------------------------
>
>                 Key: HBASE-16980
>                 URL: https://issues.apache.org/jira/browse/HBASE-16980
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.2.4
>            Reporter: Andrew Purtell
>            Assignee: Yu Li
>
> Found while evaluating 1.2.4 RC1
> {noformat}
>   TestRowProcessorEndpoint.testMultipleRows:246 expected:<3> but was:<2>
>   TestRowProcessorEndpoint.testReadModifyWrite:184 expected:<101> but was:<91>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
