[ https://issues.apache.org/jira/browse/HBASE-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13120689#comment-13120689 ]

stack commented on HBASE-3581:
------------------------------

Tried it on 0.92 branch head and got these failures.  Look into it:

{code}

Results :

Failed tests:   testLogRollOnDatanodeDeath(org.apache.hadoop.hbase.regionserver.wal.TestLogRolling): New log file should have the default replication

Tests in error:
  testMultiSlaveReplication(org.apache.hadoop.hbase.replication.TestMultiSlaveReplication): test timed out after 300000 milliseconds
  testCyclicReplication(org.apache.hadoop.hbase.replication.TestMasterReplication): test timed out after 300000 milliseconds
  testSimplePutDelete(org.apache.hadoop.hbase.replication.TestMasterReplication): Cluster already running at /home/stack/0.92/target/test-data/18b3c603-4c68-434f-8701-7cfcf558129b
  testExceptionFromCoprocessorWhenCreatingTable(org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove): test timed out after 30000 milliseconds
  testClusterRestart(org.apache.hadoop.hbase.master.TestRestartCluster): test timed out after 300000 milliseconds
  testBasicRollingRestart(org.apache.hadoop.hbase.master.TestRollingRestart): test timed out after 300000 milliseconds
  testUsingMetaAndBinary(org.apache.hadoop.hbase.regionserver.TestGetClosestAtOrBefore): Cannot lock storage /home/stack/0.92/build/hbase/test/dfs/name1. The directory is already locked.
  testGetClosestRowBefore3(org.apache.hadoop.hbase.regionserver.TestGetClosestAtOrBefore): Cannot lock storage /home/stack/0.92/build/hbase/test/dfs/name1. The directory is already locked.
  testGetClosestRowBefore2(org.apache.hadoop.hbase.regionserver.TestGetClosestAtOrBefore): Cannot lock storage /home/stack/0.92/build/hbase/test/dfs/name1. The directory is already locked.
  testWideScanBatching(org.apache.hadoop.hbase.regionserver.TestWideScanner): Cannot lock storage /home/stack/0.92/build/hbase/test/dfs/name1. The directory is already locked.
  testBasicHalfMapFile(org.apache.hadoop.hbase.regionserver.TestStoreFile): Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
  testReference(org.apache.hadoop.hbase.regionserver.TestStoreFile): Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
  testBloomFilter(org.apache.hadoop.hbase.regionserver.TestStoreFile): Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
  testBloomTypes(org.apache.hadoop.hbase.regionserver.TestStoreFile): Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
  testBloomEdgeCases(org.apache.hadoop.hbase.regionserver.TestStoreFile): Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
  testFlushTimeComparator(org.apache.hadoop.hbase.regionserver.TestStoreFile): Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
  testMROnTableWithCustomMapper(org.apache.hadoop.hbase.mapreduce.TestImportTsv): java.io.IOException: File /tmp/hadoop-stack/mapred/system/job_local_0002/libjars/zookeeper-3.3.3.jar could only be replicated to 0 nodes, instead of 1
  testSimpleLoad(org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles): No such file or directory
  testReconstruction(org.apache.hadoop.hbase.TestFullLogReconstruction): test timed out after 300000 milliseconds
  testFlushCommitsWithAbort(org.apache.hadoop.hbase.client.TestMultiParallel): test timed out after 300000 milliseconds
  testFlushCommitsNoAbort(org.apache.hadoop.hbase.client.TestMultiParallel): test timed out after 300000 milliseconds
  test2481(org.apache.hadoop.hbase.client.TestScannerTimeout): test timed out after 300000 milliseconds
  test2772(org.apache.hadoop.hbase.client.TestScannerTimeout): test timed out after 300000 milliseconds

Tests run: 897, Failures: 1, Errors: 20, Skipped: 18
{code}
                
> hbase rpc should send size of response
> --------------------------------------
>
>                 Key: HBASE-3581
>                 URL: https://issues.apache.org/jira/browse/HBASE-3581
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: ryan rawson
>            Assignee: stack
>            Priority: Critical
>             Fix For: 0.92.0
>
>         Attachments: 3581-v2.txt, 3581-v3.txt, 3581-v4.txt, HBASE-rpc-response.txt
>
>
> The RPC reply from server to client does not include the size of the payload;
> it is framed like so:
> <i32> callId
> <byte> errorFlag
> <byte[]> data
> The data segment itself carries enough information about how big the response
> is for a Writable reader to decode it.
> This makes it difficult to write buffering clients, which might read the entire
> 'data' segment and then pass it to a decoder. While less memory efficient,
> sending the size along is necessary if you want to easily write block-read
> clients (e.g. NIO), so that the client can snarf the whole response into a
> local buffer.
> The new proposal is:
> <i32> callId
> <i32> size
> <byte> errorFlag
> <byte[]> data
> The size is sizeof(data) + sizeof(errorFlag).
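
For illustration, here is a minimal sketch of a client-side reader for the proposed framing, assuming java.nio on a blocking SocketChannel; the class and helper names are hypothetical, not the actual HBase client code:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Hypothetical reader for the proposed frame:
// <i32> callId, <i32> size, then size bytes of (errorFlag + data).
public class FramedResponseReader {

  // Fill the buffer completely, then flip it for reading.
  private static void readFully(SocketChannel ch, ByteBuffer buf) throws IOException {
    while (buf.hasRemaining()) {
      if (ch.read(buf) < 0) {
        throw new IOException("Connection closed mid-response");
      }
    }
    buf.flip();
  }

  public static void readResponse(SocketChannel ch) throws IOException {
    ByteBuffer header = ByteBuffer.allocate(8); // callId (4 bytes) + size (4 bytes)
    readFully(ch, header);
    int callId = header.getInt();
    int size = header.getInt(); // sizeof(errorFlag) + sizeof(data)

    // Because the size is known up front, the whole payload can be
    // snarfed into one local buffer before any decoding happens.
    ByteBuffer payload = ByteBuffer.allocate(size);
    readFully(ch, payload);

    byte errorFlag = payload.get();
    byte[] data = new byte[size - 1];
    payload.get(data);
    // ... hand callId/errorFlag/data off to a Writable decoder ...
  }
}
{code}

With the old framing the reader cannot size 'data' without decoding it; with the size prefix it can buffer the entire response first, which is exactly what a block-read (NIO) client needs.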

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
