[ https://issues.apache.org/jira/browse/HDFS-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13048979#comment-13048979 ]

stack commented on HDFS-941:
----------------------------

I put back TestDataXceiver.  It does this:

{code}
-    List<LocatedBlock> blkList = util.writeFile(TEST_FILE, FILE_SIZE_K);
+    // Create file.
+    util.writeFile(TEST_FILE, FILE_SIZE_K);
+    // Now get its blocks.
+    List<LocatedBlock> blkList = util.getFileBlocks(TEST_FILE, FILE_SIZE_K);
{code}


rather than change the writeFile signature (writeFile is used in a few other
places, so changing its signature would ripple through those callers).
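The shape of that change can be sketched without Hadoop's actual classes. This is a minimal, self-contained illustration of the design choice above (the names `writeFile`, `getFileBlocks`, and `LocatedBlock` here are stand-ins, not the real test-utility APIs): existing callers of writeFile keep the same signature, and the block list is exposed through a separate lookup instead.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch, not the actual Hadoop test utilities.
public class FileUtilSketch {
    // Stand-in for a located block; hypothetical.
    static class LocatedBlock {
        final long offset;
        LocatedBlock(long offset) { this.offset = offset; }
    }

    private final List<LocatedBlock> blocks = new ArrayList<>();

    // Existing callers keep this unchanged signature.
    public void writeFile(String path, int sizeKB) {
        // Pretend each 1 KB of data lands in its own block.
        for (long off = 0; off < sizeKB * 1024L; off += 1024L) {
            blocks.add(new LocatedBlock(off));
        }
    }

    // New accessor: look the blocks up after the fact, so writeFile's
    // return type never has to change.
    public List<LocatedBlock> getFileBlocks(String path, int sizeKB) {
        return new ArrayList<>(blocks);
    }

    public static void main(String[] args) {
        FileUtilSketch util = new FileUtilSketch();
        util.writeFile("/test", 4);
        List<LocatedBlock> blkList = util.getFileBlocks("/test", 4);
        System.out.println(blkList.size()); // 4
    }
}
```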


I also added back BlockSender.isBlockReadFully, so the checks we make before
calling verifiedByClient are as they were before this patch was applied:

{code}
-        if (DataTransferProtocol.Status.read(in) == CHECKSUM_OK) {
-          if (blockSender.isBlockReadFully() && datanode.blockScanner != null) {
-            datanode.blockScanner.verifiedByClient(block);
+      if (blockSender.didSendEntireByteRange()) {
+        // If we sent the entire range, then we should expect the client
+        // to respond with a Status enum.
+        try {
+          DataTransferProtocol.Status stat = DataTransferProtocol.Status.read(in);
+          if (stat == null) {
+            LOG.warn("Client " + s.getInetAddress() + "did not send a valid status " +
+                     "code after reading. Will close connection.");
+            IOUtils.closeStream(out);
+          } else if (stat == CHECKSUM_OK) {
+            if (blockSender.isBlockReadFully() && datanode.blockScanner != null) {
+              datanode.blockScanner.verifiedByClient(block);
+            }
           }
{code}
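The handshake above boils down to: after the server sends the entire requested range, it reads one trailing status code from the client; an unrecognized code closes the connection, and only CHECKSUM_OK leads to marking the block verified. A minimal sketch of that guard, using plain java.io in place of Hadoop's DataTransferProtocol (the `Status` enum, `readStatus`, and `handle` here are illustrative, not the real classes):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Hedged sketch of the trailing-status guard; not Hadoop's actual protocol code.
public class StatusReadSketch {
    enum Status { SUCCESS, ERROR, CHECKSUM_OK }

    // Read a 2-byte code; return null for anything out of range,
    // mirroring the "stat == null" branch in the patch.
    static Status readStatus(DataInputStream in) throws IOException {
        int code = in.readShort();
        Status[] values = Status.values();
        return (code >= 0 && code < values.length) ? values[code] : null;
    }

    static String handle(byte[] wire) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
        Status stat = readStatus(in);
        if (stat == null) {
            return "close";   // bogus code: close the connection
        } else if (stat == Status.CHECKSUM_OK) {
            return "verify";  // safe to tell the block scanner it was verified
        }
        return "ignore";      // valid status, but no verification to record
    }

    public static void main(String[] args) throws IOException {
        System.out.println(handle(new byte[] {0, 2})); // CHECKSUM_OK -> verify
        System.out.println(handle(new byte[] {0, 9})); // unknown code -> close
    }
}
```

The point of the null check is that with connection reuse the server can no longer assume the stream ends after one operation, so a garbled or missing status must drop the connection rather than be misread as the start of the next request.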

I ran the bundled tests and they pass.  I'm currently running the full suite.

> Datanode xceiver protocol should allow reuse of a connection
> ------------------------------------------------------------
>
>                 Key: HDFS-941
>                 URL: https://issues.apache.org/jira/browse/HDFS-941
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: bc Wong
>         Attachments: 941.22.txt, 941.22.txt, 941.22.v2.txt, 941.22.v3.txt, 
> HDFS-941-1.patch, HDFS-941-2.patch, HDFS-941-3.patch, HDFS-941-3.patch, 
> HDFS-941-4.patch, HDFS-941-5.patch, HDFS-941-6.22.patch, HDFS-941-6.patch, 
> HDFS-941-6.patch, HDFS-941-6.patch, fix-close-delta.txt, hdfs-941.txt, 
> hdfs-941.txt, hdfs-941.txt, hdfs-941.txt, hdfs941-1.png
>
>
> Right now each connection into the datanode xceiver only processes one 
> operation.
> In the case that an operation leaves the stream in a well-defined state (eg a 
> client reads to the end of a block successfully) the same connection could be 
> reused for a second operation. This should improve random read performance 
> significantly.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira