[ 
https://issues.apache.org/jira/browse/HDFS-3672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13429942#comment-13429942
 ] 

Aaron T. Myers commented on HDFS-3672:
--------------------------------------

Breaking up DFSClient#getDiskBlockLocations makes the code a lot more readable 
IMO. Thanks for doing that.

A few more comments:

# This exception message shouldn't include "getDiskBlockLocations". I recommend 
you just say "DFSClient#getDiskBlockLocations expected to be given instances of 
HdfsBlockLocation"
# In the "re-group the locatedblocks to be grouped by datanodes..." loop, it 
seems like instead of the {{if (...)}} check, you could just put the 
initialization of the LocatedBlock list inside the outer loop, before the inner 
loop.
# Rather than using a hard-coded 10 threads for the ThreadPoolExecutor, please 
make this configurable. I think it's reasonable not to document it in a 
*-default.xml file, since most users will never want to change this value, but 
if someone does find the need to change it, it'd be nice not to have to 
recompile.
# Rather than reusing the socket read timeout as the timeout for the RPCs to 
the DNs, I think this should be separately configurable. That conf value is 
used as the timeout for reading block data from a DN, and defaults to 60s. I 
think it's entirely reasonable that callers of this API will want a much lower 
timeout. For that matter, you might consider calling the overload of 
ScheduledThreadPoolExecutor#invokeAll that takes a timeout parameter (a sketch 
covering this and the previous point follows the list).
# You should add a comment explaining the reasoning for having this loop. (I 
see why it's there, but it's not obvious, so it should be explained; a guess at 
the reasoning is sketched after this list.)
{code}
+    for (int i = 0; i < futures.size(); i++) {
+      metadatas.add(null);
+    }
{code}
# In the final loop in DFSClient#queryDatanodesForHdfsBlocksMetadata, I 
recommend you move the fetching of the callable and the datanode objects to the 
catch clause, since that's the only place those variables are used.
# In the same catch clause mentioned above, I recommend you log the full 
exception stack trace if LOG.isDebugEnabled() (see the logging sketch after 
this list).
# "did not" should be two words:
{code}
+            LOG.debug("Datanode responded with a block disk id we did" +
+                "not request, omitting.");
{code}
# I think we should make it clear in the HdfsDiskId javadoc that it only 
uniquely identifies a data directory on a DN _when paired with that DN_. That 
is, a DiskId is not unique across DNs (see the javadoc sketch after this 
list).
# You shouldn't be using protobuf ByteString outside of the protobuf 
translator code; just use a byte[]. For that matter, it's only necessary that 
the final result exposed to clients of the API be an opaque identifier. In the 
DN-side implementation of the RPC, and even in the DFSClient code, you could 
reasonably use a meaningful value that's not opaque (see the translator sketch 
after this list).
# How could this possibly happen?
{code}
+        // Oddly, we got a blockpath that didn't match any dataDir.
+        if (diskIndex == dataDirs.size()) {
+          LOG.warn("Could not determine the data dir of block " 
+              + block.toString() + " with path " + blockPath);
+        }
{code}
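
To illustrate points 3 and 4 together, something along these lines (the conf 
key names, defaults, and variable names here are made up for the example, and 
{{conf}} / {{callables}} stand for whatever the surrounding DFSClient code 
already has in scope):
{code}
// Sketch only: the conf keys and defaults are hypothetical, not from the patch.
int numThreads = conf.getInt("dfs.client.hdfs-blocks-metadata.threads", 10);
int timeoutMs = conf.getInt("dfs.client.hdfs-blocks-metadata.timeout.ms", 1000);

ExecutorService executor = Executors.newFixedThreadPool(numThreads);
try {
  // The timed overload of invokeAll() cancels any task still running when the
  // timeout expires, so one slow or dead DN can't stall the whole call.
  List<Future<HdfsBlocksMetadata>> futures =
      executor.invokeAll(callables, timeoutMs, TimeUnit.MILLISECONDS);
  // ... collect results from the futures as before ...
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
} finally {
  executor.shutdownNow();
}
{code}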
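
If I'm reading the null pre-fill right, the reasoning is that invokeAll() 
returns its futures in the same order as the submitted callables, so filling 
{{metadatas}} with nulls up front lets each result be set by index and keeps it 
positionally aligned with the datanode list even when some of the RPCs fail. If 
that's the intent, the comment could read roughly:
{code}
// Pre-fill with nulls so each result can be set by index later: invokeAll()
// returns futures in the same order as the submitted callables, and a null
// entry is left in place for any datanode whose call failed or timed out.
for (int i = 0; i < futures.size(); i++) {
  metadatas.add(null);
}
{code}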
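
For point 7, I mean something like the following inside the catch clause (the 
{{datanode}} variable is just illustrative):
{code}
if (LOG.isDebugEnabled()) {
  // Passing the exception as the second argument logs the full stack trace.
  LOG.debug("Failed to query datanode " + datanode + " for block metadata", e);
}
{code}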
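
For point 9, the HdfsDiskId javadoc could say something roughly like:
{code}
/**
 * An opaque identifier for a data directory (disk) on a particular datanode.
 * A DiskId is only meaningful when paired with the datanode that returned it;
 * the same DiskId value on two different datanodes does not refer to the same
 * disk.
 */
{code}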
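
For point 10, the ByteString/byte[] conversion can live entirely in the PB 
translator, e.g. a pair of helpers like these (the method names are 
placeholders, not from the patch):
{code}
// Only the protobuf translator touches ByteString; everything else sees byte[].
public static ByteString diskIdToProto(byte[] diskId) {
  return ByteString.copyFrom(diskId);
}

public static byte[] diskIdFromProto(ByteString diskId) {
  return diskId.toByteArray();
}
{code}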
                
> Expose disk-location information for blocks to enable better scheduling
> -----------------------------------------------------------------------
>
>                 Key: HDFS-3672
>                 URL: https://issues.apache.org/jira/browse/HDFS-3672
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: design-doc-v1.pdf, hdfs-3672-1.patch, hdfs-3672-2.patch, 
> hdfs-3672-3.patch, hdfs-3672-4.patch, hdfs-3672-5.patch, hdfs-3672-6.patch
>
>
> Currently, HDFS exposes on which datanodes a block resides, which allows 
> clients to make scheduling decisions for locality and load balancing. 
> Extending this to also expose on which disk on a datanode a block resides 
> would enable even better scheduling, on a per-disk rather than coarse 
> per-datanode basis.
> This API would likely look similar to Filesystem#getFileBlockLocations, but 
> also involve a series of RPCs to the responsible datanodes to determine disk 
> ids.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
