[ https://issues.apache.org/jira/browse/HDFS-3672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13419604#comment-13419604 ]

Todd Lipcon commented on HDFS-3672:
-----------------------------------

A few comments on the initial patch:

- I definitely think we need to separate the API for getting disk locations so 
that you can pass a list of LocatedBlocks. For some of the above-mentioned use 
cases (e.g. the MR scheduler), you need the locations for many files at once, 
and you don't want to have to do a fan-out round for each file separately (a 
rough sketch of such a batched API follows below).
- Per above, I agree that we should make the disk IDs opaque, but a single byte 
seems short-sighted. Let's expose them as an interface "DiskId" which can be 
entirely devoid of getters for now -- its only contract would be that it 
properly implements comparison, equals(), and hashCode(), so users can use them 
to aggregate stats by disk, etc. Internally we can implement it as a wrapper 
around a byte[] (see the DiskId sketch after this list).
- In the protobuf response, given the above, I think we should do something 
like:
{code}
message Response {
  repeated bytes diskIds = 1;
  // For each block, an index into the diskIds array above, or MAX_INT to
  // indicate the block was not found.
  repeated uint32 diskIndexes = 2;
}
{code}
- Per above, we need to figure out what to do for blocks that aren't found on a 
given DN. We also need to specify in the JavaDoc what happens in the response 
for DNs which don't respond. I think it's OK for the result to contain some 
"unknown" entries -- that's likely whenever any of the DNs are down (see the 
decoding sketch after this list).
- Doing the fan-out RPC does seem important. Unfortunately it might be tricky, 
so I agree we should do it in a separate follow-up optimization.
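
To make the DiskId suggestion concrete, here's a rough sketch of what the 
opaque interface and its byte[]-backed implementation could look like. The 
names and details are illustrative only, not an existing HDFS API:
{code}
import java.util.Arrays;

/**
 * Opaque handle for a disk on a datanode. No getters for now; the only
 * contract is well-behaved ordering, equality, and hashing so callers can
 * group and aggregate per-disk stats.
 */
public interface DiskId extends Comparable<DiskId> {
  // intentionally empty
}

/** Internal implementation wrapping the opaque byte[] returned by the DN. */
class ByteArrayDiskId implements DiskId {
  private final byte[] id;

  ByteArrayDiskId(byte[] id) {
    this.id = id.clone();
  }

  @Override
  public int compareTo(DiskId other) {
    // This sketch only compares against the same implementation class.
    byte[] o = ((ByteArrayDiskId) other).id;
    int n = Math.min(id.length, o.length);
    for (int i = 0; i < n; i++) {
      int cmp = Integer.compare(id[i] & 0xff, o[i] & 0xff);
      if (cmp != 0) {
        return cmp;
      }
    }
    return Integer.compare(id.length, o.length);
  }

  @Override
  public boolean equals(Object other) {
    return other instanceof ByteArrayDiskId
        && Arrays.equals(id, ((ByteArrayDiskId) other).id);
  }

  @Override
  public int hashCode() {
    return Arrays.hashCode(id);
  }
}
{code}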
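
And a hypothetical shape for the batched call from the first point, so a 
scheduler can resolve blocks from many files in one fan-out rather than one 
round per file. The interface and method names are made up for illustration; 
only LocatedBlock is an existing HDFS class:
{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

/** Hypothetical batched lookup: one call covering blocks from many files. */
public interface BlockDiskLocator {
  /**
   * Returns, for each LocatedBlock, the DiskId of each replica in the same
   * order as the block's datanode list; entries may be null when unknown.
   */
  Map<LocatedBlock, List<DiskId>> getDiskIds(List<LocatedBlock> blocks)
      throws IOException;
}
{code}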
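
Finally, a sketch of how the client side could fold one DN's protobuf response 
back into DiskIds, treating the MAX_INT sentinel as "unknown" (a DN that never 
responds would simply leave all of its blocks unknown). It reuses the 
ByteArrayDiskId wrapper from the sketch above and is illustrative only:
{code}
import com.google.protobuf.ByteString;
import java.util.ArrayList;
import java.util.List;

final class DiskIdDecoder {
  /**
   * diskIds:     the opaque disk IDs returned by one datanode.
   * diskIndexes: for each queried block, an index into diskIds, or
   *              MAX_INT when the block was not found on that DN.
   */
  static List<DiskId> decode(List<ByteString> diskIds,
                             List<Integer> diskIndexes) {
    List<DiskId> result = new ArrayList<DiskId>(diskIndexes.size());
    for (int idx : diskIndexes) {
      if (idx == Integer.MAX_VALUE || idx < 0 || idx >= diskIds.size()) {
        result.add(null);  // block not found on this DN -- report "unknown"
      } else {
        result.add(new ByteArrayDiskId(diskIds.get(idx).toByteArray()));
      }
    }
    return result;
  }
}
{code}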
                
> Expose disk-location information for blocks to enable better scheduling
> -----------------------------------------------------------------------
>
>                 Key: HDFS-3672
>                 URL: https://issues.apache.org/jira/browse/HDFS-3672
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-3672-1.patch
>
>
> Currently, HDFS exposes on which datanodes a block resides, which allows 
> clients to make scheduling decisions for locality and load balancing. 
> Extending this to also expose on which disk on a datanode a block resides 
> would enable even better scheduling, on a per-disk rather than coarse 
> per-datanode basis.
> This API would likely look similar to Filesystem#getFileBlockLocations, but 
> also involve a series of RPCs to the responsible datanodes to determine disk 
> ids.
