[ https://issues.apache.org/jira/browse/HADOOP-1296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12491903 ]

Owen O'Malley commented on HADOOP-1296:
---------------------------------------

It shouldn't be hundreds of thousands of records, because this is per file. In
the worst case (a 100 TB table in 1000 fragments with 128 MB blocks), it will
be 750 blocks or so per file.
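
A quick back-of-the-envelope check of that figure (the arithmetic here is
mine, using decimal terabytes, not from the original thread):

  100 TB / 128 MB per block  ~=  10^14 bytes / (128 * 2^20 bytes)  ~=  745,000 blocks
  745,000 blocks / 1000 fragments  ~=  745 blocks per file, i.e. roughly 750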

Furthermore, the proposed interface lets you ask about a subrange of the file,
if the whole file is more than you want all at once.

> Improve interface to FileSystem.getFileCacheHints
> -------------------------------------------------
>
>                 Key: HADOOP-1296
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1296
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>            Reporter: Owen O'Malley
>         Assigned To: dhruba borthakur
>
> The FileSystem interface provides only a very limited way of finding the 
> location of the data. The current method looks like:
> String[][] getFileCacheHints(Path file, long start, long len) throws IOException
> which returns a list of "block info" where each block info consists of a list of 
> host names. Because the hints don't include the information about where the 
> block boundaries are, map/reduce is required to call the name node for each 
> split. I'd propose that we fix the naming a bit and make it:
> public class BlockInfo implements Writable {
>   public long getStart();
>   public String[] getHosts();
> }
> BlockInfo[] getFileHints(Path file, long start, long len) throws IOException;
> So that map/reduce can query about the entire file and get the locations in a 
> single call.
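
For illustration only (this sketch is not from the issue itself): minimal
caller code showing how map/reduce could build all of its splits from one
getFileHints() call. The fs variable, the example path, and the use of
FileSystem.getLength(Path) to obtain the file length are assumptions here,
not part of the proposal.

    // Hypothetical caller of the proposed interface: one round trip to the
    // name node for the whole file, instead of one call per split.
    Path file = new Path("/tables/t1/part-00000");      // example path
    long len = fs.getLength(file);                      // assumed way to get the length
    BlockInfo[] hints = fs.getFileHints(file, 0, len);  // or a (start, len) subrange
    for (int i = 0; i < hints.length; i++) {
      long start = hints[i].getStart();
      // a block runs from its start to the next block's start (or to EOF),
      // so block boundaries fall out of the one call with no extra RPCs
      long end = (i + 1 < hints.length) ? hints[i + 1].getStart() : len;
      String[] hosts = hints[i].getHosts();
      // schedule the split [start, end) on a task local to one of 'hosts'
    }

Passing a (start, len) subrange instead of (0, len) gives the per-range query
mentioned in the comment above.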

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
