[ 
https://issues.apache.org/jira/browse/HDFS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12793728#action_12793728
 ] 

dhruba borthakur commented on HDFS-814:
---------------------------------------

Code looks good. Can we also enhance DFSClient.getFileInfo() to return the 
current length of a file (for a file that is being written into)... something 
like this:

{quote}
 
  public FileStatus getFileInfo(String src) throws IOException {
    checkOpen();
    try {
      FileStatus stat = namenode.getFileInfo(src);
      if (stat != null && stat.isUnderConstruction()) {
        // For a file still being written, report the length visible to readers.
        DFSInputStream in = open(src);
        try {
          stat.length = in.getFileLength();
        } finally {
          in.close();
        }
      }
      return stat;
    } catch(RemoteException re) {
      throw re.unwrapRemoteException(AccessControlException.class);
    }
  }

{quote}

The benefit of this approach is that it might reduce confusion for users... 
especially since DFSClient.getFileInfo() and DFSClient.getFileLength() could 
otherwise return different file sizes for the same file. Also, I am guessing 
that this will not introduce any new performance impact.
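To make the intent of the suggestion concrete, here is a minimal toy model in plain Java (not Hadoop code; all class and field names here are stand-ins invented for illustration) of the control flow being proposed: when a file is under construction, the length cached in the namenode's metadata is replaced by the length visible through an open stream, so both code paths report the same size.

```java
import java.util.HashMap;
import java.util.Map;

class VisibleLengthSketch {
    // Hypothetical stand-in for HdfsFileStatus.
    static class Stat {
        long length;
        boolean underConstruction;
        Stat(long length, boolean underConstruction) {
            this.length = length;
            this.underConstruction = underConstruction;
        }
    }

    // Stand-in for the namenode's (possibly stale) file metadata.
    private final Map<String, Stat> namenode = new HashMap<>();
    // Stand-in for the length visible to readers, e.g. after an hflush.
    private final Map<String, Long> visibleLength = new HashMap<>();

    void put(String src, Stat stat, long visible) {
        namenode.put(src, stat);
        visibleLength.put(src, visible);
    }

    // Mirrors the proposed fix: for an under-construction file, override the
    // cached length with the visible length, then return the updated status.
    Stat getFileInfo(String src) {
        Stat stat = namenode.get(src);
        if (stat != null && stat.underConstruction) {
            stat.length = visibleLength.get(src);
        }
        return stat;
    }
}
```

With this shape, a caller asking for the status of a file being written gets the visible length rather than the stale namenode length, which is exactly the consistency argument above.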

> Add an api to get the visible length of a DFSDataInputStream.
> -------------------------------------------------------------
>
>                 Key: HDFS-814
>                 URL: https://issues.apache.org/jira/browse/HDFS-814
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs client
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Tsz Wo (Nicholas), SZE
>             Fix For: 0.21.0, 0.22.0
>
>         Attachments: h814_20091221.patch
>
>
> Hflush guarantees that the bytes written before are visible to the new 
> readers.  However, there is no way to get the length of the visible bytes.  
> The visible length is useful in some applications like SequenceFile.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
