[ https://issues.apache.org/jira/browse/HDFS-196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579709#comment-14579709 ]

Kevin Beyer commented on HDFS-196:
----------------------------------

I am not alone in hitting this problem:

http://stackoverflow.com/questions/5347293/hdfs-says-file-is-still-open-but-process-writing-to-it-was-killed

http://stackoverflow.com/questions/19565791/hbase-distributed-log-splitting-keeps-failing-because-unable-to-get-a-lease

http://stackoverflow.com/questions/23833318/crashed-hdfs-client-how-to-close-remaining-open-files
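The workaround these threads converge on is to ask the NameNode to recover the
lease on the orphaned file, which closes the file and commits the length of its
last block. A minimal sketch of that workaround, assuming a Hadoop 2.x client;
the class name, retry loop, and one-second delay are illustrative only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverOrphanedLog {
  public static void main(String[] args) throws Exception {
    Path path = new Path(args[0]);
    FileSystem fs = FileSystem.get(path.toUri(), new Configuration());
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    // recoverLease() returns true once the file is closed and its final
    // length is committed on the NameNode; retry until that happens.
    // (Production code usually polls isFileClosed() between attempts
    // instead of re-issuing recoverLease().)
    while (!dfs.recoverLease(path)) {
      Thread.sleep(1000);
    }
    System.out.println("length = " + fs.getFileStatus(path).getLen());
  }
}

After the lease is recovered, getFileStatus().getLen() reports the length that
was actually persisted before the crash.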


> File length not reported correctly after application crash
> ----------------------------------------------------------
>
>                 Key: HDFS-196
>                 URL: https://issues.apache.org/jira/browse/HDFS-196
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Doug Judd
>
> Our application (Hypertable) creates a transaction log in HDFS.  This log is 
> written with the following pattern:
> out_stream.write(header, 0, 7);
> out_stream.sync();
> out_stream.write(data, 0, amount);
> out_stream.sync();
> [...]
> However, if the application crashes and then comes back up, the following 
> statement
> length = mFilesystem.getFileStatus(new Path(fileName)).getLen();
> returns the wrong length.  Apparently this is because the method fetches the 
> length from the NameNode, whose information is stale.  Ideally, a call to 
> getFileStatus() would return the accurate file length by fetching the size of 
> the last block from the primary datanode.
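Short of the fix the description asks for, the usual read-side workaround is to
open the file and ask the client for the length that is actually visible on the
datanodes, rather than trusting getFileStatus(). A minimal sketch, assuming a
Hadoop 2.x client where FileSystem.open() returns an HdfsDataInputStream; the
class name is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

public class VisibleLength {
  public static void main(String[] args) throws Exception {
    Path path = new Path(args[0]);
    FileSystem fs = FileSystem.get(path.toUri(), new Configuration());
    // The NameNode's view excludes the last, still-open block after a crash.
    long namenodeLen = fs.getFileStatus(path).getLen();
    try (FSDataInputStream in = fs.open(path)) {
      if (in instanceof HdfsDataInputStream) {
        // getVisibleLength() includes the bytes of the last block that the
        // datanodes acknowledge, i.e. data persisted by sync()/hflush().
        long visibleLen = ((HdfsDataInputStream) in).getVisibleLength();
        System.out.println("namenode=" + namenodeLen + " visible=" + visibleLen);
      }
    }
  }
}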



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)