[ https://issues.apache.org/jira/browse/HADOOP-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12651404#action_12651404 ]
Christian Kunz commented on HADOOP-4735:
----------------------------------------

All 0-size files had similar log messages (initial size 0, updated to the non-zero size during closing):

2008-11-16 22:52:04,302 INFO org.apache.hadoop.dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: xxx.yyy.zzz.vvv:port is added to blk_6021851155145233808 size 0
2008-11-16 22:52:04,540 WARN org.apache.hadoop.dfs.StateChange: BLOCK* NameSystem.addStoredBlock: Redundant addStoredBlock request received for blk_6021851155145233808 on xxx.yyy.zzz.vvv:port size 38807484

I also checked that the file corruption started after a single namenode restart following file creation. I would therefore conclude that the edits file must have contained the initial 0 size instead of the final size.

The fact that the files continue to exist after the namenode restart indicates that they were properly closed. Correct? Unfortunately, there are no log messages about the files being closed.

I am starting to suspect that there is a rare race condition between updating the block with the correct size and storing it in the edits file during the closing process. Would that be possible?

> NameNode reporting 0 size for originally non-empty files
> --------------------------------------------------------
>
>                 Key: HADOOP-4735
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4735
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.2
>            Reporter: Christian Kunz
>
> NameNode reports 0 size for a handful of files that were non-empty originally.
> The corresponding blocks on the DataNodes are non-empty.
> NameNode must have reported the correct size at some time, because applications
> that would have failed with 0-size files executed successfully.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
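The race suspected above can be sketched generically. This is an illustrative Python model, not Hadoop code: the `Block`, `race_once`, and delay are hypothetical stand-ins for the NameNode's in-memory block size, the edits-log write on close, and the late-arriving final size report from the datanode. If the edits-log write reads the size before the update lands, the journal permanently records 0 even though the in-memory state is later corrected.

```python
import threading
import time

class Block:
    """Stand-in for the NameNode's in-memory record of a block."""
    def __init__(self):
        self.size = 0  # size known at allocation time

def race_once(delay):
    """Run one instance of the suspected race and return the logged size."""
    block = Block()
    edits_log = []  # stand-in for the persisted edits file

    def update_size():
        time.sleep(delay)        # final size report arrives late
        block.size = 38807484    # the true size from the log excerpt above

    def close_and_log():
        # Persists whatever size is current at close time, without
        # waiting for the pending update -- this is the suspected bug.
        edits_log.append(block.size)

    t = threading.Thread(target=update_size)
    t.start()
    close_and_log()              # runs before the delayed update
    t.join()
    return edits_log[0]

# When the close wins the race, the edits log records 0, so a namenode
# restart replays the stale size -- matching the observed corruption.
```

Under this model the fix would be to order the size update strictly before the edits-log write (or re-log on the redundant addStoredBlock), but whether that matches the actual 0.17.2 close path is exactly the question raised above.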