[ https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109966#comment-14109966 ]

Colin Patrick McCabe commented on HDFS-6898:
--------------------------------------------

bq. Colin Patrick McCabe Will fallocate trigger disk I/O? I thought it only 
manipulates metadata: http://linux.die.net/man/1/fallocate – "this is done 
quickly by allocating blocks and marking them as uninitialized, requiring no IO 
to the data blocks"

It depends on the underlying file system.  For a filesystem that uses extents, 
like ext4, not much I/O is required.  For something like ext2, each block has 
to be allocated individually and linked to the rest, which could be quite a 
lot of I/O.  That's a fair point, though... it will not be 128 MB of I/O in 
either case; it will be less than that, and it is essentially all metadata I/O.
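
For reference, here is a minimal sketch of the preallocation call under 
discussion (the file name and the 128 MB block size are assumptions for 
illustration, not taken from the attached patch):

{code}
/*
 * Minimal sketch (not from the attached patch): preallocate a block-length
 * region with posix_fallocate(3).  On an extent-based filesystem such as
 * ext4 this is mostly metadata updates; on ext2 every block pointer must be
 * written out, which costs more I/O.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE (128L * 1024 * 1024)   /* assumed 128 MB HDFS block size */

int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : "blk_tmp.rbw";  /* hypothetical name */
    int fd = open(path, O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Returns 0 on success or an errno value (e.g. ENOSPC) directly. */
    int err = posix_fallocate(fd, 0, BLOCK_SIZE);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}
{code}

Since posix_fallocate reports ENOSPC at the call itself, a full volume would 
show up at block creation time rather than partway through the write.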

bq. However we still have the other issue that Colin mentioned - we cannot 
deduce the end of file on restart since our block file format lacks any 
header/meta information.

Yeah.

bq. Colin Patrick McCabe, the attached .03.patch is current.

Thanks.  I will try to take a look later today.

> DN must reserve space for a full block when an RBW block is created
> -------------------------------------------------------------------
>
>                 Key: HDFS-6898
>                 URL: https://issues.apache.org/jira/browse/HDFS-6898
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.5.0
>            Reporter: Gopal V
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch
>
>
> DN will successfully create two RBW blocks on the same volume even if the 
> free space is sufficient for just one full block.
> One or both block writers may subsequently get a DiskOutOfSpace exception. 
> This can be avoided by allocating space up front.
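
To make the "allocating space up front" idea concrete, here is a minimal 
sketch of per-volume reservation.  The names, the volume path, and the 
128 MB block size are hypothetical, and this is not the attached patch:

{code}
/*
 * Minimal sketch of the "allocate space up front" idea (hypothetical names,
 * not the attached patch): reserve a full block's worth of space per RBW
 * writer and reject the writer immediately if the volume cannot cover all
 * outstanding reservations.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/statvfs.h>

#define BLOCK_SIZE (128L * 1024 * 1024)      /* assumed 128 MB block size */

static atomic_long reserved_bytes;           /* space promised to open RBW blocks */

/* Try to reserve one full block on the volume mounted at `dir`. */
static bool reserve_for_rbw(const char *dir) {
    struct statvfs vfs;
    if (statvfs(dir, &vfs) != 0)
        return false;
    long long free_bytes = (long long)vfs.f_bavail * vfs.f_frsize;
    long already = atomic_fetch_add(&reserved_bytes, BLOCK_SIZE);
    if (already + BLOCK_SIZE > free_bytes) {
        atomic_fetch_sub(&reserved_bytes, BLOCK_SIZE);   /* roll back */
        return false;
    }
    return true;
}

/* Release the reservation when the block is finalized or abandoned. */
static void release_reservation(void) {
    atomic_fetch_sub(&reserved_bytes, BLOCK_SIZE);
}

int main(void) {
    const char *volume = "/data/dn1";        /* hypothetical volume directory */
    bool first  = reserve_for_rbw(volume);
    bool second = reserve_for_rbw(volume);
    printf("writer 1: %s, writer 2: %s\n",
           first  ? "reserved" : "rejected",
           second ? "reserved" : "rejected");
    if (first)  release_reservation();
    if (second) release_reservation();
    return 0;
}
{code}

With only one block's worth of free space on the volume, the second call 
fails up front instead of both writers hitting DiskOutOfSpace mid-write.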



