[
https://issues.apache.org/jira/browse/HDFS-14896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16963862#comment-16963862
]
Stephen O'Donnell commented on HDFS-14896:
------------------------------------------
I am not sure allocating more blocks at a time is the solution to this. It may
prevent the problem from occurring, but it's not clear how many blocks you would
need to allocate at a time to avoid it, and it would just be masking some other
issue.
As I understand it, assuming nothing else is sharing the DN disks (e.g. YARN
jobs), the namenode knows how much space is free on each DN disk, and when you
open a new block it reserves one full block size worth of space. It then adjusts
the space used when the block is closed, based on how large the block actually
turned out to be. This prevents the NN from allocating so many blocks on any DN
that they would use more than the available space.
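To illustrate what I mean, here is a rough standalone sketch of that accounting.
It is not the real HDFS code (the class and field names below are made up for
illustration); it just shows the shape of the check where the NN only picks a
storage if the remaining space, minus what is already scheduled for open blocks,
still covers at least one full block - the MIN_BLOCKS_FOR_WRITE multiple this
Jira makes configurable:
{code:java}
// Simplified sketch only; names here do not match the real Hadoop classes.
public class SpaceCheckSketch {

    /** Minimal stand-in for a DN storage as the NN sees it. */
    static class StorageInfo {
        long remaining;        // free bytes reported by the DN for this volume
        int blocksScheduled;   // open blocks already placed here but not yet finalized

        StorageInfo(long remaining, int blocksScheduled) {
            this.remaining = remaining;
            this.blocksScheduled = blocksScheduled;
        }
    }

    // In current HDFS this multiple is effectively 1; the Jira is about
    // making it configurable.
    static final int MIN_BLOCKS_FOR_WRITE = 1;

    /**
     * True if this storage still has room for a new block once the blocks
     * already scheduled to it are accounted for.
     */
    static boolean hasEnoughSpace(StorageInfo storage, long blockSize) {
        long requiredSize = blockSize * MIN_BLOCKS_FOR_WRITE;
        long scheduledSize = blockSize * storage.blocksScheduled;
        return requiredSize <= storage.remaining - scheduledSize;
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;                       // 128 MB block
        StorageInfo disk = new StorageInfo(300L * 1024 * 1024, 1); // 300 MB free, 1 block pending
        // 300 MB free minus the 128 MB already reserved leaves ~172 MB,
        // which is enough headroom for one more block.
        System.out.println(hasEnoughSpace(disk, blockSize));       // prints true
    }
}
{code}
If the inputs to that check (the remaining space the DN reports and the number
of blocks already scheduled) are accurate, raising the multiple should not be
needed, which is why I suspect something outside of HDFS is consuming the space.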
If blocks are being allocated and there is then no space to write them, it
suggests either that something else is using the DN disks, such as YARN tasks,
or that there is something wrong with the space accounting on the namenode.
If other things are using the DN disks, then you need to set a larger reserved
space (dfs.datanode.du.reserved) so that the other usage stays within the
reserved space and does not eat into the space the datanode expects to have
available for HDFS.
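For reference, that is set per datanode in hdfs-site.xml and the value is in
bytes per volume; the 10 GB below is only an example and needs to be sized to
whatever the non-HDFS usage on those disks can realistically reach:
{code:xml}
<!-- hdfs-site.xml on the DataNodes; reserves space per volume for non-DFS use -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- example value: 10 GB, in bytes -->
  <value>10737418240</value>
</property>
{code}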
When you see this issue, does the DN show high non-DFS used, and has it gone
beyond the reserved space on some or all of the disks?
> Make MIN_BLOCKS_FOR_WRITE configurable
> --------------------------------------
>
> Key: HDFS-14896
> URL: https://issues.apache.org/jira/browse/HDFS-14896
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Lisheng Sun
> Assignee: Lisheng Sun
> Priority: Minor
> Attachments: HDFS-14896.001.patch, HDFS-14896.002.patch,
> HDFS-14896.003(2).patch, HDFS-14896.003.patch, HDFS-14896.004.patch,
> HDFS-14896.005.patch
>
>