[ https://issues.apache.org/jira/browse/HDFS-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285707#comment-17285707 ]

Luca Toscano commented on HDFS-9536:
------------------------------------

Hi! I have recently upgraded a Hadoop cluster from 2.6.0 (CDH 5.16.2, so not a 
vanilla 2.6) to 2.10.1 (Apache Bigtop 1.5) and ran into a lot of trouble while 
upgrading the DNs. Before the upgrade we ran the DNs with -Xmx4G without any 
issue, but during the upgrade we had to bump the value to 16G on some nodes to 
avoid OOMs and allow the cluster to complete the upgrade. The nodes causing the 
most problems were the ones with the most blocks (on the order of several 
million blocks per node).
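
For context, the heap bump was just a change to HADOOP_DATANODE_OPTS in 
hadoop-env.sh along these lines (illustrative only, I'm omitting the GC flags 
we actually run):

{code}
# hadoop-env.sh (Hadoop 2.x) - sketch of the temporary heap bump for the upgrade
export HADOOP_DATANODE_OPTS="-Xms16g -Xmx16g ${HADOOP_DATANODE_OPTS}"
{code}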

Is this an issue that happens only when upgrading from 2.6.0 (due to 
https://issues.apache.org/jira/browse/HDFS-6482), or can it also happen on 
later versions? If so, how should the DN heap sizes be tuned? And would 
reducing `dfs.datanode.block.id.layout.upgrade.threads` help?
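
If lowering the thread count is the right knob, I assume it would be set on the 
DNs in hdfs-site.xml before restarting them for the upgrade, something like the 
following (the value 2 is just an example, and I believe the default is 12, 
please correct me if that's wrong):

{code:xml}
<!-- hdfs-site.xml: limit the parallelism of the block-ID based
     layout upgrade (illustrative value, not a recommendation) -->
<property>
  <name>dfs.datanode.block.id.layout.upgrade.threads</name>
  <value>2</value>
</property>
{code}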

Thanks in advance :)

> OOM errors during parallel upgrade to Block-ID based layout
> -----------------------------------------------------------
>
>                 Key: HDFS-9536
>                 URL: https://issues.apache.org/jira/browse/HDFS-9536
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Vinayakumar B
>            Assignee: Vinayakumar B
>            Priority: Major
>
> This is a follow-up jira for the OOM errors observed during parallel upgrade 
> to the Block-ID based datanode layout using the HDFS-8578 fix.
> More clues 
> [here|https://issues.apache.org/jira/browse/HDFS-8578?focusedCommentId=15042012&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15042012]


