[
https://issues.apache.org/jira/browse/HADOOP-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12793884#action_12793884
]
Suresh Srinivas commented on HADOOP-6460:
-----------------------------------------
Based on Raghu's input I will attach a new simpler patch.
Regarding starting the buffer at 10K and shrinking back to that size, I was
wondering whether 10K is a good choice. In the getListing() operation, the
returned file path is the full path (we should consider changing that to return
only the file name). Assuming 128 bytes per FileStatus object, the response can
easily grow beyond 10K for a directory with just 80 files. Should we consider a
sufficiently large initial size, say 128K, to accommodate 1K files, so we avoid
repeatedly growing the buffer by doubling from 10K and incurring the cost of
memory allocation and copies?
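To make the trade-off concrete, here is a minimal sketch of the idea under discussion, assuming a per-handler response buffer: start the buffer larger (128K) and, after an unusually large response, discard the grown buffer and re-allocate one at the initial size. The ResponseBufferSketch class and the INITIAL_RESP_BUF_SIZE / MAX_RESP_BUF_SIZE constants are illustrative names only, not the identifiers used in the attached patch.
{code:java}
import java.io.ByteArrayOutputStream;

// Illustrative sketch only: class and constant names are hypothetical,
// not the ones in the actual HADOOP-6460 patch.
class ResponseBufferSketch {
  // Start at 128K so a ~1K-entry getListing response (~128 bytes per
  // FileStatus) fits without repeated doubling from a 10K buffer.
  static final int INITIAL_RESP_BUF_SIZE = 128 * 1024;
  // If one response grows the buffer far beyond the initial size,
  // discard it afterwards rather than pinning the memory per handler.
  static final int MAX_RESP_BUF_SIZE = 1024 * 1024;

  private ByteArrayOutputStream buf =
      new ByteArrayOutputStream(INITIAL_RESP_BUF_SIZE);

  byte[] serializeResponse(byte[] payload) {
    buf.reset();                              // reuse the buffer across calls
    buf.write(payload, 0, payload.length);
    byte[] response = buf.toByteArray();
    // buf.size() is the number of bytes written; a response this large
    // implies the internal array has grown at least as much. Shrink back
    // so a rare huge response does not keep a huge buffer alive in every
    // handler thread for the lifetime of the server.
    if (buf.size() > MAX_RESP_BUF_SIZE) {
      buf = new ByteArrayOutputStream(INITIAL_RESP_BUF_SIZE);
    }
    return response;
  }
}
{code}
Note that a 128K initial buffer is paid once per handler thread, so the right starting size is a trade-off between allocation/copy churn on common getListing responses and the namenode's resident memory.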
> Namenode runs out of memory due to memory leak in ipc Server
> ------------------------------------------------------------
>
> Key: HADOOP-6460
> URL: https://issues.apache.org/jira/browse/HADOOP-6460
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 0.20.1, 0.21.0, 0.22.0
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Priority: Blocker
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: hadoop-6460.1.patch, hadoop-6460.patch
>
>
> Namenode heap usage grows disproportionately to the number of objects it
> supports (files, directories and blocks). Based on heap dump analysis, this
> is due to large growth in the ByteArrayOutputStream allocated in
> o.a.h.ipc.Server.Handler.run().
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.