[
https://issues.apache.org/jira/browse/HDFS-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13790420#comment-13790420
]
Kihwal Lee commented on HDFS-5323:
----------------------------------
I am guilty of knowing about this but ignoring it. Thanks for fixing it.
The following is from building the eclipse target.
{noformat}
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 932096 bytes for Arena::Amalloc
# An error report file with more information is saved as:
# /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hs_err_pid29135.log
{noformat}
This was during the build of MapReduce streaming, and the audit warning is
against the HotSpot VM error file it left behind. It doesn't run out of memory
on my machine with the latest trunk. Kicking PreCommit again.
> Remove some deadcode in BlockManager
> ------------------------------------
>
> Key: HDFS-5323
> URL: https://issues.apache.org/jira/browse/HDFS-5323
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Affects Versions: 2.3.0
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Priority: Minor
> Attachments: HDFS-5323.001.patch
>
>
> {{BlockManager#DEFAULT_MAP_LOAD_FACTOR}} is deadcode. It no longer does
> *anything* since blocks are now stored in a GSet whose size is fixed.
> {{BlocksMap#blocks}} does not need to be volatile. Whenever it is accessed,
> it is accessed under the {{FSNamesystem}} lock. Furthermore, access to this
> data structure is not thread-safe.
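To illustrate the point about {{volatile}} in the description above: a field that is only ever read and written while holding a single lock does not need to be {{volatile}}, because the lock's acquire/release already provides the happens-before ordering that {{volatile}} would add. A minimal sketch of that pattern, with illustrative names that are not the actual HDFS source (a plain {{ReentrantLock}} stands in for the {{FSNamesystem}} lock):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: all access to 'blocks' happens under 'lock',
// so unlocking in one thread happens-before locking in the next
// (JLS 17.4.5), and the field needs no volatile qualifier.
class GuardedBlocksMap {
    private final ReentrantLock lock = new ReentrantLock(); // stands in for the FSNamesystem lock
    private final Map<Long, String> blocks = new HashMap<>(); // not volatile, and safe

    String get(long blockId) {
        lock.lock();
        try {
            return blocks.get(blockId);
        } finally {
            lock.unlock();
        }
    }

    void put(long blockId, String info) {
        lock.lock();
        try {
            blocks.put(blockId, info);
        } finally {
            lock.unlock();
        }
    }
}
```

Conversely, if some code path reads the field without taking the lock, {{volatile}} alone would still not make the {{HashMap}} itself safe to use concurrently, which is the "not thread-safe" point above.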
--
This message was sent by Atlassian JIRA
(v6.1#6144)