[ https://issues.apache.org/jira/browse/HDFS-1114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880423#action_12880423 ]
Scott Carey commented on HDFS-1114:
-----------------------------------

bq. This is a good point. Is there a way to determine if UseCompressedOops is set at runtime?

Well, there is ManagementFactory.getRuntimeMXBean().getInputArguments(), but later versions of Java are going to make +UseCompressedOops the default. There is also a way to check whether the VM is 64-bit or 32-bit, either via ManagementFactory or one of the system properties. Digging around, I don't see it, but I have used it before. I think it is vendor-specific, though.

> Reducing NameNode memory usage by an alternate hash table
> ---------------------------------------------------------
>
>                 Key: HDFS-1114
>                 URL: https://issues.apache.org/jira/browse/HDFS-1114
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Tsz Wo (Nicholas), SZE
>             Fix For: 0.22.0
>
>         Attachments: benchmark20100618.patch, GSet20100525.pdf, gset20100608.pdf, h1114_20100607.patch, h1114_20100614b.patch, h1114_20100615.patch, h1114_20100616b.patch, h1114_20100617.patch, h1114_20100617b.patch
>
>
> NameNode uses a java.util.HashMap to store BlockInfo objects. When there are many blocks in HDFS, this map uses a lot of memory in the NameNode. We may optimize the memory usage by a lightweight hash table implementation.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
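For reference, the two checks mentioned in the comment could be sketched roughly as below. This is only a sketch: scanning getInputArguments() detects the flag only when it was passed explicitly (not when the VM defaults it on), and the "sun.arch.data.model" property is vendor-specific (Sun/Oracle VMs), so the fallback to "os.arch" is a best-effort assumption.

```java
import java.lang.management.ManagementFactory;
import java.util.List;

public class VmCheck {

    /**
     * True only if -XX:+UseCompressedOops was passed explicitly on the
     * command line; a VM that enables it by default is not detected here.
     */
    static boolean compressedOopsFlagSet() {
        List<String> args = ManagementFactory.getRuntimeMXBean().getInputArguments();
        return args.contains("-XX:+UseCompressedOops");
    }

    /**
     * Best-effort 64-bit check. "sun.arch.data.model" is vendor-specific;
     * when absent, fall back to "os.arch", which reflects the JVM's
     * architecture rather than the OS's.
     */
    static boolean is64BitVm() {
        String dataModel = System.getProperty("sun.arch.data.model");
        if (dataModel != null) {
            return "64".equals(dataModel);
        }
        return System.getProperty("os.arch").contains("64");
    }

    public static void main(String[] args) {
        System.out.println("UseCompressedOops flag set: " + compressedOopsFlagSet());
        System.out.println("64-bit VM: " + is64BitVm());
    }
}
```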