[
https://issues.apache.org/jira/browse/HDFS-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070021#comment-14070021
]
Kai Zheng commented on HDFS-6709:
---------------------------------
Hi Colin,
I reviewed the patch. The Slab code looks great. One question: looking at the
code below, I doubt we need the path that calls unsafe.getByte(). Whether the
buffer is allocated as a direct buffer or a heap buffer, the ByteBuffer
interface can be used to get/set bytes from/to it. Note it would be good to
avoid using Unsafe where possible. Please clarify if I have misunderstood
anything here, thanks.
{code}
+  byte getByte(int offset) {
+    if (base != 0) {
+      return NativeIO.getUnsafe().getByte(null, base + offset);
+    } else {
+      buf.position(offset);
+      return buf.get();
+    }
+  }
{code}
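For what it's worth, here is a minimal sketch of the ByteBuffer-only approach I had in mind (the class and helper names are mine, not from the patch). The absolute {{get(int)}} overload works for both direct and heap buffers, and also avoids mutating the buffer's position the way the {{position(offset); get()}} pair in the patch does:
{code}
import java.nio.ByteBuffer;

public class ByteBufferGetDemo {
  // Hypothetical replacement for the patch's getByte(): the absolute
  // ByteBuffer.get(int) call works for direct and heap buffers alike,
  // needs no Unsafe, and leaves the buffer's position untouched.
  static byte getByte(ByteBuffer buf, int offset) {
    return buf.get(offset);
  }

  public static void main(String[] args) {
    ByteBuffer direct = ByteBuffer.allocateDirect(16);
    ByteBuffer heap = ByteBuffer.allocate(16);
    direct.put(3, (byte) 42);  // absolute put, also position-preserving
    heap.put(3, (byte) 42);
    System.out.println(getByte(direct, 3)); // prints 42
    System.out.println(getByte(heap, 3));   // prints 42
  }
}
{code}
Of course the Unsafe path may still be justified for performance on the hot path; I am only asking whether that was the intent.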
> Implement off-heap data structures for NameNode and other HDFS memory
> optimization
> ----------------------------------------------------------------------------------
>
> Key: HDFS-6709
> URL: https://issues.apache.org/jira/browse/HDFS-6709
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-6709.001.patch
>
>
> We should investigate implementing off-heap data structures for NameNode and
> other HDFS memory optimization. These data structures could reduce latency
> by avoiding the long GC times that occur with large Java heaps. We could
> also avoid per-object memory overheads and control memory layout a little bit
> better. This also would allow us to use the JVM's "compressed oops"
> optimization even with really large namespaces, if we could get the Java heap
> below 32 GB for those cases. This would provide another performance and
> memory efficiency boost.
--
This message was sent by Atlassian JIRA
(v6.2#6252)