[ https://issues.apache.org/jira/browse/HDFS-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688557#comment-13688557 ]

Suresh Srinivas commented on HDFS-4465:
---------------------------------------

Some early comments:
# {{LightWeightGSet.computeCapacity(2.0, "BlockMap")}}: given that the datanode is not 
as tightly written as the namenode, and generally uses much more memory, do you 
think we need 2% of the total Java heap, or is 1% sufficient?
# I would prefer BlockScanInfo to contain a Block instead of extending 
Block.
# Please factor the part of setDirInternal that parses a directory name into 
baseDirPath and a list of integers out into a static method, and write a unit 
test for it. Also, giving a few examples of the path names you expect and how 
you process them would make the code more understandable.
# It would be good to quantify the savings.
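On comment #1, a minimal sketch of how a percentage-of-heap capacity maps to an entry count. This is an illustration, not the actual LightWeightGSet source: it assumes the map sizes an internal reference array at roughly 8 bytes per slot and rounds down to a power of two, so halving the percentage roughly halves the capacity.

```java
// Sketch (assumed layout, not the real Hadoop implementation): how a
// call like LightWeightGSet.computeCapacity(2.0, "BlockMap") could
// translate a heap percentage into a hash-table capacity.
public class CapacitySketch {
    static int computeCapacity(double percentage, String mapName) {
        // total memory the JVM may use
        long maxMemory = Runtime.getRuntime().maxMemory();
        // memory allotted to the map's internal reference array
        double allotted = maxMemory * percentage / 100.0;
        // each slot holds one object reference (assume 8 bytes)
        int slots = (int) (allotted / 8);
        // round down to a power of two, as hash-table capacities usually are
        int capacity = Integer.highestOneBit(slots);
        System.out.println(mapName + " capacity = " + capacity
            + " (" + percentage + "% of " + maxMemory + " bytes)");
        return capacity;
    }

    public static void main(String[] args) {
        // going from 2% to 1% halves the entry count
        computeCapacity(2.0, "BlockMap");
        computeCapacity(1.0, "BlockMap");
    }
}
```

Under this model, dropping from 2% to 1% simply halves the number of slots reserved up front, which is the trade-off the question above is weighing.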
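On comment #2, the suggestion is composition over inheritance. A hypothetical sketch of the preferred shape; the field and method names here are illustrative, not taken from the patch:

```java
// Hypothetical sketch: BlockScanInfo holds a Block rather than
// extending it, keeping the scan-tracking state decoupled from Block.
class Block {
    final long blockId;
    Block(long blockId) { this.blockId = blockId; }
}

class BlockScanInfo {
    private final Block block;   // composition: contains a Block
    private long lastScanTime;   // illustrative scan-tracking field

    BlockScanInfo(Block block) { this.block = block; }

    Block getBlock() { return block; }
    long getLastScanTime() { return lastScanTime; }
    void setLastScanTime(long t) { lastScanTime = t; }
}
```

With containment, BlockScanInfo's lifetime and identity are independent of Block's, and a change to Block's hierarchy cannot silently alter BlockScanInfo's behavior.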

                
> Optimize datanode ReplicasMap and ReplicaInfo
> ---------------------------------------------
>
>                 Key: HDFS-4465
>                 URL: https://issues.apache.org/jira/browse/HDFS-4465
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 2.0.5-alpha
>            Reporter: Suresh Srinivas
>            Assignee: Aaron T. Myers
>         Attachments: dn-memory-improvements.patch, HDFS-4465.patch
>
>
> In Hadoop, a lot of optimization has been done to make the namenode data 
> structures memory efficient. Similar optimizations are necessary for the 
> datanode process. With the growth in storage per datanode and in the number 
> of blocks hosted on a datanode, this jira intends to optimize the long-lived 
> ReplicasMap and ReplicaInfo objects.
