[ https://issues.apache.org/jira/browse/HDFS-1114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12879582#action_12879582 ]

Suresh Srinivas commented on HDFS-1114:
---------------------------------------

# BlocksMap.java
#* Typo: "exponient". Should it be "exponent"?
#* Capacity should be divided by a reference size of 8 or 4, depending on whether the JVM is 64-bit or 32-bit (see the first sketch after this list).
#* The current capacity calculation seems quite complex. Add more explanation of why it is implemented that way.
# LightWeightGSet.java
#* "which uses a hash table for storing the elements" - should this say "uses an array"?
#* Add a comment that the size of entries is a power of two.
#* Throw HadoopIllegalArgumentException instead of IllegalArgumentException (for the 0.20 version of the patch it could remain IllegalArgumentException).
#* remove() - for better readability there is no need for else-if and else, since the previous block returns (see the second sketch after this list).
#* toString() - prints all the entries. This is a bad idea if someone unknowingly passes this object to a Log. If all the details of the HashMap are needed, we should have some other method such as dump() or printDetails() to do the same (see the third sketch after this list).
# TestGSet.java
#* In the exception tests, instead of printing a log when the expected exception happens, print a message in Assert.fail(), like Assert.fail("Expected exception was not thrown"). The checks should also catch specific exception types instead of the generic Exception. It is also a good idea to document these exceptions in the javadoc for the methods in GSet (see the fourth sketch after this list).
#* println should use Log.info instead of System.out.println?
#* Add some comments to the classes on what they do and how they are used.
#* Add some comments to the GSetTestCase members (denominator etc.) and the constructor.
#* Add comments to testGSet() on what each case is accomplishing.
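
First sketch, on the BlocksMap capacity point: a minimal illustration of the kind of calculation being suggested. The method name computeCapacity, the percentage parameter, and the use of the sun.arch.data.model property are illustrative assumptions, not necessarily what the patch does.

{code:java}
// Sketch only: size the table from a fraction of the heap, divide by the
// reference size (8 bytes on a 64-bit JVM, 4 on 32-bit), and round down
// to a power of two as LightWeightGSet expects.
static int computeCapacity(double percentageOfHeap) {
  // sun.arch.data.model is an assumption; it is set on Sun/Oracle JVMs.
  final String model = System.getProperty("sun.arch.data.model", "64");
  final int referenceSize = "32".equals(model) ? 4 : 8;

  final long budget =
      (long) (Runtime.getRuntime().maxMemory() * percentageOfHeap / 100.0);
  long slots = budget / referenceSize;

  // Largest exponent e with 2^e <= slots (capped to keep the int positive).
  int e = 0;
  while ((slots >>= 1) > 0 && e < 30) {
    e++;
  }
  return 1 << e;
}
{code}

Keeping the capacity a power of two also lets the index be computed with a bit mask instead of a modulo.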

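Second sketch, on the remove() readability point, shown on a self-contained toy method rather than the actual LightWeightGSet code:

{code:java}
// Illustration of the readability suggestion only
// (not the actual LightWeightGSet.remove() code).
class EarlyReturnDemo {
  // Before: else-if/else nesting even though every branch returns.
  static String classify(int n) {
    if (n < 0) {
      return "negative";
    } else if (n == 0) {
      return "zero";
    } else {
      return "positive";
    }
  }

  // After: same behavior, flatter control flow - each early return
  // lets the next case stand on its own.
  static String classifyFlat(int n) {
    if (n < 0) {
      return "negative";
    }
    if (n == 0) {
      return "zero";
    }
    return "positive";
  }
}
{code}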

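Third sketch, on the toString() point: the suggested split between a cheap summary and an explicit dump. ToyGSet and its fields are placeholders, not the patch's class.

{code:java}
import java.io.PrintStream;

// Sketch only: a toy container showing the suggested split between a
// cheap toString() summary and an explicit printDetails() dump.
class ToyGSet {
  private final Object[] entries = new Object[8];
  private int size = 0;

  public void put(int i, Object o) { entries[i] = o; size++; }

  @Override
  public String toString() {
    // Summary only - harmless if someone logs the object by accident.
    return "ToyGSet(size=" + size + ", capacity=" + entries.length + ")";
  }

  /** Full dump, only on explicit request. */
  public void printDetails(PrintStream out) {
    out.println(this);
    for (int i = 0; i < entries.length; i++) {
      if (entries[i] != null) {
        out.println("  [" + i + "] " + entries[i]);
      }
    }
  }

  public static void main(String[] args) {
    ToyGSet s = new ToyGSet();
    s.put(3, "block-1001");
    System.out.println(s);        // prints the summary
    s.printDetails(System.out);   // prints everything
  }
}
{code}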

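Fourth sketch, on the TestGSet exception point: the JUnit shape being asked for. The get() method is a hypothetical stand-in for a GSet operation that must reject null keys.

{code:java}
import org.junit.Assert;
import org.junit.Test;

public class ExceptionPatternTest {
  // Hypothetical method under test, standing in for a GSet operation.
  static Object get(Object key) {
    if (key == null) {
      throw new NullPointerException("key == null");
    }
    return key;
  }

  @Test
  public void testNullKeyIsRejected() {
    try {
      get(null);
      // If we get here, the expected exception never happened.
      Assert.fail("Expected exception was not thrown");
    } catch (NullPointerException e) {
      // Expected: catch the specific type, not the generic Exception.
    }
  }
}
{code}
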
> Reducing NameNode memory usage by an alternate hash table
> ---------------------------------------------------------
>
>                 Key: HDFS-1114
>                 URL: https://issues.apache.org/jira/browse/HDFS-1114
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Tsz Wo (Nicholas), SZE
>         Attachments: GSet20100525.pdf, gset20100608.pdf, 
> h1114_20100607.patch, h1114_20100614b.patch, h1114_20100615.patch
>
>
> NameNode uses a java.util.HashMap to store BlockInfo objects.  When there are 
> many blocks in HDFS, this map uses a lot of memory in the NameNode.  We may 
> optimize the memory usage with a lightweight hash table implementation.
