Hi Jonathan,

Thank you.

The number of files affects the memory usage of the NameNode. I just want to measure the NameNode's real memory usage, but the heap usage changes constantly, so I have no idea which value is the right one.

Thanks,
Tim

> On Feb 22, 2015, at 12:22 AM, Jonathan Aquilina <[email protected]> wrote:
>
> I am rather new to Hadoop, but wouldn't the difference potentially be in how the files are split in terms of size?
>
> ---
> Regards,
> Jonathan Aquilina
> Founder Eagle Eye T
>
> On 2015-02-21 21:54, Fang Zhou wrote:
>
>> Hi All,
>>
>> I want to test the memory usage on the NameNode and DataNodes.
>>
>> I have tried jmap, jstat, /proc/<pid>/stat, top, ps aux, and the Hadoop web interface to check the memory. The values I get from them are all different, and the memory usage also changes periodically. This is the first thing that confuses me.
>>
>> I thought that the more files are stored, the more memory the NameNode and DataNodes would use, and that the NameNode should use more memory than any single DataNode. However, some of my results contradict this. For example, I tested the NameNode's memory usage with 6000 files and with 1000 files, and according to jmap, the "6000" memory is less than the "1000" memory. I also found that a DataNode uses more memory than the NameNode.
>>
>> I really don't know how to measure the memory usage of the NameNode and DataNodes.
>>
>> Can anyone give me some advice?
>>
>> Thanks,
>> Tim
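For what it's worth, part of the confusion is that the tools report different heap figures: "used" (which includes garbage not yet collected, so it fluctuates and can be larger for a smaller workload), "committed" (what the OS has actually handed to the JVM, roughly what top/ps see), and "max" (the -Xmx ceiling). A minimal sketch using the standard java.lang.management API (not anything Hadoop-specific) that prints all three:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    public static void main(String[] args) {
        // Request a collection first so "used" approximates live data
        // rather than live data plus uncollected garbage. System.gc()
        // is only a hint, but HotSpot honors it by default.
        System.gc();

        MemoryUsage heap =
            ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();

        // Invariant: used <= committed (<= max, when -Xmx is set).
        // "used" moves constantly with allocation and GC; "committed"
        // changes only when the JVM grows or shrinks the heap.
        System.out.println("used      = " + heap.getUsed());
        System.out.println("committed = " + heap.getCommitted());
        System.out.println("max       = " + heap.getMax());
    }
}
```

Externally, `jstat -gc <pid>` and `jmap -heap <pid>` expose the same distinctions. In particular, `jmap -histo:live <pid>` forces a full GC before counting, so comparing the 1000-file and 6000-file NameNode with that command should give numbers that actually track the file count, rather than whatever garbage happened to be lying around at snapshot time.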
