of what it does.
>
> On Mon, 19 Jun 2017 at 14:20 Sidharth Kumar <sidharthkumar2...@gmail.com>
> wrote:
>
>> Hi Team,
>>
>> How feasible will it be if I configure the CMS garbage collector for Hadoop
>> daemons and configure G1 for MapReduce jobs which run for hours?
>>
>> Thanks for your help ...!
>>
>> --
>> Regards
>> Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn
>> <https://www.linkedin.com/in/sidharthkumar2792/>
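For concreteness, a minimal sketch of how that split could look on a Hadoop 2.x
cluster; the heap sizes below are placeholders to tune for your nodes. The two
settings live in different files and apply to different JVMs, so they do not
conflict. The daemons take their flags from hadoop-env.sh:

  # hadoop-env.sh -- CMS for the long-lived daemon JVMs
  export HADOOP_NAMENODE_OPTS="-Xms4g -Xmx4g -XX:+UseConcMarkSweepGC \
    -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
    $HADOOP_NAMENODE_OPTS"
  export HADOOP_DATANODE_OPTS="-Xms1g -Xmx1g -XX:+UseConcMarkSweepGC \
    $HADOOP_DATANODE_OPTS"

while the per-task JVMs of MapReduce jobs are configured independently in
mapred-site.xml:

  <!-- mapred-site.xml -- G1 for the task JVMs of long-running jobs -->
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2g -XX:+UseG1GC -XX:MaxGCPauseMillis=200</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3g -XX:+UseG1GC -XX:MaxGCPauseMillis=200</value>
  </property>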
I am reading Hadoop: The Definitive Guide, and on page 71 it says that when
there are too many small files, the NameNode's memory will be exhausted, since
each file needs to keep its metadata in the NameNode. The book also suggests
using Hadoop Archives, or HAR files, to pack files into HDFS
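For reference, the book's rule of thumb is that each file, directory, and
block takes about 150 bytes of NameNode memory, so a million small files of
one block each cost on the order of 300 MB of heap regardless of how little
data they hold. A minimal sketch of the HAR workflow it recommends, with
hypothetical paths:

  # pack everything under /user/hadoop/input into a single archive
  hadoop archive -archiveName files.har -p /user/hadoop/input /user/hadoop/archives

  # archived files stay readable in place through the har:// scheme
  hdfs dfs -ls har:///user/hadoop/archives/files.har

Note that the archive stores index files plus the original data, so it saves
NameNode memory, not disk space.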