Thanks Ramdas and Wei-Chiu, memory is fine; my only worry is the ratio of
files and directories to blocks, as Wei-Chiu mentioned. I will work on this,
it's over-partitioned.
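
To find the worst offenders I plan to walk the warehouse with "hadoop fs -count".
A rough Python sketch of what I have in mind (the warehouse path is just an
example, and the thresholds are arbitrary):

    # Sketch: flag table directories whose partitions hold very few files.
    import subprocess

    WAREHOUSE = "/apps/hive/warehouse"  # example path; adjust to your layout

    listing = subprocess.check_output(["hadoop", "fs", "-ls", WAREHOUSE]).decode()
    for line in listing.splitlines():
        if not line.startswith("d"):  # keep directories, skip the "Found N items" header
            continue
        table = line.split()[-1]
        # "hadoop fs -count" prints: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
        fields = subprocess.check_output(["hadoop", "fs", "-count", table]).decode().split()
        dirs, files = int(fields[0]), int(fields[1])
        if dirs > 1000 and files < 2 * dirs:  # arbitrary "over-partitioned" cut-off
            print("%s: %d dirs, %d files" % (table, dirs, files))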

> On Jan 29, 2019, at 5:02 PM, Ramdas Singh <ramdas.si...@gmail.com> wrote:
> 
> As a rule of thumb for sizing purposes, we should have 1000 MB of memory
> for every one million blocks.
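> 
> Applied to the numbers below, the back-of-the-envelope looks like this (a
> rough Python sketch that treats files, directories, and blocks alike, which
> is a simplification):
> 
>     # Rule of thumb: ~1000 MB of NameNode heap per million namespace objects.
>     objects = 144385717            # files + directories + blocks, from this thread
>     heap_gb = objects / 1000000.0  # ~1 GB per million objects
>     print("suggested heap: ~%.0f GB" % heap_gb)  # ~144 GB
>     # The cluster below uses 132 GB of a 256 GB heap, so it has headroom.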
> 
> Thanks,
> 
> Ramdas
> 
> 
>> On Tue, Jan 29, 2019 at 5:53 PM Wei-Chiu Chuang 
>> <weic...@cloudera.com.invalid> wrote:
>> I don't think this is strictly a small-files issue (I can't tell, since the 
>> average file size isn't shown).
>> But it looks like your file-to-directory ratio is way too low. I've seen that 
>> happen when Hive creates too many partitions, and it can render Hive queries 
>> inefficient.
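>> 
>> One quick way to get the average file size and the file-to-directory ratio
>> is to parse the summary that "hdfs fsck /" prints. A rough Python sketch
>> (the summary format varies a bit across versions, and a full fsck is
>> expensive on a namespace this large, so consider a subtree or an off-peak
>> run):
>> 
>>     # Sketch: extract the totals from the fsck summary.
>>     import re, subprocess
>>     out = subprocess.check_output(["hdfs", "fsck", "/"]).decode()
>>     size  = int(re.search(r"Total size:\s+(\d+)", out).group(1))
>>     files = int(re.search(r"Total files:\s+(\d+)", out).group(1))
>>     dirs  = int(re.search(r"Total dirs:\s+(\d+)", out).group(1))
>>     print("average file size: %.1f MB" % (size / float(files) / 1048576))
>>     print("files per directory: %.1f" % (files / float(dirs)))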
>> 
>>> On Tue, Jan 29, 2019 at 2:09 PM Sudhir Babu Pothineni 
>>> <sbpothin...@gmail.com> wrote:
>>> 
>>> On one of the Hadoop clusters I am working with:
>>> 
>>> 85,985,789 files and directories, 58,399,919 blocks = 144,385,717 total 
>>> file system objects
>>> 
>>> Heap memory used 132.0 GB of 256 GB Heap Memory.
>>> 
>>> I feel it's odd that the ratio of files and directories to blocks is so 
>>> high; that usually points to a small-files problem. 
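>>> 
>>> Here is the rough arithmetic behind that worry, as a Python sketch (every
>>> non-empty file holds at least one block, so files <= blocks, and the rest
>>> of the namespace objects must be directories):
>>> 
>>>     files_and_dirs = 85985789
>>>     blocks = 58399919
>>>     # Since files <= blocks, directories >= files_and_dirs - blocks.
>>>     min_dirs = files_and_dirs - blocks            # at least 27,585,870 directories
>>>     max_files_per_dir = blocks / float(min_dirs)  # upper bound on the average
>>>     print("at least %d directories" % min_dirs)
>>>     print("at most %.1f files per directory" % max_files_per_dir)  # ~2.1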
>>> 
>>> But the cluster is working fine. Am I worrying unnecessarily? We are 
>>> using Hadoop 2.6.0.
>>> 
>>> Thanks
>>> Sudhir
