Hi Ram,

Sorry, I did not share the details of the 32GB allocation with you.

I am saying 32GB is allocated because I observed it on the UI while the 
application was running. But now that the DAG has failed, I cannot take a 
screenshot to send.


Regards,
Raja.

From: Munagala Ramanath <[email protected]>
Reply-To: [email protected]
Date: Tuesday, July 12, 2016 at 11:06 AM
To: [email protected]
Subject: Re: DAG is failing due to memory issues

How do you know it is allocating 32GB? The diagnostic message you posted does 
not show that.

Ram

On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <[email protected]> wrote:

Thanks for the response Sandesh.

Since our yarn-site.xml is configured with the value 32768 for the property 
yarn.scheduler.maximum-allocation-mb, it allocates a maximum of 32GB and no 
more than that.


I wish to know: is there a way I can increase the maximum allowed value? Or, 
since it is configured in yarn-site.xml, can it not be increased?



Regards,
Raja.

From: Sandesh Hegde <[email protected]>
Reply-To: [email protected]
Date: Tuesday, July 12, 2016 at 10:46 AM
To: [email protected]
Subject: Re: DAG is failing due to memory issues

Quoting from the doc shared by Ram, those parameters control the operator 
memory size:


 actual container memory allocated by RM has to lie between

[yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]
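
As a side note, YARN's capacity scheduler typically rounds a container request 
up to a multiple of the minimum allocation before applying the cap. A minimal 
sketch of that behavior, assuming the values quoted in this thread (1024 and 
32768 MB):

```python
import math

# Values from the cluster's yarn-site.xml, as quoted in this thread.
MIN_ALLOC_MB = 1024   # yarn.scheduler.minimum-allocation-mb
MAX_ALLOC_MB = 32768  # yarn.scheduler.maximum-allocation-mb

def allocated_container_mb(requested_mb: int) -> int:
    """Round the request up to the next multiple of the minimum allocation,
    then clamp the result into [MIN_ALLOC_MB, MAX_ALLOC_MB]."""
    rounded = math.ceil(requested_mb / MIN_ALLOC_MB) * MIN_ALLOC_MB
    return max(MIN_ALLOC_MB, min(rounded, MAX_ALLOC_MB))

print(allocated_container_mb(1000))   # 1024
print(allocated_container_mb(33000))  # 32768 (capped at the maximum)
```

So no matter what an application asks for, the RM will never hand out more 
than the configured maximum.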

On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli 
<[email protected]<mailto:[email protected]>> wrote:

Hi Ram,

I see that in the cluster's yarn-site.xml, the two properties below are 
configured with the following settings:

yarn.scheduler.minimum-allocation-mb ===> 1024
yarn.scheduler.maximum-allocation-mb ===> 32768
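
For reference, those settings would appear in yarn-site.xml roughly as follows 
(a sketch; the actual file may carry additional properties):

```xml
<!-- Smallest container YARN will allocate (MB). -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<!-- Largest container YARN will allocate (MB); requests above this are capped. -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>32768</value>
</property>
```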


So with the above settings at the cluster level, can I not increase the memory 
allocated for my DAG? Is there any other way I can increase the memory?


Thanks a lot.


Regards,
Raja.

From: Munagala Ramanath <[email protected]>
Reply-To: [email protected]
Date: Tuesday, July 12, 2016 at 9:31 AM
To: [email protected]
Subject: Re: DAG is failing due to memory issues

Please see: http://docs.datatorrent.com/troubleshooting/#configuring-memory

Ram

On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <[email protected]> wrote:

Hi,

My DAG is failing with container memory issues. I am seeing the information 
below in the log.



Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is running 
beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory 
used; 2.9 GB of 2.1 GB virtual memory used. Killing container.


Can someone help me figure out how to fix this issue? Thanks a lot.



Regards,
Raja.

