Quoting from the doc shared by Ram, those parameters control the operator memory size:
the actual container memory allocated by the RM has to lie between
[yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]

On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <[email protected]> wrote:

> Hi Ram,
>
> I see in the cluster yarn-site.xml, the below two properties are configured
> with these settings:
>
> yarn.scheduler.minimum-allocation-mb ===> 1024
> yarn.scheduler.maximum-allocation-mb ===> 32768
>
> So with the above settings at the cluster level, I can't increase the memory
> allocated for my DAG? Is there any other way I can increase the memory?
>
> Thanks a lot.
>
> Regards,
> Raja.
>
> From: Munagala Ramanath <[email protected]>
> Reply-To: "[email protected]" <[email protected]>
> Date: Tuesday, July 12, 2016 at 9:31 AM
> To: "[email protected]" <[email protected]>
> Subject: Re: DAG is failing due to memory issues
>
> Please see:
> http://docs.datatorrent.com/troubleshooting/#configuring-memory
>
> Ram
>
> On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <[email protected]> wrote:
>
>> Hi,
>>
>> My DAG is failing with memory issues for a container. I am seeing the
>> below information in the log:
>>
>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>> container.
>>
>> Can someone help me with how I can fix this issue? Thanks a lot.
>>
>> Regards,
>> Raja.
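To make the fix concrete: since the cluster already allows containers up to 32768 MB (yarn.scheduler.maximum-allocation-mb), the memory for an individual operator can usually be raised in the application's properties file via the MEMORY_MB attribute, as described in the troubleshooting doc linked above. A minimal sketch, assuming an operator named "MyOperator" (a hypothetical name; substitute the operator name from your DAG), requesting 4 GB instead of the default:

```xml
<configuration>
  <!-- Raise heap/container memory for one operator.
       The value must fit within the cluster's
       yarn.scheduler.maximum-allocation-mb (32768 here). -->
  <property>
    <name>dt.operator.MyOperator.attr.MEMORY_MB</name>
    <value>4096</value>
  </property>
</configuration>
```

Note that the RM rounds the actual container allocation up to a multiple of yarn.scheduler.minimum-allocation-mb (1024 here) and caps it at the maximum, so requests within [1024, 32768] MB should be honored without any cluster-level changes.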
