I have tried changing every job's configuration to this:

yarn.container.memory.mb=128
yarn.am.container.memory.mb=128

and on startup I can see:

2015-09-15 12:40:18 ClientHelper [INFO] set memory request to 128 for 
application_1442313590092_0002

On the Hadoop web interface I see that every job is still getting 2 GB.
In fact, only two of the jobs are in the RUNNING state, while the rest are ACCEPTED.

Any ideas?

Thanks,

    Jordi

-----Original Message-----
From: Yan Fang [mailto:yanfang...@gmail.com]
Sent: Friday, September 11, 2015 20:56
To: dev@samza.apache.org
Subject: Re: memory limits

Hi Jordi,

I believe you can change the memory via *yarn.container.memory.mb*; the default is
1024. And *yarn.am.container.memory.mb* is for the AM memory.

See
http://samza.apache.org/learn/documentation/0.9/jobs/configuration-table.html
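
For example, a job's .properties file could look roughly like this (a
minimal sketch, assuming a YARN deployment; the job name and memory
values below are placeholders, not recommendations):

# hypothetical Samza 0.9 job config -- example values only
job.factory.class=org.apache.samza.job.yarn.YarnJobFactory
job.name=my-test-job
# memory requested for each Samza container, in MB (default 1024)
yarn.container.memory.mb=512
# memory requested for the YARN ApplicationMaster, in MB (default 1024)
yarn.am.container.memory.mb=512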

Thanks,
Fang, Yan
yanfang...@gmail.com

On Fri, Sep 11, 2015 at 4:21 AM, Jordi Blasi Uribarri <jbl...@nextel.es>
wrote:

> Hi,
>
> I am trying to implement an environment that requires multiple
> combined Samza jobs for different tasks. I see that there is a limit
> to the number of jobs that can run at the same time, as each one
> blocks 1 GB of RAM.
> I understand that this is a reasonable limit in a production
> environment (as long as we are speaking of Big Data, we need big
> amounts of resources ☺), but my lab does not have that much RAM. Is
> there a way to reduce this limit so I can test properly? I am using
> Samza 0.9.
>
> Thanks in advance,
>
>    Jordi
> ________________________________
> Jordi Blasi Uribarri
> R&D&I Department
>
> jbl...@nextel.es
> Oficina Bilbao
>
>
