Isn't a task essentially a thread within my Java app? Or maybe I am mistaken.

My main concern is that I need an upper bound to limit the memory footprint of 
each Java thread. Are there any Hadoop configuration settings that can help me do so?
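
For reference, here is a minimal sketch of what I have in mind, assuming Hadoop 1.x (MRv1) and the old JobConf API; mapred.child.java.opts is the standard property for the child JVM heap, while the class name and the 1 GB figure are just placeholders on my part:

import org.apache.hadoop.mapred.JobConf;

public class BoundedMemoryJob {
    public static void main(String[] args) {
        JobConf conf = new JobConf(BoundedMemoryJob.class);
        conf.setJobName("bounded-memory-job");

        // Cap the heap of each task's child JVM at 1 GB; in MRv1 every map/reduce
        // task runs in its own child JVM, so this bounds memory per task process.
        conf.set("mapred.child.java.opts", "-Xmx1024m");

        // How many of those JVMs run at once per node is controlled by the
        // TaskTracker slot settings (mapred.tasktracker.map.tasks.maximum and
        // mapred.tasktracker.reduce.tasks.maximum), normally set in mapred-site.xml.

        // ... set input/output paths and mapper/reducer classes, then submit
        // with JobClient.runJob(conf).
    }
}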

-SB

-----Original Message-----
From: Serge Blazhievsky [mailto:serge.blazhiyevs...@nice.com] 
Sent: Friday, April 13, 2012 2:20 PM
To: common-user@hadoop.apache.org
Subject: Re: 1gb allocated per thread for input read

Per thread or per task?



On 4/13/12 2:17 PM, "Barry, Sean F" <sean.f.ba...@intel.com> wrote:

>*FYI this is a proof of concept cluster*
>
>In my two-node cluster, which consists of Master - JobTracker, DataNode, 
>NameNode, TaskTracker, SecondaryNameNode and Slave - DataNode, 
>TaskTracker
>
>I have no more than 8 GB of RAM on my slave and even less on the master, 
>and I am currently running 4 tasks on the slave and 2 on the master. My 
>question is: is there a way to make sure that no more than 1 GB per thread 
>is allocated to read a large input file for my job?
>
>Thanks,
>SB
