Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-14 Thread Viswanathan J
Hi guys, appreciate your response. Thanks, Viswa.J On Oct 12, 2013 11:29 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi guys, I can see that the JobTracker OOME issue is fixed in Hadoop 1.2.1 as per the Hadoop release notes below. Please check this URL,

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-14 Thread Viswanathan J
Thanks a lot, Antonio. I'm using Apache Hadoop, so I hope this issue will be resolved in upcoming Apache Hadoop releases. Do I need to restart the whole cluster after changing the mapred-site configuration as you mentioned? What is the following bug id,
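
The conf change Antonio suggested is not visible in this excerpt, but for properties that only the JobTracker reads from mapred-site.xml, a full cluster restart is normally not needed; restarting the JobTracker daemon alone is usually enough. A minimal sketch, assuming a standard Hadoop 1.x tarball layout under $HADOOP_HOME:

  # Restart only the JobTracker after editing conf/mapred-site.xml.
  # (TaskTrackers need a restart only if the changed property is read on their side.)
  cd $HADOOP_HOME
  bin/hadoop-daemon.sh --config conf stop jobtracker
  bin/hadoop-daemon.sh --config conf start jobtracker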

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-14 Thread Viswanathan J
Hi, not yet updated in the production environment; will keep you posted once it is done. In which Apache Hadoop release will this issue be fixed? Or is this issue already fixed in hadoop-1.2.1, as in the link given below,

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-12 Thread Viswanathan J
Hi Harsh, appreciate the response. Thanks, Reyane. Thanks, Viswa.J On Oct 12, 2013 5:04 AM, Reyane Oukpedjo oukped...@gmail.com wrote: Hi there, I had a similar issue with hadoop-1.2.0: the JobTracker kept crashing until I set HADOOP_HEAPSIZE=2048. I did not have this kind of issue with

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-12 Thread Viswanathan J
Thanks Antonio, I hope the memory leak issue will be resolved; it is really a nightmare every week. In which release will this issue be resolved? How can we solve this issue? Please help, because we are facing it in a production environment. Please share the configuration and the cron job to do that cleanup process.
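
Antonio's exact suggestion is not included in this excerpt. As an assumption, the mapred-site.xml settings most often cited for JobTracker OOME caused by retained completed-job state in Hadoop 1.x are sketched below; the values are illustrative, not recommendations taken from the thread:

  <!-- mapred-site.xml (Hadoop 1.x): limit completed-job state kept in JobTracker memory -->
  <property>
    <name>mapred.jobtracker.completeuserjobs.maximum</name>
    <value>25</value>  <!-- default is 100 completed jobs kept per user -->
  </property>
  <property>
    <name>mapred.jobtracker.retirejob.interval</name>
    <value>3600000</value>  <!-- retire completed jobs after 1 hour (ms) -->
  </property>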

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-12 Thread Viswanathan J
Hi guys, I can see that the JobTracker OOME issue is fixed in Hadoop 1.2.1 as per the Hadoop release notes. Please check this URL: https://issues.apache.org/jira/browse/MAPREDUCE-5351 How come the issue still persists? Am I asking a valid thing? Do I need to configure anything

Hadoop Jobtracker heap size calculation and OOME

2013-10-11 Thread Viswanathan J
Hi, I'm running a 14-node Hadoop cluster with DataNodes and TaskTrackers running on all nodes. *Apache Hadoop:* 1.2.1 The JobTracker UI currently shows the heap size as follows: *Cluster Summary (Heap Size is 5.7/8.89 GB)* In the above summary, what does the *8.89* GB define? Is the *8.89* the maximum
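
The two figures in the cluster summary most likely come from the JobTracker JVM itself: the first is the heap currently in use (or committed), the second is the maximum the JVM is allowed to grow to (-Xmx, derived from HADOOP_HEAPSIZE). A minimal Java illustration of the Runtime values involved, not the actual JobTracker UI code:

  // Illustration only: the JVM values a "used/max" heap display is built from.
  public class HeapSummary {
      public static void main(String[] args) {
          Runtime rt = Runtime.getRuntime();
          long used = rt.totalMemory() - rt.freeMemory(); // heap in use right now
          long committed = rt.totalMemory();              // heap the JVM has allocated
          long max = rt.maxMemory();                      // upper bound, i.e. -Xmx
          System.out.printf("Heap Size is %.2f/%.2f GB (committed %.2f GB)%n",
                  used / 1e9, max / 1e9, committed / 1e9);
      }
  }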

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-11 Thread Reyane Oukpedjo
Hi there, I had a similar issue with hadoop-1.2.0: the JobTracker kept crashing until I set HADOOP_HEAPSIZE=2048. I did not have this kind of issue with previous versions, but you can try this if you have the memory and see. In my case the issue was gone after I set it as above. Thanks, Reyane OUKPEDJO
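
For reference, HADOOP_HEAPSIZE is set in conf/hadoop-env.sh and is given in megabytes; it applies to every Hadoop daemon launched on that node. A sketch with illustrative values, showing a per-daemon override as an alternative (assuming the HADOOP_JOBTRACKER_OPTS hook present in the stock Hadoop 1.x hadoop-env.sh):

  # conf/hadoop-env.sh -- values are illustrative
  # Heap size in MB for all daemons started on this node.
  export HADOOP_HEAPSIZE=2048

  # Narrower alternative: raise the heap for the JobTracker only.
  export HADOOP_JOBTRACKER_OPTS="-Xmx2048m $HADOOP_JOBTRACKER_OPTS"

Raising the heap only buys headroom; if the JobTracker keeps accumulating completed-job state, usage can grow again until the new limit is reached.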