Vinod,

You appear to be using CDH3. Let's carry this on at [email protected]
[https://groups.google.com/a/cloudera.org/group/cdh-user/topics],
since the issue may be CDH3-specific.

(Moving to that list, bcc'd common-user@)

Could you tell us about your job, the per-node RAM
configuration (is this on Amazon?), and how many slots each node is
configured to run for maps and reduces? You may simply be running out
of RAM given your configuration, so I am looking to confirm or eliminate
that possibility before proceeding.
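
For reference, the slot counts I'm asking about come from the TaskTracker
settings in mapred-site.xml. A purely illustrative sketch (the values here
are hypothetical, not a recommendation) showing why slots and heap interact:

```xml
<!-- Illustrative 0.20/CDH3 TaskTracker slot configuration.
     With 4 map + 2 reduce slots and -Xmx2048M per task JVM,
     concurrent tasks alone could demand ~12 GB of RAM on the node,
     before the DataNode/TaskTracker daemons are counted. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
```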

Also, do all your tasks fail with this error, or just a few of them?
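
To recap the fix already noted below for 1.0/0.20: only the single generic
property exists on that line, and it applies to both map and reduce tasks
(the map/reduce-specific names arrived in 0.21 via MAPREDUCE-478). A minimal
mapred-site.xml sketch:

```xml
<!-- 0.20/1.0: the generic property covers both map and reduce tasks.
     mapred.{map,reduce}.child.java.opts are only honored on 0.21+. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048M</value>
</property>
```

It can also be passed per job, e.g. `hadoop jar myjob.jar -Dmapred.child.java.opts=-Xmx2048M ...`, assuming the job's driver parses generic options (i.e. uses ToolRunner/GenericOptionsParser).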

On Thu, Jan 12, 2012 at 1:16 PM, T Vinod Gupta <[email protected]> wrote:
> Harsh, did you mean my <job id>_conf.xml? For some strange reason, I do see
> these 3 lines -
>
> <property><!--Loaded from
> /media/ephemeral3/hadoop/mapred/local/jobTracker/job_201201120656_0001.xml--><name>mapred.reduce.child.java.opts</name><value>-Xmx2048M</value></property>
> <property><!--Loaded from
> /media/ephemeral3/hadoop/mapred/local/jobTracker/job_201201120656_0001.xml--><name>mapred.child.java.opts</name><value>-Xmx200m</value></property>
> <property><!--Loaded from
> /media/ephemeral3/hadoop/mapred/local/jobTracker/job_201201120656_0001.xml--><name>mapred.map.child.java.opts</name><value>-Xmx2048M</value></property>
>
> The 1st and 3rd are what I set, but I don't know if the middle property
> overrides the others.
>
> BTW, my Hadoop version is below -
>
> Hadoop 0.20.2-cdh3u1
> Subversion file:///tmp/topdir/BUILD/hadoop-0.20.2-cdh3u1 -r
> bdafb1dbffd0d5f2fbc6ee022e1c8df6500fd638
> Compiled by root on Mon Jul 18 09:40:22 PDT 2011
> From source with checksum 3127e3d410455d2bacbff7673bf3284c
>
> thanks
>
> On Wed, Jan 11, 2012 at 10:57 PM, Koji Noguchi <[email protected]> wrote:
>
>> > but those new settings have not yet been
>> > added to mapred-default.xml.
>> >
>> It's intentionally left out.
>> If set in mapred-default.xml, user's mapred.child.java.opts would be
>> ignored
>> since mapred.{map,reduce}.child.java.opts would always win.
>>
>> Koji
>>
>> On 1/11/12 9:34 PM, "George Datskos" <[email protected]>
>> wrote:
>>
>> > Koji, Harsh
>> >
>> > mapred-478 seems to be in v1, but those new settings have not yet been
>> > added to mapred-default.xml.  (for backwards compatibility?)
>> >
>> >
>> > George
>> >
>> > On 2012/01/12 13:50, Koji Noguchi wrote:
>> >> Hi Harsh,
>> >>
>> >> Wasn't MAPREDUCE-478 in 1.0 ?  Maybe the Jira is not up to date.
>> >>
>> >> Koji
>> >>
>> >>
>> >> On 1/11/12 8:44 PM, "Harsh J"<[email protected]>  wrote:
>> >>
>> >>> These properties are not available on Apache Hadoop 1.0 (Formerly
>> >>> known as 0.20.x). This was a feature introduced in 0.21
>> >>> (https://issues.apache.org/jira/browse/MAPREDUCE-478), and is
>> >>> available today on 0.22 and 0.23 line of releases.
>> >>>
>> >>> For 1.0/0.20, use "mapred.child.java.opts", that applies to both map
>> >>> and reduce commonly.
>> >>>
>> >>> Would also be helpful if you can tell us what doc guided you to use
>> >>> these property names instead of the proper one, so we can fix it.
>> >>>
>> >>> On Thu, Jan 12, 2012 at 8:44 AM, T Vinod Gupta<[email protected]>
>> >>> wrote:
>> >>>> Hi,
>> >>>> Can someone help me asap? when i run my mapred job, it fails with this
>> >>>> error -
>> >>>> 12/01/12 02:58:36 INFO mapred.JobClient: Task Id :
>> >>>> attempt_201112151554_0050_m_000071_0, Status : FAILED
>> >>>> Error: Java heap space
>> >>>> attempt_201112151554_0050_m_000071_0: log4j:ERROR Failed to flush
>> writer,
>> >>>> attempt_201112151554_0050_m_000071_0: java.io.IOException: Stream
>> closed
>> >>>> attempt_201112151554_0050_m_000071_0:   at
>> >>>> sun.nio.cs.StreamEncoder.ensureOpen(StreamEncoder.java:44)
>> >>>> attempt_201112151554_0050_m_000071_0:   at
>> >>>> sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:139)
>> >>>> attempt_201112151554_0050_m_000071_0:   at
>> >>>> java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>> >>>> attempt_201112151554_0050_m_000071_0:   at
>> >>>> org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>> >>>> attempt_201112151554_0050_m_000071_0:   at
>> >>>>
>> org.apache.hadoop.mapred.TaskLogAppender.flush(TaskLogAppender.java:94)
>> >>>> attempt_201112151554_0050_m_000071_0:   at
>> >>>> org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:260)
>> >>>> attempt_201112151554_0050_m_000071_0:   at
>> >>>> org.apache.hadoop.mapred.Child$2.run(Child.java:142)
>> >>>>
>> >>>>
>> >>>> so i updated my mapred-site.xml with these settings -
>> >>>>
>> >>>>   <property>
>> >>>>     <name>mapred.map.child.java.opts</name>
>> >>>>     <value>-Xmx2048M</value>
>> >>>>   </property>
>> >>>>
>> >>>>   <property>
>> >>>>     <name>mapred.reduce.child.java.opts</name>
>> >>>>     <value>-Xmx2048M</value>
>> >>>>   </property>
>> >>>>
>> >>>> also, when i run my jar, i provide -
>> >>>> "-Dmapred.map.child.java.opts="-Xmx4000m" at the end.
>> >>>> inspite of this, the task is not getting the max heap size im setting.
>> >>>>
>> >>>> where did i go wrong?
>> >>>>
>> >>>> after changing mapred-site.xml, i restarted jobtracker and
>> tasktracker.. is
>> >>>> that not good enough?
>> >>>>
>> >>>> thanks
>> >>
>> >
>> >
>>
>>



-- 
Harsh J
Customer Ops. Engineer, Cloudera