@Koji - Ah, I had not checked the sources, just the JIRA, which I've now 
updated since there is indeed an svn commit listed there. Thanks for 
correcting me!

@George - I do not think it matters functionally whether these are present in 
mapred-default.xml, but yes, they should be documented there. Mind filing a 
JIRA if this is still not the case in 0.23+?
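
For reference, a sketch of what those entries might look like in 
mapred-default.xml (the empty defaults are my assumption; as I understand 
MAPREDUCE-478, the task JVM falls back to mapred.child.java.opts when the 
per-type property is unset):

  <property>
    <name>mapred.map.child.java.opts</name>
    <value></value>
    <description>Java opts for map task JVMs. Falls back to
    mapred.child.java.opts when unset.</description>
  </property>

  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value></value>
    <description>Java opts for reduce task JVMs. Falls back to
    mapred.child.java.opts when unset.</description>
  </property>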

@Vinod - Do you see the properties you set appear in your submitted job.xml? 
What does your job do exactly, and how much RAM and how many slots are 
configured/available on your slave machine?
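
For context on the RAM:Slots question: the per-TaskTracker slot counts are 
set in mapred-site.xml roughly as below (the values here are only an 
example, not a recommendation). With, say, 4 map slots and -Xmx2048M per 
task, map tasks alone can demand 8 GB of RAM on the slave, so an oversized 
heap may simply not fit:

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>

  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>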

On 12-Jan-2012, at 11:04 AM, George Datskos wrote:

> Koji, Harsh
> 
> MAPREDUCE-478 seems to be in v1, but those new settings have not yet been 
> added to mapred-default.xml (for backwards compatibility, perhaps?).
> 
> 
> George
> 
> On 2012/01/12 13:50, Koji Noguchi wrote:
>> Hi Harsh,
>> 
>> Wasn't MAPREDUCE-478 in 1.0? Maybe the JIRA is not up to date.
>> 
>> Koji
>> 
>> 
>> On 1/11/12 8:44 PM, "Harsh J" <[email protected]>  wrote:
>> 
>>> These properties are not available in Apache Hadoop 1.0 (formerly
>>> known as 0.20.x). This feature was introduced in 0.21
>>> (https://issues.apache.org/jira/browse/MAPREDUCE-478), and is
>>> available today in the 0.22 and 0.23 lines of releases.
>>> 
>>> For 1.0/0.20, use "mapred.child.java.opts", which applies to both map
>>> and reduce tasks.
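>>> 
>>> For example, a minimal mapred-site.xml entry on 1.0/0.20 would look
>>> like the below (the -Xmx value is only an illustration; size it to
>>> your slaves' RAM and slot counts):
>>> 
>>>   <property>
>>>     <name>mapred.child.java.opts</name>
>>>     <value>-Xmx1024m</value>
>>>   </property>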
>>> 
>>> It would also be helpful if you could tell us which doc guided you to
>>> use these property names instead of the proper one, so we can fix it.
>>> 
>>> On Thu, Jan 12, 2012 at 8:44 AM, T Vinod Gupta <[email protected]>  
>>> wrote:
>>>> Hi,
>>>> Can someone help me ASAP? When I run my mapred job, it fails with this
>>>> error:
>>>> 12/01/12 02:58:36 INFO mapred.JobClient: Task Id :
>>>> attempt_201112151554_0050_m_000071_0, Status : FAILED
>>>> Error: Java heap space
>>>> attempt_201112151554_0050_m_000071_0: log4j:ERROR Failed to flush writer,
>>>> attempt_201112151554_0050_m_000071_0: java.io.IOException: Stream closed
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> sun.nio.cs.StreamEncoder.ensureOpen(StreamEncoder.java:44)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:139)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> org.apache.hadoop.mapred.TaskLogAppender.flush(TaskLogAppender.java:94)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:260)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> org.apache.hadoop.mapred.Child$2.run(Child.java:142)
>>>> 
>>>> 
>>>> So I updated my mapred-site.xml with these settings:
>>>> 
>>>>  <property>
>>>>    <name>mapred.map.child.java.opts</name>
>>>>    <value>-Xmx2048M</value>
>>>>  </property>
>>>> 
>>>>  <property>
>>>>    <name>mapred.reduce.child.java.opts</name>
>>>>    <value>-Xmx2048M</value>
>>>>  </property>
>>>> 
>>>> Also, when I run my jar, I pass -Dmapred.map.child.java.opts=-Xmx4000m
>>>> at the end of the command. In spite of this, the task is not getting
>>>> the max heap size I'm setting.
>>>> 
>>>> Where did I go wrong?
>>>> 
>>>> After changing mapred-site.xml, I restarted the JobTracker and
>>>> TaskTracker. Is that not good enough?
>>>> 
>>>> Thanks
>> 
> 
> 
