I think we should add the io.sort.mb buffer size to the default JVM child
heap size; that would give us a lot fewer OOMs. It would take out the first
pass of OOM mapper errors so we can start working on the real ones.
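
For anyone hitting this in the meantime, here is a minimal hadoop-site.xml
sketch of the workaround Thibaut mentions below. The -Xmx512m value is only
an illustration; the point is that the child heap has to leave headroom
above io.sort.mb (default 100 MB, against a default child heap of -Xmx200m):

  <configuration>
    <property>
      <name>mapred.child.java.opts</name>
      <!-- illustrative value: the default is -Xmx200m, which leaves little
           headroom once the io.sort.mb buffer is allocated -->
      <value>-Xmx512m</value>
    </property>
    <property>
      <name>io.sort.mb</name>
      <!-- map-side sort buffer in MB (default 100); it must fit well
           inside the child heap above -->
      <value>100</value>
    </property>
  </configuration>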

On Mon, Jan 26, 2009 at 12:11 PM, Thibaut (JIRA) <[email protected]> wrote:

>
>     [ https://issues.apache.org/jira/browse/HADOOP-4976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
>
> Thibaut updated HADOOP-4976:
> ----------------------------
>
>
> Try increasing the child heap size in the hadoop-site.xml configuration
> file.
>
> > Mapper runs out of memory
> > -------------------------
> >
> >                 Key: HADOOP-4976
> >                 URL: https://issues.apache.org/jira/browse/HADOOP-4976
> >             Project: Hadoop Core
> >          Issue Type: Bug
> >          Components: mapred
> >    Affects Versions: 0.19.0
> >         Environment: Amazon EC2 Extra Large instance (4 cores, 15 GB
> RAM), Sun Java 6 (1.6.0_10); 1 Master, 4 Slaves (all the same); each Java
> process takes the argument "-Xmx700m" (2 Java processes per Instance)
> >            Reporter: Richard J. Zak
> >             Fix For: 0.19.1
> >
> >
> > The Hadoop job has the task of processing 4 directories in HDFS, each
> with 15 files.  This is sample data for a test run before I move on to the
> needed 5 directories of about 800 documents each.  The mapper takes in
> nearly 200 pages (not files) and throws an OutOfMemory exception.  The
> largest file is 17 MB.
> > If this problem is something on my end and not truly a bug, I apologize.
> However, after Googling a bit, I did see many threads about people running
> out of memory with small data sets.
>
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
>
>
