I can just cap io.sort.mb at, say, 1024 MB.

This isn't in the config because that would change it for all jobs, and it
is probably not a good idea in general to use so much memory for the
combiner. Here it's the right thing to do.
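
For illustration only, a minimal sketch of what I mean (the variable names
and the 1024 MB cap here are placeholders, not the actual RecommenderJob
code):

    import org.apache.hadoop.conf.Configuration;

    // Assumption: conf is the job's Configuration object.
    Configuration conf = new Configuration();
    // Derive the sort buffer from the available heap, but never let it reach
    // Hadoop's limit (MAPREDUCE-2308 rejects io.sort.mb values of 2048 or more).
    int heapMb = (int) (Runtime.getRuntime().maxMemory() >> 20);
    int sortMb = Math.min(heapMb / 2, 1024);
    conf.setInt("io.sort.mb", sortMb);

That way a 4 GB heap no longer pushes io.sort.mb past the limit.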


On Sun, Sep 18, 2011 at 2:26 PM, Grant Ingersoll <[email protected]> wrote:

> I'm trying to run the RecommenderJob (trunk as of this morning) and am
> getting:
> java.io.IOException: Invalid "io.sort.mb": 2048
>        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:939)
>        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:673)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:755)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
>        at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>        at org.apache.hadoop.mapred.Child.main(Child.java:253)
>
>
> My heap size is 4 GB. AFAICT, the issue is in the RecommenderJob, line 270.
> The problem is due to
> https://issues.apache.org/jira/browse/MAPREDUCE-2308
>
> Is there a reason we are setting this in code as opposed to relying on the
> config?
>
> -Grant
>
>
