Hi,

I ran into a similar problem, and I had to keep the split size smaller to 
work around it. 
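
For what it's worth, here is roughly how I ask for smaller splits. This is
a minimal sketch against the old org.apache.hadoop.mapred API; I haven't
verified the exact split math on 0.15.1, so treat the details as
assumptions (the class name is a placeholder):

    import org.apache.hadoop.mapred.JobConf;

    public class SmallerSplits {
      public static void main(String[] args) {
        // The old FileInputFormat sizes each split at roughly
        //   max(minSplitSize, min(totalInputSize / numMapTasks, blockSize)),
        // so hinting at more map tasks drives the per-split size down
        // without touching the HDFS block size.
        JobConf conf = new JobConf(SmallerSplits.class);
        conf.setNumMapTasks(500);
      }
    }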

-Rui

----- Original Message ----
From: Ted Dunning <[EMAIL PROTECTED]>
To: hadoop-user@lucene.apache.org
Sent: Tuesday, December 25, 2007 1:56:16 PM
Subject: Re: question on Hadoop configuration for non cpu intensive jobs - 0.15.1



What are your mappers doing that they run out of memory?  Or is it your
reducers?

Often, you can write this sort of program so that you don't have higher
memory requirements for larger splits.
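
For instance, a map function that emits each record as soon as it sees it,
rather than collecting the split's records in an in-memory list or map,
uses the same amount of heap regardless of split size. A rough sketch (old
mapred API; the exact Mapper signature has varied across releases, and the
class name here is made up):

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class StreamingMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {
      public void map(LongWritable offset, Text line,
                      OutputCollector<Text, Text> out, Reporter reporter)
          throws IOException {
        // Emit immediately; with no per-split accumulation, heap use
        // does not grow when the split (or block) size grows.
        out.collect(new Text("record"), line);
      }
    }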


On 12/25/07 1:52 PM, "Jason Venner" <[EMAIL PROTECTED]> wrote:

> We have tried reducing the number of splits by increasing the block
> sizes to 5x and 10x the 64 MB default, but then we constantly have out
> of memory errors and timeouts. At this point each JVM is getting 768M
> and I can't readily allocate more without dipping into swap.
