This works! Thank you very much!

-----Original Message-----
From: Bejoy Ks [mailto:bejoy.had...@gmail.com] 
Sent: Tuesday, November 29, 2011 9:45 AM
To: common-user@hadoop.apache.org
Subject: Re: Multiple reducers

Hi Hoot

You can specify the number of reducers explicitly using -D
mapred.reduce.tasks=n.

hadoop jar wordcount.jar com.wc.WordCount -D mapred.reduce.tasks=n /input /output

Currently your word count is triggering just 1 reducer because the default
value of mapred.reduce.tasks would be set to 1 in your configuration file.
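
One caveat: the -D generic option is only honored if the driver hands its
arguments to GenericOptionsParser, which is what ToolRunner does. As a rough
sketch (class and package names are assumed here, not your actual code), a
driver that picks up -D overrides could look like:

```java
// Sketch of a driver that honors -D mapred.reduce.tasks=n. ToolRunner runs
// GenericOptionsParser, which strips the -D options and applies them to the
// Configuration before run() is called.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -D overrides parsed by ToolRunner
        Job job = new Job(getConf(), "wordcount");
        job.setJarByClass(WordCount.class);
        // ... set your mapper, reducer, and output key/value classes here
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses generic options (-D, -files, ...) before run()
        System.exit(ToolRunner.run(new Configuration(), new WordCount(), args));
    }
}
```

If your main() builds the Job directly without ToolRunner, the -D flag is
passed through as an ordinary argument and silently ignored; in that case you
can also set the count in code with job.setNumReduceTasks(n).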

Hope it helps!

Regards
Bejoy.K.S

On Tue, Nov 29, 2011 at 8:03 PM, Hoot Thompson <h...@ptpnow.com> wrote:

> I'm trying to prove that my cluster will in fact support multiple
reducers,
> the wordcount example doesn't seem to spawn more than one (1). Is that
> correct? Is there a sure fire way to prove my cluster is configured
> correctly in terms of launching the maximum (say two per node) number of
> mappers and reducers?
>
> Thanks!
>
>
>
