[...] and might reduce overall performance.

Thanks,
Amogh
-----Original Message-----
From: Harish Mallipeddi [mailto:harish.mallipe...@gmail.com]
Sent: Tuesday, August 18, 2009 10:37 AM
To: common-user@hadoop.apache.org
Subject: Re: utilizing all cores on single-node hadoop
Hi Vasilis,

Here's some info that I know:
mapred.map.tasks - this is a job-specific setting. This is just a hint to the
InputFormat as to how many InputSplits (and hence MapTasks) you want for your
job. The default InputFormat classes usually keep each split size at the HDFS
block size (64 MB by default).
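
For concreteness, here is a minimal sketch of a driver that passes this hint
through the classic org.apache.hadoop.mapred API. The class name, the
input/output paths taken from args, and the value 16 are all illustrative,
not from the thread:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.FileOutputFormat;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class WordCountDriver {
      public static void main(String[] args) throws Exception {
          JobConf conf = new JobConf(WordCountDriver.class);
          conf.setJobName("wordcount");

          // Same effect as setting mapred.map.tasks in the job config:
          // only a hint, since the InputFormat still computes the actual
          // InputSplits (roughly one per HDFS block for the default
          // FileInputFormat subclasses).
          conf.setNumMapTasks(16);

          FileInputFormat.setInputPaths(conf, new Path(args[0]));
          FileOutputFormat.setOutputPath(conf, new Path(args[1]));

          JobClient.runJob(conf);
      }
  }

The same hint can also be supplied at submission time as
-D mapred.map.tasks=16, provided the driver parses generic options via
ToolRunner; in both cases the InputFormat is free to override it based on
how the input actually splits.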
> Hi,
>
> I am a beginner trying to set up a few simple hadoop tests on a single
> node before moving on to a cluster. I am just using the simple
> wordcount example for now. My question is: what's the best way to
> guarantee utilization of all cores on a single node? So assuming a
> single node with 16 cores, what [...]