Re: utilizing all cores on single-node hadoop

2009-08-23 Thread Vasilis Liaskovitis
From: Harish Mallipeddi [mailto:harish.mallipe...@gmail.com] >> Sent: Tuesday, August 18, 2009 10:37 AM >> To: common-user@hadoop.apache.org >> Subject: Re: utilizing all cores on single-node hadoop >> >> Hi Vasilis, >> >> Here's some info that I know:

Re: utilizing all cores on single-node hadoop

2009-08-19 Thread Jason Venner
> and might reduce overall performance. > > Thanks, > Amogh > -Original Message- > From: Harish Mallipeddi [mailto:harish.mallipe...@gmail.com] > Sent: Tuesday, August 18, 2009 10:37 AM > To: common-user@hadoop.apache.org > Subject: Re: utilizing all cores

RE: utilizing all cores on single-node hadoop

2009-08-17 Thread Amogh Vasekar
From: Harish Mallipeddi [mailto:harish.mallipe...@gmail.com] Sent: Tuesday, August 18, 2009 10:37 AM To: common-user@hadoop.apache.org Subject: Re: utilizing all cores on single-node hadoop Hi Vasilis, Here's some info that I know: mapred.map.tasks - this is a job-specific setting. This is just a hint

Re: utilizing all cores on single-node hadoop

2009-08-17 Thread Harish Mallipeddi
Hi Vasilis, Here's some info that I know: mapred.map.tasks - this is a job-specific setting. It is just a hint to the InputFormat as to how many InputSplits (and hence MapTasks) you want for your job. The default InputFormat classes usually keep each split at the HDFS block size (64 MB by default)
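
For the single-node case discussed in this thread, the properties that actually control per-node concurrency are mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum (both default to 2 in MR1), since mapred.map.tasks is only a hint to the InputFormat. Below is a minimal mapred-site.xml sketch; the 8-core machine and the specific values are assumptions for illustration, not settings taken from this thread.

    <configuration>
      <!-- Per-tasktracker concurrency: how many map/reduce tasks may run at once
           on this node. Values assume an 8-core machine; tune to your hardware. -->
      <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>8</value>
      </property>
      <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>4</value>
      </property>
      <!-- Job-level hint only: the InputFormat ultimately decides the number of
           splits (and hence map tasks), typically one per HDFS block. -->
      <property>
        <name>mapred.map.tasks</name>
        <value>8</value>
      </property>
    </configuration>

With settings like these, a single tasktracker can keep all cores busy provided the job produces at least that many splits; a small input that fits in one 64 MB block will still yield only one map task regardless of the maxima above.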