Hello all,
We have a 100-node Hadoop cluster that is used for multiple purposes. I want to
run a few mapred jobs, and I know 4 to 5 slaves should be enough. Is there any
way to restrict my jobs to use only 4 slaves instead of all 100? I noticed that
the more slaves a job uses, the more overhead there is.

Also, can I pass in Hadoop parameters like mapred.child.java.opts so that the
actual child processes get the specified value for max heap size? I want to
set the heap size to 2G instead of going with the default.
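For context, this is the kind of per-job invocation I had in mind (the jar and
class names are just placeholders; I'm assuming the driver goes through
ToolRunner/GenericOptionsParser so that -D options are picked up):

```shell
# Override the child JVM heap for this job only, instead of editing
# mapred-site.xml cluster-wide. 2048m replaces the default -Xmx200m.
hadoop jar myjob.jar MyDriver \
    -Dmapred.child.java.opts=-Xmx2048m \
    input_dir output_dir
```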

Thanks
Praveen