Hi,

I am a newbie on Hadoop and have a quick question on optimal compute vs.
storage resources for MapReduce.

If I have a multiprocessor node with 4 processors, will Hadoop schedule
more Map or Reduce tasks on it than on a uni-processor node? In other
words, does Hadoop detect nodes with more compute capacity and schedule
proportionally more tasks on them?
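For context, my understanding so far is that the per-node task count is not auto-detected but set by the administrator via slot properties in mapred-site.xml (property names from classic Hadoop 1.x; please correct me if I have this wrong). A minimal sketch of what I mean for a 4-processor node:

```xml
<!-- mapred-site.xml on the 4-processor node (Hadoop 1.x style) -->
<configuration>
  <!-- Maximum Map tasks the TaskTracker will run concurrently -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <!-- Maximum Reduce tasks the TaskTracker will run concurrently -->
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
</configuration>
```

Is tuning these per-node slot counts the intended way to make a denser node do more work, or does the scheduler take hardware into account on its own?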

If yes, does that imply it makes sense to attach higher-capacity storage
to such nodes, so that they hold more HDFS blocks and more tasks can run
on local data?

Any insights would be much appreciated.

Thanks,
Satheesh
