Do you know if you have enough job load on the system? One way to check is to look at the running map/reduce tasks on the JobTracker UI at the same time that you are watching the node's CPU usage.
Collecting Hadoop metrics via a metrics collection system (e.g. Ganglia) will let you match up the timestamps of idleness on the nodes with the job load at those points in time.

HTH,
+vinod

On Aug 29, 2012, at 6:40 AM, Terry Healy wrote:

> Running 1.0.2, in this case on Linux.
>
> I was watching the processes / loads on one TaskTracker instance and
> noticed that it completed its first 8 map tasks and reported 8 free
> slots (the max for this system). It then waited, doing nothing, for more
> than 30 seconds before the next "batch" of work came in and started running.
>
> Likewise, it also has relatively long periods with all 8 cores running at
> or near idle. There are no failing jobs or obvious errors in the
> TaskTracker log.
>
> What could be causing this?
>
> Should I increase the number of map slots to more than the number of cores
> to try and keep it busier?
>
> -Terry
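[For readers wanting to follow the Ganglia suggestion above: on Hadoop 1.x, metrics contexts are configured in conf/hadoop-metrics.properties. A minimal sketch for sending mapred metrics to Ganglia might look like the following; the host name gmetad-host and the 10-second period are placeholders you would adapt to your cluster, and GangliaContext31 assumes Ganglia 3.1+ (use GangliaContext for older Ganglia releases).]

```properties
# Sketch: emit mapred (JobTracker/TaskTracker) metrics to Ganglia.
# Assumes Ganglia 3.1+; gmetad-host:8649 is a placeholder gmond address.
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=gmetad-host:8649
```

With this in place, per-node task counts and the machine-level CPU metrics land in the same Ganglia timeline, making it straightforward to line up idle periods with job load as suggested.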
