I'm setting up a cluster running Hadoop 0.19.1. Everything I've read says that, by default, Hadoop uses a FIFO queue that will run one map-reduce job to completion before starting the next job. I have a small 4-node cluster, so this is exactly the behavior I want.

However, if I start two instances of the Hadoop examples that came in the 0.19.1 tarball, the second job starts immediately, even while the first is still in the map phase. Shouldn't the second job get queued behind the first? If not, how can I configure the cluster for simple FIFO scheduling?
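For reference, my understanding from the 0.19 docs is that the scheduler is pluggable via mapred.jobtracker.taskScheduler, and that the FIFO JobQueueTaskScheduler is what you get when you don't override it. Something like this in hadoop-site.xml should, I think, just make that default explicit (please correct me if I have the property or class name wrong):

<?xml version="1.0"?>
<!-- hadoop-site.xml: spell out the (supposedly default) FIFO scheduler.
     Property and class names are my reading of the 0.19 docs, so they
     may need correcting. -->
<configuration>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
  </property>
</configuration>

If that really is the default, I'd have expected the second job to wait, which is why the behavior above surprised me.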

I can get either the capacity scheduler or the fair scheduler to work, but neither seems able to give each job exclusive use of the cluster.
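In case it helps to see what I mean, this is roughly what I experimented with for the fair scheduler: pointing mapred.fairscheduler.allocation.file at an allocations file and capping jobs to one at a time. The element names here are just my reading of the contrib documentation, so I may well have something wrong:

<?xml version="1.0"?>
<!-- Fair scheduler allocation file (whatever path
     mapred.fairscheduler.allocation.file points at). The intent is to
     cap concurrency at one running job; maxRunningJobs and
     userMaxJobsDefault are my reading of the contrib docs. -->
<allocations>
  <pool name="default">
    <maxRunningJobs>1</maxRunningJobs>
  </pool>
  <!-- Also cap per-user jobs, since jobs seem to land in per-user pools
       by default. -->
  <userMaxJobsDefault>1</userMaxJobsDefault>
</allocations>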

-Logan
