I have a Pig script that sometimes submits two MapReduce jobs at once.
Together they run double the number of mappers and reducers the cluster
is sized for, which leads to oversubscription and thrashing.
This may be more of a scheduler thing, but does anyone know how to
tell Hadoop to only run one job at a time?  Thanks.
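To frame what I'm after: I'm imagining something like the Fair Scheduler's per-pool job cap. A sketch of an allocation file, assuming Hadoop 1.x with the Fair Scheduler enabled (the pool name "pig" here is hypothetical; jobs would need to be assigned to it via mapred.fairscheduler.pool):

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml: cap concurrently running jobs in one pool -->
<allocations>
  <!-- "pig" is a hypothetical pool name for illustration -->
  <pool name="pig">
    <!-- allow at most one job from this pool to run at a time -->
    <maxRunningJobs>1</maxRunningJobs>
  </pool>
</allocations>
```

Is that the right mechanism, or is there a Pig-side setting that keeps it from submitting independent jobs in parallel in the first place?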