Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The "LimitingTaskSlotUsage" page has been changed by SomeOtherAccount. http://wiki.apache.org/hadoop/LimitingTaskSlotUsage?action=diff&rev1=2&rev2=3 -------------------------------------------------- If a task absolutely must break the rules, there are a few things one can do: * Deploy ZooKeeper and use it as a persistent lock to keep track of how many tasks are running concurrently - * Use a scheduler with a maximum task-per-queue feature and submit the job to that queue + * Use a scheduler with a maximum task-per-queue feature and submit the job to that queue, such as FairShareScheduler or CapacityScheduler == Job consumes too much RAM/disk IO/etc on a given node ==
