I set up a Hadoop job to run, and after 45 minutes it had completed all but one task. That one task had already been killed and retried 3 times, so I left it overnight, and 615 attempts later it still hadn't completed. Is there some setting I'm missing that tells Hadoop to just abort a task that doesn't complete after, say, the third try? I don't particularly like the idea of jobs that will never complete because of one bad input file.
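
For context, this is the kind of knob I'm hoping exists: a per-task attempt cap, after which the whole job fails instead of retrying forever. The property names below are my guess from skimming the configuration docs; I haven't confirmed they apply to my situation:

```xml
<!-- hadoop-site.xml (or set programmatically on the JobConf) -->
<property>
  <!-- Guessed property: max attempts per map task before the job is failed -->
  <name>mapred.map.max.attempts</name>
  <value>3</value>
</property>
<property>
  <!-- Guessed corresponding cap for reduce tasks -->
  <name>mapred.reduce.max.attempts</name>
  <value>3</value>
</property>
```

If something like this is the right mechanism, I'd also be curious whether the attempt counter resets between scheduler restarts, since 615 attempts suggests whatever default cap exists isn't kicking in for me.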

Thanks.

Ross Boucher
[EMAIL PROTECTED]
