Hello,

We have a job where all mappers finish except one, which always hangs at the same spot (i.e. it reaches 49% and then never progresses).
This is likely due to a bug in the wiki parser in our Pig UDF. We can afford to lose the data this mapper is working on if that would allow the job to finish.

Question: is there a Hadoop configuration parameter, similar to mapred.skip.map.max.skip.records, that would let us skip a map task that makes no progress after X amount of time? Any other possible workarounds for this case would also be useful.

We are currently using Hadoop 1.1.0 and Pig 0.10.1.

Thanks,
Chris
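P.S. For context, one workaround we have been considering is letting the framework kill the silent task and tolerating a small percentage of failed maps. This only helps if the hung task actually stops reporting progress, and it assumes these Hadoop 1.x properties behave as documented, so take it as a sketch rather than something we have verified:

```
-- at the top of the Pig script; Pig's `set` forwards these into the Hadoop job conf
set mapred.task.timeout 600000;           -- kill a task that reports no progress for 10 minutes
set mapred.max.map.failures.percent 1;    -- let the job succeed even if up to 1% of map tasks fail
```

We have not yet confirmed whether the stuck mapper keeps updating its progress or counters; if it does, the timeout would never fire, which is why we are also asking about a skip-style parameter.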
