Hi,

I'm running Nutch with a Hadoop nightly build and everything works fine except the dedup job. I'm getting "Lock obtain timed out" every time in DeleteDuplicates.reduce(), after the call to reader.deleteDocument(value.get()). I have 4 servers working on the job in parallel through Hadoop, so it's not surprising that they can run into this kind of trouble.
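For reference, here is a minimal sketch of what each reduce task effectively ends up doing against the same index (simplified by me; indexPath and docNum stand in for the real job values). Lucene takes an exclusive write lock for deletions, so with several tasks doing this in parallel, everyone but the current lock holder eventually times out:

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;

    public class DedupSketch {
        // indexPath/docNum are placeholders for the values the job passes in.
        static void deleteDoc(String indexPath, int docNum) throws IOException {
            IndexReader reader = IndexReader.open(indexPath);
            try {
                reader.deleteDocument(docNum); // acquires the index write lock
            } finally {
                reader.close();                // releases the lock
            }
        }
    }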
What can I do to avoid this problem?

thx

des
