I agree with Andrzej that a thread dump would be best. Also, what
version of Nutch are you using?
Dennis
Andrzej Bialecki wrote:
Mike Smith wrote:
Hi Dennis,
But it doesn't make sense, since the reducers' keys are URLs and the
heartbeat cannot be sent while the reduce task is running. I am
truncating my HTTP content to be less than 100K, so I don't get any
large files. How come reducing a single record, which is a single URL,
and writing its parsed data into DFS takes more than 10 minutes? Even
if you load the cluster, that should never happen. There must be
another bug involved.
Could you try to produce a thread dump of a task in such a state?
(kill -SIGQUIT pid)
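
For context on the heartbeat question above: in the old Hadoop mapred API
used by Nutch at the time, a task that does not report status within
mapred.task.timeout (600 seconds by default, which matches the 10 minutes
mentioned above) is killed, and long per-record work inside a reducer has
to call Reporter.progress() or setStatus() itself to keep the heartbeat
alive. The sketch below is illustrative only, not Nutch's actual reduce
code; the class name and Text key/value types are assumptions.

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Illustrative reducer: keys are URLs, values are fetched/parsed records.
// Calling reporter.progress() during long-running work sends a heartbeat
// to the TaskTracker so the task is not killed for failing to report
// status within mapred.task.timeout (10 minutes by default).
public class UrlReduceSketch extends MapReduceBase
    implements Reducer<Text, Text, Text, Text> {

  public void reduce(Text url, Iterator<Text> values,
                     OutputCollector<Text, Text> output,
                     Reporter reporter) throws IOException {
    while (values.hasNext()) {
      Text record = values.next();

      // ... expensive per-record work (e.g. parsing, writing to DFS) ...

      // Tell the framework we are still alive before/after slow steps.
      reporter.progress();

      output.collect(url, record);
    }
  }
}

A kill -SIGQUIT thread dump of a hung child task (its output typically
lands in the task's stdout log under the userlogs directory) would show
whether the reducer really is stuck inside such a long-running call.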