Hi, maybe someone who has had the same problem can help me:

I started a crawl. At a certain depth the fetcher logs the URLs apparently correctly, but it has been fetching the same site (a big one, but not that big) for two days now. What bothers me is that the segment directory always stays the same size (du -hs segmentdir), and it only has crawl_generate as a subdirectory. Does Nutch have a temporary directory where it stores the fetched data until it writes the other subdirectories? Or maybe it is hung? This has happened twice in different crawls (I have done several crawls, so it is not too common).
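In case it helps anyone debugging the same symptom, here is a small shell sketch for inspecting the segment. The segment path is hypothetical (substitute your own), and the `_temporary` check only applies if your Nutch version runs on Hadoop MapReduce, which stages task output in a `_temporary` subdirectory until the task commits:

```shell
# Hypothetical segment path -- substitute your own crawl's segment dir.
SEG=crawl/segments/20240101000000

if [ -d "$SEG" ]; then
  # Subdirectories committed so far: crawl_generate exists from the
  # start; crawl_fetch, content, crawl_parse, parse_data and parse_text
  # only appear once the corresponding phase finishes.
  find "$SEG" -maxdepth 1 -type d

  # If Nutch runs on Hadoop, in-progress task output may sit in a
  # _temporary directory until the task commits, which could explain
  # why `du -hs` on the segment never seems to grow.
  find "$SEG" -type d -name '_temporary'
fi
```

If neither the committed subdirectories nor a `_temporary` directory grow over time, the fetcher is more likely hung than slowly writing.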
