That is not necessary. At most you would delete the failed segment, or delete
all of the segment's subdirectories except crawl_generate so you can restart
the fetch from the beginning.
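A rough sketch of that cleanup, using a made-up demo directory rather than a
real Nutch crawl (the segment name and layout here are illustrative only —
point it at your actual crawl/segments/<timestamp> dir in practice):

```shell
# Simulate a segment with partial fetch output, then keep only
# crawl_generate so the fetch can be restarted from the original list.
SEGMENT=demo_segment                               # stand-in path
mkdir -p "$SEGMENT"/crawl_generate "$SEGMENT"/crawl_fetch "$SEGMENT"/content

for d in "$SEGMENT"/*/; do
  name=$(basename "$d")
  if [ "$name" != "crawl_generate" ]; then
    rm -rf "$d"                                    # drop partial fetch/parse output
  fi
done

ls "$SEGMENT"                                      # prints: crawl_generate
# then re-run the fetch for that segment, e.g.:
# bin/nutch fetch "$SEGMENT"
```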


What do you use? The crawl command? I don't see any evidence that you're 
updating the DB ;). Anyway, never kill a running job unless you really have 
to. A killed job cannot be resumed.

> I had to delete the contents of the crawldb folder to recover from a
> failed fetch (was this the best response? I doubt it). Now I have a fetch
> running, successfully, but I don't see any evidence that it is writing
> anything to crawldb. Is it going to write all the crawldb stuff at the
> end, or should I go ahead and kill the crawl now?
