ok. I found that the crawl is writing its crawldb into my home directory
rather than the expected location, presumably because I ran it from the wrong
directory, and presumably I will be able to index it into Solr from its
current location. so, good news! thx

On Thu, Sep 22, 2011 at 14:03, Markus Jelsma <[email protected]> wrote:

> That is not necessary. At most you would delete the failed segment, or
> delete all segment dirs except crawl_generate (or was it fetch_generate?)
> so you can restart the fetch from the beginning.
>
>
> What do you use? The crawl command? I don't see any evidence of you
> updating the DB ;). Anyway, never kill a running job unless you really
> have to. It cannot be resumed.
>
> > I had to delete the contents of the crawldb folder to recover from a
> > failed fetch (was this the best response? I doubt it). Now I have a
> > fetch running, successfully, but I don't see any evidence that it is
> > writing anything to crawldb. Is it going to write all the crawldb stuff
> > at the end, or should I go ahead and kill the crawl now?
>
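The segment cleanup Markus describes can be sketched in shell. This is only
an illustration: the segment path and the extra subdirectory names below are
made up for the example, not taken from the thread.

```shell
# Sketch of the recovery step: remove every subdirectory of a failed
# segment except crawl_generate, so the same generate list can be
# fetched again from the beginning. Paths here are illustrative.
set -eu

SEGMENT="crawl/segments/20110922140301"   # hypothetical failed segment

# Simulate a partially fetched segment layout for the example.
mkdir -p "$SEGMENT/crawl_generate" "$SEGMENT/crawl_fetch" \
         "$SEGMENT/content" "$SEGMENT/parse_text"

# Keep crawl_generate; drop everything the interrupted fetch produced.
for d in "$SEGMENT"/*/; do
  name=$(basename "$d")
  if [ "$name" != "crawl_generate" ]; then
    rm -rf "$d"
  fi
done

ls "$SEGMENT"   # only crawl_generate should remain
```

After a cleanup like this, re-running the fetch step on that segment should
start over from the generate list rather than resuming a half-done fetch.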
