Thank you.  I did a kill -TERM and I am watching it
write out to db.log.  The problem may have been that I
believed it was hanging when it was actually writing
out to db.log and cleaning up, so I kept sending kill
signals and messed up the natural course of things.

What is db.log?  I have many document names popping up
in there.  Is it what was left in the todo/to-crawl
list?

--- Geoff Hutchison <[EMAIL PROTECTED]> wrote:
> On Mon, 25 Mar 2002, Jessica Biola wrote:
> 
> > What is the best way to stop a current dig in progress
> > without corrupting the integrity of the db.* files?  I
> > find that when I send a kill level 9 (KILL) or 15
> > (TERM), it ruins the integrity of the data that has
> > already been crawled, or if I just send a level 1 (HUP),
> > it doesn't interrupt it at all and it keeps on crawling.
> > ...
> > I'm using one of the 3.2.0b4 versions on Linux.
> 
> With 3.1.6 and 3.2.0b2 and later, htdig installs a signal
> handler before beginning indexing. Before it handles a
> KILL or TERM, it should finish up the current URL, write
> the current progress to the db.log file and quit cleanly.
> This may take a second or two.
> 
> If you're seeing it quit immediately, the db.log isn't
> being written, and there's data corruption, then this is a
> bug, and more information would help to track down the
> problem (i.e., what compiler you used, what version of
> Linux, how big the database was, etc.)
> 
> -Geoff
> 
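
For what it's worth, here is a rough sketch of the shutdown
pattern described above: the handler only sets a flag, the
indexer finishes the URL it is working on, dumps whatever it
never reached to db.log, and exits cleanly.  This is not
htdig's actual source; the queue contents and the "indexing"
step are placeholders.

#include <csignal>
#include <deque>
#include <fstream>
#include <iostream>
#include <string>

// Flag set by the signal handler; checked between URLs.
static volatile std::sig_atomic_t stop_requested = 0;

extern "C" void handle_term(int) { stop_requested = 1; }

int main()
{
    std::signal(SIGTERM, handle_term);
    std::signal(SIGINT,  handle_term);

    // Stand-in for the to-do/to-crawl list.
    std::deque<std::string> to_crawl = {
        "http://example.com/a",
        "http://example.com/b",
        "http://example.com/c"
    };

    while (!to_crawl.empty()) {
        std::string url = to_crawl.front();
        to_crawl.pop_front();

        // ... fetch and index this URL to completion; the
        // handler only sets a flag, so an incoming TERM never
        // interrupts a write in progress ...
        std::cout << "indexed " << url << std::endl;

        if (stop_requested) {
            // Dump the URLs we never got to so a later dig can
            // resume, then quit cleanly instead of dying mid-write.
            std::ofstream log("db.log");
            for (const std::string &u : to_crawl)
                log << u << '\n';
            std::cout << "TERM caught: wrote " << to_crawl.size()
                      << " remaining URLs to db.log" << std::endl;
            break;
        }
    }
    return 0;
}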
