Sorry -- 

I meant htdig -- the process that actually goes out and indexes the 
website(s).  It seems logical that, if htdig is started from a Unix shell 
script, there ought to be a way to limit its elapsed time; does anyone have a 
working example of this, or of something equivalent?  
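One way to do this from a plain POSIX shell wrapper is to run htdig in the background and kill it when a deadline passes. A minimal sketch follows; the function name and the htdig config path in the usage comment are illustrative, not anything htdig itself provides:

```shell
#!/bin/sh
# run_with_limit: run a command with a wall-clock limit in seconds.
# Usage:   run_with_limit SECONDS COMMAND [ARGS...]
# Example: run_with_limit 300 htdig -c /etc/htdig/htdig.conf
run_with_limit() {
    limit=$1; shift

    "$@" &                          # start the command in the background
    pid=$!

    # Watchdog: sleep for the limit, then kill the command if still running.
    ( sleep "$limit" && kill "$pid" 2>/dev/null ) &
    watchdog=$!

    wait "$pid"                     # the command's own exit status, or
    status=$?                       # 128+SIGTERM (143) if it was killed
    kill "$watchdog" 2>/dev/null    # cancel the watchdog on a normal exit
    return "$status"
}
```

The caller can then test the return code: 143 means the time limit was hit. On systems with GNU coreutils, `timeout 300 htdig ...` does the same in one line. For the disk-space half of the question, `ulimit -f` caps the size of files a process may write (note that `ulimit -t` limits CPU time, not elapsed wall-clock time).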

<< According to [EMAIL PROTECTED]:
 > It appears that, in the real world, htsearch (htdig 3.1.5) will from time 
 > to time loop, basically because the configuration file is not set up to 
 > deal with actual conditions at the searched web site(s).  
 > 
 > Does Unix have any ability to limit the elapsed time (and/or disk space) 
 > used by an attempt to run htsearch?  Hopefully with the ability to 
 > obtain/test a return code, etc.  
 
 Currently, htsearch sets a time limit of 5 minutes.  This can be changed
 by editing the alarm() call in htsearch/htsearch.cc to whatever limit
 you want.  The only thing I can think of that would make it take that long
 is if you have a huge database and you search for a very common word.
 You may also want to have a look at http://www.htdig.org/FAQ.html#q5.10
 for suggestions.  By the way, htsearch shouldn't be consuming any disk
 space at all - its only output is the HTML it sends out to your browser,
 and possibly a small log entry. >>

------------------------------------
To unsubscribe from the htdig mailing list, send a message to
[EMAIL PROTECTED]
You will receive a message to confirm this.
