That did the trick. Just like I thought, I had a stupid mistake in my rundig
script.
thanks for the help
chad

>>> justin <[EMAIL PROTECTED]> 08/06/00 10:53PM >>>
Try http://www.htdig.org/files/contrib/scripts/rundig.sh 

I am using it on archived data on CD-ROMs as well as the filesystem, and
it correctly skips old data as "unchanged". Indexing takes ~1h on a CD
the first time, then about 2m from then on (basically as long as a
find ./ takes).
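
For reference, a minimal sketch of the sequence that script boils down to
(assuming the stock htdig/htmerge binaries; the config path here is
illustrative, so adjust it to your install):

    # Update dig: omitting -i reuses the existing database rather than
    # rebuilding it from scratch (-i forces a full initial dig).
    htdig -v -c /etc/htdig/htdig.conf

    # Rebuild the word index from the updated document database.
    htmerge -v -c /etc/htdig/htdig.conf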

On 4 Aug 00, at 15:13, Chad Phillips wrote:

> How can I do an update on the htdig database without deleting what is there? I
> had thought that the original database would be kept as long as htdig was not
> run with the -i flag.
> 
> The database I have took a long time to build, and I want to add one more site,
> but I don't want to start the indexing from scratch. Should I build another
> database for the one site and then run htmerge?
> 
> thanks
> chad
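
On the merge question: that should work. A rough sketch, assuming
htmerge's -m option (merge a second database into the one named by -c)
behaves as in the 3.1.x docs, with illustrative config paths:

    # Dig just the new site into its own database (newsite.conf would
    # set its own database_dir and start_url)...
    htdig -i -c /etc/htdig/newsite.conf
    # ...then fold that database into the main one.
    htmerge -m /etc/htdig/newsite.conf -c /etc/htdig/htdig.conf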


Try using the <!--htdig_noindex--> tag to have htdig skip the dynamic
content.
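
For example, wrapping the generated section in the comment pair (the
closing <!--/htdig_noindex--> form is as I recall it from the docs, so
verify against your version):

    <!--htdig_noindex-->
      ... dynamic content that htdig should not index ...
    <!--/htdig_noindex-->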

> Depending on the site you are digging, the following
> might be helpful as well:
> I have a server where all content is generated "on the fly"
> from a database, so the document date is always "now",
> causing htdig to retrieve the full site on every run.
> Not very nice, since this is a big site, and naturally I
> can't run the dig via the local filesystem.
> But this is actually not an ht://dig problem but lies with
> the site's content. (And I haven't found a workaround
> yet, since the date of the last document change is not even
> in the database ;-( )
> 
> Cheers, Marcel

justin

------------------------------------
To unsubscribe from the htdig mailing list, send a message to
[EMAIL PROTECTED] 
You will receive a message to confirm this.


