Hello!
Because of an error in our server configuration and a bug in a webpage,
we ran into a recursion in part of our URL tree. This affected htdig as
well. So my question is whether there is a way to avoid such a problem.
max_hop_count is not really a solution, because our document tree is
deep and because such a recursion produces an exponential number of
'additional documents'. Perhaps there is a way to look for repeated
path fragments, or combinations of path fragments, and filter such
URLs? Is there a known solution for this problem?
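(For illustration only: htdig itself would need a filter like this built in or applied via its URL exclusion attributes, but the kind of check I mean could be sketched roughly as follows. The function name, the threshold, and the example URLs are all hypothetical.)

```python
from urllib.parse import urlparse

def has_repeated_segments(url, max_repeats=3):
    """Return True if any run of path segments repeats max_repeats
    times in a row -- a typical signature of a recursive URL loop
    such as /a/b/a/b/a/b/... produced by a misconfigured server."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    n = len(segments)
    # Try every fragment length that could fit max_repeats times.
    for size in range(1, n // max_repeats + 1):
        for start in range(n - size * max_repeats + 1):
            pattern = segments[start:start + size]
            # Compare the following (max_repeats - 1) windows
            # against the first occurrence of the fragment.
            if all(segments[start + k * size:start + (k + 1) * size] == pattern
                   for k in range(1, max_repeats)):
                return True
    return False
```

A crawler could drop any candidate URL for which this returns True, e.g. `has_repeated_segments("http://example.org/a/b/a/b/a/b/page.html")` would be True, while a normal deep path like `http://example.org/docs/a/b/c.html` would pass.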
Thanks in advance
Berthold Cogel
--
Dr. rer. nat. Berthold Cogel University of Cologne
E-Mail: [EMAIL PROTECTED] ZAIK-US (RRZK)
Tel.: +49(0)221/478-7020 Robert-Koch-Str. 10
FAX: +49(0)221/478-5568 D-50931 Cologne - Germany
_______________________________________________
htdig-general mailing list <[EMAIL PROTECTED]>
To unsubscribe, send a message to <[EMAIL PROTECTED]> with a
subject of unsubscribe
FAQ: http://htdig.sourceforge.net/FAQ.html