I haven't the faintest idea why you e-mailed this question to the list 
software administrator address. It's obviously going to sit around until 
that mail is checked...

It can, in principle, index several million files, and it can definitely 
index local files. However, when you say "without spidering," I'm 
assuming you mean you want to hand it a list of files and have it index 
the whole lot. The production version of ht://Dig (3.1.6) is essentially 
specific to http:// URLs, though you can fudge this to some degree. As a 
result, it will not recursively descend through local directories on its 
own.
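
For what it's worth, the usual fudge is the local_urls attribute, which 
maps a URL prefix onto a filesystem path so the digger reads matching 
documents straight off the disk instead of fetching them over HTTP. A 
minimal sketch (the hostname and paths are examples, not defaults):

    # htdig.conf fragment -- example names only
    start_url:      http://www.example.edu/docs/index.html
    limit_urls_to:  http://www.example.edu/docs/
    # Read anything under the docs/ prefix from /export/docs/ on disk:
    local_urls:     http://www.example.edu/docs/=/export/docs/

The URLs stored in the index still look like http:// URLs; local_urls 
only changes how the documents are retrieved.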

You could index all the files by assembling a list of "URLs" for them, 
which could then be indexed locally. But since the "several million" 
entries in that list would be held in memory at the same time, I'd guess 
you'd see quite a bit of swapping once things got going. This is the 
distinct advantage of spidering: the whole list never has to exist at 
once, because new URLs are added to the queue as processed ones are 
removed.
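
If you want to try the list route anyway, building the list itself is 
cheap; it's htdig holding the whole thing in its queue that hurts. Here 
is a throwaway Python sketch (illustrative only, not part of ht://Dig; 
the root and prefix are made up and must match whatever local_urls 
mapping you use):

    # make_url_list.py -- walk a document root, emit one URL per line.
    # It streams straight to disk, so the script stays small; htdig
    # itself will still load the whole list into its queue, which is
    # the problem described above.
    import os

    DOC_ROOT = "/export/docs"                   # hypothetical local root
    URL_PREFIX = "http://www.example.edu/docs"  # match your local_urls

    with open("start.url", "w") as out:
        for dirpath, _, filenames in os.walk(DOC_ROOT):
            for name in filenames:
                if name.endswith((".html", ".xml")):
                    path = os.path.join(dirpath, name)
                    rel = os.path.relpath(path, DOC_ROOT).replace(os.sep, "/")
                    out.write("%s/%s\n" % (URL_PREFIX, rel))

You could then point start_url at the resulting file; if I remember the 
config syntax right, a backquoted value such as

    start_url:      `/usr/local/htdig/conf/start.url`

pulls in the file's contents, one URL per line.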

Regards,
--
-Geoff Hutchison
Williams Students Online
http://wso.williams.edu/

On Wednesday, April 24, 2002, at 09:08 AM, Rich Thomas wrote:

> Will ht-dig index several million small HTML and XML files? Can it
> index without spidering, i.e., index local files?
>
> I plan on running it on a Solaris 8 box with only 128 MB of RAM. Does
> ht-dig use swap files?
>
> Thanks
>
> Rich Thomas
> Systems Administrator
> University Libraries
> 416 Capen Hall
> University at Buffalo
> Buffalo, NY 14260
> (716)645-3961

