I have no budget to replace a number of drives on the systems I use for indexing. Each
time the dig finishes and the sorting begins, I run out of space, lose it all, and have
to start over.

A somewhat unrelated question, but one I've not been able to find the answer to
anywhere: using a remote drive. I built a separate system which has plenty of space,
and I am making it available on the local LAN via NFS. For example, I created an
exports file in /etc which has:

/directory      machine-name (rw)
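
(For comparison, the exports(5) man page attaches the options directly to the host
name, with no space in between:

/directory      machine-name(rw)

I don't know whether that whitespace matters, so the line above is exactly what I
have in my file.)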

I restarted the NFS server to pick up the new config.
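
(For the record, I reload it with the stock Red Hat init script, or with exportfs
from nfs-utils, roughly:

/etc/rc.d/init.d/nfs restart
exportfs -ra

The exportfs form just re-reads /etc/exports without a full restart.)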
On the client, I've connected via NFS to the server, but no matter what I do, I never
have write access. If I could figure this problem out, I could simply add the NFS
mount to my script when sorting time comes; the sorting could then use the NFS drive
as its temp dir and would never fail again.
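
Something like this is what I have in mind (a rough sketch; "fileserver" and the
mount point are placeholders for my setup, and I'm assuming the sort step honors
TMPDIR):

mount -t nfs fileserver:/directory /mnt/sorttmp
TMPDIR=/mnt/sorttmp
export TMPDIR
# ... run the dig/merge sorting step here ...
umount /mnt/sorttmp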

Anyone know where I can find the answer to the NFS problem? I'm running Red Hat 6.2
and Red Hat 7.0. I've looked all over RH's web site and the search engines, and I
just can't find the answer.

Thanks much.

Mike



