On Feb 6, 2008 9:30 PM, Leonard Burton <[EMAIL PROTECTED]> wrote:

> Hi All,
>
> I have a client who wants to be able to download/cache all web files
> on certain sites for archiving purposes (i.e. before purchases, or
> anywhere a record of exactly what was on a certain page is needed).
> They would like to have this on the web server so employees can log
> in and enter the sites to cache, and then a cron job can do the
> crawling at night.
>
> Is there an open-source program that will take a URL and crawl/cache
> all the links on it?


Check out the recursive option for wget; it's pretty sweet ;)
http://linuxreviews.org/quicktips/wget/
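For the nightly part, something along these lines should do it. The
script name, URL list, and destination paths below are only placeholders
for whatever your setup actually uses:

    #!/bin/sh
    # Nightly archive sketch: file locations here are assumptions, adjust to taste.
    URL_LIST=/var/www/archive/urls.txt        # one URL per line, written by the web app
    DEST=/var/www/archive/$(date +%Y-%m-%d)   # one snapshot directory per night

    while IFS= read -r url; do
        # Recurse a couple of levels, grab images/CSS needed to render the pages,
        # rewrite links for local viewing, stay under the starting path, and be polite.
        wget --recursive --level=2 --page-requisites --convert-links \
             --no-parent --wait=1 --directory-prefix="$DEST" "$url"
    done < "$URL_LIST"

Then a crontab entry like "0 2 * * * /usr/local/bin/archive-sites.sh"
kicks it off at 2am.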

-nathan
