On 2/6/08, Leonard Burton [EMAIL PROTECTED] wrote:
Is there an OS program that will take a url and crawl/cache all the links on
it?
`wget -m -np http://example.com` will mirror the URL and everything under it. That's `-m` for mirror and `-np` for no parent, so wget won't climb above the starting directory and download the whole site.
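For the archiving use case described below, a fuller invocation may help. Everything beyond `-m`/`-np` here is a suggestion, not from the original reply; the URL is a placeholder:

```shell
# Hedged sketch of a more archive-friendly mirror run.
# --mirror           recurse with timestamping (same as -m above)
# --no-parent        never ascend above the starting URL (same as -np)
# --convert-links    rewrite links so the local copy browses offline
# --page-requisites  also fetch the CSS, images, etc. each page needs
# --wait=1           pause a second between requests to be polite
wget --mirror --no-parent --convert-links --page-requisites --wait=1 \
     http://example.com/
```

`--convert-links` and `--page-requisites` matter for archiving: without them the saved pages still point at the live site and may render without styles or images.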
On Feb 6, 2008 9:30 PM, Leonard Burton [EMAIL PROTECTED] wrote:
Hi all,
I have a client who wants to be able to download/cache all web files
on certain sites for archiving purposes (e.g. before purchases, or
anywhere a record of exactly what was on a certain page is needed).
They would like to have this on the web server so employees could log
in and enter
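The workflow described above (a server-side tool that captures a page on demand) could wrap wget in a small helper. This is a hypothetical sketch, not from the thread; the function name and directory layout are assumptions:

```shell
# Hypothetical helper: archive a URL into a timestamped directory so each
# capture is a dated record of exactly what the page contained at that time.
archive_url() {
    url="$1"
    # One directory per capture, named by date and time.
    dir="archive/$(date +%Y%m%d-%H%M%S)"
    mkdir -p "$dir"
    # Mirror the URL and everything under it into that directory.
    wget --mirror --no-parent --directory-prefix="$dir" "$url"
}

# Example (not run here): archive_url http://example.com/
```

A web front end that employees log in to could then simply invoke this helper with the submitted URL.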