On Tue, Jun 3, 2008 at 2:20 PM, Peter Rundle <[EMAIL PROTECTED]> wrote:
> I'm looking for some recommendations for a *simple* Linux-based tool to
> spider a web site and pull the content back into plain HTML files, images,
> JS, CSS, etc.
>
> I have a site written in PHP which needs to be hosted temporarily on a
> server that can't run it (read: it only serves static content). This is not
> a problem from a temporary presentation point of view, as the default values
> for each page will suffice. So I'm just looking for a tool that will quickly
> pull the real site (on my home PHP-capable server) into a directory that I
> can zip and send to the internet-addressable server.
>
> I know there's a lot of code out there, I'm asking for recommendations.
>

I'd use 'wget'. From what you describe, 'wget -r' should be very close
to what you want. Consult the manpage for details about converting links,
pulling in page requisites (images, CSS, JS), and so on.
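
For example, something along these lines should get most of the way there
(a rough sketch; http://yourhomeserver.example/ is just a placeholder for
your home PHP server, so adjust the URL to suit):

  wget --recursive --level=inf --no-parent \
       --page-requisites --convert-links --html-extension \
       http://yourhomeserver.example/

--page-requisites pulls in the images/CSS/JS each page needs,
--convert-links rewrites links so the pages work as local static files,
and --html-extension saves the PHP-generated pages with a .html suffix
so the static host serves them as plain HTML.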

jml
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
