Norman Khine wrote:
> i did this, and it worked well
>
> wget -m -U "Mozilla/5.0 (compatible; Konqueror/3.2; Linux)" \
>   --http-user=USERNAME --http-password=PASSWORD -r -p -nH -nd -Pdownload \
>   --wait=2 -k -i ../url.txt
>
> but i still can't get it to download the images, css and js files and
> rewrite the downloaded page so that i can use it locally.
>
> i have the '-p' switch; also i have tried with
>
> --convert-links -r
>
> but it did not work!
>
> what have i missed?
Well, try adding --debug and looking through the suggestions at
http://wget.addictivecode.org/FrequentlyAskedQuestions#not-downloading -
the files might be forbidden by the site's robots.txt, for instance.

Another possibility is that the images and CSS are not linked via HTML
tags, but via CSS "url(...)" constructs. In that case, you need to make
sure that you're using the latest version of Wget (1.12), as older
versions are unable to parse CSS.

A similar thing might be true for the JavaScript files: if they're being
imported via JavaScript statements, rather than the "src" attribute of
an HTML "script" tag or the like, Wget can't see them.

-- 
Micah J. Cowan
http://micah.cowan.name/
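For what it's worth, a sketch of the debugging run suggested above, built
from the command in the quoted post (USERNAME, PASSWORD, and ../url.txt
are placeholders carried over from that post, and -e robots=off is only
appropriate if robots.txt turns out to be the blocker):

```shell
# Re-run the mirror with --debug so Wget logs why each page requisite
# is skipped, and with -e robots=off in case robots.txt is blocking them.
# USERNAME, PASSWORD, and ../url.txt are placeholders from the quoted post.
wget --debug -e robots=off \
     -m -r -p -k -nH -nd -Pdownload --wait=2 \
     -U "Mozilla/5.0 (compatible; Konqueror/3.2; Linux)" \
     --http-user=USERNAME --http-password=PASSWORD \
     -i ../url.txt 2> wget-debug.log

# Then look through the log for rejected or robots-excluded URLs:
grep -i 'reject\|robots' wget-debug.log
```

Since this hits a remote site with made-up credentials, treat it as a
template rather than something to paste verbatim.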
