On Wed, Feb 17, 2010 at 7:09 PM, Micah Cowan <[email protected]> wrote:
> Norman Khine wrote:
>> hello,
>> i can download the page using:
>>
>> $ wget http://username:[email protected]/file.html
>>
>> but when i try to convert the links and download all the images it
>> doesn't work and i get an error.
>>
>> HTTP request sent, awaiting response... 401 Authorization Required
>> Authorization failed.
>>
>> what is the correct way to pass authentication information and
>> download all the files for each url that is in my url.txt file
>>
>> -p --convert-links -i ../url.txt
>
> That method should work, though --user and --password are the preferred
> method.
>
> Perhaps the username and/or password contain special characters that
> would need to be percent-encoded?
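[A sketch of the percent-encoding Micah mentions, using Python's `urllib.parse.quote`; the credentials here are made up for illustration. Reserved characters such as `@` and `:` in a username or password must be encoded before they can appear in the `user:pass@host` part of a URL.]

```python
from urllib.parse import quote

def encode_credentials(user: str, password: str) -> str:
    """Percent-encode credentials so they are safe to embed in a URL.

    safe="" forces encoding of reserved characters like '@', ':', '/'.
    """
    return f"{quote(user, safe='')}:{quote(password, safe='')}"

# Hypothetical credentials containing reserved characters:
creds = encode_credentials("norman", "p@ss:w0rd")
url = f"http://{creds}@domain.com/file.html"
print(url)  # http://norman:p%40ss%3Aw0rd@domain.com/file.html
```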
i did this, and it worked well:

wget -m -U "Mozilla/5.0 (compatible; Konqueror/3.2; Linux)" --http-user=USERNAME --http-password=PASSWORD -r -p -nH -nd -Pdownload --wait=2 -k -i ../url.txt

but i still can't get it to download the images, css and js files and rewrite the downloaded page so that i can use it locally. i have the '-p' switch, and i have also tried with --convert-links -r, but it did not work! what have i missed?

thanks

> --
> Micah J. Cowan
> http://micah.cowan.name/

--
%>>> "".join( [ {'*':'@','^':'.'}.get(c,None) or chr(97+(ord(c)-83)%26) for c in ",adym,*)&uzq^zqf" ] )
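[A minimal invocation combining the flags discussed above, as a sketch only; USERNAME, PASSWORD, and ../url.txt are placeholders from the thread, not tested values. It uses the generic --user/--password options Micah recommends rather than --http-user/--http-password. One possible cause of missing requisites: if the images or CSS are served from a different host than the page, wget will not fetch them unless -H (--span-hosts) is also given.]

```shell
# Sketch, not a verified fix: fetch each URL in url.txt with auth,
# pull in page requisites (-p: images, CSS, JS), and rewrite links
# for local viewing (-k). Add -H if requisites live on another host.
wget --user=USERNAME --password=PASSWORD \
     -p -k -nH -Pdownload --wait=2 \
     -i ../url.txt
```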
