Hello all,

I found the wget project while searching the web for downloaders and crawlers.
My question is: can wget crawl (spider) the web starting from a given URL,
follow all the unique URLs it encounters, and download only the images it
finds on the pages at those URLs?
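To illustrate what I have in mind, something along these lines, using wget's recursive mode (-r), a depth limit (-l), and a suffix filter (-A) to keep only common image extensions (the start URL and depth here are just placeholders):

```shell
# Crawl up to 5 levels deep from the start URL, keep only image files,
# and put everything in one directory (-nd = no directory hierarchy).
# wget still fetches HTML pages to extract links, but deletes any file
# whose suffix does not match the -A list.
wget -r -l 5 -nd -A jpg,jpeg,png,gif http://example.com/
```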

Thanks in advance,

Menno
