If I remember correctly, most of the popular web spiders (e.g. WebScarab,
Burp Suite) can do the job, but you may have to configure them properly.

In my opinion, Burp Suite is the best, but in its free version you cannot
save your results. Hence, I use WebScarab.
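If it turns out you do have to roll your own, here is a rough Python 3
sketch of the behaviour you describe (same-TLD link following, per-page
resource and header collection, and a robots.txt grab). It uses only the
standard library; the function and class names are my own invention, and
it's a starting point rather than a finished tool.

```python
# Rough spider sketch (stdlib only): follows links whose top-level
# domain matches the start URL, records headers and resource URLs per
# page, and can fetch robots.txt for a host.
import urllib.parse
import urllib.request
from html.parser import HTMLParser


def same_tld(url_a, url_b):
    """True if both hosts share the same top-level domain (edu, com, ...)."""
    tld = lambda u: urllib.parse.urlparse(u).hostname.rsplit(".", 1)[-1]
    return tld(url_a) == tld(url_b)


class LinkParser(HTMLParser):
    """Collects href/src attribute values from a page (requirement 2)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)


def fetch_robots(url):
    """Grab robots.txt for the url's host, if possible (requirement 4)."""
    parts = urllib.parse.urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    try:
        with urllib.request.urlopen(robots_url, timeout=10) as resp:
            return resp.read().decode("utf-8", "replace")
    except Exception:
        return None


def crawl(start, max_pages=50):
    """Breadth-first crawl; returns {url: (headers, resource_urls)}."""
    seen, queue, results = set(), [start], {}
    while queue and len(results) < max_pages:
        url = queue.pop(0)
        # Requirement 1: stay within the same top-level domain.
        if url in seen or not same_tld(start, url):
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                headers = dict(resp.headers)  # requirement 3: server header etc.
                parser = LinkParser()
                parser.feed(resp.read().decode("utf-8", "replace"))
        except Exception:
            continue
        resources = [urllib.parse.urljoin(url, link) for link in parser.links]
        results[url] = (headers, resources)
        queue.extend(resources)
    return results
```

Note this does no politeness delays, redirect loops handling, or actual
robots.txt parsing, so you'd want to harden it before pointing it at
anything large.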

Hope that helps

Antonios

2010/9/25 Adrian Crenshaw <[email protected]>

> Hi all,
>     I'm looking at some of the tools in BT4R1, and will be looking at what
> Samurai WTF has to offer once I finish downloading the latest version. I'm
> looking for some sort of spider that lets me do the following:
>
> 1. Follow every link on a page, even onto other domains, as long as the top
> level domain name is the same (edu, com, cn, whatever)
> 2. For every page it visits, it collects the file names of all resources.
> 3. Collect the headers so I can see the server version.
> 4. Grab the robots.txt if possible.
>
> Any ideas on the best tool for the job, or do I need to roll my own?
>
> Thanks,
> Adrian
>
> _______________________________________________
> Pauldotcom mailing list
> [email protected]
> http://mail.pauldotcom.com/cgi-bin/mailman/listinfo/pauldotcom
> Main Web Site: http://pauldotcom.com
>