Hi all,

I've been using w3af mostly with the spiderMan proxy and manual discovery,
because the application requires a login with username/password.

Now I would like to scan the same web app multiple times with different
sets of audit plugins enabled. I already have a list of fuzzable URLs
from previous scans.

>> The goal is to repeat a scan (with the same or other plugins) to check
>> whether the vulns found earlier have been fixed, ideally without the
>> spiderMan proxy. (I would like to be able to configure and start a scan
>> remotely over ssh, without an open proxy port.)
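
Here is the kind of non-interactive run I have in mind, as a rough sketch.
I'm assuming the console script syntax from the w3af user guide; the
importResults option name (input_csv), the plugin selection and the file
paths are placeholders, so please correct me where I'm wrong:

    $ ssh tom@scanhost
    $ ./w3af_console -s rescan.w3af

where rescan.w3af would be something like:

    plugins
    # import the fuzzable URLs from a previous scan instead of crawling
    discovery importResults
    discovery config importResults
    set input_csv /home/tom/fuzzable-urls.csv
    back
    # enable a different set of audit plugins on each run
    audit xss, sqli
    back
    target
    set target http://example.com/
    back
    start
    exit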

I found the two plugins "importResults" and "urllist_txt". The
documentation of the first one seems outdated (it lists only one
parameter: input_file), and the second one seems to be undocumented here:
http://w3af.sourceforge.net/plugin-descriptions.php#discovery

- What's the difference between the two? Which one should be preferred?

- What's the format of "input_csv" from importResults? (E.g. one URL per
line, with or without URL parameters? Are fields separated by commas,
and if not, why CSV? My best guess is sketched after this list.)

- Could I provide a login (username/password or session cookie)
somehow without using the spiderMan proxy?

(Maybe, if it's possible, by putting a GET request in the URL list file
that performs the login [unless the login is POST-only]? Or how else?
See the cookie-jar idea sketched below.)
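
My naive guess for the input_csv format (purely an assumption based on
the plugin name, not on the docs) would be one request per line, with the
HTTP method, the full URL, and any POST data separated by commas, e.g.:

    GET,http://example.com/search?q=test,
    POST,http://example.com/login,username=tom&password=secret

Is that anywhere close?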
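
For the login question, the only idea I have so far (untested, and
assuming http-settings still offers a Netscape-style cookie jar option;
"cookieJarFile" is the name I remember, it may be wrong): log in once
with curl to capture the session cookie, then point w3af at the cookie
file:

    $ curl -c /home/tom/cookies.txt \
          -d 'username=tom&password=secret' http://example.com/login

and then in the w3af script:

    http-settings
    set cookieJarFile /home/tom/cookies.txt
    back

Would something like this work?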

Thanks for any feedback and answers.

Cheers,
Tom
