Manori,

On Wed, Mar 4, 2015 at 1:31 AM, Manori Wijesooriya
<man...@orangehrm.us.com> wrote:
> Hi,
>
> I'm a QA Engineer doing research on security testing. I found that
> w3af is a very good tool which supports many vulnerability types and
> authenticated tests as well.
> I tried the tool for testing CSRF attacks in a web application, and the
> same application was tested with the Tamper Data Firefox add-on as well
> (which requires testing page by page manually). I'm happy to say that I
> got almost identical results with both tools.

All sounds good! Happy to hear you're using w3af.

> My intention was to evaluate w3af in order to get it adopted into our
> security testing process and reduce the time spent testing with Tamper
> Data. My evaluation succeeded with positive results, and it seems we
> can use w3af instead of Tamper Data and save a lot of time.
>
> I used w3af with the spider_man plugin, accessed the system manually,
> and let the tool audit for CSRF.

That's a good practice, yes.

> However, we have monthly releases here and have to do the same testing
> every month. In that case, do I need to enable the spider_man plugin
> and walk through the whole system manually each time? Isn't there a way
> to do this only once, save the URLs, and reuse them?

Crawling can be an expensive process, and in some cases it requires
manual intervention (the spider_man plugin). To save all the URLs found
during a scan, you can use the output.export_requests plugin, which
writes them to a user-configured file.
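For reference, a console session along these lines should enable the
export (the exact option name, output_file, is from memory, so
double-check it against the plugin's built-in help in w3af_console):

```console
w3af>>> plugins
w3af/plugins>>> output export_requests
w3af/plugins>>> output config export_requests
w3af/plugins/output/config:export_requests>>> set output_file /tmp/urls.csv
w3af/plugins/output/config:export_requests>>> back
w3af/plugins>>> back
```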

Loading the saved data is achieved using the import_results plugin,
which reads all the information and feeds it into w3af's core.
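The exported file is also useful outside of w3af, for example to diff
the URL lists of two monthly releases. A minimal sketch for reading it
back in Python, assuming the plugin was configured to write one
METHOD,URL pair per line in CSV form (verify against your actual output
file, the format depends on the plugin's configuration):

```python
import csv

def load_exported_requests(path):
    """Parse a file written by output.export_requests.

    Assumption: one METHOD,URL pair per line in CSV form; extra
    columns, if any, are ignored.
    """
    requests = []
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if len(row) >= 2:
                # Keep only the HTTP method and the URL
                requests.append((row[0].strip(), row[1].strip()))
    return requests
```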

I've just added those two paragraphs to the docs [0], thanks for
pointing out that this wasn't documented. My only extra comment is that
by using this method you're missing out on any new URLs which might be
added between releases. Another way to achieve the same result is to
configure your automated tests (which might use Selenium or a similar
technology) to use spider_man as a proxy. That will feed all the URLs
to spider_man automagically.

[0] 
https://github.com/andresriancho/w3af/blob/develop/doc/sphinx/common-use-cases.rst#saving-urls-and-using-them-as-input-for-other-scans
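If part of your test harness is plain Python rather than Selenium, the
same proxy trick can be sketched with the standard library. This is a
hypothetical example: 127.0.0.1:44444 is spider_man's default listen
address in my configuration, and the target URL is a placeholder, so
adjust both to match your setup:

```python
import urllib.request

# spider_man's proxy address; 127.0.0.1:44444 is the default in my
# configuration, adjust to match yours.
SPIDER_MAN_PROXY = {"http": "http://127.0.0.1:44444",
                    "https": "http://127.0.0.1:44444"}

def make_proxied_opener(proxies=SPIDER_MAN_PROXY):
    """Build a urllib opener whose traffic flows through spider_man,
    so every URL the tests touch is fed to w3af automatically."""
    return urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

# Usage (hypothetical target application):
# opener = make_proxied_opener()
# opener.open("http://your-app.example/")
```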

> Even though there are lots of tutorials and mailing list threads on
> the internet, I couldn't find an answer to this. Please be kind enough
> to help me and let me know whether this is possible with w3af. Thanks
> in advance.
>
>
> Regards,
> --
>
> Manori Wijesooriya
> QA Engineer  | OrangeHRM Inc.
>
> www.orangehrm.com | www.orangehrmlive.com
> Twitter | LinkedIn | Facebook
>
> ------------------------------------------------------------------------------
> Dive into the World of Parallel Programming The Go Parallel Website,
> sponsored
> by Intel and developed in partnership with Slashdot Media, is your hub for
> all
> things parallel software development, from weekly thought leadership blogs
> to
> news, videos, case studies, tutorials and more. Take a look and join the
> conversation now. http://goparallel.sourceforge.net/
> _______________________________________________
> W3af-users mailing list
> W3af-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/w3af-users
>



-- 
Andrés Riancho
Project Leader at w3af - http://w3af.org/
Web Application Attack and Audit Framework
Twitter: @w3af
GPG: 0x93C344F3
