Hi Tiago

Thanks for the reply. I missed adding a scope at the beginning, but tried
to do it afterwards in the proxy history tab by selecting the list of URLs.

From the proxy history there is also the option "save selected items", which
generates an XML file (with items, time, url, request, response, etc. as
elements).
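
In case it helps, here is a trimmed-down sketch of what that XML looks like
(only the elements I mentioned above; a real export has more elements and
attributes, and the values here are made up):

    <items>
      <item>
        <time>Tue Apr 27 11:40:12 CEST 2010</time>
        <url>http://www.example.com/index.php?id=1</url>
        <request>GET /index.php?id=1 HTTP/1.1 ...</request>
        <response>HTTP/1.1 200 OK ...</response>
      </item>
    </items>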

What is the format expected by the importResults input_burp parameter?

Thanks,
Tom


On Tue, Apr 27, 2010 at 11:54 AM, Tiago Mendo <[email protected]> wrote:

>
> On 2010/04/27, at 10:33, Tom Ueltschi wrote:
>
> Hi Andres and list,
>
> Instead of the spiderMan plugin I would like to use another proxy (Burp,
> WebScarab) and import the URLs from a file. This way I just have to do it
> once for multiple scans (no interaction required).
>
> - The latest version from importResults says in its description:
>
>        Three configurable parameters exist:
>            - input_csv
>            - input_burp
>            - input_webscarab
>
> I've used the Paros proxy extensively, but don't know if I could export a URL
> list in the "input_csv" format.
>
> Has anyone done this with the Burp or WebScarab proxy? Which one is easier
> for just creating a URL list?
>
>
> I know you can easily generate a list of URL GET requests with the free
> Burp. Just define a scope for your site, access it through the Burp proxy,
> and then right-click the site in the history tab (I think it is the first
> one). Choose spider from there (or similar), then right-click again and
> choose one of the two export options. One of them will fill the clipboard
> with a list of GETs.
>
> I don't recall doing it with webscarab, so I can't give you more
> information.
>
>
>
> Can you do this with the free version of Burp?
>
>
> Yes.
>
>
> Do you know the right menu entry to save the URL file from Burp or
> WebScarab?  (I will try to find it myself with Burp first.)
>
>
> See above.
>
>
> Thanks for any help.
>
> Cheers,
> Tom
>
>
> On Wed, Mar 10, 2010 at 2:04 PM, Tom Ueltschi <[email protected]> wrote:
>
>> Andres,
>>
>> Thanks for the prompt response and the great work you (and the other
>> developers) are doing with w3af!
>>
>>
>> >> - Could I provide a login (username/password or session cookie)
>> >> somehow without using the spiderMan proxy?
>>
>> >    Yes, please see the http-settings; there is a way for you to
>> > specify a cookie, or add arbitrary headers with the headersFile parameter.
>>
>> This would still require me to do a login and copy/save the session cookie
>> to be used (session expiration issues).
>> I would prefer to provide a username/password for the login form (maybe
>> along with the URL and parameter names of the login page).
>>
>> I'll try the importResults plugin with a login POST request in the
>> input_csv file and see if that works (and removes the need for the
>> spiderMan proxy when repeating a scan with login).
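>>
>> Something like this line in the input_csv file (the URL and parameter names
>> below are made up, just to illustrate the method, uri, postdata format):
>>
>>     POST,http://www.example.com/login.php,username=tom&password=secret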
>>
>> I assume the same could be achieved using the formAuthBrute plugin, giving
>> one (or more) valid username/password combinations in the input files (maybe
>> even using stopOnFirst).
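>>
>> Something along these lines in the console (I'm writing the parameter names
>> from memory, so they may be called slightly differently):
>>
>>     plugins
>>     bruteforce formAuthBrute
>>     bruteforce config formAuthBrute
>>     set usersFile /path/to/users.txt
>>     set passwdFile /path/to/passwords.txt
>>     set stopOnFirst True
>>     back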
>>
>> - Will the successful login session be used for the rest of the scan in
>> this case?
>>
>> - Is there a way to influence the order in which audit plugins are executed?
>> I think they are not executed in the order listed (in the w3af script file).
>>
>> This would be necessary to run formAuthBrute first to do the login, and
>> then the rest of the audits with the logged-in user's session.
>>
>>
>> Right now I'm doing a scan with the latest SVN, but still the old way
>> (using a VNC viewer from my Windows box to configure and start the test on
>> my Ubuntu box, using the spiderMan proxy).
>>
>> There is one more suggestion I have ;-)
>>
>> The spiderMan proxy seems to listen only on the local loopback interface
>> (127.0.0.1), but not on the Ethernet interface. From a security perspective
>> this is good, but for usability it would be nice if it could listen on all
>> (or user-configured) interfaces, so I wouldn't need to use the VNC viewer
>> anymore.
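>>
>> Just to illustrate what I mean (not actual w3af code, only a minimal Python
>> sketch of the difference):
>>
>>     import socket
>>
>>     PORT = 44444  # whatever port the proxy listens on
>>     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>     # current behaviour: only reachable from the local machine
>>     #s.bind(('127.0.0.1', PORT))
>>     # suggested: reachable from other hosts (or a user-configured address)
>>     s.bind(('0.0.0.0', PORT))
>>     s.listen(5)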
>>
>> This would also have the advantage that if some (stupid) webapp only works
>> right with IE and I don't have IE on Linux, I could use IE on Windows and
>> point it at the proxy port of the Ubuntu box.
>>
>> I prefer running w3af on Ubuntu, not on Windows, since my Windows box is
>> not running 24/7, but the Linux box is.
>>
>> Is it already possible to configure the spiderMan proxy for all interfaces,
>> or would that need a code change?
>>
>> Thanks again for the great work!
>>
>> Cheers,
>> Tom
>>
>>
>> On Tue, Mar 9, 2010 at 2:29 PM, Andres Riancho <[email protected]> wrote:
>>
>>> Tom,
>>>
>>> On Tue, Mar 9, 2010 at 9:12 AM, Tom Ueltschi
>>> <[email protected]> wrote:
>>> > Hi all,
>>> >
>>> > I've been using w3af mostly with the spiderMan proxy and manual discovery,
>>> > because the application needs a login with username/password.
>>> >
>>> > Now I would like to scan the same webapp multiple times with different
>>> > sets of audit plugins enabled. I already have a list of fuzzable URLs
>>> > from previous scans.
>>> >
>>> > The goal is to repeat a scan (with the same or other plugins) to check if
>>> > the found vulns have been fixed, if possible without the need for the
>>> > spiderMan proxy. (I would like to be able to configure and start a scan
>>> > remotely over ssh, without an open proxy port.)
>>>
>>> Nice use case. I like what you're trying to achieve.
>>>
>>> > I found the two plugins "importResults" and "urllist_txt", where the
>>> > documentation of the first one seems outdated (only one parameter:
>>> > input_file) and the second one seems undocumented here:
>>> > http://w3af.sourceforge.net/plugin-descriptions.php#discovery
>>>
>>> - urllist_txt will read the urllist.txt file from the web server
>>> (http://host.tld/urllist.txt). This is not what you want.
>>> - The latest version from importResults says in its description:
>>>
>>>        Three configurable parameters exist:
>>>            - input_csv
>>>            - input_burp
>>>            - input_webscarab
>>>
>>> Please make sure that you have the latest version of w3af from SVN. The
>>> (http://w3af.sourceforge.net/plugin-descriptions.php#discovery) page is
>>> outdated; I'll fix that in a while.
>>>
>>> > - What's the difference between the two? Which one should be
>>> > preferred?
>>>
>>>     For your use case, please use importResults with input_csv.
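>>> From the console it would be roughly something like this (the exact
>>> commands may differ slightly depending on your version):
>>>
>>>        plugins
>>>        discovery importResults
>>>        discovery config importResults
>>>        set input_csv /path/to/requests.csv
>>>        back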
>>>
>>> > - What's the format of "input_csv" for importResults? (e.g. one URL per
>>> > line, with or without URL parameters? Is there any separation by
>>> > comma, or why CSV?)
>>>
>>>     method, uri, postdata
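>>>
>>> For example (made-up URLs, one request per line; I think you can leave the
>>> third field empty for GET requests):
>>>
>>>        GET,http://host.tld/index.php?id=1,
>>>        POST,http://host.tld/comment.php,name=foo&text=bar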
>>>
>>> > - Could I provide a login (username/password or session cookie)
>>> > somehow without using the spiderMan proxy?
>>>
>>>     Yes, please see the http-settings; there is a way for you to
>>> specify a cookie, or add arbitrary headers with the headersFile parameter.
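>>> I don't remember the exact syntax off the top of my head, but the headers
>>> file is essentially one header per line, e.g. (example value):
>>>
>>>        Cookie: PHPSESSID=0123456789abcdef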
>>>
>>> > (Maybe, if it's possible, create a GET request in the URL list file
>>> > which does the login? [unless it's POST only] Or else how?)
>>>
>>>     Hmm... I'm not sure if that's going to work, but it's worth a try!
>>> I think it's a smart idea.
>>>
>>> > Thanks for any feedback and answers.
>>>
>>>     Thank you!
>>>
>>> > Cheers,
>>> > Tom
>>> >
>>>
>>> --
>>>
>>> Andrés Riancho
>>> Founder, Bonsai - Information Security
>>> http://www.bonsai-sec.com/
>>> http://w3af.sf.net/
>>>
>>
>>
>
>
> Tiago Mendo
> [email protected]
>
> +351 215000959
> +351 963618116
>
> Portugal Telecom / SAPO / DTS / Security Team
> http://www.sapo.pt
>
> PGP: 0xF962B36970A3DF1D
>
>
------------------------------------------------------------------------------
_______________________________________________
W3af-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/w3af-users
