We have a good thread going on fuzzing, commercial tools, etc. on the
fuzzing list. This is a long forward, but I thought some of you might want
to weigh in, or at least take a look at the thread.
JS
Hello all,
Although we at Codenomicon do not fuzz in the strict sense of the word
(that depends on the definition), I would like to comment on these issues
Charlie brought up.
Date: Tue, 07 Nov 2006 08:28:26 -0600
From: Charlie Miller [EMAIL PROTECTED]
> My take on this is that any type of data that is read in and parsed by
> an application can be fuzzed.
Yes, and I suppose most of these have been tried. Fuzzing (or any type of
black-box testing) is possible for any interface, whether it is an API, a
network protocol, a wireless stack, a GUI, a file format, return values, ...
We at Codenomicon already cover more than 100 different interfaces with
robustness tests...
> I also think that fuzzing can only find certain types of
> vulnerabilities, i.e. relatively simple memory corruption bugs.
This is not true. You could easily do a study on this: take any protocol and
all the vulnerabilities found in implementations of that protocol, and map
them against the test coverage of black-box tools such as fuzzers. That
would be an interesting comparison!
> Luckily, there are plenty of these [bugs] around.
True, and that is why intelligence is often not required from fuzzing tools.
Heck, you can crash most network devices just by sending /dev/random at
them. ;)
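The "send /dev/random at it" approach really is only a few lines of code. A minimal sketch below; the host, port, and payload sizes are placeholders, and any real target should of course be a lab device you have permission to test:

```python
import random
import socket

def random_payloads(count, max_len=1024, seed=None):
    """Yield `count` blobs of random bytes -- the crudest possible fuzz
    input, morally equivalent to piping /dev/random at the target."""
    rng = random.Random(seed)
    for _ in range(count):
        yield bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))

def fuzz_tcp(host, port, payloads, timeout=2.0):
    """Send each blob to host:port; a refused or reset connection partway
    through the run is a hint that the target may have fallen over."""
    for i, blob in enumerate(payloads):
        try:
            with socket.create_connection((host, port), timeout=timeout) as conn:
                conn.sendall(blob)
        except OSError as exc:
            print(f"test {i}: connection error ({exc})")

# Hypothetical usage against a lab device:
# fuzz_tcp("192.0.2.1", 80, random_payloads(1000, seed=42))
```

A fixed seed makes the run reproducible, which matters once you actually find a crash and want to replay it.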
> Good luck finding a command injection vulnerability or a bug that
> requires three different simultaneous anomalies.
Well, this is a really good comment, and the reason why I could not resist
commenting on this thread! Why would you want to involve luck in the
equation? We at Codenomicon/PROTOS have found that careful test design turns
luck and skill into engineering practice. With file fuzzers, for example, it
is easy to generate millions of tests, but with systematic testing you will
still find most of these flaws, and more. Being able to optimize millions of
tests down to tens of thousands without compromising test coverage is the
goal. It is also a requirement for many testers.
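As a sketch of what "systematic" can mean here (the anomaly library and field layout below are invented for illustration, not Codenomicon's actual test design): instead of mutating at random, draw each field's anomalies from a small library of known boundary and format-string values, and fuzz one field at a time while the rest stay valid. The test count then grows linearly with the number of fields rather than combinatorially:

```python
# Illustrative library of known-bad values per field type.
ANOMALIES = {
    "string": [b"", b"A" * 65536, b"%n%n%n%n", b"\x00" * 16],
    "int":    [b"-1", b"4294967296", b"9" * 64, b"0x7fffffff"],
}

def single_anomaly_tests(template):
    """One test per (field, anomaly) pair: each field is fuzzed in isolation
    while all other fields keep their valid baseline value.
    `template` is a list of (field_name, field_type, valid_bytes) triples."""
    for i, (name, ftype, _valid) in enumerate(template):
        for bad in ANOMALIES[ftype]:
            msg = [valid for (_n, _t, valid) in template]
            msg[i] = bad
            yield name, b"|".join(msg)

template = [("user", "string", b"alice"),
            ("uid",  "int",    b"1000"),
            ("home", "string", b"/home/alice")]
tests = list(single_anomaly_tests(template))
# 3 fields x 4 anomalies each = 12 tests, versus 4**3 = 64 all-anomaly
# combinations if every field were mutated at once.
```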
Combinations of anomalies are a bigger issue. I know (and even during PROTOS
we found these) that there are flaws that require a combination of two or
three anomalies, and others where two different messages need to be sent in
a specific order. But when the tests are optimized in number, this becomes
easier as well. We cannot test all three-field combinations, but in real
life we do not have to. I would be interested to hear if anyone has an
example vulnerability in mind that is not covered by Codenomicon tools.
Please, nothing from proprietary protocols, as I would not be able to
disclose whether we cover it or not. ;)
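A hedged sketch of the two-anomaly case (again with an invented field layout and anomaly library): pairwise generation covers every combination of anomalies in any two fields, which still catches bugs that need two simultaneous anomalies but scales far better than the full n-way cross-product:

```python
import itertools

# Illustrative anomaly library; a real tool would derive these per field type.
ANOMALIES = {
    "string": [b"", b"A" * 65536, b"%n%n%n%n"],
    "int":    [b"-1", b"4294967296", b"9" * 64],
}

def pairwise_anomaly_tests(template):
    """Yield messages in which exactly two fields carry anomalies at once.
    `template` is a list of (field_name, field_type, valid_bytes) triples."""
    for i, j in itertools.combinations(range(len(template)), 2):
        for bad_i in ANOMALIES[template[i][1]]:
            for bad_j in ANOMALIES[template[j][1]]:
                msg = [valid for (_n, _t, valid) in template]
                msg[i], msg[j] = bad_i, bad_j
                yield b"|".join(msg)

template = [("user", "string", b"alice"),
            ("uid",  "int",    b"1000"),
            ("home", "string", b"/home/alice")]
tests = list(pairwise_anomaly_tests(template))
# C(3,2) = 3 field pairs x 3*3 anomaly combos = 27 tests. With ten fields
# this stays in the hundreds (45 pairs x 9 = 405), while the full 10-way
# cross-product would be 3**10 = 59049 tests.
```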
> I think smart researchers, like these guys, move on to fuzzing new
> types of data, be it new protocols, file types, etc.
This is why I think general-purpose fuzzing frameworks like the PROTOS
mini-simulation engine (first launched in 1999 but not publicly available)
and GPF (by DeMott) are so powerful. Basically, we will never run out of
protocols, interface specifications, use cases, and traffic captures...
> It doesn't make a lot of sense to fuzz the HTTP protocol against IIS
> at this point, as very many people have done this with a number of
> tools.
Oh, it definitely does make sense. All products are full of flaws; you just
need to build more intelligence into the tests. Even though companies like
Codenomicon never disclose any flaws, that does not mean these flaws do not
exist.
> Based on the success of this project, I'm guessing they are the first
> ones to seriously try fuzzing filesystems.
As far as I know, all commercial fuzzers support testing of file systems...
Software companies are just not interested in PAYING for security when they
can get it for free... ;) So blame the software developers, not the tool
vendors...
> After those bugs are shaken out, we'll move on to the next type of
> data.
Oh, you do not need to move forward. How about just taking a fuzzer from
1999, such as the WAP test tools from PROTOS or from @Stake? You will
discover that everything is still broken. That is the problem with the
industry: test it once, and after a few years everything is back to where it
was. But just using other people's tools is not interesting, is it? People
want to find new stuff to make themselves famous. ;)
> This is reminiscent of when everyone fuzzed network protocols and then
> someone started fuzzing file types.
Again, Codenomicon had file format fuzzers before anyone was aware of that
risk. And we had lots of problems developing those tools, as the development
environments kept crashing all the time (I am not naming any OS products
here). But again, the industry was not ready for our tools... They needed to
learn it the hard way. Thanks to all who contributed! ;)
> If I knew what the next new thing to fuzz was, I'd be doing it