> There are many SQLi patterns that are hard for automated tools to
> find. This is an obvious point, so I'm sorry to be pedantic, but I
> think a survey based on automated scanning is a misleading starting
> point for the discussion.
Well, the definition of a web application is a surprisingly challenging problem, too. This is particularly true for any surveys that randomly sample Internet destinations. Should all the default "it works!" webpages produced by webservers be counted as "web applications"? In naive counts, they are, but analyzing them for web app vulnerabilities is meaningless. In general, at what level of complexity does a "web application" begin, and how do you measure that when doing an automated scan?

Further, if there are 100 IPs that serve the same www.youtube.com front-end to different regions, are they separate web applications? In many studies, they are. On the flip side, is a single physical server with 10,000 parked domains a single web application? Some studies see it as 10,000 apps.

Heck, is www.google.com a web application, or a collection of several hundred web apps? In my view, it's the latter, but how do you tell with a script? Would it be considered a single application were it running on a single physical machine? The intuitive answer is "no", but then, from the perspective of a SQLi or RCE bug, there is a difference of sorts.

There's more... are foo.blogspot.com and bar.blogspot.com separate "web applications"? If not, then what about *.appspot.com? How does an automated tool determine the difference between these environments? The list goes on...

In such cases, manually constructed and carefully vetted data is actually quite likely to be more meaningful than any automated studies.

/mz
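PS. To make the scripting problem concrete, here is a toy sketch of the sort of dedup heuristic an automated survey might lean on: fetch each front page, crudely normalize it, and hash it, so hosts serving identical content collapse into one bucket. This is purely illustrative and not taken from any actual study; the hostnames are made up and the normalization is deliberately naive:

#!/usr/bin/env python3
# Toy heuristic for counting "distinct web applications" in a survey.
# Everything here is illustrative: the hostnames are hypothetical and
# the normalization is deliberately crude.

import hashlib
import re
import urllib.request

# Hypothetical sample; a real survey would draw these from a scan.
HOSTS = [
    "http://parked-domain-1.example/",
    "http://parked-domain-2.example/",
    "http://some-real-app.example/",
]

def front_page_fingerprint(url, timeout=10):
    """Hash a crudely normalized copy of the site's front page."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read(65536)  # the first 64 kB is plenty
    # Drop digits and whitespace so trivially dynamic content
    # (timestamps, hit counters) still hashes to the same value.
    return hashlib.sha256(re.sub(rb"[\s\d]+", b"", body)).hexdigest()

def bucket_by_fingerprint(urls):
    """Group URLs whose front pages look identical once normalized."""
    buckets = {}
    for url in urls:
        try:
            buckets.setdefault(front_page_fingerprint(url), []).append(url)
        except OSError:
            pass  # dead hosts silently fall out of the sample
    return buckets

if __name__ == "__main__":
    buckets = bucket_by_fingerprint(HOSTS)
    print("distinct 'web applications':", len(buckets))

Note how every answer such a script gives is debatable: 10,000 parked domains collapse into one "application" (probably right), foo.blogspot.com and bar.blogspot.com collapse too whenever they share a template (probably wrong), and www.google.com comes out as exactly one app no matter how many services sit behind it. That's the problem in a nutshell.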
