Following on from the previous post, this one contains my thoughts on how to "fix" the state of the web as it is today.  My outlook is not to just bitch about something, but to try to do something about it (if you are not part of the solution…).  I'm not saying these ideas are in any way perfect, just that I feel they would incrementally improve the current state of play.  I'll follow the old "people, process, technology" framework that my old boss kept repeating, as I still think it's a good way of looking at security.

People

At the root of this are people - the developers writing bad code in the first place (both new and old), the managers responsible for current and legacy systems, and the executives that have to manage the resources and risk.  Let's look at the developers first.

When developers are writing code, the only thing they are really focused on is solving the problem at hand.  Other aspects like security, performance, scalability, maintenance, etc., may also be in their minds, but in my experience (a lot of it gathered in retrospect from watching/interviewing lots of programmers while doing my PhD research) there's a core task they are concentrating on; distract them from that task, and often you'll get bugs (or vulnerabilities) as a result.  It's Miller's "span of absolute judgement" (better known as 7 ± 2 <http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two>) - only so many things can be "thought of" at any one time.

With this in mind, what we really have to do is remove the need for developers to think about security while they are coding.  ASP.NET is doing this to some degree IMO, as it seems that Microsoft is doing its best to architect the platform to ensure developers don't "shoot themselves in the foot", and to provide libraries for common security tasks.  I'll leave discussion of this aspect for the technology section as it fits better there, but it seems that education isn't working as well as it should (XSS and SQLi should be very well known by now, yet sites that should know better are still having vulnerabilities disclosed).  There's some work out there on giving training/guidance in the IDE when developers need it, but it's not widely adopted.  I'm hopeful that this might help, but not optimistic.  My own research <http://mikeandrews.com/projects/signpost/> a long time ago looked at this problem, and although there was interest, as with most research projects, it didn't go anywhere.

All managers want is to know what issues there are, and how to fix them with the limited resources/budget they have available.  Not wanting to minimize their job, but managers are essentially performing trade-offs between requirements and resources all the time.  I believe what would help them are better estimation tools/techniques/methods for how much code needs changing to mitigate a given vulnerability.  Certainly, there's been lots of work in the software engineering field on this, but when it comes to the web, "true" software engineering tends to go out of the window! It's rare to even have an accurate design for a webapp, so what chance is there of having any meaningful metrics?  This is where managers and executives share pain - lack of usable metrics, ROI, and knowing what it actually "takes" (effort/time) to make secure software.

Executives, on the other hand, are concerned with the risk to the business, and controlling that risk to an appropriate level - the age-old problem of spending $100 to protect a $1 resource.  When it comes to the web (there are lots of other areas they are concerned with, but I'm focusing on just websites/servers here), they need to know the risk of each site (type of data, criticality to the business, etc.), its potential exposure (what vulns it may have, time since last review, who has access, etc.), and events that may change that risk (code changes, "incidents", etc.).  With this info they should be able to determine the level of risk and the necessary steps (if any) to get the site to a level at which they are comfortable accepting that risk. (I hear all the "we should have 100% security", and I believe we should shoot for it, but in real life it seldom works that way.)

There is technology to help gather and present this info (Archer <http://www.archer-tech.com/> is one example that springs to mind; I've heard mixed feedback on these, so this is in no way an endorsement), although it seems to be rare for companies to have this data immediately (and in real time) at hand to make decisions.  We can't rely on people populating these knowledge bases, so it has to be automated/automatic.  Even if executives do have this information, raw data (vuln counts, code changes, etc.) isn't really all that useful - the average exec isn't technical and can't, for example, tell from a CVE number the risk it has added to a system.  Gathering and displaying this data isn't difficult, nor is it what's needed IMO.  What I feel is needed are expert systems that turn raw data, gathered automatically, into actionable tasks.

Process

Although there's lots of discussion about this (see earlier post <http://www.mikeandrews.com/2008/04/24/does-sdl-work/>), I believe that Microsoft (as a vendor - several others, from consultancies to government entities, walked the path before them) have led the way forward in how to develop secure software.  The SDL <http://msdn.microsoft.com/en-us/library/ms995349.aspx> integrates security into building software throughout the process, and (like any good engineering discipline should) learns from its mistakes by feeding back into the next version (and thus the products that go through the process).  To me at least, it seems to be working, as products coming through the process "seem" (both by feel, and looking at the vuln stats) more secure.  As mentioned above, Microsoft isn't the only one - Cigital have "touchpoints <http://www.cigital.com/training/touchpoints/>" that encompass pretty much the same thing.  This works great for "new" code/sites, but what do we do with all the "legacy" stuff out there on the web?

To some degree, the technology that touches the web has a much higher probability of being totally rewritten at some point in its future than any other system I've encountered.  A long time ago I worked debugging EIU.com, and revisiting it there have been at least 3 major technology revisions (for a totally non-scientific look, just look at the pattern of how the Internet Archive has been indexing the changes <http://web.archive.org/web/*/eiu.com> for this, or any similar, site). So, my argument at some level is that sites are going to get redesigned/rewritten at increasingly short intervals, removing legacy code, and if they go through some SDL process, things are going to be good, right?

Well, not really.  Some sites just aren't going to be updated in any reasonable timeframe (time/cost/complexity), and we can't just sit vulnerable waiting for a rewrite.  That means we have to put protection on these systems in a retroactive way.  Some of these ideas I'll explore in the technology section below, but part of the answer is also above in the people section - understanding the risks, what it takes to make changes, and the cost vs. benefit of those changes allows us to make the right decisions and put money/resources on the necessary tasks to retrofit, redesign, or retire.

Technology

I’m a technologist at heart, so this is probably where I’m most
comfortable in my thoughts on what I think is required to "fix" the web.
People and process take us some of the way, but (and maybe this is just my
bias as an engineer/technologist) technology has to actually take us there -
we can have all the people required (educated in the issues, knowledge at
their fingertips), and have a process all laid out (addressing the previous
failures, mitigation techniques, touchpoints), but without the blocks,
pulleys, rollers, and other technology required, we would never have been
able to build the pyramids.

These might be "pie-in-the-sky" ideas, and I’d welcome any thoughts people
have on anything below, including links to people already working in these
areas so I can retroactively update this post. I spend a lot of time in the
field, but I can’t even begin to claim that I see everything.  Please get
in contact in the usual way.

Platform

I spend a lot of time writing code in ASP.NET at the moment, and although I was initially a PHP programmer (and C/C++ before that), I like the platform more than any other I've worked in, as I don't have to reinvent the security functionality I lean on heavily.  Of all the sites I review in my day-to-day work, ASP.NET sites are the most difficult to find issues with.  The more effort that is put into making the platforms themselves "safe" and less likely to shoot the developer in the foot, the fewer vulnerabilities I believe will be out there.  If the default is to do something secure, and the developer has to go out of their way to be insecure, then the result has to be more secure code.

If we could fix the "validate input / encode output" problem, 90% of vulnerabilities would disappear overnight.  From a generic point of view, there are two things I would like to see.  It's possible for many of these to be done already in custom code, but if we could push them into the platform and make them "transparent", a lot of exploit vectors could be closed.

*       If inputs to webapps were annotated, a bit like SAL
(http://blogs.msdn.com/michael_howard/archive/2006/05/19/602077.aspx), but
simpler, then the platform itself could throw exceptions when it receives data
that it doesn't expect.  It needn't be as complex as SAL either - simply
"I expect this page to be called from here, with this type of data matching
this pattern" would, I feel, go a long way.  Certainly, this can already be
done in code using regexes, validators, etc., but once again that's extra
code a developer has to write and potentially get wrong.
*       Very (very) few webapps dynamically generate client-side code, and a
lot of the XSS issues out there rely on injecting "foreign" code into a
page.  ASP.NET once again attempts to provide some automatic protection by
looking out for potential scripting being delivered via page parameters.
This is input validation though, and there are ways around it. What would be
nice is a form of "parameterized" page, just as we have in SQL - it
should be easy to know which parts of a page change based on user input or
application behavior, so the rest of the page should be "static" (just like
in SQL parameterized queries, where no input can possibly alter the "meaning"
of the query).
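To make the first idea concrete, here is a minimal sketch of what such annotations might look like if the platform enforced them before any handler code ran.  It's in Python rather than ASP.NET, and everything in it (PAGE_CONTRACTS, enforce_contract, the parameter names) is hypothetical - a thought experiment, not an existing framework feature.

```python
import re

# Hypothetical "annotated inputs": each page declares where requests may
# come from and a pattern each parameter value must match. The platform
# enforces the contract before any application code runs.
PAGE_CONTRACTS = {
    "/transfer": {
        "expected_referrers": {"/account", "/payees"},
        "params": {
            "amount":   re.compile(r"^\d{1,7}(\.\d{2})?$"),
            "payee_id": re.compile(r"^[A-Z0-9]{8}$"),
        },
    },
}

class InputContractViolation(Exception):
    pass

def enforce_contract(path, referrer, params):
    """Reject a request before any handler code sees it."""
    contract = PAGE_CONTRACTS.get(path)
    if contract is None:
        return  # no annotation: fall back to normal handling
    if referrer not in contract["expected_referrers"]:
        raise InputContractViolation(
            f"{path} called from unexpected page {referrer}")
    for name, pattern in contract["params"].items():
        value = params.get(name, "")
        if not pattern.fullmatch(value):
            raise InputContractViolation(f"parameter {name!r} failed its pattern")

# A request matching the contract passes silently...
enforce_contract("/transfer", "/account",
                 {"amount": "120.50", "payee_id": "AB12CD34"})

# ...while an injection attempt is stopped by the platform, not handler code.
try:
    enforce_contract("/transfer", "/account",
                     {"amount": "1 OR 1=1", "payee_id": "AB12CD34"})
except InputContractViolation as e:
    print("blocked:", e)
```

The point of the sketch is that the developer only writes the declaration at the top; the checking (the part that's easy to get wrong) lives in the platform.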

Testing

Of all the activities we go through when producing software, testing is the one that generally gets overlooked the most.  It's completely understandable that automation is required in testing so it can be scalable and repeatable.  I feel, though, that most webapp security testing is "dumb" - throw a bunch of pre-canned tests and watch for certain indicators.

What I think is needed is automated testing that "understands" applications. Model-based testing has been working in this area for a while, but it's effort-consuming to set up.  Security testing is a smaller (but not <http://blogs.msdn.com/sdl/archive/2007/12/07/reliability-vs-security.aspx> by much) subset of functional testing, so I feel there's a lot to learn from that discipline.  The simplistic "crawl looking for pages" followed by "inject and watch the output" that many of the current tools use just isn't cutting it.  Testing tools should be able to discover the site the same way users do (reuse of logs/monitoring might help here), and "understand" a page based on its inputs and outputs to know if a vulnerability has been discovered.  Basic behavioral comparison of "known good" vs. "bad" vs. "bad, but caught" for a given page+parameters might be a first step forward (if people aren't doing this already).
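As a rough illustration of that last idea, here is a hypothetical sketch of such a comparison.  The fingerprint fields and thresholds (status, a coarse length bucket, an error marker) are my own illustrative choices, far simpler than anything a real tool would need:

```python
# Compare a page's behavior under injection against a known-good baseline,
# rather than grepping responses for one canned error string.

def behavior_signature(response):
    """Reduce a (status, body) response to a coarse behavioral fingerprint."""
    status, body = response
    return {
        "status": status,
        "size_bucket": len(body) // 500,   # coarse length bucket
        "has_error": "error" in body.lower() or "exception" in body.lower(),
    }

def classify(baseline, injected):
    base, inj = behavior_signature(baseline), behavior_signature(injected)
    if inj == base:
        return "known good"        # injection changed nothing observable
    if inj["status"] in (400, 403) or inj["has_error"]:
        return "bad, but caught"   # the app noticed and rejected the input
    return "bad"                   # behavior changed silently: investigate

baseline = (200, "<html>Order list: 3 items</html>")
print(classify(baseline, (200, "<html>Order list: 3 items</html>")))   # known good
print(classify(baseline, (403, "<html>Request blocked</html>")))       # bad, but caught
print(classify(baseline, (200, "<html>" + "row " * 500 + "</html>")))  # bad
```

Even something this crude gives a tester three buckets to triage instead of a flat pass/fail, and the fingerprint can grow richer (DOM structure, timing, headers) without changing the comparison logic.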

Monitoring

We are never going to be 100% secure all the time - humans have bugs - so some attacks will slip through, and/or new attacks will be discovered after the fact.  Monitoring sites after they "go live" for anything other than stats is, from what I encounter, even more overlooked than testing.  Only when something goes terribly wrong, or a really obvious attack takes place, does anyone take notice and look at the logs.  Monitoring an app not only gives us the usage stats that everyone wants to see, but the raw data can give indications of attacks, both successful and unsuccessful, that may have otherwise gone unnoticed.
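As a toy illustration of mining the logs we already collect for attack indicators - the patterns, the simplified log format, and the scan_log name here are all my own assumptions, not a real parser:

```python
import re

# Scan access logs for attack indicators instead of only usage stats.
# Patterns are illustrative, not exhaustive.
ATTACK_PATTERNS = {
    "sqli":      re.compile(r"('|%27).{0,8}(or|union|--)", re.IGNORECASE),
    "xss":       re.compile(r"(<script|%3Cscript)", re.IGNORECASE),
    "traversal": re.compile(r"\.\./|%2e%2e%2f", re.IGNORECASE),
}

def scan_log(lines):
    """Count attack indicators, split by whether the request succeeded."""
    findings = {}
    for line in lines:
        # Simplified log format assumed here: "METHOD /path?query STATUS"
        method, path, status = line.split()
        for name, pattern in ATTACK_PATTERNS.items():
            if pattern.search(path):
                outcome = "succeeded" if status.startswith("2") else "blocked"
                findings.setdefault(name, {"succeeded": 0, "blocked": 0})
                findings[name][outcome] += 1
    return findings

log = [
    "GET /search?q=shoes 200",
    "GET /search?q='+OR+1=1-- 200",                      # SQLi answered with a 200
    "GET /profile?name=<script>alert(1)</script> 403",   # XSS attempt, rejected
    "GET /files?path=../../etc/passwd 404",              # traversal attempt, missed
]
print(scan_log(log))
```

The interesting row is the SQLi request that came back with a 200 - exactly the "successful attack that would otherwise go unnoticed" case, sitting in logs everyone already has.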

WAFs attempt to do this to some degree, currently employing very simple "blocking" activities for single requests or sequences, just like traditional firewalls.  An incremental step for them is to understand what "correct" vs. "malicious" behavior looks like based on prior activity, much like an anomaly-based IDS.  I believe, though, that knowing what "good" behavior looks like is simpler in the web world than in the general network world - especially with the help of application analysis and annotation - and anything outside this "envelope" may be counted as malicious.
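Here is a minimal sketch of what such an "envelope" might look like for a single parameter.  The ParamEnvelope class and its length/charset heuristics are my own illustrative assumptions, far cruder than a real anomaly-based IDS:

```python
# Learn simple value characteristics (length range, character set) for one
# parameter from known-good traffic, then flag anything outside the envelope.
class ParamEnvelope:
    def __init__(self):
        self.min_len = None
        self.max_len = None
        self.chars = set()

    def learn(self, value):
        """Widen the envelope to include one observed good value."""
        n = len(value)
        self.min_len = n if self.min_len is None else min(self.min_len, n)
        self.max_len = n if self.max_len is None else max(self.max_len, n)
        self.chars |= set(value)

    def is_anomalous(self, value, slack=2):
        # Allow a little slack on length; any never-seen character is suspect.
        if not (self.min_len - slack <= len(value) <= self.max_len + slack):
            return True
        return not set(value) <= self.chars

env = ParamEnvelope()
for seen in ["alice", "bob99", "carol_w", "dave"]:    # prior "good" traffic
    env.learn(seen)

print(env.is_anomalous("carla9"))                     # False: fits the envelope
print(env.is_anomalous("<script>alert(1)</script>"))  # True: new chars, too long
```

Notice that nothing here is a signature - the payload is flagged purely because it doesn't look like this parameter's history, which is the point of the envelope approach.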

One of the issues with any kind of "blocking" functionality, though, is that companies are very reluctant to turn it on because of the false-positive results it may have (an inadvertent DoS).  This is why most WAFs are not in "block" mode, but instead in "report" mode.  A while back at FloridaTech I ran a project for an ONR <http://www.onr.navy.mil/> grant where we used the behavioral signature recognition from my PhD research, and the "undo" technology of FIT in Prof. Whittaker's old research group, to develop a behavioral anti-virus solution that delays blocking operations until it knows a sequence of activities is malicious, which proved to be very successful (this project has been taken up <http://www.virusbtn.com/conference/vb2004/abstracts/rford.xml> and continued by Dr Ford).  I would suggest that research along similar lines (behavioral monitoring and "undo" capability) could be just as successful in protecting webapps.  Consider this a "WAF on steroids" that could, with integration with the OS, DB, etc., "undo" identified attacks with a great deal of accuracy and very little impact or overhead.

Conclusion

It's really frustrating to see industry not learning from and taking advantage of previous failures on the one hand, yet making obvious strides at improving technology/techniques on the other, all the while pointing fingers at each other over whose fault anything is, or the (de)merits of a given technology/technique.  I really believe that although things are a lot worse out there than many can currently imagine, it's not all that difficult to turn around with some effort.  I guess it's the same with the current fuel crisis - everyone can see the effects, and the cause, but very little is apparently being done on alternative solutions.  Perhaps there are companies/institutions/research labs doing work that I'm just not aware of, waiting in the wings with AK-47s full of silver bullets, but we shouldn't be waiting for them - deep down we know what has to be done.

 

 
