Re: [SC-L] Web Services vs. Minimizing Attack Surface

2006-08-15 Thread Nash

Thinking about attackable surface area is a good metaphor, but I
think it's breaking down on you.

Think about a classic forms-driven (MVC) web application. If it's at
all complex, it'll contain a variety of form processing programs that
are all interlinked with a complex state-sharing mechanism. Such an
application might be hosted on just a single port or service, but
it has huge surface area. It's also devilishly difficult to verify the
correctness of such an application.

On the other hand, many web services look like lots and lots of
services, but each of them has extremely limited surface area on its
own. WS programs are typically smaller than their forms-processing
cousins-- even with all the automagic frameworks for MVC.

Web services tend to be specified syntactically as opposed to
semantically. In other words, the behavior of the RPC service is
defined by how you've structured your requests and is often not based
upon the content of a server-internal state sharing mechanism. This
is a huge advantage for security because it means that the scope of a
WS service is narrowly limited to its syntactic function. It shouldn't
tend to bleed out into other functional areas. 
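That syntactic narrowness is easy to sketch. Here's a toy, hypothetical payment endpoint in Python (the schema and names are invented for illustration): its behavior is a pure function of the validated request, with no session state for other functions to bleed into.

```python
# Hypothetical example: a "syntactically specified" RPC handler. The
# request schema below is invented for illustration.
SCHEMA = {"user_id": int, "amount": float}

def validate(request, schema):
    """Reject any request that doesn't match the declared shape exactly."""
    if set(request) != set(schema):
        raise ValueError("unexpected or missing fields")
    for field, ftype in schema.items():
        if not isinstance(request[field], ftype):
            raise ValueError(f"{field} must be {ftype.__name__}")

def handle_payment(request):
    """The response is a pure function of the validated request: no shared
    server-side state, so the endpoint's scope can't bleed elsewhere."""
    validate(request, SCHEMA)
    return {"user_id": request["user_id"], "charged": request["amount"]}

print(handle_payment({"user_id": 42, "amount": 9.99}))
```

Anything outside the declared shape is rejected before any logic runs, which is the whole point: the attack surface is the schema, nothing more.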

Finally, because web services are smaller and easier to write, they
should be (much) easier to verify for correctness. Many WS frameworks
also provide really nice abstractions of authentication and
authorization, so that you can check those separately without even
having to look at business logic in the process.

So, point being that I think that claiming that WS/SOA architectures
have greater surface area is ignoring the big picture. Our notion of
surface area needs to become more sophisticated to account for the
architectural differences between WS and classic-MVC apps.

If web developers want to use web services, I can't see why they shouldn't
do so immediately. It shouldn't be THAT difficult for WS/SOA to make a
net positive impact on security.

Security folks shouldn't be scared of WS/SOA, we should be welcoming
it. It's a great opportunity to reintegrate security in a way that we
just never had with the Web 1.0 universe.


On Tue, Aug 15, 2006 at 10:03:07AM +0200, John Wilander wrote:
 The security principle of minimizing your attack surface (Writing
 Secure Code, 2nd Ed.) is all about minimizing open sockets, rpc
 endpoints, named pipes etc. that facilitate network communication
 between applications. Web services and Service Oriented Architecture
 on the other hand are all about exposing functionality to offer
 interoperability.  Have any of you had discussions on the seemingly
 obvious conflict between these things? I would be very happy to hear
 your conclusions and opinions!
 Regards, John
 John Wilander, PhD student, Computer and
 Information Science, Linköping University, Sweden
 ___ Secure Coding
 mailing list (SC-L) List information,
 subscriptions, etc - List
 charter available at -

Please do not mock other religions
in your quest for the Spaghetti god.

- anonymous

Re: [SC-L] Resource limitation

2006-07-17 Thread Nash

On Mon, Jul 17, 2006 at 05:48:59PM -0400, [EMAIL PROTECTED] wrote:
 I was recently looking at some code to do regular expression
 matching, when it occurred to me that one can produce fairly small
 regular expressions that require huge amounts of space and time.
 There's nothing in the slightest bit illegal about such regexp's -
 it's just inherent in regular expressions that such things exist.

Yeah... the set of regular languages is big. And, some have pretty
pathological FSM representations.
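A quick illustration in Python (whose re engine backtracks rather than building a DFA, so the pathology shows up as running time instead of machine states): a tiny pattern that takes exponentially long just to fail.

```python
import re
import time

# Classic catastrophic-backtracking pattern: the nested quantifiers give
# the matcher exponentially many ways to partition a run of 'a's before
# it can conclude the required trailing 'b' is missing.
PATHOLOGICAL = re.compile(r"(a+)+b")

def time_failed_match(n):
    """Seconds taken to *fail* to match n 'a's (there's no 'b')."""
    start = time.perf_counter()
    assert PATHOLOGICAL.fullmatch("a" * n) is None
    return time.perf_counter() - start

# Adding ten characters to the input multiplies the work ~1000x.
print(f"n=10: {time_failed_match(10):.6f}s, n=20: {time_failed_match(20):.6f}s")
```

A three-token expression and twenty bytes of input are enough to pin a CPU, which is exactly the resource-exhaustion point above.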

 In addition, the kinds of resources that you can exhaust this way is
 broader than you'd first guess.  Memory is obvious; overrunning a
 thread stack is perhaps less so.  ... How about file descriptors?
 File space? Available transmission capacity for a variety of kinds
 of connections?

One place to look is capability systems. They're more flexible and
should have all the features you want, but are still largely
experimental.
That said, every decent Unix system I'm aware of has ulimit, which you
can use to restrict virtual memory allocations, total open files, etc:

nash @ quack% ulimit -a
virtual memory (kbytes, -v) unlimited

nash @ quack% ulimit -v 1024 # just 1M RAM, this'll be fun :-)

nash @ quack% ( find * )
find: error while loading shared libraries: failed to map
segment from shared object: Cannot allocate memory

Alternately, you can implement your own allocator library for your
application and then impose per-thread limits using that library. How
you do that is going to depend a lot on the language. Obviously, there
are lots of such libraries for C/C++ floating around.
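Short of a custom allocator, higher-level languages often expose the same knobs ulimit uses. Here's a Python sketch (Linux-oriented; the helper names are made up) that temporarily caps the address space so a runaway allocation fails cleanly instead of thrashing:

```python
import resource  # Unix-only stdlib module wrapping getrlimit/setrlimit

def with_memory_cap(cap_bytes, fn):
    """Run fn() under a temporary RLIMIT_AS cap, then restore the limit."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (cap_bytes, hard))
    try:
        return fn()
    finally:
        resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

def try_alloc(nbytes):
    try:
        bytearray(nbytes)
        return "allocated"
    except MemoryError:
        return "refused"

# With a 2 GiB address-space cap, a 4 GiB allocation is refused outright.
print(with_memory_cap(2 * 2**30, lambda: try_alloc(4 * 2**30)))
```

The failure arrives as a catchable MemoryError rather than an OOM kill, so the application gets to decide what "too big" means for one request.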

In Java, you don't get nice knobs on Objects and Threads, but you get
several nice knobs on the VM itself: -Xms, -Xmx, etc. Other high level
languages have similar problems to Java. I.e., how do you abstract the
size of a thing when you don't give access to memory as a flat byte
array? Well, you can do lots of fun things using LIFO queues, or LRU
caches, and so forth. There are performance impacts to consider, but
you can often tweak things so it sucks primarily for the abuser.
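As a sketch of the LRU idea, here's a size-bounded cache in Python: a hostile client can churn it, but can never grow it past its limit, so the memory cost of abuse stays fixed.

```python
from collections import OrderedDict

class BoundedLRU:
    """Cache that never holds more than max_entries items; the least
    recently used entry is evicted to make room."""
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)     # refresh recency
        self._data[key] = value
        while len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least-recently-used

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)
            return self._data[key]
        return default

cache = BoundedLRU(max_entries=2)
cache.put("a", 1); cache.put("b", 2); cache.put("c", 3)  # "a" evicted
print(cache.get("a"), cache.get("c"))  # prints: None 3
```

The abuser pays in cache misses; the server's memory footprint never moves.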

None of these is really that hard to implement. So, do we really need
new theory for this? Dunno. One's mileage does vary.



the lyf so short, the craft so long to lerne.
- Geoffrey Chaucer

Re: [SC-L] ddj: beyond the badnessometer

2006-07-13 Thread Nash
On Thu, Jul 13, 2006 at 07:56:16AM -0400, Gary McGraw wrote:
 Is penetration testing good or bad?

Test coverage is an issue that penetration testers have to deal with,
without a doubt. Pen-tests can never test every possible attack
vector, which means that pen-tests can not always falsify a security
assertion.

Ok. But... 

First, pen-testers are highly active. The really good ones spend a lot
of time in the hacker community keeping up with the latest attack
types, methods, and tools. Hence, the relevance of the test coverage
you get from a skilled pen-tester is actually quite good. In addition,
the tests run are similar to real attacks you're likely to see in the
wild. Also, pen-testing is often intelligent, focused, and highly
motivated. After all, how would you like to have to go back to your
customer with a blank report? And, the recommendations you get can be
quite good because pen-testers tend to think about the entire
deployment environment, instead of just the code. So, they can help
you use technologies you already have to fix problems instead of
having to write lots and lots of new code to fix them. All of these
make pen-testing a valuable exercise for software environments to go
through.

Second, every software application in deployment has an associated
level of acceptable risk. In many cases, the level of acceptable risk
is high enough that penetration testing provides all the verification
capabilities needed. In some cases, the acceptable risk is high enough
that even pen-testing is overkill. I do mostly code review
work these days, but I find that pen-testing has more general
applicability to my customers. There are exceptions, but not that
many.

Third, pen-tests also have real business advantages that don't
directly address risk mitigation. Pen-test reports are typically more
down to earth. That is, they can be read more easily and the attacks
can be demonstrated more easily to business leaders, executives, and
other stakeholders. In my experience, recommendations from both
pen-tests and code reviews are commonly ignored. But, a good pen-test
gets the executive blood flowing in a way that code-oriented security
evaluations just don't.

Fourth, assertion falsification isn't always what you're after. Being
able to falsify the statement "this app is secure enough" is a
common objective, but it's not really that useful for most businesses.
What exactly is secure enough? How do you define it? How do you
measure it?  How much accuracy do you need? How do you get more
accuracy, if you want it? How much do you trust your expert's opinion?

Sometimes, it's better to simply demonstrate a positive assertion,
such as:

- This application is not subject to known, automatic attacks.
- This application demonstrates the same security profile in all
  supported deployment environments.
- This application demonstrates different security profiles,
  depending upon the deployment environment.
- The latest MS patch does not affect the testable security
  profile of this application.

These are all assertions that pen-testing is arguably pretty good for
demonstrating. In some cases it might even be better than code
analysis--e.g., the effects of new environments or upgrades to
low-level libraries, virtual machines, operating systems.

Finally, my friend Sam pointed out that only during some kind of
pen-testing can you really identify what the actual attack surface of
an application looks like in its final deployment environment. This is
especially relevant in today's world because applications are now made
as much through integration of existing, off-the-shelf components as
through new development. A new application might only have a few
thousand lines of original code, but might be resting on top of a
software stack that has millions.  Whether it's J2EE, .NET, or LAMP,
all those environments are only really practical to test using some
form of pen-test.

Every security assessment methodology has its limits. Pen-testing has
limited falsification capabilities. Code review, various kinds of
code analysis, unit testing, and whatever else all have
practical financial limitations and information accessibility problems.

Of course, all these are good approaches and a wise security manager
will employ as wide a variety of assessment methods as he can afford
so that they complement each other. But, affordability is a real
concern for most businesses and pen-testing is pretty affordable.

In the end, no assessment methodology produces results that are as
good as having a skilled Security Developer on your team during the
application design stage. Getting a security architecture in place
that matches your risk tolerance and functional requirements is the
single best way to prevent intrusions, bar none.

nash e. foster
Stratum Security, LLC


the lyf so short, the craft so long to lerne

Re: [SC-L] Theoretical question about vulnerabilities

2005-04-11 Thread Nash
Pascal Meunier wrote:
 Do you think it is possible to enumerate all the ways
 all vulnerabilities can be created? Is the set of all
 possible exploitable programming mistakes bounded?

By bounded I take you to mean finite. In particular with reference
to your taxonomy below. By enumerate I take you to mean list out in
a finite way. Please note, these are not the standard mathematical
meanings for these terms. Though, they may be standard for CS folks.

If I interpreted you correctly, then the answer is, no, as Crispin said.

However, let's take enumerate to mean list out, one by one and allow
ourselves to consider infinite enumerations as acceptable. In this case,
the answer becomes, yes.

This proof is abbreviated, but should be recognizable as a pretty
standard argument by those familiar with computable functions and/or
recursive function theory.

   Thm. The set of exploits for a program is enumerable.


   Let P(x) be a program computing the n-ary, partially computable
   function F(x). Let an exploit be a natural number input, y, such
   that at some time, t, during the computation performed by P(y) the
   fixed memory address, Z, contains the number k.**

   Then, there exists a computable function G(x,t) such that:

   - G(x, t) = 1 if and only if P(x) gives value k to address Z at
     some time less than or equal to t.

   - G(x, t) = 0 otherwise.

   The set of values of x for which G(x,t) = 1 for some t is effectively
   enumerable (in the infinite sense) because it is the domain of the
   partial computable function that searches for the least such t.


You can look up the relevant theory behind this proof in [Davis].

So, where does this leave us? Well, what we don't have is a computable
predicate, Exploit(p,y), that always tells us if y is an exploit for
the program p. That's what Crispin was saying about Turing. This predicate
is as hard as Halt(p,y), which is not computable.

However, we can enumerate all the inputs that eventually result in the
computer's state satisfying the (Z == k) condition. I suspect this is
probably all you really need for a given program, as a practical matter.
Since, for example, most attackers probably will not wait for hours and
hours while an exploit develops.*
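For concreteness, the dovetailing enumeration in the proof can be sketched with a toy model. Everything below is an illustrative miniature (a made-up "program" and exploit condition, not real tooling): programs are step-bounded simulations, G(x,t) is the decidable bounded check, and diagonalizing over (x,t) pairs enumerates the exploit set without needing any halting bound up front.

```python
from itertools import count

Z, K = "Z", 7  # the watched address and the "exploited" value

def toy_program(x):
    """Toy model of P(x): yield successive memory states. It writes K
    into Z (after x idle steps) iff x is even, so evens are 'exploits'."""
    mem = {Z: 0}
    for _ in range(x):
        yield mem
    if x % 2 == 0:
        mem[Z] = K
    yield mem

def G(x, t):
    """The proof's computable G: 1 iff P(x) puts K in Z within t steps."""
    for step, mem in enumerate(toy_program(x)):
        if step > t:
            return 0
        if mem[Z] == K:
            return 1
    return 0

def enumerate_exploits(limit):
    """Dovetail over (x, t) pairs; emit each x the first time G fires.
    No single x is ever simulated for more steps than the current
    diagonal allows, so a slow (or non-halting) input can't block us."""
    found = []
    for bound in count():
        for x in range(bound + 1):
            if x not in found and G(x, bound - x) == 1:
                found.append(x)
                if len(found) == limit:
                    return found

print(enumerate_exploits(3))  # the three smallest "exploits": [0, 2, 4]
```

The trade the proof describes is visible here: membership eventually surfaces for every exploit, but a non-exploit input is never positively cleared; it just keeps failing ever-larger bounded checks.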

I think the real issue here is complexity, not computability. It takes a
long time to come up with the exploits. Maybe the time it takes is too
long for the amount of real economic value gained by the knowledge of
what's in that set. That seems to be part of Crispin's objection (more or less).

 I would think that what makes it possible to talk about design patterns and
 attack patterns is that they reflect intentional actions towards desirable
 (for the perpetrator) goals, and the set of desirable goals is bounded at
 any given time (assuming infinite time then perhaps it is not bounded).

I think this is a very reasonable working assumption. It seems
consistent with my experience that given any actual system at any actual
point in time there are only finitely many desirable objectives in
play. There are many more theoretical objectives, though, so how you
choose to pare down the list could determine whether you end up with a
useful scheme, or not.

 All we can hope is to come reasonably close and produce something useful,
 but not theoretically strong and closed.

I think that there's lots of work going on in proof theory and Semantics
that makes me hopeful we'll eventually get tools that are both useful
and strong. Model Checking is one approach and it seems to have a lot of
promise. It's relatively fast, e.g., and unlike deductive approaches it
doesn't require a mathematician to drive it. See [Clarke] for details.
[Clarke] is very interesting, I think. He explicitly argues that model
checking beats other formal methods at dealing with the state space
explosion problem.

Those with a more practical mind-set are probably laughing that beating
the other formal methods isn't really saying much because they are all
pretty awful. ;-)

 Is it enough to look for violations of some
 invariants (rules) without knowing how they happened?

In the static checking sense, I don't see how this could be done.

 Any thoughts on this?  Any references to relevant theories of failures and
 errors, or to explorations of this or similar ideas, would be welcome.

There are academics active in this field of research. Here's a few




** This definition of exploit is chosen more or less arbitrarily. It
seems reasonable to me. It might not be. I would conjecture that any
reasonable definition of exploit would lead to an equivalent argument, though.

* Halt(x,y) is not computable, but it is enumerable. That is, I can
list out, one by one, all the inputs y on which program x halts.

Re: [SC-L] Top security papers

2004-08-10 Thread Nash
On Sat, Aug 07, 2004 at 06:41:49PM -0700, Matt Setzer wrote:
 Specifically, what are the top five or ten
 security papers that you'd recommend to anyone wanting to learn more about
 security?  What are the papers that you keep printed copies of and reread
 every few years just to get a new perspective on them?  

These won't teach you much about security, per se, but they're fun to read
and provide some really interesting insights into the personalities involved,
which is sometimes more important.

An Evening with Berferd In Which a Cracker is Lured, Endured, and
Studied, Bill Cheswick.

_Cuckoo's_Egg_, Clifford Stall.

[Ed. That's Cliff Stoll, not Stall.  Great book, though -- IMHO!  KRvW]


Beware of bugs in the above code, I have only proved
it correct, not tried it.

- Donald Knuth