Re: [SC-L] Resource limitation

2006-07-18 Thread Pete Shanahan
[EMAIL PROTECTED] wrote:
 I was recently looking at some code to do regular expression matching,
 when it occurred to me that one can produce fairly small regular
 expressions that require huge amounts of space and time.  There's
 nothing in the slightest bit illegal about such regexp's - it's just
 inherent in regular expressions that such things exist.
 

Been there, done that, watched computers go down again and again from this.

 Or consider file compression formats.  Someone out there has a hand-
 constructed zip file that corresponds to a file with more bytes than
 there are particles in the universe.  Again, perfectly legal as it
 stands.
 
 Back in the old days, when users ran programs in their own processes and
 operating systems actually bothered to have a model of resource usage
 that they enforced, you could at least ensure that the user could only
 hurt himself if handed such an object.  These days, OS's tend to ignore
 resource issues - memory and time are, for most legitimate purposes,
 too cheap to meter - and in any case this has long moved outside of
 their visibility:  Clients are attaching to multi-thread servers, and
 all the OS sees is the aggregate demand.
 

Most typical Unix/Linux environments do contain resource meters - per-process
limits on resource usage (ulimit/setrlimit and friends).
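
For instance, a process can ration its own address space up front with
setrlimit() - a minimal sketch in C, assuming a platform that supports
RLIMIT_AS (the available limit names vary a bit between systems):

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap this process's address space at 256 MB; an allocation that
     * would push us past the limit fails with ENOMEM instead of
     * dragging the rest of the machine down with it. */
    struct rlimit rl;
    rl.rlim_cur = 256UL * 1024 * 1024;   /* soft limit */
    rl.rlim_max = 256UL * 1024 * 1024;   /* hard limit */
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* This should now fail cleanly rather than exhausting the box. */
    void *p = malloc(1UL << 30);         /* try to grab 1 GB */
    printf("1 GB malloc %s\n", p ? "succeeded" : "failed, as intended");
    free(p);
    return 0;
}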

 Allocating huge amounts of memory in almost any multi-threaded app is
 likely to cause problems.  Yes, the thread asking for the memory will
 die - but unless the code is written very defensively, it stands a
 good chance of bring down other threads, or the whole application,
 along with it:  Memory is a global resource.

Ah, now this would be due to the standard definition of a thread. If you used
something more akin to lightweight processes, you could isolate this
resource-consumption problem a little better.

A thread is the basic unit of processing; it was never intended to be a unit of
resource consumption.
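
For example - a rough sketch, not anybody's production design - if the risky
work runs in a forked child instead of a thread, the child can carry its own
rlimit, and the worst it can do is kill itself:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical worker: whatever memory-hungry job the request needs. */
static void handle_request(void)
{
    void *p = malloc(1UL << 31);   /* a deliberately greedy allocation */
    if (p == NULL) {
        fprintf(stderr, "worker: out of (rationed) memory, giving up\n");
        _exit(1);
    }
    free(p);
    _exit(0);
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: ration its own address space, then do the work. */
        struct rlimit rl = { 64UL * 1024 * 1024, 64UL * 1024 * 1024 };
        setrlimit(RLIMIT_AS, &rl);
        handle_request();
    }
    /* Parent: the worst the child can do is die; we just reap it. */
    int status;
    waitpid(pid, &status, 0);
    printf("worker exited with status %d\n", WEXITSTATUS(status));
    return 0;
}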

 
 We recently hardened a network protocol against this kind of problem.
 You could transfer arbitrary-sized strings over the link.  A string
 was sent as a 4-byte length in bytes, followed by the actual data.
 A request for 4 GB would fail quickly, breaking the connection.  But
 a request for 2 GB might well succeed, starving the rest of the
 application.  Worse, the API supports groups of requests - e.g.,
 arguments to a function.  Even though the individual requests might
 look reasonable, the sum of them could crash the application.  This
 makes the hardened code more complex:  You can't just limit the
 size of an individual request, you have to limit the total amount
 of memory allocated in multiple requests.  Also, because in general
 you don't know what the total will be ahead of time, you end up
 having to be conservative, so that if a request gets right up close
 to the limit, you won't cause the application problems.  (This, of
 course, could cause the application *other* problems.)
 

Yes, and this falls under general application design. Most network protocols are
designed around the concept of front-loading information into the stack: every
layer puts its control information at the front, not at the end.
This means that you can make decisions based on a very small piece of data,
allowing you to process a request quickly, or kill it should it cause you problems.

If you're allowing such huge data packets and you haven't got the back-end
system in place to process them quickly and without resource starvation, then
you're just looking to shoot yourself in the foot.
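
A minimal sketch of that hardening in C - the limits here are invented, and a
real implementation has to thread the running budget through every place that
parses a length prefix:

#include <stdint.h>
#include <stdio.h>

/* Invented limits, for illustration only. */
#define MAX_STRING_LEN    (1UL << 20)    /* 1 MB per string        */
#define MAX_REQUEST_TOTAL (8UL << 20)    /* 8 MB per request group */

/* Inspect the 4-byte length prefix before allocating anything.
 * 'budget' is what remains for the whole request group, so the sum
 * of the strings is bounded, not just each individual one. */
static long check_string_header(const unsigned char hdr[4], size_t *budget)
{
    uint32_t len = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                   ((uint32_t)hdr[2] << 8)  |  (uint32_t)hdr[3];

    if (len > MAX_STRING_LEN) return -1;   /* single string too big */
    if (len > *budget)        return -1;   /* group total exceeded  */
    *budget -= len;
    return (long)len;   /* caller may now safely allocate 'len' bytes */
}

int main(void)
{
    size_t budget = MAX_REQUEST_TOTAL;
    unsigned char ok[4]  = { 0x00, 0x01, 0x00, 0x00 };   /* 64 KB */
    unsigned char bad[4] = { 0x80, 0x00, 0x00, 0x00 };   /* 2 GB  */

    printf("64 KB string: %ld\n", check_string_header(ok, &budget));
    printf("2 GB string:  %ld\n", check_string_header(bad, &budget));
    return 0;
}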

Every system on the planet has had to deal with these problems, from fork bombs
through to excess network connections. A lot of them can be prevented using
resource limits. Depending on the OS, you can limit resource usage by either an
individual process or a group of processes (typically referred to as a
task group).
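
The same setrlimit() machinery covers the fork-bomb and file-descriptor cases
as well; a sketch (RLIMIT_NPROC isn't in POSIX, hence the #ifdef):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap open file descriptors (POSIX RLIMIT_NOFILE)... */
    struct rlimit fds = { 256, 256 };
    if (setrlimit(RLIMIT_NOFILE, &fds) != 0)
        perror("setrlimit(RLIMIT_NOFILE)");

    /* ...and, where the platform supports it, the number of processes
     * this user may create - the classic fork-bomb brake. */
#ifdef RLIMIT_NPROC
    struct rlimit procs = { 128, 128 };
    if (setrlimit(RLIMIT_NPROC, &procs) != 0)
        perror("setrlimit(RLIMIT_NPROC)");
#endif

    printf("limits installed\n");
    return 0;
}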

Should an operating system not provide integrated features to protect you from
this kind of resource consumption, you can quite easily create monitoring tools,
integrated into the application, that watch for and prevent these kinds of things.
Under an OS like Solaris, you could use a facility like DTrace to monitor
resource use at both the application and OS level and make resource-allocation
decisions; that facility would not need to be integrated into the application.
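
The integrated-monitoring option doesn't need anything as fancy as DTrace,
either; a plain watchdog thread polling getrusage() gives you a crude version.
A sketch only - the budget is invented, ru_maxrss units differ between
platforms (kilobytes on Linux), and "shedding load" here just means abort:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <unistd.h>

/* Invented policy threshold, for illustration. */
#define MAX_RSS_KB (512UL * 1024)        /* 512 MB resident */

/* Watchdog thread: poll our own resource usage and bail out before
 * the whole box suffers.  Build with -pthread. */
static void *watchdog(void *arg)
{
    (void)arg;
    for (;;) {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0 &&
            (unsigned long)ru.ru_maxrss > MAX_RSS_KB) {
            fprintf(stderr, "watchdog: over memory budget, bailing out\n");
            abort();
        }
        sleep(1);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, watchdog, NULL);
    /* ... real application work would go here ... */
    pause();
    return 0;
}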

The problem is that a lot of the resource decisions made for applications depend
more on the administrator than on the application developer. After all, while an
application developer may say '10% of physical memory left is OK', an
administrator might say 'but what about that other service there that needs 15%?'

 Is anyone aware of any efforts to control these kinds of vulnerabili-
 ties?  It's something that cries out for automation:  Getting it right
 by hand is way too hard.  Traditional techniques - strong typing,
 unavoidable checking of array bounds and such - may be required for a
 more sophisticated approach, but 

Re: [SC-L] Resource limitation

2006-07-17 Thread Nash


On Mon, Jul 17, 2006 at 05:48:59PM -0400, [EMAIL PROTECTED]
wrote:
 I was recently looking at some code to do regular expression
 matching, when it occurred to me that one can produce fairly small
 regular expressions that require huge amounts of space and time.
 There's nothing in the slightest bit illegal about such regexp's -
 it's just inherent in regular expressions that such things exist.

Yeah... the set of regular languages is big. And, some have pretty
pathological FSM representations.
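
And if you have to run user-supplied expressions at all, one defence (just a
sketch, assuming POSIX regcomp()/regexec(); the pattern and the CPU cap are
made up) is to do the match in a disposable child with an rlimit on CPU time,
so a pathological pattern burns one worker rather than the whole server:

#include <regex.h>
#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Match 'pattern' against 'text' in a forked child allowed at most
 * 'cpu_seconds' of CPU time.  Returns 1 on match, 0 on no match,
 * -1 if the pattern was rejected or the child was killed. */
static int guarded_match(const char *pattern, const char *text, int cpu_seconds)
{
    pid_t pid = fork();
    if (pid == 0) {
        struct rlimit rl = { cpu_seconds, cpu_seconds };
        setrlimit(RLIMIT_CPU, &rl);      /* SIGXCPU kills a runaway match */

        regex_t re;
        if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
            _exit(2);
        int rc = regexec(&re, text, 0, NULL, 0);
        regfree(&re);
        _exit(rc == 0 ? 0 : 1);
    }
    int status;
    waitpid(pid, &status, 0);
    if (!WIFEXITED(status)) return -1;   /* killed: treat the input as hostile */
    if (WEXITSTATUS(status) == 0) return 1;
    if (WEXITSTATUS(status) == 1) return 0;
    return -1;
}

int main(void)
{
    printf("%d\n", guarded_match("^(a|b)*abb$", "aababb", 2));
    return 0;
}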

 In addition, the kinds of resources that you can exhaust this way is
 broader than you'd first guess.  Memory is obvious; overrunning a
 thread stack is perhaps less so.  ... How about file descriptors?
 File space? Available transmission capacity for a variety of kinds
 of connections?


One place to look is capability systems. They're more flexible and
should have all the features you want, but are still largely
theoretical.

http://en.wikipedia.org/wiki/Capability-based_security


That said, every decent Unix system I'm aware of has ulimit, which you
can use to restrict virtual memory allocations, total open files, etc:

nash @ quack% ulimit -a
...
virtual memory          (kbytes, -v) unlimited

nash @ quack% ulimit -v 1024 # just 1M RAM, this'll be fun :-)

nash @ quack% ( find * )
find: error while loading shared libraries: libc.so.6: failed to map
segment from shared object: Cannot allocate memory


Alternately, you can implement your own allocator library for your
application and then impose per-thread limits using that library. How
you do that is going to depend a lot on the language. Obviously, there
are lots of them floating around for C/C++.

http://en.wikipedia.org/wiki/Memory_allocation
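
A bare-bones sketch of the idea in C - the names and the per-thread budget are
invented, and a real allocator would also have to deal with realloc, alignment,
and statistics:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Invented per-thread budget, for illustration. */
#define THREAD_HEAP_BUDGET (4UL * 1024 * 1024)   /* 4 MB */

/* Each thread keeps its own running total in thread-local storage
 * (C11 _Thread_local; older GCC/Clang spell it __thread). */
static _Thread_local size_t thread_allocated = 0;

/* Wrapper allocator: stash the block size just before the payload so
 * budget_free() can hand the bytes back to the thread's budget. */
void *budget_malloc(size_t n)
{
    if (thread_allocated + n > THREAD_HEAP_BUDGET)
        return NULL;                     /* over budget: refuse, don't die */
    unsigned char *p = malloc(n + sizeof(size_t));
    if (p == NULL)
        return NULL;
    memcpy(p, &n, sizeof(size_t));
    thread_allocated += n;
    return p + sizeof(size_t);
}

void budget_free(void *ptr)
{
    if (ptr == NULL)
        return;
    unsigned char *p = (unsigned char *)ptr - sizeof(size_t);
    size_t n;
    memcpy(&n, p, sizeof(size_t));
    thread_allocated -= n;
    free(p);
}

int main(void)
{
    void *small = budget_malloc(1024);                 /* fine    */
    void *huge  = budget_malloc(64UL * 1024 * 1024);   /* refused */
    printf("small=%p huge=%p\n", small, huge);
    budget_free(small);
    return 0;
}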

In Java, you don't get nice knobs on Objects and Threads, but you get
several nice knobs on the VM itself: -Xms, -Xmx, etc. Other high-level
languages have similar problems to Java. I.e., how do you abstract the
size of a thing when you don't give access to memory as a flat byte
array? Well, you can do lots of fun things using LIFO queues, or LRU
caches, and so forth. There are performance impacts to consider, but
you can often tweak things so it sucks primarily for the abuser.
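
As a toy illustration of the bounded-structure idea (in C for brevity; names
and sizes are invented): a fixed-capacity, LRU-evicting table means the
abuser's entries get thrown out, but memory never grows:

#include <stdio.h>
#include <string.h>

/* Tiny fixed-capacity cache: when it's full, the least-recently-used
 * slot gets overwritten, so memory use is bounded no matter what the
 * caller does.  Illustrative only - linear scan, no hashing. */
#define CACHE_SLOTS 4
#define KEY_LEN     32

struct slot { char key[KEY_LEN]; int value; unsigned long last_used; };

static struct slot cache[CACHE_SLOTS];
static unsigned long tick = 0;

void cache_put(const char *key, int value)
{
    int victim = 0;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (strncmp(cache[i].key, key, KEY_LEN) == 0) { victim = i; break; }
        if (cache[i].last_used < cache[victim].last_used) victim = i;
    }
    snprintf(cache[victim].key, KEY_LEN, "%s", key);
    cache[victim].value = value;
    cache[victim].last_used = ++tick;
}

int cache_get(const char *key, int *value)
{
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (strncmp(cache[i].key, key, KEY_LEN) == 0) {
            cache[i].last_used = ++tick;
            *value = cache[i].value;
            return 1;
        }
    }
    return 0;   /* miss: recompute if you must, but never grow */
}

int main(void)
{
    for (int i = 0; i < 10; i++) {
        char k[KEY_LEN];
        snprintf(k, sizeof k, "key%d", i);
        cache_put(k, i);
    }
    int v;
    printf("key9 cached: %d\n", cache_get("key9", &v));
    printf("key0 cached: %d\n", cache_get("key0", &v));   /* long since evicted */
    return 0;
}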

None of these is really that hard to implement. So, do we really need
new theory for this? Dunno. One's mileage does vary.

-nash

-- 

the lyf so short, the craft so long to lerne.
- Geoffrey Chaucer
___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php