I am going to make good on my offer to give $100 to the first 5 RLUG
members to find a significant, unique, bug in my Summit Sage Validation
Server.  The server is a reverse HTTP proxy that I have been developing
over the past 2 years.  If this goes well, I may continue this type of
contest in the future.

While I have a few vertical proxy applications in the works, this release
is limited to basic reverse proxying and general HTTP validation. The bugs
must be unique (ie two people can't claim the same bug) and significant
by my definition.  I will list the reported bugs on my web site, and the
first person to report a bug that I can reproduce will get the $100.  I
also have a new 3ware 8000 series SATA RAID controller, and that is up for
grabs for the first winner who wants it in lieu of the cash.  I will hand
out the rewards at next month's meeting.

* What is a Significant Bug?

To get the reward the bug must be significant.  A significant bug is any
bug, triggered by network data from the client or server side of the
proxy, that causes the proxy to:

1) crash
2) stop accepting requests
3) corrupt data
4) handle more or fewer concurrent requests than are specified by the
configuration file
5) accept requests that should be denied based on the configured buffer
sizes
6) deny requests that should be accepted

Also the bug MUST be reproducible on Fedora Core 2, which is my current
development platform.  You must provide any tools used to produce the bug.
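As a starting point, bug category 4 can be probed with a short script
that opens more simultaneous connections than the configured limit and
counts how many the proxy actually accepts.  This is just a sketch of
the idea, not part of my test suite; the host, port, and limit assume
the sample configuration shown below.

```python
import socket

def count_accepted(host, port, attempts, timeout=2.0):
    """Open `attempts` connections at once and return how many succeed."""
    conns, accepted = [], 0
    for _ in range(attempts):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            accepted += 1
            conns.append(s)   # keep it open so the connections stay concurrent
        except OSError:       # refused or timed out
            s.close()
    for s in conns:
        s.close()
    return accepted

# With max-connections set to 500 (as in the sample config), the proxy
# accepting more than 500 here would be a category-4 bug:
# print(count_accepted("localhost", 80, 510))
```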

I am also only interested in bugs that occur AFTER the validation server
starts accepting requests.  If a command line switch or funky
configuration settings prevent the server from starting, or cause other
anomalies, I'd like to hear about it, but these bugs are not eligible
for the reward.  For instance, I know there is no validation of very
large values in the configuration file.  Right now I am most concerned
about the core networking code, and this is where I need your help.

* Configuring the server

Here's an example configuration file.  It assumes you have a web server
running on localhost port 8080, and the proxy will listen for requests on
localhost port 80.  I have included comments inline.

<?xml version="1.0" encoding="ISO-8859-1"?>
<proxy-config>
<!-- set to yes to run as a daemon, no to start as a foreground process -->
<run-as-daemon>yes</run-as-daemon>

<!-- IP Address to listen for HTTP requests on.  This IP Address must be on
your machine -->
<listen-address>localhost</listen-address>

<!-- Port to listen to HTTP requests on -->
<listen-port>80</listen-port>

<!-- IP Address to forward requests to.  This can be any address that your
system can route to, but I recommend that it is a private server under
your control.  In this case I assume you have a web server running on
port 8080 -->
<forward-address>localhost</forward-address>

<!-- Port to forward requests to-->
<forward-port>8080</forward-port>

<!-- The maximum number of concurrent connections -->
<max-connections>500</max-connections>

<!-- This is the number of bytes to attempt to read at a time.  This
should probably be greater than 1500 for best performance.  You can
change this to see how it affects performance. -->

<input-buffer-size>5000</input-buffer-size>

<!-- The following configurations allow you to "fine tune" how HTTP
requests will be handled.  The intention is to prevent buffer overflows
in the HTTP implementation on the origin server.  For instance the
Cicero HTTP implementation suffered from a buffer overflow when
processing chunk sizes.

You should attempt to break these limits.  If you do, you'll get the
reward. -->

<!-- max method length (GET, POST, HEAD, etc.).  This is generous. -->
<max-method-length>256</max-method-length>

<!-- maximum length of the URI (ie /foo?bar=10) -->
<max-URI-length>256</max-URI-length>

<!-- This is the maximum number of header lines to accept in one request. -->
<max-num-headers>50</max-num-headers>

<!-- The maximum length of an individual header name -->
<max-header-name-length>256</max-header-name-length>

<!-- The maximum length of a header value -->
<max-header-value-length>256</max-header-value-length>

<!-- This currently isn't being used, please ignore.  It is currently
based on the size of an int -->
<max-chunksize-length>10</max-chunksize-length>

<!-- How long to wait for data from the client before timing out.  THIS
IS SOFT, and could increase under load. -->
<client-timeout-milliseconds>30000</client-timeout-milliseconds>

<!-- How long to wait for data from the server before timing out.  THIS
IS SOFT, and could increase under load. -->
<server-timeout-milliseconds>10000</server-timeout-milliseconds>

</proxy-config>

The server can be started with:

ssvs -f /path/to/config/file

With no parameters the server looks for the config file at /etc/ssvs.conf

All errors are logged to syslogd.

ssvs --version prints the current build number, which is 475.
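Once the proxy is up, a quick sanity check is to send one well-formed
request through it and look at the status line that comes back.  A
minimal sketch, assuming the proxy is listening on localhost:80 as in
the sample config above:

```python
import socket

def get_status_line(host, port, path="/"):
    """Send one well-formed GET and return the first response line."""
    request = (f"GET {path} HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               "Connection: close\r\n\r\n")
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(request.encode("ascii"))
        data = b""
        while b"\r\n" not in data:      # read until the status line is complete
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data.split(b"\r\n", 1)[0].decode("ascii", "replace")

# print(get_status_line("localhost", 80))
```

If the origin server is running, you should see something like
"HTTP/1.1 200 OK" relayed back through the proxy.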

* Ideas to get you started

First off, I DO NOT endorse attempts to exploit public servers.  Please
use this code with good judgment.  I also don't take responsibility for
any data that might be lost or corrupted by using the server.  This is
alpha quality software, USE AT YOUR OWN RISK.  I request that you use a
private origin web server rather than a public one.  thttpd, Apache, and
IIS are widely available.  If you need help setting up any of these to
work with the proxy, please let me know and I will help you out.

With that said, here are some ideas on how to break the server:

While sending random data is probably a good idea, I suspect that if you
combine it with valid and partially valid requests, you'll have a better
chance of breaking the server.  Remember that certain logic only gets
executed when an HTTP request is crafted in a specific manner: for
example, one header of valid length followed by one of invalid length.
It is rather unlikely that truly random data will generate a partially
valid HTTP request.
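To make that concrete, here is one possible sketch in Python.  The
header names are made up for illustration; max-header-value-length is
256 in the sample config, so the second header deliberately blows past
it while the first stays just under:

```python
def build_partial_request(limit=256):
    """Build a request with one in-limit header and one far-over-limit header."""
    good = "X-Good: " + "a" * (limit - 1) + "\r\n"    # just under the limit
    bad  = "X-Bad: "  + "b" * (limit * 10) + "\r\n"   # far over the limit
    return ("GET / HTTP/1.1\r\n"
            "Host: localhost\r\n"
            + good + bad + "\r\n").encode("ascii")
```

Send the result over a raw socket (e.g. with
socket.create_connection(("localhost", 80))) and check whether the proxy
denies the request as configured, or lets it through.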

If you are serious about breaking the server you might want to take a look
at the HTTP specification:

http://www.w3.org/Protocols/rfc2616/rfc2616.html

If you want to get REALLY serious, take a look at HTTP request
pipelining.  The implementation of this is complicated, and likely to be
buggy.
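The basic move is to write several requests back to back on one
connection before reading any responses; per RFC 2616 the responses must
come back in order, and a buggy implementation may stall, interleave, or
drop them.  A rough sketch, again assuming the proxy on localhost:80:

```python
import socket

def pipeline(host, port, paths):
    """Send all requests in one burst, then read back the raw responses."""
    reqs = b"".join(
        (f"GET {p} HTTP/1.1\r\nHost: {host}\r\n\r\n").encode("ascii")
        for p in paths)
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(reqs)                    # all requests before any reads
        chunks = []
        try:
            while True:
                chunk = s.recv(4096)
                if not chunk:              # server closed the connection
                    break
                chunks.append(chunk)
        except socket.timeout:
            pass                           # server kept the connection open
    return b"".join(chunks)

# raw = pipeline("localhost", 80, ["/a", "/b", "/c"])
# raw.count(b"HTTP/1.1") should equal the number of requests you sent.
```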

Also a network analyzer like Ethereal (http://www.ethereal.com/) can come
in handy to produce and verify bugs.

I am also willing to make a separate distribution of the testing code
I've written so far in Python.  I plan to release it under the GPL or
the Python License.  I do insist that you use this for good (ie test my
server), rather than evil (running it against public servers).  I assume
no responsibility for any attempts to exploit public servers with this
code.

If you want a very detailed description of the software architecture
please read:

http://www.baus.net/archives/000137.html

* Why participate?

For anybody who participates and actively provides feedback I'll provide
priority for future feature requests, and free access to the final product
for 1 year (and potentially longer).  For example, are you looking for an
easy way to manage a large number of virtual servers?  Alpha test the
software, provide feedback, and I will bump up the priority of your
request, plus give you the software free for a year.  I think that's a
pretty good deal.

This is also a good way to learn about HTTP and proxy technologies.  It
is easy to underestimate the complexity of HTTP.  I know I did.  Two
years later, after some serious development time, I'm still learning.
The 600 pages of O'Reilly's HTTP: The Definitive Guide will give you an
understanding of how complex this protocol really is.

Lastly, I am trying to jump start a software company based on Linux
technologies from my virtual garage.  This is a good thing for any
serious Linux users in the area, as I'd like nothing more than to make a
living developing software for Linux, and to work with others who want
to do the same.  Northern Nevada needs more interesting technology
companies.  It is up to us to do it.

* Why not Open Source?

I'm going to answer this now as I'm sure the question will come up.  I
haven't decided on the final licensing.  My feeling is that it would be a
mistake to attach an Open Source license to something that is still in
its alpha stage.  I believe it would be more difficult to start a viable
company around developing code that is open source, and I want to leave
open the possibility of meeting potential OEM customers' licensing needs.
While some customers may like an Open Source license, it might be a deal
breaker for others.  Once code is released under an Open Source license,
it is difficult to take it back.  But this doesn't mean I might not
change my mind. ; )

If this sounds interesting to you, email me and I will tell you how to get
hold of the code, and I'll help you get started.  If you want, you can
also call me at 702-505-4748.

Thanks for your interest and good luck.

Sincerely,

Christopher Baus




_______________________________________________
RLUG mailing list
[EMAIL PROTECTED]
http://lists.rlug.org/mailman/listinfo/rlug
