Relevant to:  HTTP Web application security

I'm imagining (probably to no fruitful endeavor) a Web application security
proxy.  I was thinking about that old mod_security thing in Apache, where
you could essentially tell it what requests look like and it would reject
bad requests, and it got me thinking:  a secure proxy for this would be
interesting.

I could make a product... or I could make a specification and a reference
implementation.  That brought up some interesting thoughts.

Suppose I define, among other things, the ability to recognize
valid/invalid headers and requests.  Say I expect POST requests of the form
(normalized to equivalent GET requests, with regexes):

/login.php?u=[\w\d]{3,16}&p=[\w\d]{6,45}

So we have a rule:

login.php {
  "POST" {
    u ~ /[\w\d]{3,16}/;
    p ~ /[\w\d]{6,45}/;
    !*;
  }
  !*;
}

Only POST requests are accepted, carrying exactly the u and p fields (each
validated against its regex) and no others.
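To make the semantics concrete, here is a minimal sketch of what a validator
enforcing that rule might look like, with the rule pre-compiled into an
in-memory form.  The RULES shape and the validate() signature are my own
invention for illustration, not part of any proposed wire format:

```python
import re

# Hypothetical in-memory form of the rule above: only POST, exactly the
# fields u and p, each matched in full against its regex, nothing else.
RULES = {
    "/login.php": {
        "POST": {
            "u": re.compile(r"[\w\d]{3,16}"),
            "p": re.compile(r"[\w\d]{6,45}"),
        }
    }
}

def validate(path, method, fields):
    """Return True iff the request matches the policy exactly."""
    methods = RULES.get(path)
    if methods is None:
        return False        # no rule for this path: reject (outer !*)
    field_rules = methods.get(method)
    if field_rules is None:
        return False        # method not allowed (inner !*)
    if set(fields) != set(field_rules):
        return False        # missing or extra fields
    return all(field_rules[k].fullmatch(v) is not None
               for k, v in fields.items())
```

A conforming login would pass, while a stray extra parameter or a GET would
be rejected outright.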

Now let's say we install an application at
http://www.example.com/myapp/, which has
http://www.example.com/myapp/login.php, and we want to define security
rules for it.  We could load these rules into a proxy by hand, with all
kinds of per-site configuration...

Or...

We could set up a http://www.example.com/security.txt file:

/security.pxsd
/myapp/security.pxsd root=/myapp/

And in /myapp/security.pxsd we have the above rule, among others.
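The security.txt format sketched above could be parsed very simply: one PXSD
path per line, optionally followed by key=value options such as root=.  A
hedged sketch (the line format is hypothetical, as is the default root of /):

```python
def parse_security_txt(text):
    """Parse a hypothetical /security.txt listing PXSD files.

    Each non-blank line names a PXSD path, optionally followed by
    whitespace-separated key=value options; root= scopes the file.
    Returns a list of (pxsd_path, root) pairs.
    """
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        opts = dict(p.split("=", 1) for p in parts[1:])
        entries.append((parts[0], opts.get("root", "/")))
    return entries
```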

The Web server proper SHOULD deny access to security.txt and all pxsd files
by application-specific configuration, except to the trusted security
reverse proxy.

The trusted security reverse proxy SHOULD deny proxy access for these files
as well.

The reverse proxy will:

 - Check security.txt for all required PXSD (ProXy Security Definition)
files
 - Check the request against the full policy
 - Reject the request if it violates the policy
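The steps above reduce to a small decision path.  Here is a sketch of the
gate, with fetch_policy, check, and forward as stand-ins for the real
fetching/parsing, rule-matching, and upstream-proxying machinery (all names
are placeholders of mine, not proposed API):

```python
def handle_request(request, fetch_policy, check, forward):
    """Hypothetical proxy decision path for one request.

    fetch_policy(): returns the assembled policy (security.txt + PXSD).
    check(policy, request): True iff the request satisfies the policy.
    forward(request): proxies the request upstream and returns a response.
    """
    policy = fetch_policy()
    if not check(policy, request):
        return (403, "Forbidden: request violates site security policy")
    return forward(request)
```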

In an implementation there would be a caching period (seconds, minutes,
etc.) plus an If-Modified-Since and/or hash check, so that excess work
fetching, parsing, and integrating policy isn't done.
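One way that caching could look, combining a TTL with a content-hash check
so policy is re-fetched only when stale and re-parsed only when it actually
changed (a sketch under my own assumptions; an HTTP fetcher would use
If-Modified-Since instead of, or in addition to, hashing):

```python
import hashlib
import time

class PolicyCache:
    """Hypothetical TTL + content-hash cache for policy files."""

    def __init__(self, fetch, parse, ttl=60.0):
        self.fetch, self.parse, self.ttl = fetch, parse, ttl
        self.expires = 0.0      # monotonic time at which the cache goes stale
        self.digest = None      # hash of the last raw bytes fetched
        self.policy = None      # last parsed policy

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if now < self.expires:
            return self.policy              # still fresh: no fetch at all
        raw = self.fetch()
        digest = hashlib.sha256(raw).hexdigest()
        if digest != self.digest:           # re-parse only on a real change
            self.policy = self.parse(raw)
            self.digest = digest
        self.expires = now + self.ttl
        return self.policy
```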

PXSD is root-relative; root is specified in security.pxsd.  Thus
/myapp/security.pxsd cannot specify rules for /otherapp/login.py or whatnot.
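That scoping constraint could be enforced mechanically when the PXSD is
loaded: every rule path is resolved relative to the declared root, and any
rule that tries to escape it (e.g. via ..) is dropped.  A sketch, with the
rule mapping's shape being my own placeholder:

```python
def scope_rules(pxsd_rules, root="/"):
    """Pin a PXSD file's rules under its declared root.

    pxsd_rules maps root-relative paths (e.g. "login.php") to rule
    bodies; returns a mapping keyed by full paths under root.  Rules
    attempting to escape the root via ".." segments are discarded.
    """
    scoped = {}
    for rel, rule in pxsd_rules.items():
        if ".." in rel.split("/"):
            continue                      # path-escape attempt: drop it
        full = root.rstrip("/") + "/" + rel.lstrip("/")
        scoped[full] = rule
    return scoped
```

So a /myapp/security.pxsd rule for login.php becomes a rule for
/myapp/login.php, and nothing it says can reach /otherapp/.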

Thus a Web application may ship with security definitions dictating valid
data.  A Web server or a reverse proxy may read these definitions from a
security file and apply standard validation.  The Web server itself may
read a security file (/security.txt or some out-of-web-space file) and PXSD
files, applying policy internally; or a reverse proxy (Squid, Varnish,
nginx, etc.) may fetch and cache these policy files and prevent requests
from passing.

The advantage of having a proxy do this is that it acts as a bastion host:
broken requests which pass as seemingly-valid HTTP, but which are
unorthodox and cause buffer overruns and other nastiness, will stop at the
bastion host.  Broken requests which are wholly invalid and crash the
software will either stop at the bastion host or yield an exploit of the
bastion host itself, which may carry nothing critical and can be rebooted
or replaced with a functioning server in the event of a compromise.



In any case, the above is illustrative, wordy, and highly hypothetical.
The point is:  I believe there would be value in defining a DSL and
standard for Web application input validation, such that either a Web
server itself or a reverse proxy may read a set of standard-format files
(starting with the expected file /security.txt) and obtain a definition
which encompasses all valid requests (though it may also admit some
invalid ones).

I'm only interested in HTTP query validation in this scope.  I have no
interest in access controls (i.e. only these IP addresses may do these
things; only these file types are valid; you may not pull .htaccess; etc.).

Thoughts?  Would this be something worth researching, designing, and RFCing?
_______________________________________________
websec mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/websec
