Paul, my old reverse-proxy friend :-)
>> Put them on their own isolated network, and use router ACLs to permit
>> inbound connections from the reverse proxy. Permit outbound traffic only
>> to the reverse proxy, and deny everything else.
>Then why put them on the internal network? If they're to be isolated,
>then do it outside where public traffic belongs.
Internal, but protected as if it were a DMZ. Security in layers is the way
to go.
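As a concrete sketch of that policy, here is roughly what the ACLs could look like in iptables syntax. The addresses are purely hypothetical (10.1.1.10 for the web server, 10.1.2.5 for the reverse proxy), and your router or firewall platform will have its own equivalent syntax:

```
# Hypothetical addresses: web server 10.1.1.10, reverse proxy 10.1.2.5
# Permit inbound HTTP to the web server from the reverse proxy only
iptables -A FORWARD -s 10.1.2.5 -d 10.1.1.10 -p tcp --dport 80 -j ACCEPT
# Permit the web server's replies back out, but only toward the proxy
iptables -A FORWARD -s 10.1.1.10 -d 10.1.2.5 -p tcp --sport 80 -j ACCEPT
# Deny everything else to or from the isolated segment
iptables -A FORWARD -d 10.1.1.10 -j DROP
iptables -A FORWARD -s 10.1.1.10 -j DROP
```

The point is simply that the isolated segment ends up with exactly one permitted peer, in both directions.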
IMHO, an isolated DMZ webserver placement is fine if all you want is a
"Hello, World" website. But if you want to have a website that actually
*does* something useful, a website like Amazon.com or Fidelity.com, you
will inevitably need back-end connections to databases, or perhaps spawn
extranet connections to partners, vendors, suppliers, etc. At some point
the rubber has to hit the road: you're going to have to let *something*
through to the internal network.
(And personally, I hate using the term "internal" to describe everything
behind the firewall. Most modern networks have grown too large and too
complex for any part of them to be classified as strictly "external" or
strictly "internal". The world isn't black-and-white, and neither is your
network; there can always be shades of gray.)
As the number of webserver back-end connections and dependencies grows, so
does the complexity of your perimeter architecture and firewall rules. At
that point, the "traditional" external-DMZ-internal architecture, IMHO,
becomes unsatisfactory. I would rather pull all of the dirty work to a
separate, hardened "internal DMZ" if you will, and just punch the HTTP
through to a reverse proxy.
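To sketch what that layered policy might boil down to (again with invented addresses: reverse proxy 10.1.2.5, app/web tier 10.2.0.0/24, database server 10.3.0.10 listening on a hypothetical port 1521), the whole "internal DMZ" can be described in a handful of rules:

```
# Hypothetical layered policy for a hardened "internal DMZ"
# Only HTTP is punched through, from the reverse proxy to the app tier
iptables -A FORWARD -s 10.1.2.5 -d 10.2.0.0/24 -p tcp --dport 80 -j ACCEPT
# The app tier may reach the database on its service port, nothing more
iptables -A FORWARD -s 10.2.0.0/24 -d 10.3.0.10 -p tcp --dport 1521 -j ACCEPT
# Everything else into or out of the internal DMZ gets dropped
iptables -A FORWARD -d 10.2.0.0/24 -j DROP
iptables -A FORWARD -s 10.2.0.0/24 -j DROP
```

Each new back-end dependency then becomes one more explicit permit line, instead of another hole through the main perimeter firewall.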
Regards,
Christopher Zarcone
Network Security Consultant
RPM Consulting, Inc.
#include <std.disclaimer.h>
-
[To unsubscribe, send mail to [EMAIL PROTECTED] with
"unsubscribe firewalls" in the body of the message.]