The more usual architecture is to also install a second NIC in the database
server, make sure that IP forwarding is disabled on the DB server, and then
connect the backdoor NIC in the DMZ server directly to the DB server.
If possible, I would then run filtering on the DB server NIC to block any
sensitive ports that aren't required by the database. On *nix you'd block
telnet, all the r-commands, SSH, etc. On NT you'd block 137, 138 and 139
(although you may need 135 for RPC), etc.
I guess it would be more or less the same thing to do this through a
filtering router, though... and you could differentiate access from one WWW
server to another. Hmm. Sounds okay, I suppose.
But yes, I think that the limiting factor in securing this scenario is the
strength of the database security. Oh, and if you could do this with an
application proxy we wouldn't be having this discussion, would we? You'd
just run the connection to the internal DB through the magic proxy and
everything would be reasonably clean.
If you MUST do it this way, I'd try really hard to get a good, strong WWW
server and platform and a DB that you trust to cope with deliberately
tainted input.
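
A big part of a DB that "copes with deliberately tainted input" is never
letting that input reach the server as SQL text. A minimal sketch using
Python's sqlite3 module (the table and data are hypothetical, purely for
illustration):

```python
import sqlite3

# Hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'alice')")

def lookup(name):
    # Parameterized query: the driver passes the tainted value as
    # data, so it can never terminate or extend the SQL statement.
    cur = conn.execute("SELECT id FROM customers WHERE name = ?", (name,))
    return cur.fetchall()

print(lookup("alice"))             # [(1,)]
# A classic injection attempt arrives as inert data, not as SQL.
print(lookup("alice' OR '1'='1"))  # []
```

The same principle applies to any web-to-database path, whatever the wire
between the two boxes looks like.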
Cheers!
--
Ben Nagy
Network Consultant, CPM&S Group of Companies
PGP Key ID: 0x1A86E304 Mobile: +61 414 411 520
> -----Original Message-----
> From: Greg Bastian [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, 16 November 1999 7:05 AM
> To: 'Ben Nagy'
> Cc: Firewalls (E-mail)
> Subject: RE: Three NIC Firewall
>
>
> Hi Ben,
>
> Thanks for the input again !
>
> I am thinking of using the second 'backdoor' NIC in each of our web
> servers, connecting them to another subnet of private addresses, and
> running them through our internal packet filtering router. The 'inside'
> connection from the bastion host will also use private addresses, and
> pass through the same packet filtering router.
>
> Internet -------- Bastion Host ----- Web Servers
>                        \               /
>                         \             /
>                          \           /
>                           \         /
>                            \       /
>                             \     /
>                         Internal Router
>                               |
>                              LAN
>
> I just cannot see how I can make it more secure, whilst keeping our
> applications on the web server 'working'. It needs constant access to
> the database server, which also needs to be accessed in real time by
> our internal users.
>
> Is there any point in putting another application proxy server in place
> of the internal router? If at all possible.
>
> So what you are all saying is that I cannot reasonably secure the
> architecture based upon the current configuration! I can see that there
> is no problem until the web server is compromised, but then, watch out!
>
>
>
> Cheers,
>
> Greg.
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Ben Nagy
> Sent: Monday, 15 November 1999 17:52
> To: 'Breach, Geoff'; Firewalls (E-mail); 'Greg Bastian'
> Subject: RE: Three NIC Firewall
>
> Sorry, but I don't see how these two solutions are different. In fact,
> isn't it _worse_? If any other box in the DMZ can be compromised, it can
> change its MAC / Ethernet address to match the box that is allowed
> through the teeny-VPN, neh? At least with the backdoor NIC approach you
> need to completely compromise that specific box.
>
> On one hand, we're looking at a backdoor NIC in the IIS server to talk
> to the database. In the VPN scenario, we're looking at a backdoor tunnel
> through the firewall to the internal database. Big deal.
>
> I don't think I like either solution. However, I think that there IS an
> argument for the backdoor NIC. If you trust the server that the backdoor
> NIC is leading to, this method _can_ be as secure as a firewall
> solution. At least you get to review only one service when you do the
> risk assessment. Basically, if you trust the service that is listening
> on the backdoor NIC, then this design can be made secure. In some cases,
> I'd rather do this than run a bunch of crazy DCOM juju through my
> firewall!
>
> The long and the short of it is that if a DMZ box is allowed to talk to
> an inside service, AND the DMZ box is compromised, then IF the inside
> service is vulnerable - you're toast. It doesn't matter how you slice
> the access methods.
>
> > I still don't like passing the db connection through the
> > firewall at all. Databases are sensitive things, and
> > companies have a habit of storing lots and lots of money
> > in databases (valuable data). A badly configured db
> > server is often all to keen to hand over that data to
> > anyone who asks for it.
>
> Yup. From here on, I'm with you. 8)
>
> > My preferred solution would be to have two db servers.
> > One on your DMZ for real-time access from the web server.
> > This one should contain the bare minimum amount of data
> > required to provide the online service.
> >
> > A second, real database server holding all your private
> > info sits on the private protected network. Updates
> > between the two databases (where possible) happen via
> > a 'disconnected' and not real-time method. Maybe ftp a
> > set of updates through every hour/day/whatever, or
> > sneakernet, or something like that.
> ^^^^^^^^^^
> Or the Whale Communications (R) Air Gap (TM) technology! Woohoo!
> (notethatidonotworkforwhaleorendorsetheirproductsthisishumouronly)
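
The disconnected-update approach quoted above can be sketched in a few
lines. This is an illustration only (Python's sqlite3, with a hypothetical
products table); the point is that the DMZ copy carries only a subset of
the columns, and the push happens in batch rather than in real time:

```python
import sqlite3

# Two hypothetical databases: the internal master and the DMZ copy.
internal = sqlite3.connect(":memory:")
dmz = sqlite3.connect(":memory:")

internal.execute("CREATE TABLE products (sku TEXT, price REAL, cost REAL)")
internal.executemany("INSERT INTO products VALUES (?, ?, ?)",
                     [("A1", 9.99, 4.00), ("B2", 19.99, 8.50)])

# The DMZ schema simply has no column for the private data.
dmz.execute("CREATE TABLE products (sku TEXT, price REAL)")

def push_updates():
    # Export only what the online service needs; 'cost' never
    # leaves the internal network.
    rows = internal.execute("SELECT sku, price FROM products").fetchall()
    dmz.execute("DELETE FROM products")
    dmz.executemany("INSERT INTO products VALUES (?, ?)", rows)
    dmz.commit()

push_updates()  # run from cron every hour/day/whatever
print(dmz.execute("SELECT sku, price FROM products").fetchall())
# [('A1', 9.99), ('B2', 19.99)]
```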
>
> Sadly, in many cases the database that is connected to the web _is_ all
> the valuable data. D'oh.
>
> >
> > If you require live updates and/or requests between
> > (1) the web server and the internal db, or (2) the
> > internal and external db servers, pay close attention
> > to rights, limits and security.
> >
> > Make sure the DMZ box[es] have very limited access
> > to read and write the internal database. Consider
> > quarantining writes from the external machines, with
> > some form of checking before they're absorbed into the
> > main data.
>
> And if you're just reading this now and thinking "Ooh! Taint checking!
> What a cool idea!" then go report to your management / shareholders for
> a spanking.
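
The quarantine-writes suggestion above might look something like this
sketch (Python's sqlite3 again; table names and validation rules are
hypothetical): writes from DMZ machines land in a holding table, and a
separate job checks each row before it is absorbed into the main data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (sku TEXT, qty INTEGER)")
# Writes from the DMZ land here, never directly in the main table.
conn.execute("CREATE TABLE pending_orders (sku TEXT, qty INTEGER)")

def quarantine_write(sku, qty):
    conn.execute("INSERT INTO pending_orders VALUES (?, ?)", (sku, qty))

def absorb_pending():
    # Validate every quarantined row before it touches the main data.
    absorbed, rejected = 0, 0
    for sku, qty in conn.execute("SELECT sku, qty FROM pending_orders"):
        if isinstance(qty, int) and 0 < qty <= 100 and sku.isalnum():
            conn.execute("INSERT INTO orders VALUES (?, ?)", (sku, qty))
            absorbed += 1
        else:
            rejected += 1
    conn.execute("DELETE FROM pending_orders")
    return absorbed, rejected

quarantine_write("A1", 3)
quarantine_write("B2; DROP TABLE orders", 99999)  # hostile-looking row
print(absorb_pending())  # (1, 1) - one row absorbed, one rejected
```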
>
> > An ugly compromise. The firewall behaved perfectly - it
> > checked the requests at application level, and found them
> > to be legal HTTP from legal sources (public internet).
>
> This is one of the reasons why I've lost faith in HTTP application layer
> gateways. I'm so tempted just to say "Screw it. I can't police it, so I
> may as well make it nice and fast."
>
> This whole database-backed websites issue is what has made me agree with
> all the pundits who are suggesting that the next generation of firewalls
> needs to employ pervasive technology. Picking demarcation points is no
> longer good enough to provide fine-grained access control.
>
> Basically, we can't afford to have application servers with soft, chewy
> centres any more. If the service you're putting up for external access
> can't handle the application level heat, then you really need to get it
> out of the kitchen.
>
> >
> > Geoff
> > --
> > CREDIT | FIRST Geoff Breach, [EMAIL PROTECTED], +61293944040
> > SUISSE | BOSTON Global Network Services - Asia Pacific Engineering
> > Opinions expressed herein are mine, not my employer's
>
> Cheers,
>
> --
> Ben Nagy
> Network Consultant, CPM&S Group of Companies
> PGP Key ID: 0x1A86E304 Mobile: +61 414 411 520
-
[To unsubscribe, send mail to [EMAIL PROTECTED] with
"unsubscribe firewalls" in the body of the message.]