Re: transparent DNS load-balancing with a Cisco ACE

2012-10-25 Thread John Miller
Thanks, Phil.  This makes perfect sense--unlike TCP, there's nothing 
inherent in UDP to make sure that packets come back from the right IP.


Thank you also for explaining this in terms of the socket APIs.  This is 
something I've only barely touched on--time for me to play around a bit 
and write some code.  I'd also just been doing an rndc stop/start to 
update the listening sockets--just what's bundled into the initscript. 
I'll keep reconfig in mind--might come in handy.


Aside: realized that I didn't reply to the list last time--doing so now.

John

On 10/25/2012 11:53 AM, Phil Mayers wrote:

On 25/10/12 15:54, John Miller wrote:


Is BIND associating each request with a particular socket, then?  It
would certainly make sense if that were the case.  This was something I
didn't fully realize.


Yes.


Something else I didn't fully realize was that by default, BIND binds to
_each_ of the available IP addresses on the system--_not_ to 0.0.0.0, as
happens with other network daemons (e.g. sshd).


It does this because the cross-platform AF_INET socket APIs are limited.
Binding a socket to each separate IP and replying from the same socket
is the simplest cross-platform way to guarantee that UDP replies come
from the right IP.

AF_INET6 has a newer API which solves this, and if you run lsof -i :53
you'll see that bind only opens one socket for IPv6/UDP (unless you are
on a system which doesn't implement RFC 3493/3542, in which case it
falls back to one socket per IPv6 address).

TCP-based daemons can ignore this, because the TCP stack takes care of it.
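As an aside, the per-IP trick Phil describes is easy to see with the raw
socket API; a minimal sketch (the loopback addresses and ephemeral port are
illustrative stand-ins, not what named actually uses):

```python
import socket

# One UDP socket per local address, as BIND does for IPv4: replying on
# the socket a query arrived on guarantees the reply is sourced from
# the IP the client originally queried.
local_addrs = ["127.0.0.1", "127.0.0.2"]  # stand-ins for the host's IPs
socks = []
for addr in local_addrs:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((addr, 0))  # port 0 = ephemeral; named itself binds port 53
    socks.append(s)

# Reply path: sending on the same socket the datagram arrived on, i.e.
#   data, client = s.recvfrom(512); s.sendto(data, client)
# makes the reply's source IP equal to that socket's bound address.
```

With AF_INET6 and RFC 3542's IPV6_RECVPKTINFO, a single wildcard socket can
instead recover the destination address of each datagram and reply from it,
which is why one IPv6/UDP socket suffices.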

Note that bind doesn't detect new IPs immediately - you need to do rndc
reconfig or wait for the timer (interface-interval in the options
block).
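For reference, that knob lives in the options block of named.conf; a minimal
sketch (the 10-minute value is only an example, not a recommendation):

```
options {
    // rescan the system's interfaces for added/removed
    // addresses every 10 minutes (the default is 60)
    interface-interval 10;
};
```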

___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: transparent DNS load-balancing with a Cisco ACE

2012-10-25 Thread Mark Andrews

In message cal5w20bysrz5o21eievdgybbg2hum7ydqzfio3cxxo5jzce...@mail.gmail.com,
jagan padhi writes:
 
 Hi,
 
 Is it possible to configure BIND for IPV4 and IPV6 in the same server?
 
 Regards,
 Jagan

Yes.  listen-on-v6 { any; };

By default it uses both IPv4 and IPv6 when recursing.
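Spelled out as a named.conf fragment (a minimal sketch; the listen-on line is
shown only for completeness, since IPv4 listening is already the default):

```
options {
    listen-on { any; };      // IPv4 listeners (the default)
    listen-on-v6 { any; };   // enable IPv6 listeners as well
};
```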
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org


Re: transparent DNS load-balancing with a Cisco ACE

2012-10-24 Thread Phil Mayers

On 10/19/2012 07:25 PM, John Miller wrote:


Here's a question, however: how does one get probes working for a
transparent LB setup?  If an rserver listens for connections on all
interfaces, then probes work fine, but the return traffic uses the
machine's default IP (not the VIP that was originally queried) as its
source address.


I'm not sure I understand this.

If a DNS request comes in on a particular IP, bind should reply from 
that IP, always. If it doesn't, something is going seriously wrong.



What have people done to get probes working with transparent LB?  Are
any of you using NAT to handle your dns traffic?  Not tying up NAT
tables seems like the way to go, but lack of probes is a deal-breaker on
this end.


We didn't have to do anything special, and I'm not sure why you should 
have to either. Our probes are just:


probe tcp TCP_53_RECDNS
  ip address public ip
  port 53
  interval 10

serverfarm host INTERNAL-DNS
  transparent
  predictor leastconns
  probe TCP_53_RECDNS
  rserver private IP 53
inservice

The ACE uses ARP to discover the destination MAC of the private IP, but 
sends an IP packet to that MAC with a destination of the public IP. The 
DNS reply comes back from that, and all is well.


I get the feeling I'm not understanding what isn't working for you; can 
you describe the failure in more detail? What server OS are you running, 
and can you describe the network config?


Cheers,
Phil


transparent DNS load-balancing with a Cisco ACE

2012-10-19 Thread John Miller

Hello everyone,

Perhaps a Cisco list is a better destination for this, but I've seen a 
similar post here in the past couple of months, so posting here as well.


I'm trying to get our Cisco ACE set up appropriately to handle DNS 
traffic.  So far, I've gotten it working using NAT (each rserver has a 
public and a private IP) and using transparent load-balancing (ACE talks 
directly to the public IP), aka direct server return.


Here's a question, however: how does one get probes working for a 
transparent LB setup?  If an rserver listens for connections on all 
interfaces, then probes work fine, but the return traffic uses the 
machine's default IP (not the VIP that was originally queried) as its 
source address.


What have people done to get probes working with transparent LB?  Are 
any of you using NAT to handle your dns traffic?  Not tying up NAT 
tables seems like the way to go, but lack of probes is a deal-breaker on 
this end.


--
John Miller
Systems Engineer
Brandeis University
johnm...@brandeis.edu


Re: transparent DNS load-balancing with a Cisco ACE

2012-10-19 Thread Chuck Swiger
Hi--

On Oct 19, 2012, at 11:25 AM, John Miller wrote:
 Hello everyone,
 
 Perhaps a Cisco list is a better destination for this, but I've seen a 
 similar post here in the past couple of months, so posting here as well.
 
 I'm trying to get our Cisco ACE set up appropriately to handle DNS traffic.  
 So far, I've gotten it working using NAT (each rserver has a public and a 
 private IP) and using transparent load-balancing (ACE talks directly to the 
 public IP), aka direct server return.

IMO, the only boxes which should have IPs in both public and private netblocks 
should be your firewall/NAT routing boxes.

 Here's a question, however: how does one get probes working for a transparent 
 LB setup?  If an rserver listens for connections on all interfaces, then 
 probes work fine, but the return traffic uses the machine's default IP 
 (not the VIP that was originally queried) as its source address.

That's the default routing behavior for most platforms.  Some of them might 
support some form of policy-based routing via ipfw fwd / route-to or similar 
with other firewall mechanisms which would let the probes get returned from 
some other source address if you want them to do so.

 What have people done to get probes working with transparent LB?  Are any of 
 you using NAT to handle your dns traffic?  Not tying up NAT tables seems like 
 the way to go, but lack of probes is a deal-breaker on this end.

The locals around here have the luxury of a /8 netblock, so they can set up the 
reals behind a LB using publicly routable IPs and never need to NAT DNS 
traffic.  Folks with a more limited number of routable IPs might well load-balance 
to reals on an unrouteable private network range behind NAT, in which case they 
wouldn't configure those boxes with public IPs.

Regards,
-- 
-Chuck



Re: transparent DNS load-balancing with a Cisco ACE

2012-10-19 Thread John Miller

IMO, the only boxes which should have IPs in both public and private netblocks 
should be your firewall/NAT routing boxes.


That's how we usually have our servers set up--the load balancer gets 
the public IPs, the servers get the private IPs, and we use NAT to 
translate between the two.



Here's a question, however: how does one get probes working for a transparent 
LB setup?  If an rserver listens for connections on all interfaces, then probes 
 work fine, but the return traffic uses the machine's default IP (not the 
 VIP that was originally queried) as its source address.


That's the default routing behavior for most platforms.  Some of them might 
support some form of policy-based routing via ipfw fwd / route-to or similar 
with other firewall mechanisms which would let the probes get returned from 
some other source address if you want them to do so.


Good to know--you'd definitely expect traffic to come back on the main 
interface.  I've considered setting up some iptables rules to make this 
happen, but if I can avoid it, so much the better.  Sounds like this is 
what I need to do, however, if I want both probes and regular requests 
to work.
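For what it's worth, the policy-routing variant alluded to above would look
something like this on a Linux real (a hypothetical sketch, with addresses
munged the same way as elsewhere in the thread; table number 100 is
arbitrary):

```
# Route anything sourced from the VIP through its own table, so
# replies from the VIP take the right path regardless of the
# machine's default route.
ip rule add from 129.64.x.53/32 table 100
ip route add default via 129.64.x.1 dev eth0 table 100
```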



What have people done to get probes working with transparent LB?  Are any of 
you using NAT to handle your dns traffic?  Not tying up NAT tables seems like 
the way to go, but lack of probes is a deal-breaker on this end.


The locals around here have the luxury of a /8 netblock, so they can set up the 
reals behind a LB using publicly routable IPs and never need to NAT DNS 
traffic.  Folks with a more limited number of routable IPs might well load-balance 
to reals on an unrouteable private network range behind NAT, in which case they 
wouldn't configure those boxes with public IPs.


We're on a /16, so we have plenty of public IPs (though not as many as 
you!) to play with, too.  The choice to NAT has historically been more 
about security than anything else--if something is privately IPed, we've 
got it on a special VLAN as well.


Presumably those reals are still behind a virtual ip address that's also 
public, right?  If that's the case, how do you keep your probes (to the 
IP behind the LB) working, while still sending back regular DNS traffic 
(that was originally sent to the virtual IP) with the VIP as a source 
address?  Seems like you get only one or the other unless you tweak 
iptables/ipfw/etc.


I appreciate the help, Chuck!  Would you mind PMing me or posting your 
configs?  That might be the most useful.


John

-
Configs:

eth0  Link encap:Ethernet  HWaddr DE:AD:CA:FE:BE:EF
  inet addr:129.64.x.11  Bcast:129.64.x.255  Mask:255.255.255.0

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING NOARP  MTU:16436  Metric:1

lo:1  Link encap:Local Loopback
  inet addr:129.64.x.53 (VIP)  Mask:255.255.255.255
  UP LOOPBACK RUNNING NOARP  MTU:16436  Metric:1
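(Aside: with a VIP on a loopback alias like lo:1 above, Linux reals
typically also need ARP suppression so they don't answer ARP for the VIP
themselves.  The values below are the standard LVS/DSR recipe, not settings
taken from this thread:)

```
# /etc/sysctl.conf additions on each real
net.ipv4.conf.all.arp_ignore = 1     # answer ARP only for addresses on the receiving interface
net.ipv4.conf.all.arp_announce = 2   # always source ARP from the outgoing interface's address
```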

Here's my ACE config (IP addrs deliberately munged):

access-list anyone line 10 extended permit ip any any

probe dns brandeis.edu-dns
  description Query dns servers for brandeis.edu/A
  interval 5
  passdetect interval 10
  domain brandeis.edu
  expect address 129.64.99.138

rserver host dns1
  description dev-level recursive DNS server; running BIND9 in the xen-ha-environment.
  ip address 129.64.x.11
  inservice
rserver host dns2
  description dev-level recursive DNS server; running PowerDNS in the xen-ha-environment.
  ip address 129.64.x.12
  inservice
rserver host dns3
  description dev-level recursive DNS server; running BIND9 in the XenServer environment.
  ip address 129.64.x.13
  inservice
rserver host dns4
  description dev-level recursive DNS server; running PowerDNS in the XenServer environment.
  ip address 129.64.x.14
  inservice

serverfarm host dns-recursive
  description Dev-level recursive DNS servers--both BIND and PowerDNS
  transparent
  probe brandeis.edu-dns
  rserver dns1
inservice
  rserver dns2
inservice
  rserver dns3
inservice
  rserver dns4
inservice

class-map match-all VIP
  2 match virtual-address 129.64.x.53 udp eq domain

policy-map type loadbalance first-match L7SLBPOLICY
  class class-default
serverfarm dns-recursive

policy-map multi-match L4SLBPOLICY
  class VIP
loadbalance vip inservice
loadbalance policy L7SLBPOLICY
loadbalance vip icmp-reply active

interface vlan 100
  ip address 129.64.x.100 255.255.255.0
  peer ip address 129.64.x.101 255.255.255.0
  no normalization
  access-group input anyone
  service-policy input L4SLBPOLICY
  no shutdown

ip route 0.0.0.0 0.0.0.0 129.64.x.1


Re: transparent DNS load-balancing with a Cisco ACE

2012-10-19 Thread Daniel McDonald



On 10/19/12 1:25 PM, John Miller johnm...@brandeis.edu wrote:

 Hello everyone,
 
 Perhaps a Cisco list is a better destination for this, but I've seen a
 similar post here in the past couple of months, so posting here as well.
 
 I'm trying to get our Cisco ACE set up appropriately to handle DNS
 traffic.  So far, I've gotten it working using NAT (each rserver has a
 public and a private IP) and using transparent load-balancing (ACE talks
 directly to the public IP), aka direct server return.

I've not bothered with nat - just place rservers with unique addresses
behind the ACE, let them use the ACE as their default gateway, and then
publish a vip.  The rservers use their real address for zone transfers with
the master, while clients only talk with the vip address.


-- 
Daniel J McDonald, CCIE # 2495, CISSP # 78281



Re: transparent DNS load-balancing with a Cisco ACE

2012-10-19 Thread Chuck Swiger
Hi--

On Oct 19, 2012, at 1:04 PM, John Miller wrote:
 IMO, the only boxes which should have IPs in both public and private 
 netblocks should be your firewall/NAT routing boxes.
 
 That's how we usually have our servers set up--the load balancer gets the 
 public IPs, the servers get the private IPs, and we use NAT to translate 
 between the two.

OK.

 Here's a question, however: how does one get probes working for a 
 transparent LB setup?  If an rserver listens for connections on all 
 interfaces, then probes work fine, but return traffic from the uses the 
 machine's default IP (not the VIP that was originally queried) for the 
 source address of the return traffic.
 
 That's the default routing behavior for most platforms.  Some of them might 
 support some form of policy-based routing via ipfw fwd / route-to or similar 
 with other firewall mechanisms which would let the probes get returned from 
 some other source address if you want them to do so.
 
 Good to know--you'd definitely expect traffic to come back on the main 
 interface.  I've considered setting up some iptables rules to make this 
 happen, but if I can avoid it, so much the better.  Sounds like this is what 
 I need to do, however, if I want both probes and regular requests to work.

Perhaps I misunderstand, but if the internal boxes only have one IP, how can 
they not be using the right source address when replying to liveness probes 
from your LB or some other monitor?  Do you probe on an external IP and have 
something else doing NAT besides the LB itself?

Or do you setup a second IP on your reals which is what the LB sends traffic to?
(That's kinda what your lo:1 entry of 129.64.x.53 looked like.)

 What have people done to get probes working with transparent LB?  Are any 
 of you using NAT to handle your dns traffic?  Not tying up NAT tables seems 
 like the way to go, but lack of probes is a deal-breaker on this end.
 
 The locals around here have the luxury of a /8 netblock, so they can set up 
 the reals behind a LB using publicly routable IPs and never need to NAT 
 DNS traffic.  Folks with a more limited number of routable IPs might well 
 load-balance to reals on an unrouteable private network range behind NAT, 
 in which case they wouldn't configure those boxes with public IPs.
 
 We're on a /16, so we have plenty of public IPs (though not as many as you!) 
 to play with, too.  The choice to NAT has historically been more about 
 security than anything else--if something is privately IPed, we've got it on 
 a special VLAN as well.

OK.  I've seen too many examples of traffic leaking between VLANs to completely 
trust their isolation, but good security ought to involve many layers which 
don't have to each be perfect to still provide worthwhile benefits.

 Presumably those reals are still behind a virtual ip address that's also 
 public, right?

Yes, presumably.  :)

 If that's the case, how do you keep your probes (to the IP behind the LB) 
 working, while still sending back regular DNS traffic (that was originally 
 sent to the virtual IP) with the VIP as a source address?  Seems like you get 
 only one or the other unless you tweak iptables/ipfw/etc.

There are two types of probes that I'm familiar with.

One involves liveness probes between the LB itself to the reals, which is done 
so that the LB can decide which of the reals are available and should be 
getting traffic.  For these, the reals are replying using their own IPs.  The 
other type of probe is to the VIP; the LB forwards traffic to the reals, gets a 
reply, and then proxies or rewrites these responses and returns them to the 
origin of the probe using the IP of the VIP.  Or you can short-cut replies 
going back via the LB using DSR (Direct Server Return), or whatever your LB 
vendor calls that functionality...

All of your normal clients would only be talking to the VIP, and would only see 
traffic coming from the VIP's IP.

 I appreciate the help, Chuck!  Would you mind PMing me or posting your 
 configs?  That might be the most useful.

Pretend that some folks nearby are using Citrix Netscaler MPX boxes rather than 
Cisco hardware, so this might not be too useful to your case; an example config 
for a webserver would look something like:

add serviceGroup SomeService-svg HTTP -maxClient 0 -maxReq 0 -cip ENABLED 
x-user-addr -usip NO -useproxyport YES -cltTimeout 120 -svrTimeout 300 -CKA YES 
-TCPB YES -CMP NO
add lb vserver LB-SomeService-80 HTTP 1.2.3.4 80 -persistenceType NONE 
-cltTimeout 120
bind lb vserver LB-SomeService-80 SomeService-svg
bind serviceGroup SomeService-svg rserver1 8080
bind serviceGroup SomeService-svg rserver2 8080
bind serviceGroup SomeService-svg rserver3 8080
bind serviceGroup SomeService-svg rserver4 8080

[ This is a generic example for a webserver, or for similar things which use 
HTTP to communicate.  Another group handles DNS, so I don't have a generic 
example for that handy.  And yeah, NDA issues prevent me from being as 

Re: transparent DNS load-balancing with a Cisco ACE

2012-10-19 Thread Michael Hoskins (michoski)
-Original Message-

From: Chuck Swiger cswi...@mac.com
Date: Friday, October 19, 2012 5:09 PM
To: John Miller johnm...@brandeis.edu
Cc: DNS BIND bind-us...@isc.org
Subject: Re: transparent DNS load-balancing with a Cisco ACE

 
 We're on a /16, so we have plenty of public IPs (though not as many as
you!) to play with, too.  The choice to NAT has historically been more
about security than anything else--if something is privately IPed, we've
got it on a special VLAN as well.

OK.  I've seen too many examples of traffic leaking between VLANs to
completely trust their isolation, but good security ought to involve many
layers which don't have to each be perfect to still provide worthwhile
benefits.

NAT is not a security mechanism :-)

If that's the case, how do you keep your probes (to the IP behind the
LB) working, while still sending back regular DNS traffic (that was
originally sent to the virtual IP) with the VIP as a source address?
Seems like you get only one or the other unless you tweak
iptables/ipfw/etc.

There are two types of probes that I'm familiar with.

One involves liveness probes between the LB itself to the reals, which is
done so that the LB can decide which of the reals are available and
should be getting traffic.  For these, the reals are replying using their
own IPs.  The other type of probe is to the VIP; the LB forwards traffic
to the reals, gets a reply, and then proxies or rewrites these responses
and returns them to the origin of the probe using the IP of the VIP.  Or
you can short-cut replies going back via the LB using DSR (Direct
Server Return), or whatever your LB vendor calls that functionality...

All of your normal clients would only be talking to the VIP, and would
only see traffic coming from the VIP's IP.

Hmm, I must have got lucky or this is being over-thought...  I use ACE
with Linux/BIND reals and DSR.  No problems with traffic or probes.  I
would avoid NAT for DNS.  It's certainly possible, though NDAs prevent
me from copy/pasting configs.  :-(

Ugly URLs suck almost as much as NDAs:

http://docwiki.cisco.com/wiki/Cisco_Application_Control_Engine_%28ACE%29_Configuration_Examples_--_Server_Load-Balancing_Configuration_Examples#Example_of_a_UDP_Probe_Load-Balancing_Configuration

Better:

https://lists.isc.org/pipermail/bind-users/2012-March/087105.html

While you're at it, test your fixups...  :-)

https://www.dns-oarc.net/oarc/services/replysizetest/

Good luck!
