Re: Transparent Load Balancing Gateway
On 5/3/06, Hisham Mardam Bey [EMAIL PROTECTED] wrote:

> Persistent connections seem to disconnect after a while
>
>     set timeout { adaptive.start 6000, adaptive.end 12000 }
>     set limit states 2

You probably want to re-read how adaptive timeouts work. If the number of
active states reaches adaptive.end, the entire state table is effectively
flushed.

-- Jon
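For reference, a sketch of how those two knobs interact (the numbers here are hypothetical, not a recommendation): timeouts run at their configured values while the state count stays below adaptive.start, then are scaled linearly toward zero as the count approaches adaptive.end, so states begin expiring almost immediately once the table is nearly full.

```pf
# pf.conf sketch -- hypothetical numbers. Below 6000 states,
# timeouts keep their configured values; between 6000 and 12000
# they are scaled linearly down toward zero, so a nearly full
# table sheds states very aggressively.
set limit states 10000
set timeout { adaptive.start 6000, adaptive.end 12000 }
```

The point is that adaptive.start should sit comfortably above the state count a healthy workload actually reaches, or long-lived idle connections will be timed out early.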
Re: Transparent Load Balancing Gateway
On 5/4/06, Hisham Mardam Bey [EMAIL PROTECTED] wrote:

> Does anyone know why I am having those sudden drops in connections?

I have an update on the situation. Here's what I did:

    [client]--[loadbal]--[my 2 backends]-[samba server]

I mounted a samba share from the samba server onto the client machine, and
I tried to use mplayer to play the file. The file starts up, plays a few
frames, then it starts to break up. I had logging set to misc using pfctl,
and this is what I noticed:

May 4 21:24:12 openbsd-be1 /bsd: pf_map_addr: selected address 172.16.2.1
May 4 21:25:00 openbsd-be1 /bsd: pf_map_addr: selected address 172.16.2.2
May 4 21:25:00 openbsd-be1 /bsd: pf: loose state match: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:53533 [lo=3175794114 high=3175794116 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 PA seq=3175794115 ack=0 len=72 ackskew=0 pkts=2:0
May 4 21:25:00 openbsd-be1 /bsd: pf: loose state match: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:53533 [lo=3175794187 high=3175794116 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 PA seq=3175794187 ack=0 len=168 ackskew=0 pkts=3:0
May 4 21:25:00 openbsd-be1 /bsd: pf: loose state match: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:53533 [lo=3175794355 high=3175794116 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 PA seq=3175794355 ack=0 len=144 ackskew=0 pkts=4:0
..
..
May 4 21:25:21 openbsd-be1 /bsd: pf_map_addr: selected address 172.16.2.1
May 4 21:25:59 openbsd-be1 /bsd: pf_map_addr: selected address 172.16.2.2
May 4 21:25:59 openbsd-be1 /bsd: pf: loose state match: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:51976 [lo=3591429406 high=3591429408 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 PA seq=3591429407 ack=0 len=72 ackskew=0 pkts=2:0
May 4 21:25:59 openbsd-be1 /bsd: pf: loose state match: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:51976 [lo=3591429479 high=3591429408 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 PA seq=3591429479 ack=0 len=168 ackskew=0 pkts=3:0
..
..
May 4 21:27:47 openbsd-be1 /bsd: pf: loose state match: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:64164 [lo=2612578234 high=2612511261 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 A seq=2612578234 ack=0 len=0 ackskew=0 pkts=1964:0
May 4 21:27:47 openbsd-be1 /bsd: pf: BAD state: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:64164 [lo=2612578234 high=2612511261 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 PA seq=2612578234 ack=0 len=63 ackskew=0 pkts=1965:0 dir=in,fwd
May 4 21:27:47 openbsd-be1 /bsd: pf: State failure on: 1 | 5
May 4 21:27:48 openbsd-be1 /bsd: pf: BAD state: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:64164 [lo=2612578234 high=2612511261 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 PA seq=2612578234 ack=0 len=63 ackskew=0 pkts=1965:0 dir=in,fwd
May 4 21:27:48 openbsd-be1 /bsd: pf: State failure on: 1 | 5
..
..
May 4 21:27:58 openbsd-be1 /bsd: pf: loose state match: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:64164 [lo=2612578234 high=2612511261 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 RA seq=2612578234 ack=0 len=0 ackskew=0 pkts=1965:0
May 4 21:28:04 openbsd-be1 /bsd: pf_map_addr: selected address 172.16.2.2
May 4 21:28:04 openbsd-be1 /bsd: pf: loose state match: TCP 192.168.0.223:139 192.168.0.223:139 172.16.2.4:57934 [lo=2945241422 high=2945241424 win=65535 modulator=0] [lo=0 high=65535 win=1 modulator=0] 2:0 PA seq=2945241423 ack=0 len=72 ackskew=0 pkts=2:0
..
..

The first couple of selects were happening when I was listing directories
over the smb share. Then some packets came in correctly when playing the
movie. When I started getting breakups in the movie, I noticed the errors.
Then, by the time it had reselected an address from the map, mplayer had
quit. I hope this sheds some more light on the situation.

Regards,
hisham.

--
Hisham Mardam Bey
MSc (Computer Science)
http://hisham.cc/
+9613609386
Codito Ergo Sum (I Code Therefore I Am)
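Note that in the "BAD state" lines above the tracked window has high < lo, so no segment can fit inside it. A simplified sketch of the check pf is applying there, written in Python rather than pf's actual C code, and with the exact comparison simplified as an assumption (the real check also involves ACK values and window scaling):

```python
def seq_geq(a, b):
    # 32-bit TCP serial-number comparison: is a >= b modulo 2**32?
    return ((a - b) & 0xffffffff) < 0x80000000

def seq_in_window(seq, seg_len, lo, high):
    # A segment passes only if [seq, seq + seg_len] lies inside the
    # tracked window [lo, high]; otherwise pf logs a BAD state and
    # drops the packet.
    return seq_geq(seq, lo) and seq_geq(high, seq + seg_len)

# Values from the 21:27:47 log line: the window has high < lo,
# so the 63-byte PA segment cannot fit and the packet is rejected.
print(seq_in_window(2612578234, 63, 2612578234, 2612511261))  # -> False
```

This is why the connection stalls: once the state's window no longer matches what the client is actually sending, every further data packet fails the check until the state is replaced.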
Re: Transparent Load Balancing Gateway
On Thu, May 04, 2006 at 01:46:59PM +0300, Hisham Mardam Bey wrote:

> I have an update on the situation. Here's what I did:
>
>     [client]--[loadbal]--[my 2 backends]-[samba server]

Doing this with only one interface (and bouncing incoming packets out
through the same interface) sounds like asking for a headache.

What happens in your case is that the pf box doesn't see the reply
packets, i.e. it only sees one half of the connection (client to server).
The state entries don't advance properly in this case, pf doesn't see the
server advertise window sizes, etc., and starts to block packets from the
client to the server.

The problem is similar to the one described on

    http://www.openbsd.org/faq/pf/rdr.html#reflect

i.e. the server sends its replies directly to the client, as the client is
on the same network and the server has learned its MAC address.

If you want to filter statefully, you have to make sure pf sees all
packets (both directions) of connections. If and how that's possible in
your case is, well, YOUR headache. I'd not try bouncing packets out with a
single-interface setup, but use two interfaces, possibly bridging. ;)

Daniel
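A minimal sketch of the two-interface variant Daniel suggests, with hypothetical interface names (fxp0 facing the clients, rl0 facing the servers) -- these names and the file below are assumptions, not from the thread. On OpenBSD releases of that era the bridge members were listed in /etc/bridgename.bridge0 and applied at boot via brconfig:

```
# /etc/bridgename.bridge0 -- sketch; interface names are assumptions
add fxp0
add rl0
up
```

With both member interfaces up and the bridge forwarding between them, every packet in both directions crosses the pf box, so stateful filtering sees the full conversation.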
Re: Transparent Load Balancing Gateway
On 5/4/06, Daniel Hartmeier [EMAIL PROTECTED] wrote:

Thanks a lot for the info, Daniel.

> > I have an update on the situation. Here's what I did:
> >
> >     [client]--[loadbal]--[my 2 backends]-[samba server]
>
> Doing this with only one interface (and bouncing incoming packets out
> through the same interface) sounds like asking for a headache.

Indeed, and it is. (=

> What happens in your case is that the pf box doesn't see the reply
> packets, i.e. it only sees one half of the connection (client to
> server). The state entries don't advance properly in this case, pf
> doesn't see the server advertise window sizes, etc. and starts to block
> packets from the client to the server. The problem is similar to the
> one described on http://www.openbsd.org/faq/pf/rdr.html#reflect i.e.
> the server sends its replies directly to the client, as the client is
> on the same network and the server has learned its MAC address.

That makes a lot of sense and explains a lot of the problems.

> If you want to filter statefully, you have to make sure pf sees all
> packets (both directions) of connections. If and how that's possible in
> your case, is, well, YOUR headache. I'd not try bouncing packets out
> with a single interface setup, but use two interfaces, possibly
> bridging. ;)

I was thinking about something of the sort. How would I be able to use the
bridge to redirect the packets, though? The clients need to see a single
IP as their gateway, say 172.16.2.1, and when they send packets towards
that gateway, it needs to load balance their requests. If we have a
bridge, how would it act and what exactly would it do?
Re: Transparent Load Balancing Gateway
On 5/4/06, Hisham Mardam Bey [EMAIL PROTECTED] wrote:

> I was thinking about something of the sort. How would I be able to use
> the bridge to redirect the packets, though? The clients need to see a
> single IP as their gateway, say 172.16.2.1, and when they send packets
> towards that gateway, it needs to load balance their requests. If we
> have a bridge, how would it act and what exactly would it do?

I managed to solve the problem. The main idea is to have a bridge that
sits between the clients and the backend servers. I have two backend
gateways:

    172.16.2.1
    172.16.2.2

The bridge has 172.16.2.3 on it for two reasons:

1- remote administration
2- it makes it easier to keep track of all the ARPs (when I didn't give it
   an IP at all, the setup didn't work for some reason; it never got any
   ARPs).

The bridge has the following pf.conf:

    # Server side nic
    servers_if = rl0
    # Client side nic
    clients_if = fxp0
    # Client internal network
    clients_net = { ! 172.16.2.1, ! 172.16.2.2, ! 172.16.2.3, 172.16.2.0/24 }
    # Backend servers
    be_servers = { 172.16.2.1, 172.16.2.2 }
    # Our servers
    servers = { $be_servers, 172.16.2.3 }
    # Internet, everything else
    internet = { ! 172.16.2.1, ! 172.16.2.2, ! 172.16.2.3 }

    pass out log on $servers_if route-to \
        { ($servers_if 172.16.2.1), ($servers_if 172.16.2.2) } round-robin \
        from $clients_net to any keep state

    # Allowed incoming services
    pass in log on $servers_if from any to any \
        keep state

    # Allowed outgoing services
    pass out log on $clients_if all keep state

The result of this is that all clients have their gateway set to
172.16.2.1 (one of the backend servers) and their packets have to pass
through the bridge to reach that gateway. As the packets pass through, the
bridge intercepts them and routes them to one of the two gateways using
round-robin while remembering state. My previous disconnect problems with
SSH and IRC, and even the mplayer streaming problem, have all disappeared.
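One possible refinement, offered as a suggestion rather than something tested in this thread: pf's sticky-address pool option pins each source address to the gateway it was first mapped to, which can help protocols that react badly when later connections from the same client land on the other backend.

```pf
# Sketch only: the same rule with sticky-address added, so every
# connection from a given client IP is routed to the same backend
# gateway instead of alternating between the two.
pass out log on $servers_if route-to \
    { ($servers_if 172.16.2.1), ($servers_if 172.16.2.2) } \
    round-robin sticky-address \
    from $clients_net to any keep state
```

The trade-off is less even load distribution, since balancing then happens per client rather than per connection.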
It seems like this setup is quite fast and stable, and it allows the
network to grow (with more backend servers being added) very easily. I'm
going to test it some more for a couple of days and let you guys know if I
run into any problems. Maybe after it's proven to be stable, I'll write a
small how-to about this scenario.

I just want to thank everyone that helped out, both on the mailing list
and on IRC (you guys know who you are, hehe). OpenBSD / PF is my new
firewall and router recommendation from now on. Great docs, great
community, great OS.

Best Regards,
hisham.