Hello everyone,

Recently I designed a specific networking layout for a Xen system,
currently running on two dedicated servers at a certain hosting
company. I'd like to request your comments and ideas on it. It's a bit
off topic, but not entirely.

While the whole thing works, I do have some remaining questions; I'll
ask those at the end.

The ground work: a 'plain' Xen server with fewer than 10 VMs;
OpenVSwitch; routing; filtering and traffic analysis.

Each VM has its own global IP or subnet (like a /30). Dom0 also has
an IP. When I request a single new secondary IP, my hosting company
assigns a 'random' one from one of their subnets; they have many, and
they route those IPs over the server's main IP.

Via their website I am allowed to route any IP except for the main one
to any other server. That allows me to do some "poor man's HA" tasks:
each VM has its storage on a DRBD-backed LVM2 LV, so I can move a VM
from physical server A to B by shutting down that VM on server A,
moving the IP and the DRBD primary to server B, altering routes on A
and B, and finally booting the VM on server B (sketched below).
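
In rough terms the move looks like this (just a sketch; the DRBD
resource name "foo", the xl toolstack and the config path are
assumptions, and the route changes are shown further down):

  # on server A
  xl shutdown -w foo            # stop the VM and wait for it
  drbdadm secondary foo         # give up the DRBD primary role
  # ...move the IP to server B via the hosting company's website...

  # on server B
  drbdadm primary foo           # take over the DRBD primary role
  xl create /etc/xen/foo.cfg    # boot the VM here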

The server's physical connection (eth0) is assigned, with PCI
passthrough, to a firewall VM. So packets arrive at eth0(fw), are
filtered by Shorewall, and are then sent through eth1(fw), where they
reach dom0's OpenVSwitch.
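
For reference, the passthrough itself is only a couple of steps (a
sketch; the PCI address 01:00.0 is made up, and I'm assuming the xl
toolstack):

  dom0: xl pci-assignable-add 01:00.0    # hand eth0's PCI device to pciback

and in VM(fw)'s domain config:

  pci = [ '01:00.0' ]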

dom0's port has proxyarp=1, so all packets flow from VM(fw)'s eth1(fw),
via dom0, to the other VMs. I know that this is not required, but I
wanted it this way because OpenVSwitch can do port mirroring. I am using
that to duplicate the packets from eth1(fw) to a switch port 'ids0',
where a separate VM is running Snort. The other VMs are each on their
own VLAN bridge (another function of OpenVSwitch).
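
The mirror itself is created along these lines (a sketch; I'm assuming
VM(fw)'s port on vSwitch0 is called vm_fw0, as in the sysctls further
down):

  dom0: ovs-vsctl -- set Bridge vSwitch0 mirrors=@m \
          -- --id=@p get Port vm_fw0 \
          -- --id=@out get Port ids0 \
          -- --id=@m create Mirror name=fwmirror \
               select-src-port=@p select-dst-port=@p output-port=@out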

Doing it this way allows me to analyse only the filtered packets, i.e.
those that have already passed Shorewall. I couldn't find any other way
to do this: packet filtering happens after packet switching, so a
mirror would otherwise see packets before they have been filtered.

Here's how it works. Say I have one VM "foo" with IP address
203.0.113.24, connected to "fake bridge" vbr10, which has VLAN tag 10.
This is how it looks and what I have to do to make it work:

.------.(fw).------.   <-----.
| eth1 |====| eth0 |===| ISP |
'------'    '------'   '----->
   ||
.----------.(mir).------.   .-----------.
| vSwitch0 |=====| ids0 |---| vm(snort) |
'----------'     '------'   '-----------'
   ||     \\
.------.  .-------.    .-------.
| dom0 |  | vbr10 |----| VMfoo |
'------'  '-------' 10 '-------'


In the above diagram I did not include the port used by VM(ids) for its
own ingress/egress. It actually has eth0 and eth1, where eth1 carries
the mirrored traffic.
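
For completeness, the switch and the fake bridge come straight from
OpenVSwitch (a sketch; the vif name vif-foo is made up, use whatever
your toolstack hands out):

  dom0: ovs-vsctl add-br vSwitch0               # the real bridge
  dom0: ovs-vsctl add-br vbr10 vSwitch0 10      # fake bridge on VLAN 10
  dom0: ovs-vsctl add-port vbr10 vif-foo        # VMfoo's vif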

Commands:
VM(fw): sysctl net.ipv4.conf.eth1.proxy_arp=1    # answer ARP for the routed IPs
VM(fw): ip route add 203.0.113.24 dev eth1       # host route towards dom0

dom0: sysctl net.ipv4.conf.vm_fw0.proxy_arp=1    # port facing VM(fw)
dom0: sysctl net.ipv4.conf.vbr10.proxy_arp=1     # port facing VM(foo)
dom0: ip route add 203.0.113.24 dev vbr10        # host route towards VM(foo)
dom0: ip route add default dev dom0              # everything else towards VM(fw)

VM(foo): ip addr add 203.0.113.24/24 brd + dev eth0
VM(foo): ip route add default via 203.0.113.0 dev eth0

The 'via' address is bogus, but it works because the dom0 side has
proxy_arp=1 and answers the ARP request for it.
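
You can watch the proxy ARP trick from VM(foo) if you like (purely
illustrative):

  VM(foo): tcpdump -ni eth0 arp
  # the who-has for 203.0.113.0 is answered with the dom0 side's MAC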

Maybe you already figured out the downside: routing has to be set up in
two places, on VM(fw) and on dom0. This means that, when I move VM(foo)
from server A to B, I have to issue on A:

dom0:   ip route del 203.0.113.24 dev vbr10
VM(fw): ip route del 203.0.113.24 dev eth1

And the opposite on server B.

It can also become confusing when a VM has been moved. Say VM(foo) runs
on server A; then, when VM(foo) contacts a VM on server B, VM(fw) on
server A sees the address 203.0.113.24 in zone LCL while VM(fw) on
server B sees that address in zone INET. Moving the VM means the
opposite will happen. Luckily I have few VMs and don't move them around
often.

One remaining query I have is specific to Shorewall and has to do with
zones. I figured there are two approaches (sketches of both follow the
list):

1) Make one big zone for the hosts that are local to the physical
server; i.e. on VM(fw) make a zone LCL with subzones LCL:foo, LCL:bar,
define their IPs in /etc/shorewall/hosts, add shared rules under the
LCL zone and specific rules under the foo/bar zones; or

2) Only assign an LCL zone to eth1 in /etc/shorewall/interfaces; use
params for the internal hosts, e.g. FOO="LCL:203.0.113.24", and use LCL
for the shared rules and $FOO for the specific rules.
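
To make those concrete, option 1 would look roughly like this (a sketch
only; file contents are simplified, and 'bar' and the example ports are
made up):

  /etc/shorewall/zones:
    #ZONE     TYPE
    fw        firewall
    INET      ipv4
    LCL       ipv4
    foo:LCL   ipv4

  /etc/shorewall/interfaces:
    #ZONE  INTERFACE
    INET   eth0
    LCL    eth1

  /etc/shorewall/hosts:
    #ZONE  HOST(S)
    foo    eth1:203.0.113.24

and option 2 like this:

  /etc/shorewall/params:
    FOO=LCL:203.0.113.24

  /etc/shorewall/rules:
    #ACTION  SOURCE  DEST  PROTO  DEST PORT(S)
    ACCEPT   LCL     INET  tcp    80     # shared rule
    ACCEPT   INET    $FOO  tcp    22     # foo-specific rule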

I am looking forward to your comments/suggestions/opinions; if you read
this far you probably have some... :)

-- 
Thanks,
Mark van Dijk.               ,---------------------------------
----------------------------'         Thu Aug 30 07:57 UTC 2012
Today is Boomtime, the 23rd day of Bureaucracy in the YOLD 3178
