On Monday, December 29, 2014 02:55:49 PM lee wrote:
> thegeezer <thegee...@thegeezer.net> writes:
> > On 08/12/14 22:17, lee wrote:
> >> "J. Roeleveld" <jo...@antarean.org> writes:
> >>> create 1 bridge per physical network port
> >>> add the physical ports to the respective bridges
> >> 
> >> That tends to make the ports disappear, i.e. become unusable, because
> >> the bridge swallows them.
> > 
> > and if you pass the device then it becomes unusable to the host
> 
> The VM uses it instead, which is what I wanted :)
> 
> >>> pass virtual NICs to the VMs which are part of the bridges.
> >> 
> >> Doesn't that create more CPU load than passing the port?  And at some
> >> point, you may saturate the bandwidth of the port.
> > 
> > Some forward planning is needed. Obviously, if you have two file servers
> > using the same bridge, that bridge only has one physical port, and the
> > SAN is not part of the host, then you might run into trouble. However,
> > you can use bonding in various ways to group connections -- in this
> > way you can have a virtual NIC that actually sits on 2x 1GbE bonded
> > devices, or, if you choose to upgrade at a later stage, you can start
> > putting in 10GbE cards and the virtual machine sees nothing different,
> > access is just faster.
> > On the flip side, you can have four or more relatively low-bandwidth
> > virtual machines running on the same host through the same single
> > physical port.
> > Think of the bridge as an "internal, virtual network switch"... you
> > wouldn't load up a switch with 47 high-bandwidth servers and then
> > create a single uplink to the SAN / other network without seriously
> > considering bonding or partitioning in some way to reduce the
> > 47-into-1 bottleneck, and the same is true of the virtual switch
> > (bridge).
> > 
> > The difference is that with a physical switch you need to be there in
> > person to repatch connections or add a new one when you run out of
> > ports; with bridges, these limitations are largely overcome.
> 
> That all makes sense; my situation is different, though.  I plugged a
> dual port card into the server and wanted to use one of the ports for
> another internet connection and the other one for a separate network,
> with firewalling and routing in between.  You can't keep the traffic
> separate when it all goes over the same bridge, can you?

Not if it goes over the same bridge. But as they are virtual, you can make as 
many as you need.
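For example (just a sketch, MAC addresses and bridge names made up), give the 
router VM one vif per bridge and each one stays on its own segment:
***
# in the domU config file, e.g. /etc/xen/router.cfg
vif = [ 'mac=de:ad:be:ef:00:01,bridge=br_wan',
        'mac=de:ad:be:ef:00:02,bridge=br_lan' ]
***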

> And the file server could get its own physical port --- not because
> it's really needed but because it's possible.  I could plug in another
> dual-port card for that and experiment with bonding.

How many slots do you have for all those cards?
And don't forget there is a bandwidth limit on the PCI bus.

> However, I've changed plans and intend to use a workstation as a hybrid
> system to reduce power consumption and noise, and such a setup has other
> advantages, too.  I'll put Gentoo on it and probably use containers for
> the VMs.  Then I can still use the server for experiments and/or run
> distcc on it when I want to.

Most people use a low-power machine as a server and use the fast machine as a 
workstation to keep power consumption and noise down.

> >> The only issue I have with passing the port is that the kernel module
> >> must not be loaded from the initrd image.  So I don't see how fighting
> >> with the bridges would make things easier.
> > 
> > vif=[ 'mac=de:ad:be:ef:00:01,bridge=br0' ]
> > 
> > Am I missing where the fight is?
> 
> setting up the bridges

Really simple; there are plenty of guides around, including how to configure it 
using netifrc (which is installed by default on Gentoo).
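Roughly like this in /etc/conf.d/net (interface names and addresses are just 
examples, check the wiki for details; if I remember correctly you'll also need 
net-misc/bridge-utils for the bridge support):
***
# the physical port gets no address of its own
config_eth0="null"
# eth0 is enslaved to br0, the address lives on the bridge
bridge_br0="eth0"
config_br0="192.168.0.1/24"
rc_net_br0_need="net.eth0"
***
Then symlink /etc/init.d/net.br0 to net.lo and add it to the default runlevel, 
same as for any other interface.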

> no documentation about in which order a VM will see the devices

Same goes for physical devices. Use udev rules to name the interfaces 
logically based on the MAC address:
***
# cat 70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:16:01:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="lan"

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:16:01:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="dmz"
***

> a handful of bridges and VMs

Only 1 bridge per network segment is needed.

> a firewall/router VM with its passed-through port for PPPoE and three
> bridges

Not difficult; I had that for years until I moved the router to a separate 
machine.
(Needed something small to fit the room where it lives.)

> the xen documentation being an awful mess

A lot of it is outdated. A big cleanup would be useful there.

> an awful lot of complexity required

There is a logic to it. If you use the basic Xen install, you need to do every 
layer yourself.
You could also opt for a more ready-made product, like XCP, VMware ESX, ...
Those will do more for you, but also hide the interesting details to the point 
of being annoying.
A bit like using Ubuntu or Red Hat instead of Gentoo.

> Guess what, I still haven't found out how to actually back up and
> restore a VM residing in an LVM volume.  I find it annoying that LVM
> doesn't have any way of actually copying a LV.  It could be so easy if
> you could just do something like 'lvcopy lv_source
> other_host:/backups/lv_source_backup' and 'lvrestore
> other_host:/backups/lv_source_backup vg_target/lv_source' --- or store
> the copy of the LV in a local file somewhere.

LVs are block devices. How do you make a backup of an entire hard drive?
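Same answer here: snapshot the LV so the copy stays consistent while the VM 
keeps running, then copy the block device. Rough sketch, VG names and sizes 
made up:
***
# take a snapshot and stream it, compressed, to the other host
lvcreate -s -L 5G -n lv_source_snap vg0/lv_source
dd if=/dev/vg0/lv_source_snap bs=4M | gzip | \
    ssh other_host 'cat > /backups/lv_source_backup.gz'
lvremove vg0/lv_source_snap

# restore: create a big enough LV on the target and stream it back
lvcreate -L 20G -n lv_source vg_target
ssh other_host 'cat /backups/lv_source_backup.gz' | gunzip | \
    dd of=/dev/vg_target/lv_source bs=4M
***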

> Just why can't you?  ZFS apparently can do such things --- yet what's
> the difference in performance of ZFS compared to hardware raid?
> Software raid with MD makes for quite a slowdown.

What do you consider "hardware raid" in this comparison?
Most so-called hardware raid cards depend heavily on the host CPU to do all 
the calculations and the code used is extremely inefficient.
The Linux built-in software raid layer ALWAYS outperforms those cards.

The true hardware raid cards have their own calculation chips to do the heavy 
lifting. Those actually stand a chance of outperforming the Linux software raid 
layer. It depends on the spec of the host CPU and what you use the system for.

ZFS and BTRFS run fully on the host CPU, but have some additional logic built 
in which allows them to generally outperform hardware raid.

I could do with a hardware controller which can be used to off-load all the 
heavy lifting for the RAIDZ calculations away from the CPU. Even better if the 
work for deduplication could also be off-loaded the same way.
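(And this is indeed where ZFS gives you the "lvcopy" you were asking for: 
snapshots plus send/receive. Sketch, pool and dataset names made up:)
***
zfs snapshot tank/vm_disk@backup1
zfs send tank/vm_disk@backup1 | ssh other_host zfs receive backup/vm_disk
***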

> > The only issue with bridges is that if eth0 is in the bridge and you try
> > to use eth0 directly, with an IP address for example, things go a bit
> > weird, so you have to use br0 instead.
> > So don't do that.
> 
> Yes, it's very confusing.

It's just using a different name. Once it's configured, the network layer of 
the OS handles it for you.
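In practice it only means the address goes on br0 instead of eth0 (sketch, 
address made up):
***
# eth0 is enslaved to br0, so not this:
#   ip addr add 192.168.0.1/24 dev eth0
# but this:
ip addr add 192.168.0.1/24 dev br0
***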

--
Joost
