> Subject: Re: Network throughput limits for local VM <-> VM
> communication
> 
> Fischer, Anna wrote:
> > Not sure I understand. As far as I can see the packets are replicated
> on the tun/tap interface before they actually enter the bridge. So this
> is not about the bridge learning MAC addresses and flooding frames to
> unknown destinations. So I think this is different.
> >
> 
> Okay.
> 
> You said:
> 
> > However, without VLANs, the tun
> > interface will pass packets to all tap interfaces. It has to, as it
> > doesn't know to which one the packet has to go to.
> 
> Well, it shouldn't.  The tun interface should pass the packets to just
> one tap interface.
> 
> Can you post the qemu command line you're using?  There's a gotcha
> there
> that can result in what you're seeing.

Sorry for the late reply on this issue. The command line I am using looks 
roughly like this:

/usr/bin/qemu-system-x86_64 -m 1024 -smp 2 -name FC10-2 -uuid 
b811b278-fae2-a3cc-d51d-8f5b078b2477 -boot c -drive 
file=,if=ide,media=cdrom,index=2 -drive 
file=/var/lib/libvirt/images/FC10-2.img,if=virtio,index=0,boot=on -net 
nic,macaddr=54:52:00:11:ae:79,model=e1000 -net tap -net 
nic,macaddr=54:52:00:11:ae:78,model=e1000 -net tap -serial pty -parallel none 
-usb -vnc 127.0.0.1:2 -k en-gb -soundhw es1370
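One thing I am wondering about, in case this is the gotcha you mean: as far as I understand, qemu's legacy -net options that don't carry an explicit vlan= number all join "vlan" 0, which behaves like a hub and replicates every packet to every attached endpoint — so both taps would see both NICs' traffic. If that is it, separating the two nic/tap pairs might look roughly like this (a sketch only, untested on my setup; the drive/display options are unchanged and elided):

```shell
# Sketch: give each nic/tap pair its own qemu "vlan" (hub number) so
# packets arriving on one tap are not replicated onto the other tap.
# (drive, -serial, -usb, -vnc, -soundhw options as in the command above)
/usr/bin/qemu-system-x86_64 -m 1024 -smp 2 -name FC10-2 \
    -net nic,vlan=0,macaddr=54:52:00:11:ae:79,model=e1000 -net tap,vlan=0 \
    -net nic,vlan=1,macaddr=54:52:00:11:ae:78,model=e1000 -net tap,vlan=1
```

That way vnet0/vnet1 and vnet2/vnet3 would stay isolated inside qemu as well as on the host bridges.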

This is my "routing VM" that has two network interfaces and routes packets 
between two subnets. It has one interface plugged into bridge virbr0 and the 
other interface is plugged into virbr1:

brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.8ac1d18c63ec       no              vnet0
                                                        vnet1
virbr1          8000.2ebfcbb9ed70       no              vnet2
                                                        vnet3

If I use the e1000 virtual NIC model, performance drops significantly compared 
to virtio_net. However, with virtio_net the network stalls after a few seconds 
of high-throughput traffic (as I mentioned in my previous post). Just to 
reiterate my scenario: I run three guests on the same physical machine, and one 
of them is my routing VM, routing IP traffic between the other two.

I am also wondering why CPU utilization is not maxed out even though throughput 
will not go any higher. I do not understand what is stopping KVM from using 
more CPU for guest I/O processing — there is nothing else running on my 
machine. I have analyzed how much CPU each KVM thread uses, and the thread 
running the routing VM's VCPU, which processes the e1000 virtual NIC's 
interrupts, uses the most. Is there any way I can optimize my network set-up? 
Maybe some specific configuration of the e1000 driver within the guest? Are 
there any known issues with this?

I also see very different CPU utilization and network throughput figures when 
pinning threads to CPU cores using taskset. At one point I managed to double 
the throughput, but I could not reproduce that setup for some reason. What are 
the major issues that I would need to pay attention to when pinning threads to 
cores in order to optimize my specific set-up so that I can achieve better 
network I/O performance?
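For reference, what I have been doing with taskset looks roughly like the 
following (a sketch of my procedure; the pgrep pattern and the round-robin 
core assignment are just what I tried, not a recommended policy):

```shell
# Sketch: pin every thread of the routing VM's qemu process to a core,
# round-robin across the available cores. Assumes the process can be
# found by its -name argument (FC10-2).
PID=$(pgrep -f 'qemu-system-x86_64.*FC10-2')
CORE=0
for TID in /proc/"$PID"/task/*; do
    taskset -pc "$CORE" "${TID##*/}"      # bind this thread to $CORE
    CORE=$(( (CORE + 1) % $(nproc) ))
done
```

The run where throughput doubled presumably had the VCPU thread and the tap 
I/O on cores sharing a cache, but I could not pin down (so to speak) which 
placement it was.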

Thanks for your help.

Anna
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html