On Tue, 9 Jun 2020, Ken Smith via GLLUG wrote:

Hi All,

While in lockdown I decided to do some performance testing on KVM. I had believed that passing a block device through to a guest, rather than using a QCOW2 file, would give better performance. I wanted to see whether that was true, and whether using iSCSI storage was any better or worse.


Interesting; I've been looking into this myself, trying to improve
performance and reduce CPU usage.

This is a random file I happened to have lying around:
 scp _usr.dmp localhost:/mnt/nobackup/
 _usr.dmp 100% 1990MB 149.9MB/s   00:13

Using nc as the listener (no encryption), with bash's /dev/tcp on the sending side:
time cat _usr.dmp >/dev/tcp/::1/4444
real    0m5.617s

When I copy it over the network (1Gbit) I get:
 scp _usr.dmp xen17:/dev/shm/
 _usr.dmp 100% 1990MB  55.5MB/s   00:35

time cat _usr.dmp >/dev/tcp/fe80::d250:99ff:fec1:5e59%usb0/4444
real    0m19.093s

(1990MB in 19s is about 104 MB/s, pretty close to the theoretical maximum for a 1Gbit network!)
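
(For completeness, the receiving end is just nc listening and throwing the stream away; the exact flags vary between netcat variants, this assumes the OpenBSD-style nc:)

 # receiver: listen on TCP 4444 and discard whatever arrives
 nc -6 -l 4444 >/dev/null
 # sender: bash's /dev/tcp pseudo-device opens the connection
 time cat _usr.dmp >/dev/tcp/::1/4444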


On to a virtual machine running on xen17:

#vm writing to /dev/zero (i.e. discarding the data, no disk involved)
 time cat _usr.dmp >/dev/tcp/fe80::216:3eff:fee0:7253%usb0/4444
real    0m19.798s

#vm writing to an iscsi device (on the xen17 host)
 time cat _usr.dmp >/dev/tcp/fe80::216:3eff:fee0:7253%usb0/4444
real    0m40.941s

#using ssh:
scp _usr.dmp debootstrap17:/mnt/tmp/x
_usr.dmp 100% 1990MB  26.9MB/s   01:14

#And when the vm has the device attached as a raw block device, not via iscsi:
 time cat _usr.dmp >/dev/tcp/fe80::216:3eff:fee0:7253%usb0/4444
real    0m34.968s

#And via ssh:
scp _usr.dmp debootstrap17:/mnt/tmp/x
_usr.dmp 100% 1990MB  30.1MB/s   01:06


In my particular case, using ssh to move files on the LAN is by far the
biggest hit, and ssh tends to be used for everything nowadays. I will
probably patch ssh at some point to allow the null cipher, so encryption
can be disabled in .ssh/config on a per-host basis.
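
(The HPN-SSH patches already add a 'none' cipher; assuming a client and server built with them, a per-host entry would look roughly like this:)

 # ~/.ssh/config -- assumes HPN-SSH patched binaries on both ends
 Host xen17
     NoneEnabled yes   # allow the null cipher
     NoneSwitch yes    # switch to it once authentication has completed

Authentication still happens over an encrypted channel; only the bulk data transfer goes in the clear.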

xen17 is an Intel(R) Celeron(R) CPU J1900 @ 1.99GHz with 16GB RAM, and
the source machine was an Intel(R) Core(TM) i3-7100U CPU @ 2.40GHz.

Tim.


My test hardware is quite modest, and this may have adversely affected what I measured. The processor is an Intel Core 2 6300 @ 1.86GHz with VT-x support. It shows 3733 BogoMIPS at startup. There's 8GB RAM and an Intel 82801HB SATA controller on a Gigabyte motherboard. The disks are two 3TB 7200RPM SATA drives set up with a RAID 1 LVM Ext3 partition, as well as other non-RAID partitions to use for testing.

I used Fedora 32 as the KVM host, and my testing was with CentOS 8 as a guest.

On the host I got 60 MB/s write and 143 MB/s read on RAID1/LVM/Ext3. I wrote and read 10GB files using dd; 10GB is enough to overflow any memory-based caching. Without LVM that changed to 80 MB/s write and 149 MB/s read.
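
(The tests were of this general shape; paths are illustrative, and conv=fdatasync stops the page cache inflating the write figure:)

 # write a 10GB file, flushing before dd reports its rate
 dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=10240 conv=fdatasync
 # drop the page cache, then read the file back
 echo 3 > /proc/sys/vm/drop_caches
 dd if=/mnt/test/bigfile of=/dev/null bs=1M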

I tried all kinds of VM setups: normal QCOW2, and pass-through of block devices, both RAID/LVM and non-RAID/LVM. I consistently got around 14.5 MB/s write and 16.5 MB/s read, with similar figures for iSCSI operating from both file-based and block devices on the same host. The best I got by tweaking the performance settings in KVM was a modest improvement, to 15 MB/s write and 17 MB/s read.
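
(For concreteness, the settings I mean are the libvirt disk options; a pass-through stanza with the usually recommended virtio bus and cache='none', io='native' looks roughly like this, with the device path made up:)

 <disk type='block' device='disk'>
   <driver name='qemu' type='raw' cache='none' io='native'/>
   <source dev='/dev/vg0/guest_disk'/>
   <target dev='vda' bus='virtio'/>
 </disk>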

As a reference point, I ran a test on a configuration that has CentOS 6 on Hyper-V on an HP ML350 with 7200RPM SATA disks. I appreciate that's much more capable hardware, although SATA rather than SAS, but I measured 176 MB/s write and 331 MB/s read. That system uses a file on the underlying NTFS file system to provide a block device to the CentOS 6 VM.

I also tried booting the C8 guest via iSCSI from a CentOS 6 laptop, which worked fine on a 1G network. I measured 16.8 MB/s write and 23.1 MB/s read that way.
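
(For anyone wanting to reproduce this, QEMU can attach an iSCSI LUN directly via libiscsi; the target address and IQN below are made up:)

 # boot a guest straight from an iSCSI LUN (address and IQN illustrative)
 qemu-system-x86_64 -enable-kvm -m 2048 \
   -drive file=iscsi://192.168.0.10/iqn.2020-06.uk.example:centos8/0,format=raw,if=virtio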

I noticed an increase in processor load while running my dd tests, although I didn't take any actual measurements.

What to conclude? Is the hardware just not fast enough? Are newer processors better at abstracting the VM guests with less performance impact? What am I missing?

Any thoughts from virtualisation experts here most welcome.

Thanks

Ken





--
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug
