Hi,
a little personal disclaimer first:
this might not be perfect, as I don't know your case in enough detail.
And in general there is always "one more tuning" you can apply :-)
An example of an alternative would be to partition the NVMe and pass the
partitions through as block devices => less flexible for life cycle management,
but usually much faster - it is up to your needs entirely.
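For illustration, such a block-device passthrough disk could look like the
sketch below (the device path /dev/nvme0n1p1 is just a placeholder for
whatever partition you'd create):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/nvme0n1p1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```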
All of these come with tradeoffs - overhead, features, and different effects
on different workload patterns.
For now my suggestion tries to stick to the qcow2-image-on-filesystem
option that you have chosen and just tune that one.
Let's summarize the benefits of NVMe passthrough compared to images on a host FS:
- lower latency
- more and deeper queues
- no host caching adding overhead
- features
- less overhead
- ...
Let's take care of a few of those ...
A disk by default will start like:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/uvtool/libvirt/images/yourdisk.qcow'/>
<target dev='vdb' bus='virtio'/>
</disk>
Yours will most likely look very similar.
Note that I prefer the XML over command line options, as it exposes more
features and is easier to document and audit later on (even if generated).
#1 Queues
By default a virtio block device has only one queue, which is enough in many
cases.
But on huge systems with massively parallel workloads it might not be.
Add something like - queues='8' - to your driver section.
example:
<driver name='qemu' type='qcow2' queues='8'/>
#2 Caching
Not always, but for high speed I/O, host caching is often a drawback.
You can disable it via - cache='none' - in your driver section.
example:
<driver name='qemu' type='qcow2' queues='8' cache='none'/>
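As a side note, cache='none' is commonly paired with io='native' so QEMU uses
Linux AIO instead of the thread-pool default - whether that helps depends on
your workload, but a sketch of the combination would be:

```xml
<driver name='qemu' type='qcow2' queues='8' cache='none' io='native'/>
```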
#3 Features
NVMe disks, after all, are flash, and you'll want to make sure the device
learns about freed space (discard) in the long run. virtio-blk won't pass
discards through, but virtio-scsi will.
<controller type='scsi' index='0' model='virtio-scsi'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</controller>
Combining that with the above needs a slight adaptation: with virtio-scsi the
controller owns the queues, so queues='x' moves here.
E.g. read https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/ and
links from there.
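Note that for discards to actually be passed through, the disk's driver also
needs discard='unmap' set - a sketch of a disk attached to the virtio-scsi
controller above could look like:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' discard='unmap'/>
  <source file='/var/lib/uvtool/libvirt/images/yourdisk.qcow'/>
  <target dev='sda' bus='scsi'/>
</disk>
```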
#4 IOThreads
By default your VM might not have enough I/O threads.
If you have multiple high performing devices you might want to assign each of
them an individual one.
You'll need to allocate more threads and then assign them to your disk(s) or
controller(s).
Again this depends on your setup; here I only outline how to unblock
multiple disks from each other's iothread.
1. in the domain section the threads
<iothreads>4</iothreads>
2. if you use virtio-scsi as above, you assign iothreads at the controller
   level and might therefore want one controller per disk (in other cases you
   would assign the iothread to the disk itself): assign a separate thread to
   each controller and give each disk its own controller.
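A sketch of that one-controller-per-disk layout, assuming two disks and the
four iothreads allocated in step 1 (indices here are placeholders):

```xml
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver iothread='1'/>
</controller>
<controller type='scsi' index='1' model='virtio-scsi'>
  <driver iothread='2'/>
</controller>
<disk type='file' device='disk'>
  ...
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk'>
  ...
  <target dev='sdb' bus='scsi'/>
  <address type='drive' controller='1' bus='0' target='0' unit='0'/>
</disk>
```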
---
Overall modified from the initial example:
<domain>
  ...
  <iothreads>4</iothreads>
  <devices>
    ...
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver queues='8' iothread='3'/>
    </controller>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/uvtool/libvirt/images/yourdisk.qcow'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
  </devices>
</domain>
---
Some choices in this setup - e.g. using qcow2 files - will limit some of the
benefits above.
At least in the past, the emulation limited the concurrency of operations, and
the passthrough of discards may be less effective.
Raw files, and even more so partitions, will be much faster - but as I said,
it all depends on your specific needs.
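If you want to try a raw image as a middle ground, only the driver type
changes - for example:

```xml
<driver name='qemu' type='raw' cache='none'/>
```

(the image itself would need to be converted to raw format beforehand).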
You might need to evaluate all the alternatives between qcow2 files and
NVMe passthrough (which are polar opposites), rethink what you
need in terms of features/performance, and then pick YOUR tradeoff.
You'll find generic documentation about what I used above at:
https://libvirt.org/formatdomain.html#elementsDisks
https://libvirt.org/formatdomain.html#elementsIOThreadsAllocation
As you see this is only the tip of the iceberg :-)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853042
Title:
Ubuntu 18.04 - vm disk i/o performance issue when using file system
passthrough
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-power-systems/+bug/1853042/+subscriptions