Hi,
Let me outline my expectations (which differ from yours):
You said "desired performance when using file system passthrough should be 
similar to the device passthrough"
IMHO that isn't right - it might be "desired", but it is unrealistic to expect it.

Usually you have a hierarchy:
1. device passthrough
2. using block devices
3. using images on Host Filesystem
4. using images on semi-remote cluster filesystems
(and a few special cases in between)

Going from 1 to 4 usually means decreasing performance but increasing
flexibility and manageability.
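To make the tiers concrete, here is a rough sketch of how tiers 2 and 3 typically look in a libvirt guest definition (illustrative only - device paths, image paths, and target names below are made-up placeholders):

```xml
<!-- 2. host block device passed as a virtual disk -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>

<!-- 3. image file living on a host filesystem -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/guest1.qcow2'/>
  <target dev='vdc' bus='virtio'/>
</disk>
```

(Tier 1, full device passthrough, would instead use a `<hostdev>` element handing the PCI device to the guest.)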

So, based on the initial report, I wanted to give a heads-up that this
might eventually end up as "please adjust expectations".

---

That said, let’s focus on what your setup actually looks like and if there are 
obvious improvements or hidden bugs.
Unfortunately "file system passthrough" isn't a clearly defined thing.
Could you:
1) outline which disk storage you attached to the host
2) which filesystem is on that storage
3) how you are passing files and/or images to the guest

Please answer the questions above and attach the libvirt guest XML of
both of your test cases.
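For context on why question 3 matters: "file system passthrough" could mean a host directory shared into the guest (e.g. via virtio-9p), or merely a disk image stored on a host filesystem. A hedged sketch of both libvirt fragments (directory and file paths are placeholders):

```xml
<!-- a host directory shared into the guest via virtio-9p -->
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/srv/export'/>
  <target dir='shared'/>
</filesystem>

<!-- vs. a disk image that merely lives on a host filesystem -->
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/guest1.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

The performance characteristics of these two setups are very different, so knowing which one you are using is essential.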

P.S. NVMe passthrough will soon become even faster on ppc64el due to the
fix for bug LP 1847948.

** Changed in: qemu (Ubuntu)
       Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853042

Title:
  Ubuntu 18.04 - vm disk i/o performance issue when using file system
  passthrough

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-power-systems/+bug/1853042/+subscriptions

-- 
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
