Thanks for confirming the setup; it matches what I had assumed.

This will naturally be slower due to:
a) overhead from the host filesystem
b) overhead from qcow2 metadata handling
c) exits to the host for the I/O (the biggest single factor here)
d) less concurrency, as the default queue count and depth are low
e) the passthrough case doing real direct I/O writes, while the other setup
most likely caches in the host, which is usually counterproductive.

I can help you optimize the tunables for the image you attached if you
want. That will make it somewhat faster than it is now, but it will
clearly never reach the performance of NVMe passthrough.

Let me know if you want some general tuning guidance on those.
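As a rough starting point, the usual knobs live in the guest's libvirt disk
definition. The snippet below is only a sketch, not a verified configuration
for your machine; the file path, target device, and queue count are
illustrative assumptions you would adapt to your setup:

```xml
<!-- Illustrative libvirt <disk> element; path and values are assumptions -->
<disk type='file' device='disk'>
  <!-- cache='none' opens the image with O_DIRECT on the host, avoiding
       double caching (point e); io='native' uses Linux AIO instead of the
       thread pool; queues enables virtio-blk multiqueue (point d) -->
  <driver name='qemu' type='qcow2' cache='none' io='native' queues='4'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Settings like these address points (d) and (e) above, but they cannot
eliminate the VM exits in (c) or the qcow2 metadata cost in (b).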

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853042

Title:
  Ubuntu 18.04 - vm disk i/o performance issue when using file system
  passthrough

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-power-systems/+bug/1853042/+subscriptions

-- 
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
