On Tue, Feb 21, 2017 at 12:07 PM, Eric Desrochers <
[email protected]> wrote:

> @Ryan,
>
> This situation has been brought to my attention by someone, and I was
> able to reproduce it myself.
>
> The main difference between my setup and his is that my contact has
> "vg02-vg02--labmysql01" on a separate disk (/dev/sdb), whereas in my
> case it is on /dev/sda, since I only have one disk available for
> testing.
>
> $ lsblk
> NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda                         8:0    0 119.2G  0 disk
> ├─sda1                      8:1    0    28G  0 part
> │ └─vg01-vg01--root       252:0    0    28G  0 lvm  /
> ├─sda2                      8:2    0     1K  0 part
> ├─sda5                      8:5    0  18.6G  0 part
> │ └─vg01-vg01--swap       252:1    0  18.6G  0 lvm  [SWAP]
> └─sda6                      8:6    0  18.6G  0 part
>   └─vg02-vg02--labmysql01 252:2    0  18.6G  0 lvm
>
> $ lvs
>   LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   vg01-root       vg01 -wi-ao---- 27.94g
>   vg01-swap       vg01 -wi-ao---- 18.62g
>   vg02-labmysql01 vg02 -wi-ao---- 18.62g
>
> $ vgs
>   VG   #PV #LV #SN Attr   VSize  VFree
>   vg01   2   2   0 wz--n- 46.56g    0
>   vg02   1   1   0 wz--n- 18.62g    0
>
> $ pvs
>   PV         VG   Fmt  Attr PSize  PFree
>   /dev/sda1  vg01 lvm2 a--  27.94g    0
>   /dev/sda5  vg01 lvm2 a--  18.62g    0
>   /dev/sda6  vg02 lvm2 a--  18.62g    0
>
> I'm currently re-installing my system with Zesty to see if I can
> reproduce it with the devel release.
>

Since you have a single spindle in play here, this is what I suspect is
going on.

You're opening the underlying device with O_DIRECT (cache=none) and
specifying the "native" io driver, which is Linux AIO.  Virtio-blk will
then expose the disk to the guest with WCE enabled ("write cache" in
/sys/class/block/vdX/cache_type).  Because the guest sees a write cache,
it will issue write barriers, i.e. write-cache flush operations, to
ensure its data is actually on the virtual disk (and not just in the
cache).  Each flush results in qemu on the host flushing the AIO queue
via fdatasync(), which asks the host kernel to wait until it is sure the
data is on the disk.  How costly that is varies based on the host OS and
disk devices.
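For reference, assuming a libvirt-managed guest (the device path is
taken from the dm name in the lsblk output above; the rest is
illustrative), that combination corresponds to a disk definition
roughly like:

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <source dev='/dev/mapper/vg02-vg02--labmysql01'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

or, on a raw qemu command line, something like:

  -drive file=/dev/mapper/vg02-vg02--labmysql01,format=raw,if=virtio,cache=none,aio=native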

There are a couple of things to try here:

1) io=threads; for a single disk I don't think Linux AIO is providing
any performance benefit, but you do pay extra per I/O submitted due to
the Linux AIO overhead;

2) cache=writethrough; this avoids telling the guest it has a write
cache, it allows higher read performance since the host can cache that
data, and writes do not complete until the underlying write syscall
submitted by qemu does, which should avoid any costly guest-driven
cache flush operations (this mode opens the underlying disk on the host
with the O_DSYNC flag).  Both settings are shown in the sketch after
this list.
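As a sketch of those two variants (same hypothetical libvirt <driver>
line as above):

  1) <driver name='qemu' type='raw' cache='none' io='threads'/>
  2) <driver name='qemu' type='raw' cache='writethrough'/>

or, with -drive, aio=threads for the first and cache=writethrough for
the second.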

This is mostly a question of performance vs. integrity.  If it's OK for
the data to become corrupt (it's replaceable, throw-away, or backed up),
then one may want to use cache=writeback or even cache=unsafe, which
enable more caching (and, in the unsafe case, no barriers at all).
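For completeness, those last two would look like (same hypothetical
<driver> line as above):

  <driver name='qemu' type='raw' cache='writeback'/>
  <driver name='qemu' type='raw' cache='unsafe'/>

with cache=unsafe additionally ignoring flush requests from the guest
entirely, so it is only suitable for truly disposable data.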



-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1666555

Title:
  update-grub slow with raw LVM source dev + VirtIO bus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1666555/+subscriptions
