Hi,
I was asked about a very similar case and needed to debug it.
So I thought I'd give the issue reported here a try to see how it looks today.

virt-install creates a guest with a command like:
  -drive file=/dev/LVMpool/test-snapshot-virtinst,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
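For reference, a virt-install invocation of roughly this shape ends up with such a -drive line; a minimal sketch, assuming the pool/LV naming and size used here (not the exact command that was run):
  $ virt-install --name test-snapshot-virtinst --memory 1024 \
      --disk pool=LVMpool,size=1,format=raw,cache=none,io=native \
      ...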

## Solved - confusion with other, pre-existing apparmor rules, as pools are unsupported by libvirt/apparmor ##

In this case virt-install pre-creates a logical volume of the given size and passes just that device path to the guest.
This is different from using the actual pool feature.
With that I'm "ok" that it doesn't need a special apparmor rule up front.
From the guest/apparmor point of view the path is known when the guest starts and gets added to the guest's profile.
(With a pool reference in the guest that would not have worked.)
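To confirm that, one can look at the per-guest rules file that virt-aa-helper maintains; a sketch, with <UUID> standing for the guest's UUID (virsh domuuid prints it):
  $ sudo grep dm- /etc/apparmor.d/libvirt/libvirt-<UUID>.files
which shows the resolved device-mapper path of the LV.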

## experiments - setup ##
Lets define a guest which has a qcow and a lvm disk that we can snapshot for 
experiments.
We will use the disk created in the test above, but in a uvtool guest to get 
rid of all virt-install special quirks.
The other disk is just a qcow file.

  $ sudo qemu-img create -f qcow2 /var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow 1G
Formatting '/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow', fmt=qcow2 size=1073741824 cluster_size=65536 lazy_refcounts=off refcount_bits=16

The config for those looks like:
qcow:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow'/>
      <target dev='vdc' bus='virtio'/>
    </disk>

CMD: -drive file=/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow,format=qcow2,if=none,id=drive-virtio-disk2
apparmor:   "/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow" rwk,


disk:
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/LVMpool/test-snapshot-virtinst'/>
      <target dev='vdd' bus='virtio'/>
    </disk>

CMD: -drive file=/dev/LVMpool/test-snapshot-virtinst,format=raw,if=none,id=drive-virtio-disk3
apparmor:   "/dev/dm-11" rwk,
which is a match, since:
$ ll /dev/LVMpool/test-snapshot-virtinst
lrwxrwxrwx 1 root root 8 Sep 11 05:14 /dev/LVMpool/test-snapshot-virtinst -> ../dm-11
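For completeness: attaching those two disks to the uvtool guest can be done with the XML snippets above; a sketch (the .xml file names are hypothetical, virsh edit works just as well):
  $ virsh attach-device eoan-snapshot qcow-disk.xml --config
  $ virsh attach-device eoan-snapshot lvm-disk.xml --config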


## experiments - snapshotting ##

For details of the spec see: https://libvirt.org/formatdomain.html

Snapshot of just the qcow file:
$ virsh snapshot-create-as --print-xml --domain eoan-snapshot --disk-only --atomic \
    --diskspec vda,snapshot=no --diskspec vdb,snapshot=no \
    --diskspec vdc,file=/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow,snapshot=external \
    --diskspec vdd,snapshot=no
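The --print-xml form only previews what would be done; the snapshot itself was then taken with the same command minus that flag, roughly:
$ virsh snapshot-create-as --domain eoan-snapshot --disk-only --atomic \
    --diskspec vda,snapshot=no --diskspec vdb,snapshot=no \
    --diskspec vdc,file=/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow,snapshot=external \
    --diskspec vdd,snapshot=no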

$ virsh snapshot-list eoan-snapshot
 Name                 Creation Time             State
------------------------------------------------------------
 1568196836           2019-09-11 06:13:56 -0400 disk-snapshot

The snapshot got added to the apparmor profile:
  "/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow" rwk,

The position shows that this was done with the "append" feature of virt-aa-helper.
So it did not re-parse the guest but just added one more entry (as it would do on hotplug).
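That append can be seen at the tail of the per-guest rules file; a sketch (<UUID> again being the guest's UUID):
  $ sudo tail -n 3 /etc/apparmor.d/libvirt/libvirt-<UUID>.files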

I'm not trying an LVM snapshot here, as that does not seem to be what was asked for.
And furthermore LVM has its own capabilities to do so.


## check status after snapshot ##

The guest now has the new snapshot as main file and the old one as backing file (COW chain):
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow'/>
        <backingStore/>
      </backingStore>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0004'/>
    </disk>

Please do mind that this is the "runtime view"; once shut down you'll only see the new snapshot.
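A quick way to check which file is currently the active layer for vdc, as a sketch:
  $ virsh domblklist eoan-snapshot --details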
    
This is confirmed by the metadata in the qcow file.
$ sudo qemu-img info /var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow
image: /var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 196K
cluster_size: 65536
backing file: /var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false


## restart guest ##

XML of inactive guest as expected:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow'/>
      <target dev='vdc' bus='virtio'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0004'/>
    </disk>
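The restart itself is just the usual cycle; a sketch of the commands:
$ virsh shutdown eoan-snapshot             # graceful shutdown
$ virsh dumpxml --inactive eoan-snapshot   # shows the inactive XML above
$ virsh start eoan-snapshot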

The guest starts just fine (still with all 4 disk entries):
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/uvtool/libvirt/images/eoan-snapshot.qcow'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTkuMTA6czM5MHggMjAxOTA5MDY='/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-ds.qcow'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow'/>
        <backingStore/>
      </backingStore>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0004'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/LVMpool/test-snapshot-virtinst'/>
      <backingStore/>
      <target dev='vdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0005'/>
    </disk>
    
The apparmor profile includes the backing chains:
  "/var/lib/uvtool/libvirt/images/eoan-snapshot.qcow" rwk,
  "/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTkuMTA6czM5MHggMjAxOTA5MDY=" rk,
  "/var/lib/uvtool/libvirt/images/eoan-snapshot-ds.qcow" rwk,
  "/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow" rwk,
  "/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow" rk,
  "/dev/dm-11" rwk,

Nowadays all of this works just fine, even through a restart.
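That the per-guest profile is actually loaded and enforcing can be checked as well; a sketch:
  $ sudo aa-status | grep libvirt-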

## LVM snapshots ##

While at it, let's double-check LVM snapshots, as those are often used as well and I was asked about them.

I still use this as vdd:
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/LVMpool/test-snapshot-virtinst'/>
      <backingStore/>
      <target dev='vdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0005'/>
    </disk>
    
Creating an LVM snapshot looks like this:
$ sudo lvcreate -L1G -s -n /dev/LVMpool/test-snapshot-virtinst-snap /dev/LVMpool/test-snapshot-virtinst
  Using default stripesize 64.00 KiB.
  Logical volume "test-snapshot-virtinst-snap" created.

Which gives me:
$ sudo lvdisplay /dev/LVMpool
  --- Logical volume ---
  LV Path                /dev/LVMpool/test-snapshot-virtinst
  LV Name                test-snapshot-virtinst
  VG Name                LVMpool
  LV UUID                H0FfqR-619v-KAeJ-GhXU-p9K5-dns8-fP8NGX
  LV Write Access        read/write
  LV Creation host, time s1lp05, 2019-09-11 05:14:52 -0400
  LV snapshot status     source of
                         test-snapshot-virtinst-snap [active]
  LV Status              available
  # open                 2
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:11
   
  --- Logical volume ---
  LV Path                /dev/LVMpool/test-snapshot-virtinst-snap
  LV Name                test-snapshot-virtinst-snap
  VG Name                LVMpool
  LV UUID                hTPlXa-o3vj-yjrE-Igse-H7K7-NimG-ipTdoH
  LV Write Access        read/write
  LV Creation host, time s1lp05, 2019-09-11 06:44:05 -0400
  LV snapshot status     active destination for test-snapshot-virtinst
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  COW-table size         1.00 GiB
  COW-table LE           256
  Allocated to snapshot  0.00%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:14
  
Writing into that disk from the guest makes the snapshot usage change, as shown below.
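What exactly gets written does not matter; inside the guest it was roughly something like this (sizes are hypothetical, vdd is the LVM-backed disk as seen by the guest):
$ sudo dd if=/dev/urandom of=/dev/vdd bs=1M count=100 oflag=direct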

$ sudo lvs
  LV                          VG      Attr       LSize Pool Origin                 Data%  Meta%  Move Log Cpy%Sync Convert
  test-snapshot-virtinst      LVMpool owi-aos--- 1.00g
  test-snapshot-virtinst-snap LVMpool swi-a-s--- 1.00g      test-snapshot-virtinst 9.80
  

Since LVM keeps the old name for the origin, the device that continues to be written to stays the same:
$ ll /dev/LVMpool/
lrwxrwxrwx  1 root root    8 Sep 11 06:44 test-snapshot-virtinst -> ../dm-11
lrwxrwxrwx  1 root root    8 Sep 11 06:44 test-snapshot-virtinst-snap -> ../dm-14

It is still dm-11, which matches the apparmor rule:
  "/dev/dm-11" rwk,

Checking whether it is an issue to restart the guest with the LVM snapshot attached: no, shutdown and start still work fine.
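Cleaning up afterwards is plain LVM as well; a sketch, either dropping the snapshot or merging it back into the origin (the latter with the origin not in use, i.e. guest shut down):
$ sudo lvremove /dev/LVMpool/test-snapshot-virtinst-snap
  # or roll the disk back to the snapshot state:
$ sudo lvconvert --merge /dev/LVMpool/test-snapshot-virtinst-snap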

With that said I think we can close this old bug nowadays.

P.S. There is a case a friend of mine reported with qcow snapshots on LVM which sounds odd and is broken.
I'll track that down elsewhere, as it has nothing to do with the issue reported here.

** Changed in: apparmor (Ubuntu)
       Status: New => Fix Released

https://bugs.launchpad.net/bugs/1525310

Title:
  virsh with apparmor misconfigures libvirt-UUID files during snapshot
