** Summary changed:

- Missing thin-provisioning-tools prevent VG from being activated
+ Missing thin-provisioning-tools prevent VG from being (de)activated

** Description changed:

- I had configured the thin-pool storage driver for docker on one of the PowerVM
lpars, and had created containers.
- These containers were running STAF tests. I had to reboot the partition.
After the reboot, I see that the docker daemon failed to come up. Below are the details:
+ Creating a thin pool LV is allowed even when thin-provisioning-tools is
+ not installed. But deactivating, or activating, the VG fails.
  
- Steps -
- 1. Install 16.04.02 on a vm partition.
- 2. Install docker.io
- 3. Configure a thin-pool storage driver for docker daemon.
- 4. Create some sample containers.
- 5. Reboot the vm partition.
+ I think the lvconvert tool, used to combine the two "thin LVs" into a
+ thin pool LV, should refuse to run if thin-provisioning-tools isn't
+ installed.
  
- Docker daemon fails to come up.
+ Steps to reproduce:
+ root@15-89:~# vgcreate vg /dev/vdb1
+   Volume group "vg" successfully created
  
- Logs -
- Machine details -
-   Kernel Build:  4.4.0-53-generic
-   System Name :  bamlp4
-   Model/Type  :  8247-22L
-   Platform    :  powerpc64le
+ root@15-89:~# vgs
+   VG   #PV #LV #SN Attr   VSize  VFree 
+   vg     1   0   0 wz--n- 40.00g 40.00g
  
- uname -a
- Linux bamlp4 4.4.0-53-generic #74-Ubuntu SMP Fri Dec 2 15:59:36 UTC 2016 
ppc64le ppc64le ppc64le GNU/Linux
+ root@15-89:~# lvcreate -n pool0 -l 90%VG vg
+   Logical volume "pool0" created.
  
- Docker details- 
- root@bamlp4:~# docker info
- Containers: 4
-  Running: 3
-  Paused: 0
-  Stopped: 1
- Images: 4
- Server Version: 1.12.1
- Storage Driver: devicemapper
-  Pool Name: docker--storage-thinpool
-  Pool Blocksize: 65.54 kB
-  Base Device Size: 10.74 GB
-  Backing Filesystem: xfs
-  Data file: 
-  Metadata file: 
-  Data Space Used: 12.02 GB
-  Data Space Total: 20.4 GB
-  Data Space Available: 8.373 GB
-  Metadata Space Used: 9.466 MB
-  Metadata Space Total: 213.9 MB
-  Metadata Space Available: 204.4 MB
-  Thin Pool Minimum Free Space: 2.04 GB
-  Udev Sync Supported: true
-  Deferred Removal Enabled: false
-  Deferred Deletion Enabled: false
-  Deferred Deleted Device Count: 0
-  Library Version: 1.02.110 (2015-10-30)
- Logging Driver: json-file
- Cgroup Driver: cgroupfs
- Plugins:
-  Volume: local
-  Network: null bridge host overlay
- Swarm: inactive
- Runtimes: runc
- Default Runtime: runc
- Security Options: apparmor
- Kernel Version: 4.4.0-53-generic
- Operating System: Ubuntu 16.04.1 LTS
- OSType: linux
- Architecture: ppc64le
- CPUs: 36
- Total Memory: 90.91 GiB
- Name: bamlp4
- ID: BS55:FI5I:4KNB:33H7:ZUAC:AXIU:AOQ4:2PST:22Y7:TNW7:GYT6:WX7A
- Docker Root Dir: /var/lib/docker
- Debug Mode (client): false
- Debug Mode (server): false
- Registry: https://index.docker.io/v1/
- WARNING: No swap limit support
- Insecure Registries:
-  127.0.0.0/8
- root@bamlp4:~# 
+ root@15-89:~# lvcreate -n pool0meta -l 5%VG vg
+   Logical volume "pool0meta" created.
  
- docker info |grep Udev
-  Udev Sync Supported: true
- WARNING: No swap limit support
+ root@15-89:~# lvs
+   LV        VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
+   pool0     vg   -wi-a----- 36.00g                                            
        
+   pool0meta vg   -wi-a-----  2.00g                                            
        
+ 
+ root@15-89:~# ll /dev/mapper/
+ total 0
+ drwxr-xr-x  2 root root     100 Jun 21 14:15 ./
+ drwxr-xr-x 20 root root    3820 Jun 21 14:15 ../
+ crw-------  1 root root 10, 236 Jun 21 13:15 control
+ lrwxrwxrwx  1 root root       7 Jun 21 14:14 vg-pool0 -> ../dm-0
+ lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0meta -> ../dm-1
+ 
+ root@15-89:~# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
+   WARNING: Converting logical volume vg/pool0 and vg/pool0meta to pool's data 
and metadata volumes.
+   THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
+ Do you really want to convert vg/pool0 and vg/pool0meta? [y/n]: y
+   Converted vg/pool0 to thin pool.
+ 
+ root@15-89:~# ll /dev/mapper/
+ total 0
+ drwxr-xr-x  2 root root     120 Jun 21 14:15 ./
+ drwxr-xr-x 20 root root    3840 Jun 21 14:15 ../
+ crw-------  1 root root 10, 236 Jun 21 13:15 control
+ lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0 -> ../dm-2
+ lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tdata -> ../dm-1
+ lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tmeta -> ../dm-0
+ root@15-89:~# lvs -a
+   LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
+   [lvol0_pmspare] vg   ewi-------  2.00g                                      
              
+   pool0           vg   twi-a-tz-- 36.00g             0.00   0.01              
              
+   [pool0_tdata]   vg   Twi-ao---- 36.00g                                      
              
+   [pool0_tmeta]   vg   ewi-ao----  2.00g          
  
  
-  service docker status
- * docker.service - Docker Application Container Engine
-    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor 
preset: enabled)
-    Active: active (running) since Tue 2016-12-13 04:36:32 CST; 2 days ago
-      Docs: https://docs.docker.com
-  Main PID: 8861 (dockerd)
-     Tasks: 111
-    Memory: 88.5M
-       CPU: 14min 27.708s
-    CGroup: /system.slice/docker.service
-            |- 6941 containerd-shim 
30283306a694fb5b18fe03b38505e8218677ab7d1b4b552505b68e7e38737803 
/var/run/docker/libcontainerd/30283306a694fb5b18fe03b38505e8218677ab7d1b4b552505b
-            |- 8861 /usr/bin/dockerd -H fd:// -s devicemapper 
--storage-opt=dm.thinpooldev=/dev/mapper/docker--storage-thinpool 
--fixed-cidr=172.17.128.0/18 --mtu 1462
-            |- 8993 containerd-shim 
c83c781710f5c9198067fa74d3f40407d4bb7f7991a04d59cbe824e3903a877d 
/var/run/docker/libcontainerd/c83c781710f5c9198067fa74d3f40407d4bb7f7991a04d59cbe
-            |-10931 containerd -l 
unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim 
containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir 
/var/run/dock
-            `-58887 containerd-shim 
1b4ec092cadfd448a10436662fe35d6abf4e7c9a612da327ae8dd13c94e94f1a 
/var/run/docker/libcontainerd/1b4ec092cadfd448a10436662fe35d6abf4e7c9a612da327ae8
+ If you now reboot the system, all that is gone:
+ root@15-89:~# ll /dev/mapper/
+ total 0
+ drwxr-xr-x  2 root root      60 Jun 21 14:28 ./
+ drwxr-xr-x 19 root root    3760 Jun 21 14:28 ../
+ crw-------  1 root root 10, 236 Jun 21 14:28 control
  
- Dec 13 04:36:29 bamlp4 dockerd[8861]: 
time="2016-12-13T04:36:29.687120000-06:00" level=info msg="Loading containers: 
start."
- Dec 13 04:36:29 bamlp4 dockerd[8861]: 
time="2016-12-13T04:36:29.707970000-06:00" level=info msg="Firewalld running: 
false"
- Dec 13 04:36:31 bamlp4 dockerd[8861]: 
time="2016-12-13T04:36:31.225112000-06:00" level=info msg="Default bridge 
(docker0) is assigned with an IP address 172.17.0.0/16. Daemon option
- Dec 13 04:36:32 bamlp4 dockerd[8861]: 
time="2016-12-13T04:36:32.038062000-06:00" level=info msg="Loading containers: 
done."
- Dec 13 04:36:32 bamlp4 dockerd[8861]: 
time="2016-12-13T04:36:32.039030000-06:00" level=info msg="Daemon has completed 
initialization"
- Dec 13 04:36:32 bamlp4 dockerd[8861]: 
time="2016-12-13T04:36:32.039084000-06:00" level=info msg="Docker daemon" 
commit=23cf638 graphdriver=devicemapper version=1.12.1
- Dec 13 04:36:32 bamlp4 dockerd[8861]: 
time="2016-12-13T04:36:32.059172000-06:00" level=info msg="API listen on 
/var/run/docker.sock"
- Dec 13 04:36:32 bamlp4 systemd[1]: Started Docker Application Container 
Engine.
+ The same happens if you deactivate the VG (which the reboot undoubtedly
+ triggers). It fails because of a missing /usr/sbin/thin_check which is
+ provided by the thin-provisioning-tools package:
  
+ root@15-89:~# vgchange -a n
+   /usr/sbin/thin_check: execvp failed: No such file or directory
+   WARNING: Integrity check of metadata for pool vg/pool0 failed.
+   0 logical volume(s) in volume group "vg" now active
  
- lsblk
- NAME                                                       MAJ:MIN RM  SIZE 
RO TYPE MOUNTPOINT
- sda                                                          8:0    0   20G  
0 disk 
- |-sda1                                                       8:1    0    7M  
0 part 
- |-sda2                                                       8:2    0 19.1G  
0 part /
- `-sda3                                                       8:3    0  881M  
0 part [SWAP]
- sdb                                                          8:16   0   20G  
0 disk 
- |-docker--storage-thinpool_tmeta                           252:0    0  204M  
0 lvm  
- | `-docker--storage-thinpool                               252:2    0   19G  
0 lvm  
- |   
|-docker-8:2-398313-6ebd9e327696fa07625788b1a482b89ea7f12fc8e07a430a9470ff979de57832
- |   |                                                      252:3    0   10G  
0 dm   
/var/lib/docker/devicemapper/mnt/6ebd9e327696fa07625788b1a482b89ea7f12fc8e07a430a9470ff979de57832
- |   
|-docker-8:2-398313-0deb4334768d334bea71efc51a1e4a16118e5bc0912295dab962433b4b14bd5b
- |   |                                                      252:4    0   10G  
0 dm   
/var/lib/docker/devicemapper/mnt/0deb4334768d334bea71efc51a1e4a16118e5bc0912295dab962433b4b14bd5b
- |   
`-docker-8:2-398313-7c943f7afd160b4ad7747291813519ab6f3103a422d353a137e01ea68fbf94b6
- |                                                          252:5    0   10G  
0 dm   
/var/lib/docker/devicemapper/mnt/7c943f7afd160b4ad7747291813519ab6f3103a422d353a137e01ea68fbf94b6
- `-docker--storage-thinpool_tdata                           252:1    0   19G  
0 lvm  
-   `-docker--storage-thinpool                               252:2    0   19G  
0 lvm  
-     
|-docker-8:2-398313-6ebd9e327696fa07625788b1a482b89ea7f12fc8e07a430a9470ff979de57832
-     |                                                      252:3    0   10G  
0 dm   
/var/lib/docker/devicemapper/mnt/6ebd9e327696fa07625788b1a482b89ea7f12fc8e07a430a9470ff979de57832
-     
|-docker-8:2-398313-0deb4334768d334bea71efc51a1e4a16118e5bc0912295dab962433b4b14bd5b
-     |                                                      252:4    0   10G  
0 dm   
/var/lib/docker/devicemapper/mnt/0deb4334768d334bea71efc51a1e4a16118e5bc0912295dab962433b4b14bd5b
-     
`-docker-8:2-398313-7c943f7afd160b4ad7747291813519ab6f3103a422d353a137e01ea68fbf94b6
-                                                            252:5    0   10G  
0 dm   
/var/lib/docker/devicemapper/mnt/7c943f7afd160b4ad7747291813519ab6f3103a422d353a137e01ea68fbf94b6
- sdc                                                          8:32   0   20G  
0 disk 
- sdd                                                          8:48   0   20G  
0 disk 
- |-sdd1                                                       8:49   0    5G  
0 part 
- |-sdd2                                                       8:50   0    5G  
0 part 
- |-sdd3                                                       8:51   0    5G  
0 part 
- `-sdd4                                                       8:52   0    5G  
0 part 
- sde                                                          8:64   0   50G  
0 disk 
- sdf                                                          8:80   0   50G  
0 disk 
- 
- Containers that were up and running -
- CONTAINER ID        IMAGE                                     COMMAND         
         CREATED             STATUS                  PORTS               NAMES
- c83c781710f5        23c492753bd5                              "/bin/sh -c 
./root/NF"   2 days ago          Up 2 days                                   
bamlp4nfsclnt
- 30283306a694        kte2.isst.aus.stglabs.ibm.com:5000/staf   "/bin/bash"     
         2 days ago          Up 2 days                                   
sharp_feynman
- 28bd3cf10714        32d545c3ea01                              "/bin/sh -c 
./staf_io"   2 days ago          Exited (1) 2 days ago                       
bamlp4-io
- 1b4ec092cadf        590e44f15214                              "/bin/sh -c 
./staf_ba"   2 days ago          Up 2 days                                   
bamlp4-base
- root@bamlp4:~# 
- 
- 
- Filesystem              Inodes  IUsed     IFree IUse% Mounted on
- udev                    219623    761    218862    1% /dev
- tmpfs                   220469    931    219538    1% /run
- /dev/sda2              1254176 100228   1153948    8% /
- tmpfs                   220469      1    220468    1% /dev/shm
- tmpfs                   220469      6    220463    1% /run/lock
- tmpfs                   220469     16    220453    1% /sys/fs/cgroup
- 10.33.11.31:/data     26206208  26488  26179720    1% /data
- 10.33.11.31:/images   40207920  21681  40186239    1% /images
- 10.33.11.31:/kte      49745648 614507  49131141    2% /kte
- 10.33.11.31:/distros 314572800 167916 314404884    1% /distros
- kte2:/kte             38864896 189000  38675896    1% /mnt
- kte2:/kte2fs          38864896 189000  38675896    1% /kte2fs
- tmpfs                   220469      4    220465    1% /run/user/0
- kte2:/docklog         38864896 189000  38675896    1% /docklog
- /dev/dm-3              5242368  45859   5196509    1% 
/var/lib/docker/devicemapper/mnt/6ebd9e327696fa07625788b1a482b89ea7f12fc8e07a430a9470ff979de57832
- shm                     982325      1    982324    1% 
/var/lib/docker/containers/1b4ec092cadfd448a10436662fe35d6abf4e7c9a612da327ae8dd13c94e94f1a/shm
- /dev/dm-4              5242368  19312   5223056    1% 
/var/lib/docker/devicemapper/mnt/0deb4334768d334bea71efc51a1e4a16118e5bc0912295dab962433b4b14bd5b
- shm                     982325      1    982324    1% 
/var/lib/docker/containers/30283306a694fb5b18fe03b38505e8218677ab7d1b4b552505b68e7e38737803/shm
- /dev/dm-5              5242368  30980   5211388    1% 
/var/lib/docker/devicemapper/mnt/7c943f7afd160b4ad7747291813519ab6f3103a422d353a137e01ea68fbf94b6
- shm                     982325      1    982324    1% 
/var/lib/docker/containers/c83c781710f5c9198067fa74d3f40407d4bb7f7991a04d59cbe824e3903a877d/shm
- root@bamlp4:/tmp#
- 
- == Comment: #1 - Vinutha GS <vinuth...@in.ibm.com> - 2016-12-16 00:33:39 ==
- After reboot  ---
- 
- - docker service status -
- 
- service docker status
- * docker.service - Docker Application Container Engine
-    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor 
preset: enabled)
-    Active: failed (Result: exit-code) since Thu 2016-12-15 23:59:08 CST; 
29min ago
-      Docs: https://docs.docker.com
-   Process: 4089 ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS 
$NETWORK_OPTS (code=exited, status=1/FAILURE)
-  Main PID: 4089 (code=exited, status=1/FAILURE)
- 
- Dec 15 23:59:06 bamlp4 systemd[1]: Starting Docker Application Container 
Engine...
- Dec 15 23:59:07 bamlp4 dockerd[4089]: 
time="2016-12-15T23:59:07.226605000-06:00" level=info msg="libcontainerd: new 
containerd process, pid: 4096"
- Dec 15 23:59:08 bamlp4 dockerd[4089]: 
time="2016-12-15T23:59:08.676821000-06:00" level=fatal msg="Error starting 
daemon: error initializing graphdriver: devicemapper: Non existing d
- Dec 15 23:59:08 bamlp4 systemd[1]: docker.service: Main process exited, 
code=exited, status=1/FAILURE
- Dec 15 23:59:08 bamlp4 systemd[1]: Failed to start Docker Application 
Container Engine.
- Dec 15 23:59:08 bamlp4 systemd[1]: docker.service: Unit entered failed state.
- Dec 15 23:59:08 bamlp4 systemd[1]: docker.service: Failed with result 
'exit-code'.
- Dec 15 23:59:08 bamlp4 systemd[1]: docker.service: Start request repeated too 
quickly.
- Dec 15 23:59:08 bamlp4 systemd[1]: Failed to start Docker Application 
Container Engine.
- 
- ------
- lsblk, doesn't list the thin-pool details -
-  lsblk
- NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
- sda      8:0    0   20G  0 disk 
- |-sda1   8:1    0    7M  0 part 
- |-sda2   8:2    0 19.1G  0 part /
- `-sda3   8:3    0  881M  0 part [SWAP]
- sdb      8:16   0   20G  0 disk 
- sdc      8:32   0   20G  0 disk 
- sdd      8:48   0   20G  0 disk 
- |-sdd1   8:49   0    5G  0 part 
- |-sdd2   8:50   0    5G  0 part 
- |-sdd3   8:51   0    5G  0 part 
- `-sdd4   8:52   0    5G  0 part 
- sde      8:64   0   50G  0 disk 
- sdf      8:80   0   50G  0 disk 
- =====
- 
- Under /dev/mapper, I don't see anything.
- 
- If the system is rebooted, will we lose all the docker-related data?
- I see that on a machine with overlay configured, docker comes up properly
after the system is rebooted.
- 
- I read that for thin-pool, if udev support is present, then docker should
- come up successfully after the system reboots.
- 
- == Comment: #4 - SEETEENA THOUFEEK <sthou...@in.ibm.com> - 2016-12-19 
00:41:11 ==
- Docker version 
- 
- Client : 1.12.1
- API Version : 1.24 
- Go version: go1.6.2
- Built : Tue, 27 Sep 2016 
- OS/Arch : Linux/ppc64le 
- 
- We have a known issue reported as fixed in Ubuntu 16.04:
- Bug 128990 - Docker daemon fails to start after abrupt host shutdown
- 
- Since this bug is reported on Ubuntu 16.04.02, we might need to check with
- the bug 128990 team in which build it is fixed.
- 
- I was able to replicate the issue. I created a VG and an LV and rebooted
- the system, and found that the devicemapper entries for the corresponding
- devices are missing after the reboot. However, LVM commands like 'vgdisplay'
- and 'lvdisplay' show proper info, while 'lsblk' doesn't show the device's
- LVM-related info after reboot.
- 
- So it doesn't seem to be related to docker. Here docker is just trying
- to use the device but it's missing after reboot, hence it fails to
- start.
- 
- We need to mirror this to distro.
+ root@15-89:~# ll /dev/mapper/
+ total 0
+ drwxr-xr-x  2 root root      60 Jun 21 14:29 ./
+ drwxr-xr-x 19 root root    3760 Jun 21 14:29 ../
+ crw-------  1 root root 10, 236 Jun 21 14:28 control

** Package changed: docker.io (Ubuntu) => lvm2 (Ubuntu)

** Summary changed:

- Missing thin-provisioning-tools prevent VG from being (de)activated
+ Missing thin-provisioning-tools prevents VG from being (de)activated

** Summary changed:

- Missing thin-provisioning-tools prevents VG from being (de)activated
+ Missing thin-provisioning-tools prevents VG with thin pool LV from being 
(de)activated, but not its creation

** Description changed:

  Creating a thin pool LV is allowed even when thin-provisioning-tools is
- not installed. But deactivating, or activating, the VG fails.
+ not installed. But deactivating or activating that VG fails.
  
  I think the lvconvert tool, used to combine the two "thin LVs" into a
  thin pool LV, should refuse to run if thin-provisioning-tools isn't
  installed.
  
  Steps to reproduce:
  root@15-89:~# vgcreate vg /dev/vdb1
-   Volume group "vg" successfully created
+   Volume group "vg" successfully created
  
  root@15-89:~# vgs
-   VG   #PV #LV #SN Attr   VSize  VFree 
-   vg     1   0   0 wz--n- 40.00g 40.00g
+   VG   #PV #LV #SN Attr   VSize  VFree
+   vg     1   0   0 wz--n- 40.00g 40.00g
  
  root@15-89:~# lvcreate -n pool0 -l 90%VG vg
-   Logical volume "pool0" created.
+   Logical volume "pool0" created.
  
  root@15-89:~# lvcreate -n pool0meta -l 5%VG vg
-   Logical volume "pool0meta" created.
+   Logical volume "pool0meta" created.
  
  root@15-89:~# lvs
-   LV        VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
-   pool0     vg   -wi-a----- 36.00g                                            
        
-   pool0meta vg   -wi-a-----  2.00g                                            
        
+   LV        VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
+   pool0     vg   -wi-a----- 36.00g
+   pool0meta vg   -wi-a-----  2.00g
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     100 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3820 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:14 vg-pool0 -> ../dm-0
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0meta -> ../dm-1
  
  root@15-89:~# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
-   WARNING: Converting logical volume vg/pool0 and vg/pool0meta to pool's data 
and metadata volumes.
-   THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
+   WARNING: Converting logical volume vg/pool0 and vg/pool0meta to pool's data 
and metadata volumes.
+   THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Do you really want to convert vg/pool0 and vg/pool0meta? [y/n]: y
-   Converted vg/pool0 to thin pool.
+   Converted vg/pool0 to thin pool.
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     120 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3840 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0 -> ../dm-2
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tdata -> ../dm-1
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tmeta -> ../dm-0
  root@15-89:~# lvs -a
-   LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
-   [lvol0_pmspare] vg   ewi-------  2.00g                                      
              
-   pool0           vg   twi-a-tz-- 36.00g             0.00   0.01              
              
-   [pool0_tdata]   vg   Twi-ao---- 36.00g                                      
              
-   [pool0_tmeta]   vg   ewi-ao----  2.00g          
- 
+   LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
+   [lvol0_pmspare] vg   ewi-------  2.00g
+   pool0           vg   twi-a-tz-- 36.00g             0.00   0.01
+   [pool0_tdata]   vg   Twi-ao---- 36.00g
+   [pool0_tmeta]   vg   ewi-ao----  2.00g
  
  If you now reboot the system, all that is gone:
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:28 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:28 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control
  
  The same happens if you deactivate the VG (which the reboot undoubtedly
  triggers). It fails because of a missing /usr/sbin/thin_check which is
  provided by the thin-provisioning-tools package:
  
  root@15-89:~# vgchange -a n
-   /usr/sbin/thin_check: execvp failed: No such file or directory
-   WARNING: Integrity check of metadata for pool vg/pool0 failed.
-   0 logical volume(s) in volume group "vg" now active
+   /usr/sbin/thin_check: execvp failed: No such file or directory
+   WARNING: Integrity check of metadata for pool vg/pool0 failed.
+   0 logical volume(s) in volume group "vg" now active
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:29 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:29 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control

** Description changed:

  Creating a thin pool LV is allowed even when thin-provisioning-tools is
  not installed. But deactivating or activating that VG fails.
  
  I think the lvconvert tool, used to combine the two "thin LVs" into a
  thin pool LV, should refuse to run if thin-provisioning-tools isn't
  installed.
  
  Steps to reproduce:
  root@15-89:~# vgcreate vg /dev/vdb1
    Volume group "vg" successfully created
  
  root@15-89:~# vgs
    VG   #PV #LV #SN Attr   VSize  VFree
    vg     1   0   0 wz--n- 40.00g 40.00g
  
  root@15-89:~# lvcreate -n pool0 -l 90%VG vg
    Logical volume "pool0" created.
  
  root@15-89:~# lvcreate -n pool0meta -l 5%VG vg
    Logical volume "pool0meta" created.
  
  root@15-89:~# lvs
    LV        VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
    pool0     vg   -wi-a----- 36.00g
    pool0meta vg   -wi-a-----  2.00g
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     100 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3820 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:14 vg-pool0 -> ../dm-0
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0meta -> ../dm-1
  
  root@15-89:~# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
    WARNING: Converting logical volume vg/pool0 and vg/pool0meta to pool's data 
and metadata volumes.
    THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Do you really want to convert vg/pool0 and vg/pool0meta? [y/n]: y
    Converted vg/pool0 to thin pool.
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     120 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3840 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0 -> ../dm-2
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tdata -> ../dm-1
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tmeta -> ../dm-0
  root@15-89:~# lvs -a
-   LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
+   LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
    [lvol0_pmspare] vg   ewi-------  2.00g
    pool0           vg   twi-a-tz-- 36.00g             0.00   0.01
    [pool0_tdata]   vg   Twi-ao---- 36.00g
    [pool0_tmeta]   vg   ewi-ao----  2.00g
  
  If you now reboot the system, all that is gone:
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:28 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:28 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control
  
  The same happens if you deactivate the VG (which the reboot undoubtedly
  triggers). It fails because of a missing /usr/sbin/thin_check which is
  provided by the thin-provisioning-tools package:
  
  root@15-89:~# vgchange -a n
    /usr/sbin/thin_check: execvp failed: No such file or directory
    WARNING: Integrity check of metadata for pool vg/pool0 failed.
    0 logical volume(s) in volume group "vg" now active
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:29 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:29 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control

** Description changed:

  Creating a thin pool LV is allowed even when thin-provisioning-tools is
- not installed. But deactivating or activating that VG fails.
+ not installed. But deactivating or activating that VG fails. Since
+ deactivating the VG usually only happens at reboot, the user might fail
+ to notice this big problem until then.
  
  I think the lvconvert tool, used to combine the two "thin LVs" into a
  thin pool LV, should refuse to run if thin-provisioning-tools isn't
  installed.
  
  Steps to reproduce:
  root@15-89:~# vgcreate vg /dev/vdb1
    Volume group "vg" successfully created
  
  root@15-89:~# vgs
    VG   #PV #LV #SN Attr   VSize  VFree
    vg     1   0   0 wz--n- 40.00g 40.00g
  
  root@15-89:~# lvcreate -n pool0 -l 90%VG vg
    Logical volume "pool0" created.
  
  root@15-89:~# lvcreate -n pool0meta -l 5%VG vg
    Logical volume "pool0meta" created.
  
  root@15-89:~# lvs
    LV        VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
    pool0     vg   -wi-a----- 36.00g
    pool0meta vg   -wi-a-----  2.00g
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     100 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3820 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:14 vg-pool0 -> ../dm-0
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0meta -> ../dm-1
  
  root@15-89:~# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
    WARNING: Converting logical volume vg/pool0 and vg/pool0meta to pool's data 
and metadata volumes.
    THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Do you really want to convert vg/pool0 and vg/pool0meta? [y/n]: y
    Converted vg/pool0 to thin pool.
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     120 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3840 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0 -> ../dm-2
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tdata -> ../dm-1
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tmeta -> ../dm-0
  root@15-89:~# lvs -a
    LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
    [lvol0_pmspare] vg   ewi-------  2.00g
    pool0           vg   twi-a-tz-- 36.00g             0.00   0.01
    [pool0_tdata]   vg   Twi-ao---- 36.00g
    [pool0_tmeta]   vg   ewi-ao----  2.00g
  
  If you now reboot the system, all that is gone:
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:28 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:28 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control
  
  The same happens if you deactivate the VG (which the reboot undoubtedly
  triggers). It fails because of a missing /usr/sbin/thin_check which is
  provided by the thin-provisioning-tools package:
  
  root@15-89:~# vgchange -a n
    /usr/sbin/thin_check: execvp failed: No such file or directory
    WARNING: Integrity check of metadata for pool vg/pool0 failed.
    0 logical volume(s) in volume group "vg" now active
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:29 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:29 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control

** Description changed:

  Creating a thin pool LV is allowed even when thin-provisioning-tools is
  not installed. But deactivating or activating that VG fails. Since
  deactivating the VG usually only happens at reboot, the user might fail
  to notice this big problem until then.
  
  I think the lvconvert tool, used to combine the two "thin LVs" into a
- thin pool LV, should refuse to run if thin-provisioning-tools isn't
- installed.
+ thin pool LV, should refuse to run if thin-provisioning-tools, or the
+ needed scripts, aren't installed.
  
  Steps to reproduce:
  root@15-89:~# vgcreate vg /dev/vdb1
    Volume group "vg" successfully created
  
  root@15-89:~# vgs
    VG   #PV #LV #SN Attr   VSize  VFree
    vg     1   0   0 wz--n- 40.00g 40.00g
  
  root@15-89:~# lvcreate -n pool0 -l 90%VG vg
    Logical volume "pool0" created.
  
  root@15-89:~# lvcreate -n pool0meta -l 5%VG vg
    Logical volume "pool0meta" created.
  
  root@15-89:~# lvs
    LV        VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
    pool0     vg   -wi-a----- 36.00g
    pool0meta vg   -wi-a-----  2.00g
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     100 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3820 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:14 vg-pool0 -> ../dm-0
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0meta -> ../dm-1
  
  root@15-89:~# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
    WARNING: Converting logical volume vg/pool0 and vg/pool0meta to pool's data 
and metadata volumes.
    THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Do you really want to convert vg/pool0 and vg/pool0meta? [y/n]: y
    Converted vg/pool0 to thin pool.
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     120 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3840 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0 -> ../dm-2
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tdata -> ../dm-1
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tmeta -> ../dm-0
  root@15-89:~# lvs -a
    LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
    [lvol0_pmspare] vg   ewi-------  2.00g
    pool0           vg   twi-a-tz-- 36.00g             0.00   0.01
    [pool0_tdata]   vg   Twi-ao---- 36.00g
    [pool0_tmeta]   vg   ewi-ao----  2.00g
  
  If you now reboot the system, all that is gone:
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:28 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:28 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control
  
  The same happens if you deactivate the VG (which the reboot undoubtedly
  triggers). It fails because of a missing /usr/sbin/thin_check which is
  provided by the thin-provisioning-tools package:
  
  root@15-89:~# vgchange -a n
    /usr/sbin/thin_check: execvp failed: No such file or directory
    WARNING: Integrity check of metadata for pool vg/pool0 failed.
    0 logical volume(s) in volume group "vg" now active
  
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:29 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:29 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1657646

Title:
  Missing thin-provisioning-tools prevents VG with thin pool LV from
  being (de)activated, but not its creation

Status in lvm2 package in Ubuntu:
  Triaged

Bug description:
  Creating a thin pool LV is allowed even when thin-provisioning-tools
  is not installed. But deactivating or activating that VG fails. Since
  deactivating the VG usually only happens at reboot, the user might
  fail to notice this big problem until then.

  I think the lvconvert tool, used to combine the two "thin LVs" into a
  thin pool LV, should refuse to run if thin-provisioning-tools, or the
  needed scripts, aren't installed.
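
  Something as simple as the following guard, run before the conversion
  (just a sketch of the idea, not lvconvert's actual logic), would fail
  early instead of leaving behind a pool that can't be reactivated:

  # refuse to build a thin pool if thin_check is not on the system
  command -v thin_check >/dev/null 2>&1 || {
      echo "thin_check not found; install thin-provisioning-tools first" >&2
      exit 1
  }
  lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0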

  Steps to reproduce:
  root@15-89:~# vgcreate vg /dev/vdb1
    Volume group "vg" successfully created

  root@15-89:~# vgs
    VG   #PV #LV #SN Attr   VSize  VFree
    vg     1   0   0 wz--n- 40.00g 40.00g

  root@15-89:~# lvcreate -n pool0 -l 90%VG vg
    Logical volume "pool0" created.

  root@15-89:~# lvcreate -n pool0meta -l 5%VG vg
    Logical volume "pool0meta" created.

  root@15-89:~# lvs
    LV        VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
    pool0     vg   -wi-a----- 36.00g
    pool0meta vg   -wi-a-----  2.00g

  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     100 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3820 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:14 vg-pool0 -> ../dm-0
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0meta -> ../dm-1

  root@15-89:~# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
    WARNING: Converting logical volume vg/pool0 and vg/pool0meta to pool's data 
and metadata volumes.
    THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Do you really want to convert vg/pool0 and vg/pool0meta? [y/n]: y
    Converted vg/pool0 to thin pool.

  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root     120 Jun 21 14:15 ./
  drwxr-xr-x 20 root root    3840 Jun 21 14:15 ../
  crw-------  1 root root 10, 236 Jun 21 13:15 control
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0 -> ../dm-2
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tdata -> ../dm-1
  lrwxrwxrwx  1 root root       7 Jun 21 14:15 vg-pool0_tmeta -> ../dm-0
  root@15-89:~# lvs -a
    LV              VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
    [lvol0_pmspare] vg   ewi-------  2.00g
    pool0           vg   twi-a-tz-- 36.00g             0.00   0.01
    [pool0_tdata]   vg   Twi-ao---- 36.00g
    [pool0_tmeta]   vg   ewi-ao----  2.00g

  If you now reboot the system, all that is gone:
  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:28 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:28 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control

  The same happens if you deactivate the VG (which the reboot
  undoubtedly triggers). It fails because of a missing
  /usr/sbin/thin_check which is provided by the thin-provisioning-tools
  package:

  root@15-89:~# vgchange -a n
    /usr/sbin/thin_check: execvp failed: No such file or directory
    WARNING: Integrity check of metadata for pool vg/pool0 failed.
    0 logical volume(s) in volume group "vg" now active
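
  lvm2 takes that path from global/thin_check_executable in lvm.conf, so
  this is not specific to this VG; the setting can be inspected with
  something like the following (exact output may differ by version):

  root@15-89:~# lvmconfig --type full global/thin_check_executable
  thin_check_executable="/usr/sbin/thin_check"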

  root@15-89:~# ll /dev/mapper/
  total 0
  drwxr-xr-x  2 root root      60 Jun 21 14:29 ./
  drwxr-xr-x 19 root root    3760 Jun 21 14:29 ../
  crw-------  1 root root 10, 236 Jun 21 14:28 control
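
  For what it's worth, the data should not be lost: after installing the
  missing package, activation works again. Roughly (not re-verified on
  this exact run):

  root@15-89:~# apt-get install -y thin-provisioning-tools
  root@15-89:~# vgchange -a y vg
    1 logical volume(s) in volume group "vg" now active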

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1657646/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
