[vdsm] some questions about SELinux policy on VDSM storage
Hi all,

I am trying to create a VM in VDSM, but I find that SELinux is preventing qemu from accessing images in VDSM storage. I added two policies to SELinux and then started the VM successfully. I am not sure whether I have done this right, so I would like to ask for some advice.

The directory structure of my VDSM storage is as follows:

$ tree -l /rhev
/rhev
`-- data-center
    |-- 3ace0f74-a9fa-11e1-bb33-00247edb4743
    |   |-- 1de3f4e2-a9f9-11e1-9956-00247edb4743 -> /rhev/data-center/mnt/_teststorage/1de3f4e2-a9f9-11e1-9956-00247edb4743
    |   |   |-- dom_md
    |   |   |   |-- ids
    |   |   |   |-- inbox
    |   |   |   |-- leases
    |   |   |   |-- metadata
    |   |   |   `-- outbox
    |   |   |-- images
    |   |   |   `-- 8230cd4a-aa1b-11e1-970c-00247edb4743
    |   |   |       |-- b9e163b6-aa1c-11e1-94f6-00247edb4743
    |   |   |       `-- b9e163b6-aa1c-11e1-94f6-00247edb4743.meta
    |   |   `-- master
    |   |       |-- tasks
    |   |       |   `-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5
    |   |       |       |-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.job.0
    |   |       |       |-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.recover.0
    |   |       |       |-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.recover.1
    |   |       |       |-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.result
    |   |       |       `-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.task
    |   |       `-- vms
    |   `-- mastersd -> /rhev/data-center/mnt/_teststorage/1de3f4e2-a9f9-11e1-9956-00247edb4743  [recursive, not followed]
    |-- hsm-tasks
    `-- mnt
        `-- _teststorage -> /teststorage
            `-- 1de3f4e2-a9f9-11e1-9956-00247edb4743
                |-- dom_md
                |   |-- ids
                |   |-- inbox
                |   |-- leases
                |   |-- metadata
                |   `-- outbox
                |-- images
                |   `-- 8230cd4a-aa1b-11e1-970c-00247edb4743
                |       |-- b9e163b6-aa1c-11e1-94f6-00247edb4743
                |       `-- b9e163b6-aa1c-11e1-94f6-00247edb4743.meta
                `-- master
                    |-- tasks
                    |   `-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5
                    |       |-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.job.0
                    |       |-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.recover.0
                    |       |-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.recover.1
                    |       |-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.result
                    |       `-- a01abf82-c8f2-4d52-8e50-f26a2a2994b5.task
                    `-- vms

22 directories, 24 files

The storage was constructed using vdsClient: I created a LOCALFS DATA domain, a pool, an image, and a volume. I find that libvirtd relabels the volume to system_u:object_r:svirt_image_t:s0:cXXX,cXXX and starts qemu as system_u:system_r:svirt_t:s0:cXXX,cXXX:

$ ll -Z b9e163b6-aa1c-11e1-94f6-00247edb4743
-rw-rw-r--. vdsm kvm system_u:object_r:svirt_image_t:s0:c248,c603 b9e163b6-aa1c-11e1-94f6-00247edb4743

$ ps auxZ | grep qemu
system_u:system_r:svirt_t:s0:c248,c603 qemu 11132 60.0 6.9 1531336 273756 ? Sl 13:58 0:31 /usr/bin/qemu-kvm ...(lots of arguments)

This is an expected feature of sVirt, and qemu is supposed to have access to the volume. To reach the volume file b9e163b6-aa1c-11e1-94f6-00247edb4743, qemu must go through /rhev/data-center/3ace0f74-a9fa-11e1-bb33-00247edb4743/1de3f4e2-a9f9-11e1-9956-00247edb4743/images/8230cd4a-aa1b-11e1-970c-00247edb4743/, so it must read the symlinks 1de3f4e2-... and _teststorage. However, these two symlinks are created by VDSM with the security label default_t, while qemu runs as svirt_t, so SELinux prevents qemu from reading the symlinks, which means qemu cannot access the volume file.

I looked up the libvirt documentation; it says libvirt runs qemu in a confined domain and only allows qemu to access files and devices labeled system_u:object_r:virt_image_t or system_u:object_r:svirt_image_t (http://libvirt.org/drvqemu.html#securityselinux). So I added two policies as follows:

semanage fcontext -a -t virt_image_t '/rhev/data-center/mnt/[^/]+'
semanage fcontext -a -t virt_image_t '/rhev/data-center/[-0-9a-f]+/[-0-9a-f]+'

This lets qemu access those symlinks.
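For anyone hitting the same problem: before the relabeling, the denials are easy to confirm. The commands below are only a sketch; they assume auditd is running and audit2why (from policycoreutils) is installed, and the paths are the ones from the tree above.

# the two symlinks qemu has to traverse carry default_t
ls -ldZ /rhev/data-center/mnt/_teststorage
ls -ldZ /rhev/data-center/3ace0f74-a9fa-11e1-bb33-00247edb4743/1de3f4e2-a9f9-11e1-9956-00247edb4743

# recent AVC denials from the qemu process, plus an explanation of each
ausearch -m avc -ts recent -c qemu-kvm
ausearch -m avc -ts recent -c qemu-kvm | audit2why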
Including the existing policies on /rhev, the overall policies are now as follows:

# semanage fcontext -l | grep '^/rhev'
/rhev                                        directory   system_u:object_r:mnt_t:s0
/rhev(/[^/]*)?                               directory   system_u:object_r:mnt_t:s0
/rhev/[^/]*/.*                               all files   None
/rhev/data-center/[-0-9a-f]+/[-0-9a-f]+      all files   system_u:object_r:virt_image_t:s0
/rhev/data-center/mnt/[^/]+                  all files   system_u:object_r:virt_image_t:s0

Then I ran restorecon -Rv /rhev. After doing this, I can create and boot a VM in VDSM.

Now I have three questions.

1. I think I'm giving qemu
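By the way, before running restorecon for real, it is possible to confirm that the new rules match the intended paths. This is only a sketch: matchpathcon ships with libselinux-utils, and restorecon's -n flag makes it a dry run.

# what the loaded policy would now assign to each symlink
matchpathcon /rhev/data-center/mnt/_teststorage
matchpathcon /rhev/data-center/3ace0f74-a9fa-11e1-bb33-00247edb4743/1de3f4e2-a9f9-11e1-9956-00247edb4743

# -n reports what would be relabeled without changing anything
restorecon -Rvn /rhev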
Re: [vdsm] some questions about SELinux policy on VDSM storage
On 06/11/2012 02:56 PM, Zhou Zheng Sheng wrote:
> [full original message quoted; snipped]