On Sat, Feb 18, 2017 at 11:07 PM, Trammell Hudson <[email protected]> wrote:
> On Sat, Feb 18, 2017 at 10:45:31PM +0300, Oleg Artemiev wrote:
>> [...]
>> AFAIR, when an AppVM is started, some image files are made. Are these
>> files made in /var/lib/qubes/appvms or also in
>> /var/lib/qubes/vm-templates ?
>
> I've done some work on making a Qubes installation with a read-only
> (and dm-verity protected) dom0 / and a writable /home. It requires
> patching qubes/storage/__init__.py to allow the volatile.img file to
> reside on the rw partition (and not be re-created on the ro /):
>
> https://groups.google.com/forum/#!topic/qubes-devel/hG93VcwWtRY
Thank you very much! Interesting thread. I have a few questions after reading the link above:

> but in practice there are multiple places relying on the
> exact /var/lib/qubes directory - for example udev scripts preventing all
> the filesystem guessing code, or hiding those devices from qvm-block.
> Also the DispVM preparation script relies on those paths...
> So, better use mount --bind.

Should such mounts go in /etc/fstab or in startup scripts like /etc/rc.local?

/var/log, /var/cache, /var/lib/xen, /etc/libvirt/libxl

> libvirt re-writes the config files to /etc on every vm startup, which
> seems a little odd, but I haven't tracked it down yet.

Is moving /etc/libvirt to a separate place enough, or are there other dirs?

>>> Besides the above, for volatile.img it should be enough to modify this
>>> script.
>> It seems that script is insufficient for a read-only / filesystem,
>> since the prepare-volatile-img.sh script calls truncate on the file
> and qubes/storage/__init__.py calls os.remove() on the file.

Trammell, did I understand correctly that your solution

> I can just move the entire appvms/personal directory to /home and leave a
> single symlink in /var/lib/qubes/appvms

doesn't require a patch for prepare-volatile-img.sh? And do I understand correctly that to place volatile images somewhere other than /var/lib/qubes/appvms/<vmname>/ I will need to patch prepare-volatile-img.sh?

Marek, if I do need a patch for prepare-volatile-img.sh - which of the Qubes git repositories should I clone to open a pull request, and is there any chance such behavior will be merged into Qubes 3.2 (if I incorporate the changes available through the mailing list search)?

The ability to keep volatile data in a separate location would let users place often-written data on an HDD and rarely-written data on an SSD, without having to use btrfs or another SSD-aware filesystem.
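For concreteness, a sketch of what such bind mounts might look like in /etc/fstab, assuming the writable copies live under a /rw partition (the /rw source paths are my assumption, not taken from Trammell's setup):

```
# /etc/fstab - bind writable copies over the read-only / (paths assumed)
/rw/var/log       /var/log       none  bind  0 0
/rw/var/cache     /var/cache     none  bind  0 0
/rw/var/lib/xen   /var/lib/xen   none  bind  0 0
/rw/etc/libvirt   /etc/libvirt   none  bind  0 0
```

Entries in /etc/fstab are applied early at boot, before most services start, which avoids the ordering problems of doing the same mounts from /etc/rc.local.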
> I modified the python code to check for a symlink and remove the
> destination of the link instead:
>
>     # Re-create only for template based VMs
>     if source_template is not None and self.volatile_img:
>         if os.path.islink(self.volatile_img):
>             if os.path.exists(self.volatile_img):
>                 os.remove(os.readlink(self.volatile_img))
>         elif os.path.exists(self.volatile_img):
>             os.remove(self.volatile_img)
>
> I'm concerned that this code is executed as root, while the user
> controls the path to self.volatile_img. It seems like this would
> allow someone to remove any file on the system by tweaking the
> destination file.

Have you read /etc/sudoers.d/qubes? ;) AFAIR, as the Qubes team states there, Qubes is currently not a multi-user system (in the sense of being used by different people) - user separation via ordinary Linux mechanisms is considered not secure enough. So, intentionally, there is no significant difference between root and user unless you enable one by hardening dom0 (e.g. `sudo bash` is password-less by default in all VMs).

So, given the notes in sudoers.d/qubes, I have a question for Marek (or anyone): is there a reason not to include such behavior in Qubes 3.2?

> BTW in Qubes 4.0 we already have an API to support arbitrary images
> location.

I see no Qubes 4 in the downloads on the official web site. Do I have to compile it from source? Or is enabling some repository in Qubes 3.2 enough to get to 4.0? It seems I missed this info in the FAQ.

>> [...]
>> Yes, I understand this won't turn off writes to the SSD when the template VM
>> is upgraded.
>
> One difficult point with this setup is upgrades, especially
> if Qubes overwrites the python library. In my case it also requires
> rebooting into a recovery mode, installing updates and then re-signing
> the root filesystem. The rw partition is on separate TPM protected
> keys so the VMs are not available during the upgrade process.

Thanks for explaining a nice setup. :)

Unfortunately, until I buy an open-hardware laptop (e.g.
a certified https://www.crowdsupply.com/purism/librem-13 ) I have no way to implement this: my Asus N56VZ is inadequate (no VT-d, so I cannot pass different devices on a single USB controller to different VMs; also, any VM with access to a PCI device provides an attack surface from that VM to dom0 via DMA, by allowing writes to arbitrary dom0 memory - as I understood from this thread: https://groups.google.com/forum/#!msg/qubes-devel/2UL9ZcIPT6Y/xUzL-wwXEmQJ ).

BTW: there is a newer model similar to the purism/librem-13: https://www.crowdsupply.com/purism/librem-15 , but only the purism/librem-13 is listed as officially certified for Qubes. Why not the purism/librem-15 as well?

--
Bye. Olli.
gpg --search-keys grey_olli , use key w/ fingerprint below:
Key fingerprint = 9901 6808 768C 8B89 544C 9BE0 49F9 5A46 2B98 147E
Blog keys (the blog is mostly in Russian): http://grey-olli.livejournal.com/tag/

--
You received this message because you are subscribed to the Google Groups "qubes-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/qubes-devel/CABunX6N_jipQ3B3_dusQQuR_S%3Dt6-R%3DBAxJeLjajn51q%2ByM%2BcQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
