On Sun, Jul 13, 2025 at 06:24:35AM +0000, qubes-os via qubes-devel wrote:
> Very much new to Qubes, and trying to ease my way into it,
> albeit possibly hindered by having had some previous exposure
> to non-Qubes Xen environments.
> 
> 
> I'd like to ask a question about the way in which an AppVM's
> Copy-on-Write partition, from within the "volatile.img" VBD,
> is used.
> 
> 
> From reading the Template Implementation page, I note
> 
> 
> Block devices of a VM
> 
>    Every VM has 4 block devices connected:
> 
>      * xvda – base root device (/) – details described below
>      * xvdb – private.img – place where VM always can write.
>      * xvdc – volatile.img, discarded at each VM restart – here is placed
>        swap and temporal “/” modifications (see below)
>      * xvdd – modules.img – kernel modules and firmware
> 
> 
> and then, below, 
> 
> 
> Snapshot device in Dom0
> 
>    This device consists of:
> 
>      * root.img     – real template filesystem
>      * root-cow.img – differences between the device as seen by AppVM
>        and the current root.img
> 
>    The above is achieved through creating device-mapper snapshots for each
>    version of root.img. When an AppVM is started, a xen hotplug script
>    (/etc/xen/scripts/block-snapshot) reads the inode numbers of root.img and
>    root-cow.img; these numbers are used as the snapshot device’s name. When a
>    device with the same name exists the new AppVM will use it – therefore,
>    AppVMs based on the same version of root.img will use the same device. Of
>    course, the device-mapper cannot use the files directly – it must be
>    connected through /dev/loop*. The same mechanism detects if there is a
>    loop device associated with a file determined by the device and inode
>    numbers – or if creating a new loop device is necessary.
> 
> Then, from inspection of the block devices within a VM, I can see
> 
> xvda
> 
> Number  Start    End      Size     File system  Name                 Flags
>         34s      2047s    2014s    Free Space
>  1      1.00MiB  201MiB   200MiB                EFI System           boot, esp
>  2      201MiB   203MiB   2.00MiB               BIOS boot partition  bios_grub
>  3      0.02GiB  20.0GiB  19.8GiB  ext4         Root filesystem
>         20.0GiB  20.0GiB  2015s    Free Space
> 
> xvdc
> 
> Number  Start        End       Size Type      File system     Flags
>         63s        2047s      1985s           Free Space
>  1      0.00GiB  1.00GiB    1.00GiB  primary  linux-swap(v1)  
>  3      1.00GiB  10.0GiB    9.00GiB  primary
> 
> 
> but what I can't seem to work out is where the Copy-on-Write partition
> (as I think of it: xvdc3) is being "associated" with the VM's "Root
> filesystem" (xvda3), nor where the loop devices, required for it all
> to hang together, are created.

The above documentation is a bit outdated: with LVM thin provisioning,
the CoW layer for the root volume is handled in dom0, so the VM gets a
read-write snapshot as xvda and doesn't need to do CoW on its own. The
volatile volume is then used only for swap.
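
For example (just a sketch - "personal" is a placeholder qube name and
the LV names assume the default LVM thin pool layout), you can look at
this from dom0:

  # show how the qube's root volume is configured (rw, snap_on_start, ...)
  qvm-volume info personal:root

  # list the thin LVs backing the qube's volumes; while the qube is
  # running you should see a snapshot LV (typically something like
  # vm-personal-root-snap) next to vm-personal-root
  sudo lvs | grep personal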

If you want this CoW layer to be done in the VM, that is still a
supported option; you can select it by setting the root volume to
read-only (qvm-volume config VMNAME:root rw false). But it will be a
bit slower.
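
A minimal sketch of switching it (run in dom0; VMNAME is a placeholder,
and the change takes effect from the next qube start):

  # make root read-only in the VM, so the VM does the CoW itself
  # (using the volatile volume, as in the documentation quoted above)
  qvm-volume config VMNAME:root rw false

  # verify the current setting
  qvm-volume info VMNAME:root

  # revert to the default (CoW handled in dom0)
  qvm-volume config VMNAME:root rw true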

> The reference to the 
> 
>   "xen hotplug script (/etc/xen/scripts/block-snapshot)"
> 
> has me thinking that the "association" is happening in the Dom0,
> but I can't seem to see the "various parts", when taking a look
> around the Dom0 or AppVM, after invoking an "Xfce Terminal" from
> the personal qube.
> 
> I do note though, that inside the VM, a 'df' shows the root device
> being presented as
> 
>   /dev/mapper/dmroot
> 
> and not
> 
>   /dev/xvda3
> 
> which then has me thinking that the "association" might be
> taking place within the AppVM, but again, I can't see any
> obvious evidence for that.

Generally, the VM's initramfs takes care of assembling /dev/mapper/dmroot.
But if you look closely, /dev/mapper/dmroot is simply a symlink to
/dev/xvda3.
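
You can check this from inside the VM, for example (assuming the
default setup with a writable xvda; exact output will vary):

  # dmroot should be a plain symlink, not a real device-mapper target
  ls -l /dev/mapper/dmroot
  readlink -f /dev/mapper/dmroot   # expected to resolve to /dev/xvda3

  # confirm what is actually mounted on /
  findmnt /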

> I feel that I should be able to see the "various parts", but,
> when looking around, am clearly missing them.
> 
> 
> Could someone point me to a document, or previous answer, that 
> makes things clearer, and/or to what I might have missed in 
> looking around inside the Dom0 and AppVM?

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
