On Monday, March 19, 2018 at 11:22:38 AM UTC-7, Bill Wether wrote:
> On Monday, March 19, 2018 at 1:55:39 PM UTC-4, Unman wrote:
> > On Mon, Mar 19, 2018 at 10:34:05AM -0700, Bill Wether wrote:
> > > This has been mentioned before in 
> > > <https://groups.google.com/forum/#!msg/qubes-users/Y1QjsK5fp1A>, but I 
> > > don't see anywhere that it's fixed.
> > > 
> > > In R3.2, df in Dom0 would show how much actual disk space remained.  
> > > That's a critical piece of data for production use, given the sheer 
> > > amount of breakage caused by running out of space.
> > > 
> > > I have a 1TB SSD with Qubes 4.0 RC5 and about 450GB of restored VMs, but 
> > > when I type 'df' in dom0 I get:
> > > 
> > > Filesystem                  1K-blocks    Used Available Use% Mounted on
> > > devtmpfs                      1995976       0   1995976   0% /dev
> > > tmpfs                         2009828       0   2009828   0% /dev/shm
> > > tmpfs                         2009828    1612   2008216   1% /run
> > > tmpfs                         2009828       0   2009828   0% /sys/fs/cgroup
> > > /dev/mapper/qubes_dom0-root 935037724 3866076 883604596   1% /
> > > tmpfs                         2009828       8   2009820   1% /tmp
> > > xenstore                      2009828     416   2009412   1% /var/lib/xenstored
> > > /dev/sda1                      999320   79676    850832   9% /boot
> > > tmpfs                          401964       8    401956   1% /run/user/1000
> > > 
> > > You'd never know that the disk is actually half full or a little more. I 
> > > have no idea how to manage my disk space on Qubes 4.0.
> > > 
> > > Suggestions?
> 
> > 
> > Qubes 4.0 uses LVM thin pools.
> > Try using sudo lvs to see the actual data used in the pool.
> 
> Ah, okay, thanks.  When I do that, I get 
> 
> [billw@dom0 Desktop]$ sudo lvs
>   LV     VG         Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   pool00 qubes_dom0 twi-aotz-- 906.96g               52.49  28.32
>   root   qubes_dom0 Vwi-aotz-- 906.96g pool00         2.14
>   swap   qubes_dom0 -wi-ao----   7.55g
> 
> and so forth.
> 
> Does that mean that my drive is actually 81% full with only 450 GB of VMs?  I 
> sure hope not.  That's over 50% overhead!                 
> 
> Cheers
> 
> BillW

81% is probably not accurate: Data% and Meta% are percentages of different 
volumes, and the metadata is stored in its own LV, which seems to start out 
at around 16 GB [1].

If you want more precise info on used space, qvm-pool is useful (specifically, 
qvm-pool -i lvm)
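If you want to sanity-check the numbers by hand, the free space and used 
percentage follow directly from the "size" and "usage" byte counts that 
qvm-pool -i lvm reports. A minimal sketch (the byte values below are made-up 
stand-ins for illustration, not real qvm-pool output):

```shell
#!/bin/sh
# Hypothetical byte counts, standing in for the "size" and "usage"
# lines of `qvm-pool -i lvm` output:
SIZE=973830979584      # total pool size in bytes (made-up example)
USAGE=511191482368     # bytes allocated in the pool (made-up example)

FREE=$((SIZE - USAGE))
# Integer percentage used, rounded to the nearest whole percent.
USEDCENT=$(((200 * USAGE / SIZE + 1) / 2))
printf '%d bytes free (%d%% used)\n' "$FREE" "$USEDCENT"
# prints: 462639497216 bytes free (52% used)
```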

The attached script calculates the free space in the main LVM pool and the 
percentage used, and you can use it with an Xfce Generic Monitor to add its 
output to your panel.

Also, note that lvs shows the maximum (virtual) sizes of the LVs assigned to 
TemplateVMs & AppVMs, not the space actually used.
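As a worked example using the lvs output quoted above: an LV's actual 
allocation is its LSize times its Data%. A rough sketch in shell, using 
integer math on the quoted figures for qubes_dom0/root (LSize 906.96g, 
Data% 2.14):

```shell
#!/bin/sh
# Approximate space actually allocated for qubes_dom0/root, from the
# lvs output quoted above. Integer math in hundredths of a GB and
# hundredths of a percent to avoid floating point in sh.
LSIZE_CGB=90696      # 906.96 GB, in hundredths of a GB
DATA_CPCT=214        # 2.14 %, in hundredths of a percent

ALLOC_CGB=$((LSIZE_CGB * DATA_CPCT / 10000))
printf 'root actually occupies about %d.%02d GB\n' \
    $((ALLOC_CGB / 100)) $((ALLOC_CGB % 100))
# prints: root actually occupies about 19.40 GB
```

So the 906.96g shown for root is only the ceiling; roughly 19 GB of it is 
really allocated in the pool.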

1 - https://github.com/QubesOS/qubes-issues/issues/3240

#!/bin/sh
# Report free space in the default LVM pool, in the output format
# expected by the Xfce Generic Monitor (genmon) panel plugin.
POOL=$(qvm-pool -i lvm)
SIZE=$(echo "$POOL" | awk '/^size/ {print $2}')
USAGE=$(echo "$POOL" | awk '/^usage/ {print $2}')
FREE=$((SIZE - USAGE))
# Percentage used, rounded to the nearest whole percent.
USEDCENT=$(((200 * USAGE / SIZE + 1) / 2))
# Convert bytes free to GB with two decimal places.
FREEGB=$((FREE / 1000000000))
FREEMB=$((FREE % 1000000000 / 10000000))
printf '<tool>%d.%02d GB FREE</tool>\n' "$FREEGB" "$FREEMB"
echo "<bar>$USEDCENT</bar>"
