Re: [qubes-users] Re: Disk space--R4 lies through its teeth

2018-03-20 Thread Chris Laprise

On 03/19/2018 02:28 PM, Yuraeitha wrote:

On Monday, March 19, 2018 at 6:34:05 PM UTC+1, Bill Wether wrote:

This has been mentioned before in 
, but I don't 
see anywhere that it's fixed.

In R3.2, df in Dom0 would show how much actual disk space remained.  That's a 
critical piece of data for production use, given the sheer amount of breakage 
caused by running out of space.

I have a 1TB SSD with Qubes 4.0 RC5 and about 450GB of restored VMs, but when I 
type 'df' in dom0 I get:

Filesystem                  1K-blocks    Used Available Use% Mounted on
devtmpfs                      1995976       0   1995976   0% /dev
tmpfs                         2009828       0   2009828   0% /dev/shm
tmpfs                         2009828    1612   2008216   1% /run
tmpfs                         2009828       0   2009828   0% /sys/fs/cgroup
/dev/mapper/qubes_dom0-root 935037724 3866076 883604596   1% /
tmpfs                         2009828       8   2009820   1% /tmp
xenstore                      2009828     416   2009412   1% /var/lib/xenstored
/dev/sda1                      999320   79676    850832   9% /boot
tmpfs                          401964       8    401956   1% /run/user/1000

You'd never know that the disk is actually half full or a little more. I have 
no idea how to manage my disk space on Qubes 4.0.

Suggestions?

Thanks

BillW


In addition to using "sudo lvs", I believe this may also be relevant.

quote:
"In all versions of Qubes, you may want to set up a periodic job in dom0 to trim the 
disk. This can be done with either systemd (weekly only) or cron (daily or weekly)."
...
"Although discards can be issued on every delete inside dom0 by adding the discard 
mount option to /etc/fstab, this option can hurt performance so the above procedure is 
recommended instead. However, inside App and Template qubes, the discard mount option is 
on by default to notify the LVM thin pool driver (R4.0) or sparse file driver (R3.2) that 
the space is no longer needed and can be zeroed and re-used."
https://www.qubes-os.org/doc/disk-trim/

In general, if trimming is not working correctly, either in VMs or in dom0, 
you may get wrong numbers even if you use the correct commands to list your 
drive space usage.

The reason so much drive space usage is being reported is very likely that 
trimming isn't working or isn't enabled.



This has become a tricky subject because TRIM and discard have different 
but overlapping effects.


The disk-trim doc you linked could give the impression that editing 
lvm.conf is necessary (at least within the context of this thread). All 
that's required to reclaim unused dom0 space is fstrim (such as the 
timed examples in the doc) or adding the 'discard' option to / in fstab. 
Some prefer the timer, which is slightly safer for dom0 but can carry a 
greater risk of running out of space overall.
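
As a minimal sketch of both approaches (this assumes a standard Fedora-based 
dom0 where util-linux provides fstrim and its timer; adapt to your own setup):

  # Option A: periodic trim via the stock systemd timer
  sudo systemctl enable --now fstrim.timer
  # ...or a one-off trim of every mounted filesystem that supports it
  sudo fstrim -av

  # Option B: continuous discard -- add 'discard' to the options for / in
  # /etc/fstab. Illustrative line only; keep your existing device and options:
  # /dev/mapper/qubes_dom0-root  /  ext4  defaults,discard  1 1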


Until the disk space widget becomes available, you can view the LVM 
pool's free space with the command qubesuser posted here:

https://github.com/QubesOS/qubes-issues/issues/3240#issuecomment-340088432
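
The exact command is in that comment, but the general idea with plain lvs 
looks something like this (assuming the default R4.0 volume group and thin 
pool names, qubes_dom0/pool00):

  # Data% and Meta% show how full the thin pool really is
  sudo lvs -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0/pool00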

--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886



[qubes-users] Re: Disk space--R4 lies through its teeth

2018-03-19 Thread Yuraeitha
On Monday, March 19, 2018 at 6:34:05 PM UTC+1, Bill Wether wrote:
> This has been mentioned before in 
> , but I don't 
> see anywhere that it's fixed.
> 
> In R3.2, df in Dom0 would show how much actual disk space remained.  That's a 
> critical piece of data for production use, given the sheer amount of breakage 
> caused by running out of space.
> 
> I have a 1TB SSD with Qubes 4.0 RC5 and about 450GB of restored VMs, but when 
> I type 'df' in dom0 I get:
> 
> Filesystem                  1K-blocks    Used Available Use% Mounted on
> devtmpfs                      1995976       0   1995976   0% /dev
> tmpfs                         2009828       0   2009828   0% /dev/shm
> tmpfs                         2009828    1612   2008216   1% /run
> tmpfs                         2009828       0   2009828   0% /sys/fs/cgroup
> /dev/mapper/qubes_dom0-root 935037724 3866076 883604596   1% /
> tmpfs                         2009828       8   2009820   1% /tmp
> xenstore                      2009828     416   2009412   1% /var/lib/xenstored
> /dev/sda1                      999320   79676    850832   9% /boot
> tmpfs                          401964       8    401956   1% /run/user/1000
> 
> You'd never know that the disk is actually half full or a little more. I have 
> no idea how to manage my disk space on Qubes 4.0.
> 
> Suggestions?
> 
> Thanks
> 
> BillW

In addition to using "sudo lvs", I believe this may also be relevant.

quote: 
"In all versions of Qubes, you may want to set up a periodic job in dom0 to 
trim the disk. This can be done with either systemd (weekly only) or cron 
(daily or weekly)."
...
"Although discards can be issued on every delete inside dom0 by adding the 
discard mount option to /etc/fstab, this option can hurt performance so the 
above procedure is recommended instead. However, inside App and Template qubes, 
the discard mount option is on by default to notify the LVM thin pool driver 
(R4.0) or sparse file driver (R3.2) that the space is no longer needed and can 
be zeroed and re-used." 
https://www.qubes-os.org/doc/disk-trim/
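
To illustrate the cron variant mentioned in the doc, a weekly job in dom0 can 
be as small as this (a sketch only; the script name and location are just 
examples):

  # /etc/cron.weekly/fstrim-dom0  (marked executable)
  #!/bin/sh
  # trim every mounted filesystem that supports discard
  /usr/sbin/fstrim -av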

In general, if trimming is not working correctly, either in VMs or in dom0, 
you may get wrong numbers even if you use the correct commands to list your 
drive space usage.

The reason so much drive space usage is being reported is very likely that 
trimming isn't working or isn't enabled.
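
If you want to sanity-check whether discards can even reach the device, lsblk 
can report that (it only shows advertised support, not whether trims are 
actually being issued):

  # non-zero DISC-GRAN / DISC-MAX means the device or layer passes discards through
  lsblk --discard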
