With bookworm's kernel, QEMU-KVM EFI cannot see virtio partitions

2022-11-18 Thread Jorge P. de Morais Neto


[Attachment: Windows-10-Jorge.xml (XML document)]

Hi!  After I upgraded to bookworm, my QEMU-KVM VM fails to boot the
guest OS; instead it drops to the EFI shell.  If I boot the physical
host into bullseye's kernel (Linux 5.10) then the VM boots normally.

This VM has two virtual disks, each backed by a physical partition---one
partition in my NVMe SSD and the other on my HDD.  The interface for
both virtual disks is virtio.  I have attached the XML.
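
For those who prefer not to open the attachment, each disk stanza looks
roughly like this (the source path below is a hypothetical example, not
copied from my XML):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/nvme0n1p5'/>   <!-- hypothetical partition path -->
      <target dev='vda' bus='virtio'/>
    </disk>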

Is this a known problem?  Is there a workaround, other than booting the
physical host into an older kernel?  Should I report this as a bug?

Regards

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- https://www.defectivebydesign.org
- https://www.gnu.org


"Failed unmounting /var/cache" error message when shutting down

2022-04-03 Thread Jorge P. de Morais Neto
Hi.  This problem is a few months old; I sent a similar message on
20 Jan 2022 11:57:35 (UTC).  Since then I have slightly simplified my
Btrfs subvolume layout but the problem remains.

When I shut down or halt my laptop, I get error messages like:

[FAILED] Failed unmounting /var/cache.
[⋮]
[  OK  ] Reached target Unmount All Filesystems.
[  OK  ] Reached target Final Step.
 Starting halt...

My Nextcloud public folder has a screenshot from January:
https://cloud.disroot.org/s/MFaEoozaHHJjtbs

I suppose these Btrfs subvolumes do still get unmounted cleanly before
shutdown, because:
1. The success message "[  OK  ] Reached target Unmount All Filesystems."
2. When the laptop turns back on, I don't see any message about an unclean
   filesystem.
3. In the output of ~smartctl -a /dev/nvme0n1~, the field "Unsafe
   Shutdowns" does not increase.

Am I thinking correctly, and can I safely disregard those error messages?

For reference, I have attached the relevant excerpt from /etc/fstab.

Kind regards
LABEL=SSD       /                        btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@rootfs    0  0
LABEL=SSD       /var/backups             btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-bkp   0  0
LABEL=SSD       /var/cache               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-cch   0  0
LABEL=SSD       /var/spool               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-spl   0  0
LABEL=SSD       /var/tmp                 btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-tmp   0  0
LABEL=HDD       /var/log                 btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-log   0  0
LABEL=HDD       /var/cache/apt/archives  btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@apt-arch  0  0
LABEL=HDD       /root                    btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~root     0  0
LABEL=SSD       /home                    btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@home      0  0
LABEL=SSD       /home/cache              btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@home-cch  0  0
LABEL=HDD       /home-HDD                btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@home      0  0
LABEL=HDD       /home-HDD/cache          btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@home-cch  0  0
LABEL=SSD       /gnu                     btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@guix-gnu  0  0
LABEL=SSD       /var/guix                btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@guix-var  0  0
LABEL=SSD       /usr/local               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@usr-local 0  0
UUID=9550-C451  /boot/efi                vfat   umask=0077                                                0  1
LABEL=HDD-swap  none                     swap   sw                                                        0  0

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- Free Software Supporter: https://www.fsf.org/free-software-supporter
- If an email of mine arrives at your spam box, please notify me.


Re: Btrfs best practices

2022-01-29 Thread Jorge P . de Morais Neto
Hello!  I think I should inform this list about my choices so far:

On [2021-12-16 Thu 14:13:05-0300], Jorge P. de Morais Neto wrote:

> Should I use a backported kernel as Btrfs [wiki][] recommends?  I worry
> that bullseye-backports comes from Debian testing with poor security.

I'm just using bullseye's kernel (5.10 LTS).

> For lifetime and space saving, I intend to install Debian to the SSD
> with compress-force=zstd:12, but then adopt compress-force=zstd.  Thus
> the installation will be slow---I'll do something else while the
> installer works---but the installed system will be efficient, right?

I'm still using compress=zstd:12 and it's performing well.  Notice I
went from "compress-force=zstd:12" to just "compress=zstd:12".  That is
because of:

Using the forcing compression is not recommended, the heuristics are
supposed to decide that and compression algorithms internally detect
incompressible data too.[1]

Btrfs contains an internal heuristics that determines if some data
is compressible so that it doesn't try to compress data that isn't
compressible as this wastes CPU time.  The compress-force mount
option bypasses this heuristics in order to gain better compression
ratios.  A downside is that this increases fragmentation with
non-compressible files.[2]

1: https://btrfs.readthedocs.io/en/latest/Compression.html "Compression —
   BTRFS documentation"
2: https://wiki.tnonline.net/w/Btrfs/Compression#The_compress-force_mount_option
   "Btrfs/Compression - Forza's ramblings"

In the near future I intend to reduce this strong compression level (12)
to something more usual, in order to reduce power usage.
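
If I do lower the level, a remount should be enough to switch (a minimal
sketch; the new level only affects data written afterwards, and the fstab
entries would need the same change to persist it):

    sudo mount -o remount,compress=zstd:6 /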

> Is fragmentation a concern?  Is the [Gotchas][] article accurate?

I now have little reason to worry about fragmentation, because:

1. I dedicated a raw partition to my qemu-KVM virtual machine, bypassing
   Btrfs.
2. I moved the caches of ungoogled-chromium, GNU IceCat, Firefox,
   Evolution and GNU Guix to the HDD, because they (especially the web
   browser caches) were writing too much temporary data to the SSD.
   Thus, if they ever become too fragmented, I can now just defrag them
   without the danger of wearing out the SSD.
3. I made a script to find heavily fragmented files (using compsize's
   output) and so far I have nothing to worry about.
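
For the curious, here is a rough sketch of that kind of check, using
filefrag instead of compsize (the path, size cut-off and 200-extent
threshold are arbitrary):

    find /home -xdev -type f -size +1M -print0 \
      | xargs -0 filefrag 2>/dev/null \
      | awk -F': ' '{ n=$2; sub(/ extent.*/, "", n); if (n+0 > 200) print n, $1 }' \
      | sort -rn | head -20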

> ** Subvolumes

I read https://en.opensuse.org/SDB:BTRFS and laid out subvolumes
according to this fstab excerpt:

LABEL=SSD  /                        btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@rootfs          0  0
LABEL=SSD  /var/cache               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-cache       0  0
LABEL=SSD  /var/backups             btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-backups     0  0
LABEL=SSD  /var/mail                btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-mail        0  0
LABEL=SSD  /var/spool               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-spool       0  0
LABEL=SSD  /var/tmp                 btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-tmp         0  0
LABEL=HDD  /var/log                 btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-log         0  0
LABEL=SSD  /var/lib/libvirt/images  btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@libvirt-images  0  0
LABEL=HDD  /var/cache/apt/archives  btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@apt-archives    0  0
LABEL=HDD  /root                    btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~root           0  0
LABEL=SSD  /home                    btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@home            0  0
LABEL=SSD  /home/cache              btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@home-cache      0  0
LABEL=HDD  /home-HDD                btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@home            0  0
LABEL=HDD  /home-HDD/cache          btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@home-cache      0  0
LABEL=SSD  /gnu                     btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@guix-store      0  0
LABEL=SSD  /var/guix                btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@guix-var        0  0
LABEL=SSD  /usr/local               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@usr-local       0  0

Rationale:
1. In the future I could snapshot @rootfs before certain system
   operations (say, large upgrades).  If I then roll back the system to a
   snapshot, I'll still want the latest logs, user data, cache,
   libvirt-images etc., so these should be outside the @rootfs
   subvolume.  Also, including them in snapshots would be very expensive
   because some of these directories hold a lot of frequently changing data.
2. If I snapshot @home (probably for backup) I don't want to snapshot
   user cache (see above).
3. Some user data should be on the HDD, such as videos, music, pictures,
   downloads etc.  They are large files that would fill the SSD; and
   their usage charact

"Failed unmounting "/{root,var/cache} error messages when shutting down

2022-01-20 Thread Jorge P. de Morais Neto
Hi.  When I shut down or halt my laptop, I see error messages like:

[FAILED] Failed unmounting /root.
[⋮]
[FAILED] Failed unmounting /var/cache.
[⋮]
[  OK  ] Reached target Unmount All Filesystems.
[  OK  ] Reached target Final Step.
 Starting halt...

My Nextcloud public folder has a screenshot:
https://cloud.disroot.org/s/MFaEoozaHHJjtbs

I suppose these Btrfs subvolumes do still get unmounted cleanly before
shutdown, because:
1. The success message "[  OK  ] Reached target Unmount All Filesystems."
2. When the laptop turns back on, I don't see any message about an unclean
   filesystem.
3. In the output of ~smartctl -a /dev/nvme0n1~, the field "Unsafe
   Shutdowns" does not increase.

Am I thinking correctly, and can I safely disregard those error messages?

For reference, here is the relevant excerpt from /etc/fstab:

--8<---cut here---start->8---
LABEL=SSD  /                        btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@rootfs          0  0
LABEL=SSD  /var/cache               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-cache       0  0
LABEL=SSD  /var/backups             btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-backups     0  0
LABEL=SSD  /var/mail                btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-mail        0  0
LABEL=SSD  /var/spool               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-spool       0  0
LABEL=SSD  /var/tmp                 btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-tmp         0  0
LABEL=HDD  /var/log                 btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-log         0  0
LABEL=SSD  /var/lib/libvirt/images  btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@libvirt-images  0  0
LABEL=HDD  /var/cache/apt/archives  btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@apt-archives    0  0
LABEL=SSD  /root                    btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~root           0  0
LABEL=SSD  /home/jorge              btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~jorge          0  0
LABEL=SSD  /home/jorge/.cache       btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~jorge-.cache   0  0
LABEL=HDD  /home/jorge/HDD          btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~jorge          0  0
LABEL=SSD  /home/wanessa            btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~wanessa        0  0
LABEL=HDD  /home/wanessa/HDD        btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~wanessa        0  0
LABEL=SSD  /gnu                     btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@guix-store      0  0
LABEL=SSD  /var/guix                btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@guix-var        0  0
LABEL=SSD  /usr/local               btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@usr-local       0  0
--8<---cut here---end--->8---

Kind regards

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- Please adopt free/libre formats like PDF, Org, LaTeX, ODF, Opus, WebM and 7z.
- Libre apps for AOSP (Replicant, LineageOS, etc.) and Android: F-Droid
- https://www.gnu.org/philosophy/free-sw.html "What is free software?"



Debian Btrfs: subvolume layout best practice

2022-01-16 Thread Jorge P. de Morais Neto
Hi!  I use Btrfs on a Dell Inspiron 5570 laptop with 16 GiB RAM, a 1 TB
SATA HDD and an M.2 NVMe 250 GB SSD---a Western Digital WD Blue SN550
rated for 150 TBW.  I have read a lot on subvolume layout and, inspired
partly by [1], laid out subvolumes according to this fstab excerpt:
1: https://en.opensuse.org/SDB:BTRFS "SDB:BTRFS - openSUSE Wiki"

LABEL=SSD  /                        btrfs  noatime,space_cache=v2,compress-force=zstd:12,subvol=@rootfs         0  0
LABEL=SSD  /var/                    btrfs  noatime,space_cache=v2,compress-force=zstd:12,subvol=@var            0  0
LABEL=HDD  /var/log                 btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@var-log              0  0
LABEL=HDD  /var/cache/apt/archives  btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@apt-archives         0  0
LABEL=SSD  /root                    btrfs  noatime,space_cache=v2,compress-force=zstd:12,subvol=@~root          0  0
LABEL=SSD  /home/jorge              btrfs  noatime,space_cache=v2,compress-force=zstd:12,subvol=@~jorge         0  0
LABEL=SSD  /home/jorge/.cache       btrfs  noatime,space_cache=v2,compress-force=zstd:12,subvol=@~jorge-.cache  0  0
LABEL=HDD  /home/jorge/HDD          btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~jorge               0  0
LABEL=SSD  /home/wanessa            btrfs  noatime,space_cache=v2,compress-force=zstd:12,subvol=@~wanessa       0  0
LABEL=HDD  /home/wanessa/HDD        btrfs  noatime,space_cache=v2,compress=zstd:12,subvol=@~wanessa             0  0
LABEL=SSD  /gnu                     btrfs  noatime,space_cache=v2,compress-force=zstd:12,subvol=@gnu-store      0  0
LABEL=SSD  /usr/local               btrfs  noatime,space_cache=v2,compress-force=zstd:12,subvol=@usr-local      0  0

I have tmpfs on /tmp via the systemd unit.  I know that zstd:12 looks
like overkill, but performance is still pretty good.  Besides, after the
system has settled I will reduce the compression level, combining the
performance of moderate compression with the space saving of zstd:12, as
most data will have already been written.
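
For reference, enabling tmpfs on /tmp only takes copying the stock unit
(a minimal sketch; the path below is where Debian bullseye ships it):

    sudo cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    sudo systemctl enable tmp.mount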

Two questions:
1. Should /var really be segregated?

   *After* segregating /var into its own subvolume¹, I realized this
   could cause inconsistency on system rollbacks.  I don't know about
   openSUSE's package manager, but AFAIK Debian's dpkg stores important
   state in /var, particularly /var/lib.  Thus I fear that if I snapshot
   @rootfs and @var at different times, then rolling back @rootfs and
   @var could confuse dpkg.  Should I then
   a) before rolling back @rootfs and @var, remember to verify they were
   snapshotted at very close moments (e.g. taken back to back, as sketched
   after these questions)?  Or
   b) change my subvolume layout?  How exactly?

   With option (a), the only benefit of segregating /var is that I could
   snapshot @rootfs, back up that snapshot, and then delete that snapshot
   without the space waste of snapshotting /var too.  After all, I only
   backup (parts of) /home and /etc.  I would snapshot /var on other
   occasions only.

2. Should I segregate /boot/grub2/x86_64-efi?

   Should I segregate /boot/grub2/x86_64-efi into its own subvolume as
   openSUSE does?
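
Regarding option (a) in question 1, here is a minimal sketch of taking the
two snapshots back to back (the mount point and snapshot directory are
hypothetical):

    sudo mount -o subvolid=5 LABEL=SSD /mnt/btrfs-top
    sudo btrfs subvolume snapshot -r /mnt/btrfs-top/@rootfs /mnt/btrfs-top/snapshots/@rootfs-$(date +%F)
    sudo btrfs subvolume snapshot -r /mnt/btrfs-top/@var    /mnt/btrfs-top/snapshots/@var-$(date +%F)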

Footnotes
¹ And /var/log into yet another subvolume (on the HDD).

Regards,

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- Free Software Supporter: https://www.fsf.org/free-software-supporter
- If an email of mine arrives at your spam box, please notify me.



Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2022-01-03 Thread Jorge P . de Morais Neto
Hi!

On [2022-01-03 Mon 10:03:08-0500], Michael Stone wrote:

> On Mon, Jan 03, 2022 at 08:42:29AM -0300, Jorge P. de Morais Neto wrote:
>>Indeed I use such high compression to prolong SSD lifetime.
>
> This is probably misguided and useless at best, at worst you're causing 
> additional writes because compressed data is generally hard to modify in 
> place without rewriting substantial portions.

But doesn't Btrfs compression work with small blocks?

https://btrfs.wiki.kernel.org/index.php/Compression#Are_there_speed_penalties_when_doing_random_access_to_a_compressed_file.3F

Fedora's change proposal for Btrfs transparent compression by default
mentions increased flash-based media lifespan in its summary:

https://fedoraproject.org/wiki/Changes/BtrfsTransparentCompression#Summary

> For reference, my main desktop which tracks debian unstable and gets
> pretty much constant updates, does package builds, etc., has after
> several years used...2% of its primary SSDs write capacity.  Most
> modern SSDs will never be used anywhere close to their limits before
> being discarded as functionally obsolete.  Just don't worry about it
> and focus on other things.

Thank you for the advice.  Indeed I should be a bit less obsessed with
certain details.  I at least dropped the idea of messing with swappiness
(as mentioned earlier in this thread) thanks to similar advice.  I have
a weakness for ricing which I must moderate.

Kind regards

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about <https://stallmansupport.org>
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- https://www.defectivebydesign.org
- https://www.gnu.org



Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2022-01-03 Thread Jorge P . de Morais Neto
On [2022-01-02 Sun 23:38:48+], piorunz wrote:

> On 02/01/2022 16:33, Jorge P. de Morais Neto wrote:
>> I am currently using compress-force=zstd:12 for the SSD and
>> compress=zstd:12 for both HDD (internal SATA and external USB3)¹.
>> Despite the strong compression level, performance is pretty good.  Yet,
>> when the system settles, I intend to reduce compression level to 9 or 6
>> (as you earlier recommended).  This should make performance even better,
>> while saving a lot of space because most data was compressed at
>> level 12.
>
> I run compress-force=zstd:6 on my fast PC, compress-force=zstd:3 on my
> server (to give it a bit more breathing space), and also same level 3 on
> my laptop. 12 is quite high for SSD, are you sure you not slowing down
> peak performance of your SSD by intense CPU usage? Or is it by design,
> to reduce number of writes to SSD?

Indeed I use such high compression to prolong SSD lifetime.  IIUC,
besides directly reducing the amount of data written, compression allows
extra over-provisioning which should reduce the write amplification
factor too.
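
One way to see how much the compression actually saves (assuming the
compsize tool, packaged in Debian as btrfs-compsize):

    # Prints compressed vs. uncompressed totals per algorithm for the given subtree
    sudo compsize -x /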

Level 12 compression probably hurts performance a bit, but programs
still start very quickly---far quicker than they did over the HDD with
ext4.  This makes sense to me, because (IIUC) starting programs is
read-intensive and zstd decompression is fast even on high compression
levels.  And surprisingly, even dpkg package installation and updates,
which (IIUC) are write-intensive, are very quick too.

Besides, I think I rarely need top CPU power and disk throughput at the
same time.  I max out the quad-core processor when compiling Guix
packages but that happens on /tmp (tmpfs), so high Btrfs zstd
compression does no harm.  I also stress the CPU (and the integrated
GPU) on the rare occasions I play Xonotic or SuperTuxKart, and Btrfs
compression probably does no harm during 3D game rendering either.

In conclusion, my SSD is plenty quick for my needs even with very high
Btrfs zstd compression.  I care more about maximizing its lifetime than
having programs start a fraction of a second quicker when they already
start very quickly.

And when the system settles I intend to reduce Btrfs zstd compression
level to 9 or 6 in an attempt to save electricity---although I am nearly
always on AC power.

> No, I am not on this list, where is it?  Please send a link!

http://vger.kernel.org/vger-lists.html#linux-btrfs

Kind regards,
  Jorge

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about <https://stallmansupport.org>
- Please adopt free/libre formats like PDF, Org, LaTeX, ODF, Opus, WebM and 7z.
- Libre apps for AOSP (Replicant, LineageOS, etc.) and Android: F-Droid
- https://www.gnu.org/philosophy/free-sw.html "What is free software?"



Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2022-01-02 Thread Jorge P . de Morais Neto
Hi Piotr!  Happy 2022!

On [2021-12-08 Wed 22:54:29+], piorunz wrote:

> On 08/12/2021 19:35, Jorge P. de Morais Neto wrote:

>> - Why `compress-force' instead of simply `compress'?
>
> I've read very extensive discussion about that and came to conclusion
> that compress-force is better.  It's checking every chunk of file for
> compressibility.  "compress", on the other hand, checks only first
> sectors, then drops compression if no compressible data is detected.
> Imagine your qcow file, first 1 GB is not compressible, so "compress"
> option will drop compression of that file right away.  But remaining 20GB
> are zeros because you haven't filled that yet.  With compress-force, you
> compress these zeros to nothing.  File takes 1GB of space.  You don't have
> that on ext4, or btrfs "compress" only option.

Have you revisited that conclusion since kernel 4.15?  The btrfs(5) manpage says:

   Since kernel 4.15, a set of heuristic algorithms have been
   improved by using frequency sampling, repeated pattern
   detection and Shannon entropy calculation to avoid that.

Therefore, it looks like since Linux 4.15 the compress option (or
compress=ALG:LEVEL) has become more attractive than compress-force.

I am currently using compress-force=zstd:12 for the SSD and
compress=zstd:12 for both HDD (internal SATA and external USB3)¹.
Despite the strong compression level, performance is pretty good.  Yet,
when the system settles, I intend to reduce compression level to 9 or 6
(as you earlier recommended).  This should make performance even better,
while saving a lot of space because most data was compressed at
level 12.

And I may also change compress-force to compress, even for the SSD,
because I run kernel 5.10 which is later than 4.15.  I may ask the
linux-btrfs mailing list first.  I have subscribed to it.  Are you there
too?

Regards

* Footnotes

¹ Both HDDs have compress (rather than compress-force) because most of
their files are already compressed---pictures, videos, music, compressed
archives etc.

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about <https://stallmansupport.org>
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- https://www.defectivebydesign.org
- https://www.gnu.org



Re: Btrfs best practices

2021-12-16 Thread Jorge P . de Morais Neto
Hi.

On [2021-12-16 Thu 14:55:23-0300], Eduardo M KALINOWSKI wrote:

> On 16/12/2021 14:13, Jorge P. de Morais Neto wrote:
>> I'll put system and /home on the SSD but all XDG user dirs² on the
>> HDD [snip]
>
> I don't have that manpage installed, but if you're refering to
> ~/.config, ~/.local, etc, these are exactly the kinds of things that
> should be on the SSD - it'll help with application startup times as
> files in those directories are read when applications start.

No, I was referring to:

DESKTOP
DOWNLOAD
TEMPLATES
PUBLICSHARE
DOCUMENTS
MUSIC
PICTURES
VIDEOS
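
Pointing these directories at the HDD only takes editing
~/.config/user-dirs.dirs; a minimal sketch (the target paths are examples,
not my exact layout):

    XDG_DOWNLOAD_DIR="$HOME/HDD/Downloads"
    XDG_MUSIC_DIR="$HOME/HDD/Music"
    XDG_PICTURES_DIR="$HOME/HDD/Pictures"
    XDG_VIDEOS_DIR="$HOME/HDD/Videos"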

> As for the other questions, I cannot help specifically, but as a
> general advice, don't overthink it and don't bother trying to optimize
> everything up to the smallest details.  Unless you have some very
> specific use, default settings are good enough, you should'nt notice
> any different in day to day use.

Thank you for the advice!  I will think about it.  I must not return to
the days when I riced Gentoo for hours every week trying to gain a tiny
bit of extra performance.  There must be moderation.

Regards

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about <https://stallmansupport.org>
- Please adopt free/libre formats like PDF, Org, LaTeX, ODF, Opus, WebM and 7z.
- Libre apps for AOSP (Replicant, LineageOS, etc.) and Android: F-Droid
- https://www.gnu.org/philosophy/free-sw.html "What is free software?"



Re: Btrfs best practices

2021-12-16 Thread Jorge P . de Morais Neto
Hi.  I should add that I use zswap:

GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1 zswap.zpool=z3fold zswap.compressor=lzo-rle"
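
The effective settings can be confirmed at runtime (a quick check; the
sysfs path is the one exposed by mainline kernels):

    grep -r . /sys/module/zswap/parameters/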

On [2021-12-16 Thu 14:13:05-0300], Jorge P. de Morais Neto wrote:

> Hi!  I own a Dell Inspiron 5570 laptop with 1 TB SATA HDD, a new 250 GB
> NVMe SSD¹ and 16 GiB RAM.  I seek reliability, durability, performance
> and power efficiency.
>
> I do weekly duplicity backups to external 1.5 TB USB3 HDD.  I'll start
> also daily rsyncing some of the SSD data to the SATA HDD.
>
> The SSD will have 50 GB extra over provisioning and a 200 GB partition,
> besides the special UEFI partition.  The SATA HDD will start with 16 GiB
> swap partition then a big partition.  I'll put system and /home on the
> SSD but all XDG user dirs² on the HDD, and tmpfs on /tmp.  All three
> drives will have Btrfs with space_cache=v2, noatime, zstd compression
> and reasonable free breathing space.
>
> I use Gnome and:
> - GNU Emacs
> - notmuch and offlineimap (I may switch to mbsync)
> - GNU IceCat, Mozilla Firefox and ungoogled-chromium
> - Gajim and GNU Jami
> - Gnome Boxes or Virtual Machine Manager running a VM with 2 GiB RAM and
>   one .qcow2 disk image currently weighting 24 GB.
> - mpv
> - Nextcloud (always running but rarely syncing changes)
>
> I use Debian stable with only official repositories, including
> bullseye-backports.  I manually installed GNU Guix package manager and
> have 163 packages on main Guix profile.
>
> * Doubts
> ** Backported kernel
> Should I use a backported kernel as Btrfs [wiki][] recommends?  I worry
> that bullseye-backports comes from Debian testing with poor security.
>
> [wiki]: 
> https://btrfs.wiki.kernel.org/index.php/Getting_started#Before_you_start
>
> ** Strong compression during install
> For lifetime and space saving, I intend to install Debian to the SSD
> with compress-force=zstd:12, but then adopt compress-force=zstd.  Thus
> the installation will be slow---I'll do something else while the
> installer works---but the installed system will be efficient, right?
>
> ** HDD Compression
> Both HDD have a lot of already-compressed data: videos, audio, photos
> and compressed archives and disk images; compress-force would force
> Btrfs to recompress it all, only to discard the recompressed data and
> store the original.  Therefore compress=zstd:4 would be better, right?
>
> ** Fragmentation
> Is fragmentation a concern?  Is the [Gotchas][] article accurate?
>
> [Gotchas]: https://btrfs.wiki.kernel.org/index.php/Gotchas#Fragmentation
>
> ** Subvolumes
> What about
> https://fedoraproject.org/wiki/Changes/BtrfsByDefault#Additional_subvolumes ?
>
> ** Swappiness
> Most performance-critical data will be on the SSD, so there will be much
> less need for RAM caches; therefore I should decrease swappiness
> (especially if I put swap on the SSD), right?  By how much?
>
> * Footnotes
> ¹ A 250 GB WD Blue SN550 rated for 150 TBW.
> ² See the xdg-user-dir manpage.
>
> Kindest regards,
>   Jorge
>
> -- 
> - Many people hate injustice but few check the facts; this causes more
>   injustice.  Ask me about <https://stallmansupport.org>
> - I am Brazilian.  I hope my English is correct and I welcome feedback.
> - https://www.defectivebydesign.org
> - https://www.gnu.org

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about <https://stallmansupport.org>
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- Free Software Supporter: https://www.fsf.org/free-software-supporter
- If an email of mine arrives at your spam box, please notify me.



Btrfs best practices

2021-12-16 Thread Jorge P. de Morais Neto
Hi!  I own a Dell Inspiron 5570 laptop with 1 TB SATA HDD, a new 250 GB
NVMe SSD¹ and 16 GiB RAM.  I seek reliability, durability, performance
and power efficiency.

I do weekly duplicity backups to external 1.5 TB USB3 HDD.  I'll start
also daily rsyncing some of the SSD data to the SATA HDD.

The SSD will have 50 GB extra over provisioning and a 200 GB partition,
besides the special UEFI partition.  The SATA HDD will start with 16 GiB
swap partition then a big partition.  I'll put system and /home on the
SSD but all XDG user dirs² on the HDD, and tmpfs on /tmp.  All three
drives will have Btrfs with space_cache=v2, noatime, zstd compression
and reasonable free breathing space.

I use Gnome and:
- GNU Emacs
- notmuch and offlineimap (I may switch to mbsync)
- GNU IceCat, Mozilla Firefox and ungoogled-chromium
- Gajim and GNU Jami
- Gnome Boxes or Virtual Machine Manager running a VM with 2 GiB RAM and
  one .qcow2 disk image currently weighing 24 GB.
- mpv
- Nextcloud (always running but rarely syncing changes)

I use Debian stable with only official repositories, including
bullseye-backports.  I manually installed the GNU Guix package manager
and have 163 packages in my main Guix profile.

* Doubts
** Backported kernel
Should I use a backported kernel as Btrfs [wiki][] recommends?  I worry
that bullseye-backports comes from Debian testing with poor security.

[wiki]: https://btrfs.wiki.kernel.org/index.php/Getting_started#Before_you_start

** Strong compression during install
For lifetime and space saving, I intend to install Debian to the SSD
with compress-force=zstd:12, but then adopt compress-force=zstd.  Thus
the installation will be slow---I'll do something else while the
installer works---but the installed system will be efficient, right?

** HDD Compression
Both HDDs have a lot of already-compressed data: videos, audio, photos
and compressed archives and disk images; compress-force would force
Btrfs to recompress it all, only to discard the recompressed data and
store the original.  Therefore compress=zstd:4 would be better, right?

** Fragmentation
Is fragmentation a concern?  Is the [Gotchas][] article accurate?

[Gotchas]: https://btrfs.wiki.kernel.org/index.php/Gotchas#Fragmentation

** Subvolumes
What about
https://fedoraproject.org/wiki/Changes/BtrfsByDefault#Additional_subvolumes ?

** Swappiness
Most performance-critical data will be on the SSD, so there will be much
less need for RAM caches; therefore I should decrease swappiness
(especially if I put swap on the SSD), right?  By how much?
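
Whatever the value turns out to be, persisting it is just a sysctl
drop-in; a minimal sketch (file name and value are only illustrations):

    # /etc/sysctl.d/99-swappiness.conf
    vm.swappiness=10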

* Footnotes
¹ A 250 GB WD Blue SN550 rated for 150 TBW.
² See the xdg-user-dir manpage.

Kindest regards,
  Jorge

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- https://www.defectivebydesign.org
- https://www.gnu.org



Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2021-12-15 Thread Jorge P . de Morais Neto
Hello,

On [2021-12-09 Thu 15:00:43+0100], hdv@gmail wrote:

> Regarding the swap space: I wouldn't make it so big.  That really isn't 
> necessary.  I have a 64GB RAM system here, on which I have 2GB of swap.  I 
> doubt I have ever seen conky show me more than 35% use.  And I am quite a 
> heavy user of system resources (much 3D CAD editing, photo editing, 
> video editing and rendering, and often multiple VM's in use).
>
> My laptop has 32GB of RAM and 2 GB of swap and on that system I haven't 
> seen much swapping either.

I wanted to play it safe in case I later upgrade the RAM to 32 GiB and,
additionally, enable hibernation.  Since I have a 1 TB HDD, I can spare
32 GiB (approximately 34 GB) for swap.  For better swapping performance,
swap space on a rotational drive should be contiguous and located at the
start of the drive, right?

Kindest regards,
Jorge

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- https://www.defectivebydesign.org
- https://www.gnu.org



Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2021-12-15 Thread Jorge P . de Morais Neto
Hello,

On [2021-12-09 Thu 01:02:17+], Andy Smith wrote:

> If you are still worried you could partition just half of it and use
> it as a physical volume for LVM, which you might want to do anyway to
> encrypt it (LUKS), Then over time you can see how much you have
> written, how much life is left etc. and decide then whether to leave
> it over-provisioned or extend the LVM further.  It leaves your options
> open.

I intend to use Btrfs.  This means if I later decide to use some of the
unpartitioned space, I can easily and efficiently add it to the main
Btrfs file system without directly using LVM (since Btrfs actually
includes logical volume management), right?
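
If I understand correctly, that would look roughly like this (the device
name is hypothetical; a partition would be created in the free space
first):

    sudo btrfs device add /dev/nvme0n1p3 /
    sudo btrfs balance start -dusage=5 /   # optional: compact nearly-empty data block groups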

Kindest regards,

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- Please adopt free/libre formats like PDF, Org, LaTeX, ODF, Opus, WebM and 7z.
- Libre apps for AOSP (Replicant, LineageOS, etc.) and Android: F-Droid
- https://www.gnu.org/philosophy/free-sw.html "What is free software?"



Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2021-12-15 Thread Jorge P . de Morais Neto
Hi.

On [2021-12-09 Thu 05:14:09+0500], Alexander V. Makartsev wrote:

> So, if you plan to use NVMe SSD as a system drive, I suggest you also 
> keep /swap partition

I am considering swapping to the SSD, yes.

> Also, I suggest you to make backups of /home on daily schedule to HDD,
> because data recovery from a failed SSD is not only very expensive,
> but often also next to impossible.

Thank you for this tip, I was not fully aware of this issue.

Kindest regards

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- Free Software Supporter: https://www.fsf.org/free-software-supporter
- If an email of mine arrives at your spam box, please notify me.



Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2021-12-14 Thread Jorge P . de Morais Neto
Hello!

On [2021-12-08 Wed 22:05:50-0800], David Christensen wrote:

> I would remove the 1 TB HDD, install the 250 GB NVMe SSD, and do a fresh 
> install of Debian 11 with MBR partitioning, 1E+9 byte boot partition 
> (ext4)

Why MBR partitioning and why a separate boot partition?

> I would put the 1 TB HDD into an external HDD enclosure and use it to
> store system images (e.g. partition table, boot, swap, and root).

I already have a 1.5 TB external HDD.  And I intend to use GPT
partitioning, which (I've read) stores checksummed copies of the
partition table.  For the actual data, I have weekly backups to the
external HDD, and I intend to have daily rsync of some of the SSD data
to the internal HDD.  Would not that be safe enough?
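
The daily rsync part could be as simple as this sketch (the paths are
hypothetical; --delete keeps the copy an exact mirror):

    rsync -aHAX --delete /home/jorge/Documents/ /home/jorge/HDD/backup/Documents/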

> I recommend a file server or NAS for bulk data -- downloads, music,
> photographs, videos, etc..

I would like to set up a home server, but probably not very soon.

> I recommend a version control server for user project files and system 
> configuration files.

I store my dotfiles (and some other data) in a git repository on my
laptop.  I hope that duplicity is correctly backing up that git
repository, but I admit I have not tested it.

Regards!

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- https://www.defectivebydesign.org
- https://www.gnu.org



Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2021-12-08 Thread Jorge P . de Morais Neto
Hi.  Thank you for your response.

On [2021-12-08 Wed 14:49:50+], piorunz wrote:

> I understand you have one SATA 2.5" slot in your laptop and one NVMe
> slot, and you want to utilize them both.

That is correct.

>> On the SSD I intend to leave 35 GB unpartitioned for extra over
>> provisioning.  It would have just one 215 GB partition.
>
> Leave more overprovisioning if you can. Use Btrfs with zstd compression
> for your / and /home, you will gain many gigabytes.

How much overprovisioning would you recommend?  I can probably afford
more indeed (because of the 1 TB HDD), but excessive overprovisioning
could increase the risk of the system failing due to lack of disk space
during some important task.  Also, I heard that Linux filesystems like
having some reasonable (some say 10%) internal (within the filesystem)
free space.

> This doesn't make any sense.  Don't run RAID1 SSD+HDD.  You will kill
> all gains SSD NVMe provides.

I lack RAID experience, but I assumed the kernel would easily be smart
enough to read from the fastest RAID member (SSD), so read performance
would be great.  And I hoped the kernel would also be smart enough to,
on writes, write to the SSD first and later (asynchronously) replicate
to the HDD.  But now a quick web search indeed suggests that those
optimizations are not default or common, so we can drop the RAID idea.

> / on NVMe Btrfs
> noatime,nodiratime,space_cache=v2,ssd,compress-force=zstd:6,subvol=@
>
> /home on NVMe Btrfs
> noatime,nodiratime,space_cache=v2,ssd,compress-force=zstd:6,subvol=@home

About those options:

- noatime: I didn't know about this issue, I thought relatime was
  efficient enough.  Thank you for the tip!
- nodiratime: According to the mount manpage, noatime implies
  nodiratime.
- ssd: Does btrfs not autodetect SSD?  Why provide ssd option?
- Why `compress-force' instead of simply `compress'?


For more context, my DE is Gnome and some of my most often used
applications are:

- GNU Emacs
- notmuch and offlineimap (I am considering switching to mbsync)
- GNU IceCat and Mozilla Firefox
- Gajim and GNU Jami
- Nextcloud (it is always running but rarely syncing changes)
- Gnome Boxes or Virtual Machine Manager running a VM with 2 GiB RAM and
  one .qcow2 disk image currently weighing 24 GB.

Kind regards!

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- Free Software Supporter: https://www.fsf.org/free-software-supporter
- If an email of mine arrives at your spam box, please notify me.



Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD

2021-12-08 Thread Jorge P. de Morais Neto
Hi everyone!  I have a Dell Inspiron 5570 laptop with 1TB HDD and 16 GiB
RAM (it supports 32 GiB).  I am about to buy an M.2 NVMe 250GB SSD---a
Western Digital WD Blue SN550.  I would like to set up the system for
reliability, SSD durability¹ and performance.

I have looked at [Multi HDD/SSD Partitioning Scheme][] but it is too
complex and probably outdated (last modified 2013-10-17).  I would like
something simpler.  For backups, I would continue my weekly manual
backups to my 1.5 TB external HDD with duplicity.

On the SSD I intend to leave 35 GB unpartitioned for extra over
provisioning.  It would have just one 215 GB partition.

On the HDD I would put a 34 GB swap partition at the beginning, then a
215 GB partition for RAID1 with the SSD, then a 751 GB partition.  I
intend to put Debian system *and* /home on the 215 GB RAID1, but I would
set all the XDG user dirs² on the 751 GB HDD partition.  I would have
tmpfs on /tmp---I have read that long thread where someone alleged that
moving /tmp to tmpfs makes it useless but I disagree.

Would all this be reasonable?  Do you recommend any change?  Any tip?  I
run Debian stable with only official repositories, including
bullseye-backports.  I also manually installed the GNU Guix package manager
and my main Guix profile has 163 packages.

Regards!

[Multi HDD/SSD Partitioning Scheme]: https://wiki.debian.org/Multi%20HDD/SSD%20Partition%20Scheme

¹ According to its data sheet, the 250GB WD Blue SN550 endures 150TBW.
² See the xdg-user-dir manpage.

-- 
- Many people hate injustice but few check the facts; this causes more
  injustice.  Ask me about 
- I am Brazilian.  I hope my English is correct and I welcome feedback.
- https://www.defectivebydesign.org
- https://www.gnu.org