Dear Cephalopodians,

in some recent threads on this list, I have read about the following "knobs":
- pglog_hardlimit (false by default, available at least with 12.2.11 and 13.2.5)
- bdev_enable_discard (false by default, advanced option, no description)
- bdev_async_discard (false by default, advanced option, no description)
I am wondering about the defaults for these settings, and why they seem to be
mostly undocumented.
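For context, enabling the discard options would presumably look something like the following in ceph.conf (just a sketch based on the option names above; I have not verified that [osd] is the right section for all setups). pglog_hardlimit is, as far as I can tell, not a ceph.conf option but an osdmap flag (see below).

```ini
[osd]
# Advanced options, false by default, no description in the docs:
bdev_enable_discard = true
bdev_async_discard = true
```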
It seems to me that on SSD / NVMe devices, you would always want to enable
discard for significantly increased device lifetime, or run fstrim regularly
(which you can't with BlueStore, since it is effectively a filesystem of its
own on the raw device). From personal experience, I have already prematurely
lost two eMMC devices in Android phones due to trimming not working correctly.
Of course, on first-generation SSD devices, "discard" could lead to data loss
(although for most devices this has since been fixed with firmware updates).
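For comparison, on ordinary filesystems periodic trimming is typically handled by util-linux's fstrim via a systemd timer (availability depends on the distribution); this is exactly what BlueStore cannot use, hence the bdev_* options above:

```shell
# Periodic TRIM on regular filesystems (util-linux fstrim.timer).
# Not applicable to BlueStore, which owns the raw block device.
systemctl enable --now fstrim.timer
```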
I would presume that async discard is also advantageous, since it appears to
queue the discards and process them in bulk later instead of issuing them
immediately (at least, that is what I gather from the code).
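That batching idea can be sketched in a few lines of Python (purely illustrative, and only my reading of the approach; Ceph's actual implementation lives in BlueStore's C++ code and differs in detail): discard requests are enqueued on the write path, and a background worker drains the queue and issues them in bulk.

```python
import threading
import queue

class AsyncDiscarder:
    """Illustrative sketch: queue discard requests, issue them in batches."""

    def __init__(self, issue_discard):
        # issue_discard: callable taking a list of (offset, length) extents.
        # In a real system this would submit discards to the block device.
        self.issue_discard = issue_discard
        self.pending = queue.Queue()
        self.stopping = threading.Event()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def discard(self, offset, length):
        # Called from the write path: just enqueue, never block on the device.
        self.pending.put((offset, length))

    def _run(self):
        while not self.stopping.is_set() or not self.pending.empty():
            try:
                batch = [self.pending.get(timeout=0.1)]
            except queue.Empty:
                continue
            # Drain whatever else has accumulated and issue one bulk discard.
            while not self.pending.empty():
                batch.append(self.pending.get_nowait())
            self.issue_discard(batch)

    def close(self):
        self.stopping.set()
        self.worker.join()
```

The point of the design is that the latency of the discard ioctl is taken off the foreground write path, at the cost of the trim happening "eventually" rather than immediately.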
Additionally, it is unclear to me whether the bdev_* discard settings also
affect WAL/DB devices, which are very commonly SSD/NVMe devices in the
BlueStore age.
Concerning pglog_hardlimit, I read on this list that it is safe and limits
maximum memory consumption, especially during backfill and recovery.
So it "sounds" like something that could also be on by default. Or is it kept
off to allow downgrades after failed upgrades?
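For anyone following along: my understanding from those threads is that the hard limit is enabled via an osdmap flag, and only once all OSDs run a supporting release (12.2.11 / 13.2.5 or later). Roughly (please correct me if this is wrong):

```shell
# Only after *all* OSDs run a version supporting the hard limit;
# my understanding is that the flag is not meant to be unset again,
# which may be exactly why it is not on by default.
ceph osd set pglog_hardlimit
```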
So in the end, my questions are:
Is there a reason why these settings are not enabled by default, and are barely
mentioned in the documentation?
Are they just "not ready yet" / unsafe to enable by default, or are the
defaults simply historical, and will they change with the next major release
(Nautilus)?
Cheers,
Oliver
_______________________________________________ ceph-users mailing list [email protected] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
