Hello,

On Mon, 13 Apr 2020, tu...@posteo.de wrote:
>On 04/13 11:06, Michael wrote:
>> On Monday, 13 April 2020 06:32:37 BST tu...@posteo.de wrote:
[..]
>My questions are more driven by curiosity than by anxiety...
[..]
>For example [the fstrim manpage] says:
>"For most desktop and server systems a sufficient trimming frequency  is
>once  a week."
>
>...but why is it ok to do so? Are all PCs made equal? Are all
>use cases equal? It does not even distinguish between SSD/SATA
>and SSD/NVMe (M.2 in my case).

Observe your use pattern a bit and use 'fstrim -v' when you think it's
worth it, as it basically boils down to how much you delete, and
*when*. If you e.g.:

- constantly use the drive as a fast cache for video editing etc.,
  writing large files to the drive and later deleting them again

    -> run fstrim daily or even mount with the 'discard' option
       (see the fstab sketch after this list)

- write/delete somewhat regularly, e.g. a system drive where you run
  updates or emerge @world (esp. if you build on the SSD) weekly or
  so; such updates effectively are one big write operation plus a
  bunch of deletions. Likewise if you do whatever other deletions
  somewhat regularly

    -> run fstrim after every one to three such deletion rounds,
       e.g. via a weekly cronjob (a sketch follows below)

- mostly write (if anything), rarely delete anything

    -> run fstrim manually a few days after $some deletions have
       accumulated, or at any other convenient time you can remember
       and are sure all deleted files can be gone, be it bi-weekly,
       monthly, tri-monthly, yearly, completely irregularly, whenever ;)
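
A minimal sketch of that 'discard' variant, with a made-up device and
mountpoint (the option exists for ext4/xfs/btrfs; continuous TRIM adds
cost to every delete, so it really only pays off for the heavy
write/delete case):

    # /etc/fstab (sketch) -- continuous TRIM via the 'discard' option
    /dev/sdX1   /scratch   ext4   defaults,discard   0 2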

Choose anything in that range that fits _your_ use pattern best,
considering capacity, free space (no matter if on a partition or
unallocated) and what size was trimmed when running 'fstrim -v'...

Running that weekly 'fstrim' cronjob (I'd suggest bi-weekly) is not a
bad suggestion as a default, I guess, but observe your use and choose
to deviate or not :)
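
A minimal sketch of such a cronjob, assuming the classic
/etc/cron.weekly layout (on systemd machines, util-linux also ships an
fstrim.timer you can simply enable instead):

    #!/bin/sh
    # /etc/cron.weekly/fstrim (sketch) -- trim all mounted filesystems
    # that support it; -v logs the amounts and cron mails the output
    exec fstrim --all -v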

My gut says to run fstrim if:

    - it'd trim more than 5% (-ish) capacity
    - it'd trim more than 20% (-ish) of the remaining "free" space
      (including unallocated)
    - it'd trim more than $n GiB (where $n may be anything ;)

whichever comes first (and the latter two can only be determined by
observation). No need to run fstrim after deleting just 1kiB. Or 1MiB.
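
To put numbers to those thresholds, here's an untested sketch (the
name and everything in it is made up): it trims one filesystem and
reports the trimmed bytes relative to its size and free space. It
works per-filesystem, so unallocated space isn't included, and it
assumes fstrim's "N bytes were trimmed" output as shown further below;
newer util-linux prints "X GiB (N bytes) trimmed" and would need
different parsing:

    #!/bin/bash
    # trimstat (sketch) -- trim the filesystem at $1 (default /) and
    # report the trimmed amount as a share of its size and free space;
    # needs root and GNU df
    mnt=${1:-/}
    trimmed=$(fstrim -v "$mnt" | awk '{print $2}')
    df -B1 --output=size,avail "$mnt" |
        awk -v m="$mnt" -v t="$trimmed" 'NR == 2 {
            printf "%s: %.1f GiB trimmed (%.1f%% of size, %.1f%% of free)\n",
                   m, t / 2^30, t * 100 / $1, t * 100 / $2 }'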

Not that lazybag me adheres to that, read on if you will... ;)

FWIW:
I run fstrim a few times a year, when I think of it and guesstimate
that I've deleted quite a bit in the meantime (much like I run fsck ;)
... This usually trims a few GiB on my 128G drive:

# fdisk -u -l /dev/sda
Disk /dev/sda: 119.2 GiB, 128035676160 bytes, 250069680 sectors
Disk model: SAMSUNG SSD 830 
[..]
Device     Boot     Start       End   Sectors Size Id Type
/dev/sda1            2048 109053951 109051904  52G 83 Linux
/dev/sda2       109053952 218105855 109051904  52G 83 Linux

(I left ~15GiB unpartitioned and have been too lazy to rectify that
since. Back when I partitioned in 2012, that kind of overprovisioning
was still a good thing for many (cheaper?) SSDs, and 'duh intarweb'
was quite a bit worse than today regarding the problem)...

So while I'm about it, I guess it's time to run fstrim (for the first
time this year IIRC) ...

# fstrim -v /sda1 ; fstrim -v /sda2     ## mountpoints mangled
/sda1: 7563407360 bytes were trimmed
/sda2: 6842478592 bytes were trimmed

# calc 'x=config("display",1); 7563407360/2^30; 6842478592/2^30'
        ~7.0
        ~6.4
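
(coreutils' numfmt does that conversion too, if calc isn't around:

# numfmt --to=iec-i 7563407360 6842478592
7.1Gi
6.4Gi
)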

So, my typical few GiB, or about 12.8% of the partitioned capacity
(summed), were trimmed (oddly enough, it's always been in this
4-8GiB/partition range). I should probably run fstrim a bit more
often, but then again I've still got those unallocated 15G, so I
guess I'm fine. And that's with quite a large Gentoo system on
/dev/sda2 and its at times large updates (like libreoffice, firefox,
seamonkey, icedtea, etc.):

# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        52G   45G  3.8G  93% /

PORTAGE_TMPDIR and PORTDIR (and distfiles and packages) are on other
HDDs though, so building stuff does not affect the SSD, only the
actual install (merge) and whatever else. But I've got /var/log/ on
the SSD on both systems (sda1/sda2).

While I'm at it:

# smartctl -A /dev/sda
[..]
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH [..] RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    [..] 0
  9 Power_On_Hours          0x0032   091   091   000    [..] 43261
 12 Power_Cycle_Count       0x0032   097   097   000    [..] 2617
177 Wear_Leveling_Count     0x0013   093   093   000    [..] 247
179 Used_Rsvd_Blk_Cnt_Tot   0x0013   100   100   010    [..] 0
181 Program_Fail_Cnt_Total  0x0032   100   100   010    [..] 0
182 Erase_Fail_Count_Total  0x0032   100   100   010    [..] 0
183 Runtime_Bad_Block       0x0013   100   100   010    [..] 0
187 Uncorrectable_Error_Cnt 0x0032   100   100   000    [..] 0
190 Airflow_Temperature_Cel 0x0032   067   050   000    [..] 33
195 ECC_Error_Rate          0x001a   200   200   000    [..] 0
199 CRC_Error_Count         0x003e   253   253   000    [..] 0
235 POR_Recovery_Count      0x0012   099   099   000    [..] 16
241 Total_LBAs_Written      0x0032   099   099   000    [..] 12916765928
[ '[..]': none show WHEN_FAILED other than '-' and the rest is standard]

Wow, I have almost exactly 6 TiB written, or ~52 "drive writes" /
"capacity writes", on this puny 128GB SSD:

# calc 'printf("%0.2d TiB written\n%0.1d drive writes\n",
    12916765928/2/2^30, 12916765928/250069680);'
~6.01 TiB written
~51.7 drive writes
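
(a hedged one-liner to pull that straight out of smartctl -- it
assumes 512-byte LBAs and attribute 241 as on this Samsung; other
vendors report endurance differently:

# smartctl -A /dev/sda | awk '$1 == 241 { printf "%.2f TiB written\n", $NF * 512 / 2^40 }'
)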

And I forgot I've been running this drive for this long already (not
that it's been running 24/7, far from it, but it's been in use since
July 2012, at about 15/7-ish):

$ dateduration 43261h       ### [1]
4 years 11 months 7 days 13 hours 0 minutes 0 seconds

HTH,
-dnh

[1] ==== ~/bin/dateduration ====
    #!/bin/bash
    # print a duration like '43261h' as years/months/days/... anchored
    # at the current time; needs dateutils (dateadd/datediff)
    F='%Y years %m months %d days %H hours %M minutes %S seconds'
    now=$(date +%s)
    datediff -f "$F" "$(dateadd -i '%s' "$now" +0s)" "$(dateadd -i '%s' "$now" "$1")"
    ====

    If anyone knows a better way... ;)
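
    FWIW, since datediff also takes '-i', the two dateadd calls could
    go; a hedged, untested variant (assumes the argument always comes
    in hours, like '43261h'):

    #!/bin/bash
    F='%Y years %m months %d days %H hours %M minutes %S seconds'
    now=$(date +%s)
    # strip the trailing 'h' and feed both endpoints as epoch seconds
    datediff -i '%s' -f "$F" "$now" "$(( now + 3600 * ${1%h} ))"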

-- 
printk("; crashing the system because you wanted it\n");
        linux-2.6.6/fs/hpfs/super.c
