On Mon, Apr 27, 2020 at 3:07 PM antlists wrote:
>
> On 27/04/2020 17:59, Rich Freeman wrote:
> > Really though a better solution than any of this is for the filesystem
> > to be more SSD-aware and just only perform writes on entire erase
> > regions at one time. If the drive is told to write
On 27/04/2020 17:59, Rich Freeman wrote:
Really though a better solution than any of this is for the filesystem
to be more SSD-aware and just only perform writes on entire erase
regions at one time. If the drive is told to write blocks 1-32 then
it can just blindly erase their contents first
On Mon, Apr 27, 2020 at 12:20 PM wrote:
>
> The kernel keeps track of everything that has already been fstrimmed
> and avoids re-trimming the same data.
> This knowledge is lost when the PC is power-cycled or rebooted.
>
>
I imagine this is filesystem-specific. When I checked the ext4 source
I
On 04/28 03:12, Kent Fredric wrote:
> On Sun, 26 Apr 2020 18:15:51 +0200
> tu...@posteo.de wrote:
>
> > Filesystem Size Used Avail Use% Mounted on
> > /dev/root 246G 45G 189G 20% /
>
> Given that (Size - Used) is roughly 200G, it suggests to me that
> perhaps, some process
On Sun, 26 Apr 2020 18:15:51 +0200
tu...@posteo.de wrote:
> Filesystem Size Used Avail Use% Mounted on
> /dev/root 246G 45G 189G 20% /
Given that (Size - Used) is roughly 200G, it suggests to me that
perhaps, some process somewhere is creating and deleting a lot of
temporary
Hello, Rich.
On Sun, Apr 26, 2020 at 15:29:40 -0400, Rich Freeman wrote:
[...]
> Incidentally, in the other thread the reason that dry-run didn't
> report anything to be trimmed is that this is hard-coded:
> printf(_("%s: 0 B (dry run) trimmed on %s\n"), path, devname);
>
On 27/4/20 11:14 am, tu...@posteo.de wrote:
On 04/26 09:58, Rich Freeman wrote:
/ on a btrfs raid10 (1x500G and 3x120G SSD)
"fstrim -v /" about 2 hrs apart:
rattus ~ # fstrim -v /
/: 680.6 GiB (730744291328 bytes) trimmed
rattus ~ # fstrim -v /
/: 17.8 GiB (19087859712 bytes) trimmed
On 04/26 09:58, Rich Freeman wrote:
> On Sun, Apr 26, 2020 at 9:43 PM wrote:
> >
> > To implement a dry run with a printf() is new to me... ;)
> >
>
> That is all the fstrim authors could do, since there is no dry-run
> option for the actual ioctl, and fstrim itself has no idea how the
>
On Sun, Apr 26, 2020 at 9:43 PM wrote:
>
> To implement a dry run with a printf() is new to me... ;)
>
That is all the fstrim authors could do, since there is no dry-run
option for the actual ioctl, and fstrim itself has no idea how the
filesystem will implement it (short of re-implementing
On 04/26 03:29, Rich Freeman wrote:
> On Sun, Apr 26, 2020 at 12:15 PM wrote:
> >
> > On 04/26 11:20, Rich Freeman wrote:
> > > On Sun, Apr 26, 2020 at 10:52 AM wrote:
> > > >
> > > > Fstrim reports about 200 GiB of trimmed data.
> > > >
> > >
> > > My suggestion would be to run fstrim twice in
On Sun, Apr 26, 2020 at 12:15 PM wrote:
>
> On 04/26 11:20, Rich Freeman wrote:
> > On Sun, Apr 26, 2020 at 10:52 AM wrote:
> > >
> > > Fstrim reports about 200 GiB of trimmed data.
> > >
> >
> > My suggestion would be to run fstrim twice in a row and see how fast
> > it operates and what the
On 04/26 11:20, Rich Freeman wrote:
> On Sun, Apr 26, 2020 at 10:52 AM wrote:
> >
> > Fstrim reports about 200 GiB of trimmed data.
> >
> > From the gut this looks quite a lot -- the whole
> > partition is 256 GB in size.
> >
> > Smartctl report for the drive:
> > Data Units Written:
On Sun, Apr 26, 2020 at 10:52 AM wrote:
>
> Fstrim reports about 200 GiB of trimmed data.
>
> From the gut this looks quite a lot -- the whole
> partition is 256 GB in size.
>
> Smartctl report for the drive:
> Data Units Written: 700,841 [358 GB]
>
> Each week 200 GiB fstrimmed
Hi,
just out of curiosity:
I have a 512 GB NVMe SSD drive installed, which I had (currently)
formatted with one 256 GB root partition.
I bound /var and /tmp to the hard disk.
Currently I am doing one Gentoo update a day and I am running
unstable.
Just to get a feeling for how often I need to fstrim / I