Hi Steve,

On Thu, 12 Mar 2026 at 19:16, Steve Yates <[email protected]> wrote:

> Hi Alwin,
>
> “Run guest-trim after a disk move or VM migration” was not checked.  I
> didn’t realize that was in there!  I don’t think that applies though? Per
> https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_qemu_agent :
>
> “With this enabled, Proxmox VE will issue a trim command to the guest
> after the following operations that have the potential to write out zeros
> to the storage:
>
>    - moving a disk to another storage
>    - live migrating a VM to another node with local storage”
>
> …and using Ceph the disk doesn’t move if we migrate a VM…?  Plus trim is
> being run weekly.  Maybe I could stop a VM and move its disk to local
> storage and back into Ceph again but that would take quite a lot of time
> and seems like little to no gain except maybe a lower object count.
>
That's what I meant as an alternative.


>
> That section also has this note on ext4, matching previous discussions:
> “There is a caveat with ext4 on Linux, because it uses an in-memory
> optimization to avoid issuing duplicate TRIM requests. Since the guest
> doesn’t know about the change in the underlying storage, only the first
> guest-trim will run as expected. Subsequent ones, until the next reboot,
> will only consider parts of the filesystem that changed since then.”
>
Exactly that. Even once the in-memory bitmap is cleared (e.g. by a
reboot), the underlying misalignment issue remains and will affect any
guest.
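
For reference, a quick way inside the guest to confirm discard is
actually plumbed through and to trigger a manual trim (device name is
just an example, adjust to your VM):

```shell
# Confirm the virtual disk advertises discard support:
# non-zero DISC-GRAN / DISC-MAX means TRIM requests can pass through.
lsblk --discard /dev/sda

# Trim all mounted filesystems that support it; -v reports how much
# was discarded. Note the ext4 caveat: after a storage-side change,
# only the first run since boot will trim everything.
fstrim -av
```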

Cheers,
Alwin
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]