> Just tested in a test cluster - it won't balk and won't demand force even if you remove a pool that is actually used by files. So beware.
Thanks very much, this is exactly what I needed to know!

My next question is, what happens MDS-wise if you *do* remove a pool that still contains some file objects? This pool contains over 4 PB net, and it doesn't seem trivial to make sure I've transferred every single file object out of it, although it's not so hard to make sure all the *important* stuff has been moved.

Ideally I'd just move everything to new CephFS instances and delete the original one along with its pools, but there's one set of data, using the greater part of a PB, that is difficult to handle that way, and our current plan is to leave it on the original CephFS instance. This is made easier because that data rolls over in less than a quarter, so I just created a new pool and set the directory layout xattr (ceph.dir.layout.pool) to point at it; that should get the data out of the old pool in pretty short order. (Obviously, transitioning this quantity of data to new pools while live in production is not something that can be done in a matter of a few days or even a few weeks.)

Thanks again for the test!

Trey Palmer

On Thu, Oct 2, 2025 at 4:56 PM Alexander Patrakov <[email protected]> wrote:

> On Thu, Oct 2, 2025 at 9:45 PM Anthony D'Atri <[email protected]> wrote:
>
> > There is design work for a future ability to migrate a pool
> > transparently, for example to effect a new EC profile, but that won't be
> > available anytime soon.
>
> This is, unfortunately, irrelevant in this case. Migrating a pool will
> migrate all the objects and their snapshots, even the unwanted ones.
> What Trey has (as far as I understood) is that there are some
> RADOS-level snapshots that do not correspond to any CephFS-level
> snapshots and are thus garbage, not to be migrated.
>
> That's why the talk about file migration and not pool-level operations.
>
> Now to the original question:
>
> > will I be able to do 'ceph fs rm_data_pool' once there are no longer any
> > objects associated with the CephFS instance on the pool, or will the MDS
> > have ghost object records that cause the command to balk?
>
> Just tested in a test cluster - it won't balk and won't demand force
> even if you remove a pool that is actually used by files. So beware.
>
> $ ceph osd pool create badfs_evilpool 32 ssd-only
> pool 'badfs_evilpool' created
> $ ceph fs add_data_pool badfs badfs_evilpool
> added data pool 38 to fsmap
> $ ceph fs ls
> name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data
> cephfs_data_wrongpool cephfs_data_rightpool cephfs_data_hdd ]
> name: badfs, metadata pool: badfs_metadata, data pools: [badfs_data
> badfs_evilpool ]
> $ cephfs-shell -f badfs
> CephFS:~/>>> ls
> dir1/  dir2/
> CephFS:~/>>> mkdir evil
> CephFS:~/>>> setxattr evil ceph.dir.layout.pool badfs_evilpool
> ceph.dir.layout.pool is successfully set to badfs_evilpool
> CephFS:~/>>> put /usr/bin/ls /evil/ls
> $ ceph fs rm_data_pool badfs badfs_evilpool
> removed data pool 38 from fsmap
>
> --
> Alexander Patrakov
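
Since rm_data_pool won't stop you, a rough sketch of the kind of sanity check that could be run before removing the old data pool is below. It assumes the filesystem is named cephfs, mounted at /mnt/cephfs, and that the old pool is cephfs_data_old; all three names are placeholders, not names from the thread above. At multi-PB scale the per-file scan will be slow, so the rados-level object count is the quicker signal.

$ # Quick check: objects remaining in the old pool (should be zero or near zero);
$ # rados df also has a CLONES column, which is relevant given the stray
$ # RADOS-level snapshots discussed above
$ rados df | grep cephfs_data_old
$ ceph df detail | grep cephfs_data_old

$ # List whatever objects are still there; empty output suggests nothing is left
$ rados -p cephfs_data_old ls | head

$ # Per-file check: print any file whose layout still points at the old pool
$ find /mnt/cephfs -type f | while read -r f; do
>   p=$(getfattr -n ceph.file.layout.pool --only-values "$f" 2>/dev/null)
>   [ "$p" = "cephfs_data_old" ] && echo "$f"
> done

$ # Only once the checks above come back clean:
$ ceph fs rm_data_pool cephfs cephfs_data_old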
