There is a setting for the maximum PGs per OSD. I would raise it 
temporarily so you can keep working, create a new pool with 8 PGs, move 
the data over to the new pool, remove the old pool, and then unset the 
max-PGs-per-OSD override again.

PS. I always create pools starting at 8 PGs; once I know I am where I 
want to be in production, I can always increase the PG count.
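A sketch of that sequence, assuming a recent release with `ceph config set` (pre-Mimic clusters would set the limit in ceph.conf or via injectargs instead); the pool names and the limit value of 500 are placeholders, check `ceph osd pool ls detail` for your real names:

```shell
# Temporarily raise the per-OSD PG limit (default is 250 on
# Luminous and later) so the cluster stays workable.
ceph config set mon mon_max_pg_per_osd 500

# Create the replacement pool with a small PG count.
ceph osd pool create cephfs_data_new 8

# ...migrate the data, then drop the old pool and the override.
# (Deleting a pool also requires mon_allow_pool_delete=true.)
ceph osd pool delete cephfs_data_old cephfs_data_old \
    --yes-i-really-really-mean-it
ceph config rm mon mon_max_pg_per_osd
```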

-----Original Message-----
From: Brian Topping [] 
Sent: 08 February 2019 05:30
To: Ceph Users
Subject: [ceph-users] Downsizing a cephfs pool

Hi all, I created a problem when moving data to Ceph and I would be 
grateful for some guidance before I do something dumb.

1.      I started with 4x 6TB source disks combined into a single XFS 
filesystem via software RAID. The goal is to have the same data on a 
CephFS volume, but with these four disks reformatted for BlueStore 
under Ceph.
2.      The only spare disks I had were 2TB, so I put 7x together. I 
sized the data and metadata pools for CephFS at 256 PGs each, but that 
was wrong.
3.      The copy went smoothly, so I zapped and added the original 4x 6TB 
disks to the cluster.
4.      I realized what I had done: once the 7x 2TB disks were removed, 
there would be far too many PGs per OSD.
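A back-of-the-envelope check shows why step 4 is a problem. Assuming 256 PGs each for the data and metadata pools (from the mail) and the default replica count of 3 (my assumption, not stated above), only the four 6TB OSDs would remain to carry them:

```shell
# Rough PGs-per-OSD estimate once only the 4x 6TB OSDs remain.
data_pgs=256      # data pool PG count (from the mail)
meta_pgs=256      # metadata pool PG count (from the mail)
replicas=3        # assumed default pool size
osds=4            # the four 6TB disks
pg_per_osd=$(( (data_pgs + meta_pgs) * replicas / osds ))
echo "$pg_per_osd"   # 384, well over the default mon_max_pg_per_osd of 250
```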

I just read over the material on this, but it addresses how to do this 
with a generic pool, not pools used by CephFS. It looks easy to copy 
the pools, but once copied and renamed, CephFS may not recognize them 
as its data target and the data may be lost.
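For context, the generic recipe usually cited is along these lines (pool names are placeholders). The concern above is real: the MDS map references its pools by ID, so a renamed copy is not automatically adopted by CephFS. Note also that `rados cppool` does not preserve snapshots:

```shell
# Generic pool shrink: create a smaller pool, copy objects, swap names.
ceph osd pool create cephfs_data_new 8
rados cppool cephfs_data cephfs_data_new
ceph osd pool rename cephfs_data cephfs_data_old
ceph osd pool rename cephfs_data_new cephfs_data
```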

Do I need to create new pools and copy again using cpio? Is there a 
better way?
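One possible route, sketched here assuming a filesystem named `cephfs` mounted at `/mnt/cephfs` (both placeholders): attach a new, right-sized data pool to the existing filesystem and use a directory file layout so a plain in-filesystem copy lands in the new pool, leaving the metadata pool untouched:

```shell
# Create a small data pool and attach it to the filesystem.
ceph osd pool create cephfs_data_small 8
ceph fs add_data_pool cephfs cephfs_data_small

# Files created under this directory will be stored in the new pool.
mkdir /mnt/cephfs/migrated
setfattr -n ceph.dir.layout.pool -v cephfs_data_small /mnt/cephfs/migrated

# Copy within the mounted filesystem (cp -a, rsync, or cpio),
# then delete the old tree before shrinking or removing the old pool.
```

One caveat: a filesystem's original default data pool cannot be fully detached, so this works cleanly only for reducing PGs held in additional data pools.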

Thanks! Brian

ceph-users mailing list
