Hi all, I've created a problem for myself while moving data to Ceph and would
be grateful for some guidance before I do something dumb.

I started with 4x 6TB source disks assembled into a single XFS filesystem on
software RAID. The goal is to end up with the same data on a CephFS volume,
with those four disks reformatted as BlueStore OSDs under Ceph.
The only spare disks I had were 2TB, so I put 7 of them together as temporary
OSDs. I sized the CephFS data and metadata pools at 256 PGs each, which turned
out to be wrong.
The copy went smoothly, so I zapped the original 4x 6TB disks and added them to
the cluster. Only then did I realize what I had done: once the 7x 2TB disks are
removed, there will be far too many PGs per OSD.
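
Here is the back-of-the-envelope arithmetic behind that realization, just a
rough sketch assuming replication size 3 on both pools (and 256 PGs in each of
the data and metadata pools, as above):

    # Rough PG-per-OSD arithmetic for the situation above. Assumptions not
    # stated earlier: replication size 3 on both pools.
    pgs_per_pool = 256
    pools = 2            # cephfs data + cephfs metadata
    replication = 3      # assumed pool size

    pg_replicas = pgs_per_pool * pools * replication   # 1536 PG replicas total

    for osds in (11, 4):            # 7x2TB + 4x6TB now, then 4x6TB only
        print(f"{osds} OSDs -> ~{pg_replicas / osds:.0f} PG replicas per OSD "
              f"(the usual target is around 100)")

So I'm looking at roughly 140 PG replicas per OSD now, and around 384 per OSD
once the 2TB disks come out.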

I just read over https://stackoverflow.com/a/39637015/478209, but that
addresses how to do this with a generic pool, not the pools used by CephFS.
Copying the pools looks easy enough, but once they are copied and renamed,
CephFS may not recognize them as its data and metadata pools and the data
could be lost.
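
For reference, this is roughly the kind of sequence that answer describes,
sketched here with hypothetical pool names; my worry is that CephFS appears to
track its pools by ID rather than by name, so the rename alone may not be
enough:

    # Sketch only, not something I've run. Pool names are hypothetical.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    src, tmp = "cephfs_data", "cephfs_data_new"   # hypothetical names

    run(["ceph", "osd", "pool", "create", tmp, "128"])   # new pool, fewer PGs
    run(["rados", "cppool", src, tmp])                   # copy the objects
    run(["ceph", "osd", "pool", "rename", src, src + "_old"])
    run(["ceph", "osd", "pool", "rename", tmp, src])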

Do I need to create new pools and copy again using cpio? Is there a better way?

Thanks! Brian