I think I would COPY and DELETE the data in chunks, not via the 'backend' 
pools but just via cephfs, so you are 100% sure nothing weird can happen. 
(MOVE does not work the way you might expect on cephfs between different 
pools: a rename only updates metadata, the file data stays in the pool it 
was written to.)
You can create and mount an extra data pool in cephfs. I have done this 
too, so you can mix rep3, erasure and a fast ssd pool on your cephfs.
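
A chunked copy/delete through the mounted filesystem could look roughly 
like this sketch (the paths, the per-subdirectory granularity and the 
crude file-count check are just examples to adapt to your own tree):

```shell
#!/bin/sh
# Copy one subdirectory at a time into the new-pool directory, sanity
# check it, then delete the source chunk, so usage never doubles at once.
# Note: loose files at the top level of $src are not handled by this loop.
copy_in_chunks() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  for chunk in "$src"/*/; do
    name=$(basename "$chunk")
    cp -a "$chunk" "$dst/$name" || return 1    # copy one chunk
    # crude sanity check: same file count before deleting the source
    [ "$(find "$chunk" -type f | wc -l)" -eq \
      "$(find "$dst/$name" -type f | wc -l)" ] || return 1
    rm -rf "$chunk"                            # free space in the old pool
  done
}
# usage (hypothetical cephfs mount): copy_in_chunks /mnt/cephfs/old /mnt/cephfs/new
```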

Adding a pool, something like this (after creating the EC pool itself, 
e.g. 'ceph osd pool create fs_data.ec21 8 8 erasure'):
ceph osd pool set fs_data.ec21 allow_ec_overwrites true
ceph osd pool application enable fs_data.ec21 cephfs
ceph fs add_data_pool cephfs fs_data.ec21

Change a directory to use a different pool:
setfattr -n ceph.dir.layout.pool -v fs_data.ec21 folder
getfattr -n ceph.dir.layout.pool folder
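
If you want to double-check where the data actually lands, something 
like this should work (keep in mind that files which already existed in 
the directory keep their data in the old pool until they are rewritten; 
only newly created files pick up the new layout):

```shell
# new files created under 'folder' inherit the directory layout
touch folder/newfile
getfattr -n ceph.file.layout.pool folder/newfile
# objects for newly written files should show up in the new pool
rados -p fs_data.ec21 ls | head
```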

-----Original Message-----
From: Brian Topping [mailto:brian.topp...@gmail.com] 
Sent: 08 February 2019 10:02
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Downsizing a cephfs pool

Hi Marc, that's great advice, thanks! I'm always grateful for the help.

What about the issue with the pools containing a CephFS though? Is it 
something where I can just turn off the MDS, copy the pools and rename 
them back to the original name, then restart the MDS? 

Agreed about using smaller numbers. When I went to using seven disks, I 
was getting warnings about too few PGs per OSD. I'm sure this is 
something one learns to cope with via experience and I'm still picking 
that up. I had hoped not to get into a bind like this so quickly, but 
hey, here I am again :)

> On Feb 8, 2019, at 01:53, Marc Roos <m.r...@f1-outsourcing.eu> wrote:
> There is a setting to set the max pg per osd. I would set that 
> temporarily so you can work, create a new pool with 8 pg's and move 
> data over to the new pool, remove the old pool, then unset this max pg 
> per osd.
> PS. I am always creating pools starting with 8 pg's, and when I know I 
> am at what I want in production I can always increase the pg count.
> -----Original Message-----
> From: Brian Topping [mailto:brian.topp...@gmail.com]
> Sent: 08 February 2019 05:30
> To: Ceph Users
> Subject: [ceph-users] Downsizing a cephfs pool
> Hi all, I created a problem when moving data to Ceph and I would be 
> grateful for some guidance before I do something dumb.
> 1.    I started with the 4x 6TB source disks that came together as a 
> single XFS filesystem via software RAID. The goal is to have the same 
> data on a cephfs volume, but with these four disks formatted for 
> bluestore under Ceph.
> 2.    The only spare disks I had were 2TB, so I put 7x together. I 
> sized data and metadata for cephfs at 256 PG, but it was wrong.
> 3.    The copy went smoothly, so I zapped and added the original 4x 
> disks to the cluster.
> 4.    I realized what I did, that when the 7x2TB disks were removed, 
> there were going to be far too many PGs per OSD.
> I just read over https://stackoverflow.com/a/39637015/478209, but that 
> addresses how to do this with a generic pool, not pools used by CephFS.
> It looks easy to copy the pools, but once copied and renamed, CephFS 
> may not recognize them as the target and the data may be lost.
> Do I need to create new pools and copy again using cpio? Is there a 
> better way?
> Thanks! Brian
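
For what it's worth, the temporary limit Marc mentions can be raised on 
Luminous and later via the mon_max_pg_per_osd option; a rough sketch 
(the value 400 and the pool name are just examples):

```shell
# temporarily raise the per-OSD PG limit so the cluster stays workable
ceph config set global mon_max_pg_per_osd 400

# create the small replacement pool with 8 PGs and attach it to cephfs
ceph osd pool create cephfs_data_new 8 8
ceph osd pool application enable cephfs_data_new cephfs
ceph fs add_data_pool cephfs cephfs_data_new

# ...copy the data over and remove the old pool, then drop the override
ceph config rm global mon_max_pg_per_osd
```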
