Re: [ceph-users] pool migration for cephfs?

2019-05-15 Thread Brian Topping
Lars, I just got done doing this after generating about a dozen CephFS subtrees 
for different Kubernetes clients. 

tl;dr: there is no way for files to move between filesystem formats (i.e. CephFS 
-> RBD) without copying them.

If you are doing the same thing, https://github.com/kubernetes/enhancements/pull/643 
may be relevant to you; it's worth checking whether it meets your use case.

In any event, what I ended up doing was letting Kubernetes create the new PV 
with the RBD provisioner, then using find piped to cpio to move the file 
subtree. In a non-Kubernetes environment, one would simply create the 
destination RBD as usual. It should be most performant to do this on a monitor 
node.
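
A minimal sketch of that find-piped-to-cpio copy, with hypothetical mount points 
and assuming GNU cpio:

    cd /mnt/cephfs/subtree
    # -p pass-through, -d create directories, -m preserve mtimes, -0 null-terminated names
    find . -depth -print0 | cpio -pdm0 /mnt/new-rbd-pv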

cpio ensures you don’t lose metadata. It’s been fine for me, but if you have 
special xattrs that the clients of the files need, be sure to test that those 
are copied over. It’s very difficult to backfill that metadata once a file has 
been copied, and even harder to deal with a destination volume that has already 
gone live, where some files are both newer than the source and missing metadata. 
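
A quick way to spot-check that the xattrs made it across (paths here are 
hypothetical; getfattr ships in the attr package):

    getfattr -d -m - /mnt/cephfs/subtree/some/file
    getfattr -d -m - /mnt/new-rbd-pv/some/file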

Brian

> On May 15, 2019, at 6:05 AM, Lars Täuber  wrote:
> 
> Hi,
> 
> is there a way to migrate a cephfs to a new data pool like it is for rbd on 
> nautilus?
> https://ceph.com/geen-categorie/ceph-pool-migration/
> 
> Thanks
> Lars


Re: [ceph-users] pool migration for cephfs?

2019-05-15 Thread Elise Burke
Oops, forgot a step - need to tell the MDS about the new pool before step 2:

`ceph mds add_data_pool <pool name>`

You may also need to mark the pool as used by cephfs:

`ceph osd pool application enable {pool-name} cephfs`
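
Put together with the pool name used in the longer message further down (the 
filesystem name is a placeholder; older releases use the mds-scoped form, newer 
ones the fs-scoped form):

    ceph mds add_data_pool cephfs_data_ec_rs4.1               # older syntax
    ceph fs add_data_pool <fs_name> cephfs_data_ec_rs4.1      # newer releases
    ceph osd pool application enable cephfs_data_ec_rs4.1 cephfs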

On Wed, May 15, 2019 at 3:15 PM Elise Burke  wrote:



Re: [ceph-users] pool migration for cephfs?

2019-05-15 Thread Elise Burke
I came across that and tried it - the short answer is no, you can't do that
with the cache-tier approach. I'm less sure about the longer answer as to why,
but IIRC it has to do with how the OMAP object properties would have to be
copied / edited.

The good news, however, is that you can 'fake it' using File Layouts -
http://docs.ceph.com/docs/mimic/cephfs/file-layouts/

In my case I was moving around / upgrading disks and wanted to change from
unreplicated (well, r=1) to erasure coding (in my case, rs4.1). I was able
to do this keeping the following in mind:

1. The original pool, cephfs_data, must remain as a replicated pool; IIRC this
is because the default data pool stores inode backtrace metadata for every file,
which can't live in an erasure-coded pool.
2. The metadata pool, cephfs_metadata, must also remain as a replicated
pool.
3. Your new pool (the destination pool) can be created however you like.
4. This procedure involves rolling unavailability on a per-file basis.

This is from memory; I should do a better writeup elsewhere, but what I did
was this:

1. Create your new pool: `ceph osd pool create cephfs_data_ec_rs4.1 8 8
erasure rs4.1` (rs4.1 being an erasure-code profile created beforehand; see the
supporting commands sketched after step 2).
2. Set the layout xattr for the root directory to use the new pool (for a
directory this is the ceph.dir.layout attribute): `setfattr -n
ceph.dir.layout.pool -v cephfs_data_ec_rs4.1 /cephfs_mountpoint/`
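
Supporting commands the two steps above assume (the profile name, k/m values and
mountpoint here are illustrative, not from the original post):

    ceph osd erasure-code-profile set rs4.1 k=4 m=1                    # illustrative profile
    ceph osd pool set cephfs_data_ec_rs4.1 allow_ec_overwrites true    # EC data pools need overwrites for CephFS
    getfattr -n ceph.dir.layout /cephfs_mountpoint/                    # verify the layout stuck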

At this stage all new files will be written to the new pool. Unfortunately you
can't change the layout of a file that already contains data, so the existing
files have to be copied back into place. You can hack up a bash script to do
this (a rough sketch follows the Go code below), or write a converter program.
Here's the most relevant bit, per file, which copies the file first and then
renames the new file over the old one:

func doConvert(filename string) error {
    poolRewriteName, previousPoolName, err := newNearbyTempFiles(filename)
    if err != nil {
        return err
    }
    err = SetCephFSFileLayoutPool(poolRewriteName, []byte(*toPool))
    if err != nil {
        os.Remove(poolRewriteName)
        os.Remove(previousPoolName)
        return err
    }

    err = CopyFilePermissions(filename, poolRewriteName)
    if err != nil {
        os.Remove(poolRewriteName)
        os.Remove(previousPoolName)
        return err
    }

    //log.Printf("Copying %s to %s\n", filename, poolRewriteName)
    err = CopyFile(filename, poolRewriteName)
    if err != nil {
        os.Remove(poolRewriteName)
        os.Remove(previousPoolName)
        return err
    }

    //log.Printf("Moving %s to %s\n", filename, previousPoolName)
    err = MoveFile(filename, previousPoolName)
    if err != nil {
        os.Remove(poolRewriteName)
        os.Remove(previousPoolName)
        return err
    }

    //log.Printf("Moving %s to %s\n", poolRewriteName, filename)
    err = MoveFile(poolRewriteName, filename)
    os.Remove(poolRewriteName)
    os.Remove(previousPoolName)
    return err
}
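
If you'd rather not write Go, a rough shell equivalent of the same
copy-then-rename dance might look like the untested sketch below. It relies on
new files inheriting the directory layout set in step 2, so the explicit
SetCephFSFileLayoutPool call isn't needed, and unlike the Go version it keeps
no backup of the original; cp -a preserves mode, ownership, timestamps and
xattrs:

    find /cephfs_mountpoint -type f -print0 |
    while IFS= read -r -d '' f; do
        tmp="$f.rewrite.$$"                                    # new file lands in the new pool
        cp -a -- "$f" "$tmp" || { rm -f -- "$tmp"; continue; }
        mv -- "$tmp" "$f"                                      # rename over the original
    done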



On Wed, May 15, 2019 at 10:31 AM Lars Täuber  wrote:

> Hi,
>
> is there a way to migrate a cephfs to a new data pool like it is for rbd
> on nautilus?
> https://ceph.com/geen-categorie/ceph-pool-migration/
>
> Thanks
> Lars


Re: [ceph-users] pool migration for cephfs?

2019-05-15 Thread Peter Woodman
I actually made a dumb Python script to do this. It's ugly and has a lot of
hardcoded things in it (like the mount location I'm copying things to in order
to move pools, the names of the pools, the savings I was expecting, etc.), but
it should be easy to adapt to what you're trying to do:

https://gist.github.com/pjjw/b5fbee24c848661137d6ac09a3e0c980

On Wed, May 15, 2019 at 1:45 PM Patrick Donnelly  wrote:
>
> On Wed, May 15, 2019 at 5:05 AM Lars Täuber  wrote:
> > is there a way to migrate a cephfs to a new data pool like it is for rbd on 
> > nautilus?
> > https://ceph.com/geen-categorie/ceph-pool-migration/
>
> No, this isn't possible.
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Senior Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D


Re: [ceph-users] pool migration for cephfs?

2019-05-15 Thread Patrick Donnelly
On Wed, May 15, 2019 at 5:05 AM Lars Täuber  wrote:
> is there a way to migrate a cephfs to a new data pool like it is for rbd on 
> nautilus?
> https://ceph.com/geen-categorie/ceph-pool-migration/

No, this isn't possible.

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D


[ceph-users] pool migration for cephfs?

2019-05-15 Thread Lars Täuber
Hi,

is there a way to migrate a cephfs to a new data pool like it is for rbd on 
nautilus?
https://ceph.com/geen-categorie/ceph-pool-migration/

Thanks
Lars