Re: [ceph-users] Migrating a cephfs data pool

2019-07-02 Thread Patrick Donnelly
On Fri, Jun 28, 2019 at 8:27 AM Jorge Garcia  wrote:
>
> This seems to be an issue that gets brought up repeatedly, but I haven't
> seen a definitive answer yet. So, at the risk of repeating a question
> that has already been asked:
>
> How do you migrate a cephfs data pool to a new data pool? The obvious
> case would be somebody that has set up a replicated pool for their
> cephfs data and then wants to convert it to an erasure code pool. Is
> there a simple way to do this, other than creating a whole new ceph
> cluster and copying the data using rsync?

For those interested, there's a ticket [1] to perform file layout
migrations in the MDS in an automated way. Not sure if it'll get done
for Octopus though.

[1] http://tracker.ceph.com/issues/40285

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-07-01 Thread Gregory Farnum
On Fri, Jun 28, 2019 at 5:41 PM Jorge Garcia  wrote:
>
> Ok, actually, the problem was somebody writing to the filesystem. So I moved 
> their files and got to 0 objects. But then I tried to remove the original 
> data pool and got an error:
>
>   # ceph fs rm_data_pool cephfs cephfs-data
>   Error EINVAL: cannot remove default data pool
>
> So it seems I will never be able to remove the original data pool. I could
> leave it there as a ghost pool, which is not optimal, but I guess there's
> currently no better option.

Yeah; CephFS writes its backtrace pointers (for inode-based lookups)
to the default data pool. Unfortunately we need all of those to live
in one known pool, and CephFS doesn't have a way to migrate them.
-Greg
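
For reference, each file's backtrace is stored as a "parent" xattr on the
file's first RADOS object in that default data pool. A minimal, hedged way
to inspect one (the object name 10000000000.00000000 and the pool name
cephfs-data below are only placeholders) is:

  # rados -p cephfs-data getxattr 10000000000.00000000 parent > parent.bin
  # ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json

which dumps the ancestry (and the pool) that the MDS recorded for that inode.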

>
> On 6/28/19 4:04 PM, Patrick Hein wrote:
>
> Afaik the MDS doesn't delete the objects immediately but defers it until later. If
> you check that again now, how many objects does it report?
>
> Jorge Garcia  wrote on Fri., 28 June 2019, 23:16:
>>
>>
>> On 6/28/19 9:02 AM, Marc Roos wrote:
>> > 3. When everything is copied-removed, you should end up with an empty
>> > datapool with zero objects.
>>
>> I copied the data to a new directory and then removed the data from the
>> old directory, but df still reports some objects in the old pool (not
>> zero). Is there a way to track down what's still in the old pool, and
>> how to delete it?
>>
>> Before delete:
>>
>> # ceph df
>> GLOBAL:
>>     SIZE     AVAIL    RAW USED  %RAW USED
>>     392 TiB  389 TiB  3.3 TiB   0.83
>> POOLS:
>>     NAME         ID  USED     %USED  MAX AVAIL  OBJECTS
>>     cephfs-meta  6   17 MiB   0      123 TiB    27
>>     cephfs-data  7   763 GiB  0.60   123 TiB    195233
>>     new-ec-pool  8   641 GiB  0.25   245 TiB    163991
>>
>> After delete:
>>
>> # ceph df
>> GLOBAL:
>>     SIZE     AVAIL    RAW USED  %RAW USED
>>     392 TiB  391 TiB  1.2 TiB   0.32
>> POOLS:
>>     NAME         ID  USED     %USED  MAX AVAIL  OBJECTS
>>     cephfs-meta  6   26 MiB   0      124 TiB    29
>>     cephfs-data  7   83 GiB   0.07   124 TiB    21175
>>     new-ec-pool  8   641 GiB  0.25   247 TiB    163991
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
Ok, actually, the problem was somebody writing to the filesystem. So I 
moved their files and got to 0 objects. But then I tried to remove the 
original data pool and got an error:


  # ceph fs rm_data_pool cephfs cephfs-data
  Error EINVAL: cannot remove default data pool

So it seems I will never be able to remove the original data pool. I 
could leave it there as a ghost pool, which is not optimal, but I guess 
there's currently no better option.
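
The "default" data pool here is simply the first data pool attached to the
filesystem. A quick, hedged way to double-check which pool that is (assuming
the filesystem is named cephfs):

  # ceph fs ls
  # ceph fs get cephfs | grep data_pools

The first entry in the data_pools list is the one that cannot be removed.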


On 6/28/19 4:04 PM, Patrick Hein wrote:
Afaik the MDS doesn't delete the objects immediately but defers it until 
later. If you check that again now, how many objects does it report?


Jorge Garcia <jgar...@soe.ucsc.edu> wrote on Fri., 28 June 2019, 23:16:



On 6/28/19 9:02 AM, Marc Roos wrote:
> 3. When everything is copied-removed, you should end up with an empty
> datapool with zero objects.

I copied the data to a new directory and then removed the data from the
old directory, but df still reports some objects in the old pool (not
zero). Is there a way to track down what's still in the old pool, and
how to delete it?

Before delete:

# ceph df
GLOBAL:
 SIZE    AVAIL   RAW USED %RAW USED
 392 TiB 389 TiB  3.3 TiB  0.83
POOLS:
 NAME    ID USED    %USED MAX AVAIL OBJECTS
 cephfs-meta  6   17 MiB 0   123 TiB 27
 cephfs-data   7  763 GiB  0.60   123 TiB 195233
 new-ec-pool  8  641 GiB  0.25   245 TiB 163991

After delete:

# ceph df
GLOBAL:
 SIZE    AVAIL   RAW USED %RAW USED
 392 TiB 391 TiB  1.2 TiB  0.32
POOLS:
 NAME    ID USED    %USED MAX AVAIL OBJECTS
 cephfs-meta  6   26 MiB 0   124 TiB 29
 cephfs-data   7   83 GiB  0.07   124 TiB 21175
 new-ec-pool  8  641 GiB  0.25   247 TiB 163991

___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
This was after a while (I did notice that the number of objects went 
higher before it went lower). It is actually reporting more objects now. 
I'm not sure if some co-worker or program is writing to the 
filesystem... It got to these numbers and hasn't changed for the past 
couple of hours.


# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED %RAW USED
    392 TiB 391 TiB  1.3 TiB  0.33
POOLS:
    NAME    ID USED    %USED MAX AVAIL OBJECTS
    cephfs-meta  6   27 MiB 0   124 TiB 29
    cephfs-data   7  100 GiB  0.08   124 TiB 25600
    new-ec-pool  8  641 GiB  0.25   247 TiB 163991

On 6/28/19 4:04 PM, Patrick Hein wrote:
Afaik the MDS doesn't delete the objects immediately but defers it until 
later. If you check that again now, how many objects does it report?


Jorge Garcia <jgar...@soe.ucsc.edu> wrote on Fri., 28 June 2019, 23:16:



On 6/28/19 9:02 AM, Marc Roos wrote:
> 3. When everything is copied-removed, you should end up with an empty
> datapool with zero objects.

I copied the data to a new directory and then removed the data from the
old directory, but df still reports some objects in the old pool (not
zero). Is there a way to track down what's still in the old pool, and
how to delete it?

Before delete:

# ceph df
GLOBAL:
 SIZE    AVAIL   RAW USED %RAW USED
 392 TiB 389 TiB  3.3 TiB  0.83
POOLS:
 NAME    ID USED    %USED MAX AVAIL OBJECTS
 cephfs-meta  6   17 MiB 0   123 TiB 27
 cephfs-data   7  763 GiB  0.60   123 TiB 195233
 new-ec-pool  8  641 GiB  0.25   245 TiB 163991

After delete:

# ceph df
GLOBAL:
 SIZE    AVAIL   RAW USED %RAW USED
 392 TiB 391 TiB  1.2 TiB  0.32
POOLS:
 NAME    ID USED    %USED MAX AVAIL OBJECTS
 cephfs-meta  6   26 MiB 0   124 TiB 29
 cephfs-data   7   83 GiB  0.07   124 TiB 21175
 new-ec-pool  8  641 GiB  0.25   247 TiB 163991

___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Patrick Hein
Afaik the MDS doesn't delete the objects immediately but defers it until later. If
you check that again now, how many objects does it report?
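
If you want to watch the deferred deletes drain rather than just re-running
df, a hedged option (the pool name cephfs-data and the MDS daemon id "a" are
just assumptions for this sketch) is:

  # rados df | grep cephfs-data
  # ceph daemon mds.a perf dump purge_queue

The first shows the pool's object count going down; the second should show
the MDS purge queue shrinking as the stray objects are removed.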

Jorge Garcia  wrote on Fri., 28 June 2019, 23:16:

>
> On 6/28/19 9:02 AM, Marc Roos wrote:
> > 3. When everything is copied-removed, you should end up with an empty
> > datapool with zero objects.
>
> I copied the data to a new directory and then removed the data from the
> old directory, but df still reports some objects in the old pool (not
> zero). Is there a way to track down what's still in the old pool, and
> how to delete it?
>
> Before delete:
>
> # ceph df
> GLOBAL:
>     SIZE     AVAIL    RAW USED  %RAW USED
>     392 TiB  389 TiB  3.3 TiB   0.83
> POOLS:
>     NAME         ID  USED     %USED  MAX AVAIL  OBJECTS
>     cephfs-meta  6   17 MiB   0      123 TiB    27
>     cephfs-data  7   763 GiB  0.60   123 TiB    195233
>     new-ec-pool  8   641 GiB  0.25   245 TiB    163991
>
> After delete:
>
> # ceph df
> GLOBAL:
>     SIZE     AVAIL    RAW USED  %RAW USED
>     392 TiB  391 TiB  1.2 TiB   0.32
> POOLS:
>     NAME         ID  USED     %USED  MAX AVAIL  OBJECTS
>     cephfs-meta  6   26 MiB   0      124 TiB    29
>     cephfs-data  7   83 GiB   0.07   124 TiB    21175
>     new-ec-pool  8   641 GiB  0.25   247 TiB    163991
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia


On 6/28/19 9:02 AM, Marc Roos wrote:

3. When everything is copied-removed, you should end up with an empty
datapool with zero objects.


I copied the data to a new directory and then removed the data from the 
old directory, but df still reports some objects in the old pool (not 
zero). Is there a way to track down what's still in the old pool, and 
how to delete it?
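
One hedged way to track that down (assuming the old pool is cephfs-data and
the filesystem is mounted at /mnt/cephfs): CephFS data objects are named
<inode-in-hex>.<block-number>, so you can list what is left and map an
object's prefix back to a file:

  # rados -p cephfs-data ls | head
  # printf '%d\n' 0x<hex-prefix-of-an-object-name>
  # find /mnt/cephfs -inum <that-decimal-number>

Objects whose inode no longer exists anywhere in the tree are usually just
waiting to be purged.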


Before delete:

# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED %RAW USED
    392 TiB 389 TiB  3.3 TiB  0.83
POOLS:
    NAME    ID USED    %USED MAX AVAIL OBJECTS
    cephfs-meta  6   17 MiB 0   123 TiB 27
    cephfs-data   7  763 GiB  0.60   123 TiB 195233
    new-ec-pool  8  641 GiB  0.25   245 TiB 163991

After delete:

# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED %RAW USED
    392 TiB 391 TiB  1.2 TiB  0.32
POOLS:
    NAME    ID USED    %USED MAX AVAIL OBJECTS
    cephfs-meta  6   26 MiB 0   124 TiB 29
    cephfs-data   7   83 GiB  0.07   124 TiB 21175
    new-ec-pool  8  641 GiB  0.25   247 TiB 163991

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Robert LeBlanc
Yes, 'mv' on the client is just a metadata operation and not what I'm
talking about. The idea is to bring the old pool in as a cache layer and
the new pool in as a lower layer, then flush/evict the data from the cache
so Ceph moves the data to the new pool while it can still be accessed under
the old pool name. You then add an overlay so that the new pool name acts
the same, and the idea is that you can then remove the old pool from the
cache and remove the overlay. The only problem is updating cephfs to look
at the new pool name for data that it knows is in the old pool.

The other option is to add a data mover to cephfs so you can do something
like `ceph fs mv old_pool new_pool` and it would move all the objects and
update the metadata as it moves the data. The question is how to do the
data movement, since the MDS is not in the data path.

Since both pool names act the same with the overlay, the best option sounds
like this: configure the tiering, add the overlay, then do a `ceph fs migrate
old_pool new_pool` which causes the MDS to scan through all the metadata
and update every reference to 'old_pool' so it points at 'new_pool'. Once
that is done and the eviction is done, you can remove the old pool from
cephfs and remove the overlay. That way the OSDs are the ones doing the
data movement.

I don't know that part of the code, so I can't quickly propose any patches.
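
For the curious, a rough sketch of the tiering half of this idea (assuming an
old pool cephfs-data and a new pool new-ec-pool; note that, as discussed
above, this still leaves the CephFS metadata pointing at the old pool, so
treat it as an experiment rather than a recipe):

  # ceph osd tier add new-ec-pool cephfs-data --force-nonempty
  # ceph osd tier cache-mode cephfs-data forward --yes-i-really-mean-it
  # ceph osd tier set-overlay new-ec-pool cephfs-data
  # rados -p cephfs-data cache-flush-evict-all
  # ceph osd tier remove-overlay new-ec-pool
  # ceph osd tier remove new-ec-pool cephfs-data

The flush/evict step is what actually pushes the objects down into the
new pool.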

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Jun 28, 2019 at 9:37 AM Marc Roos  wrote:

>
> Afaik the mv is fast now because it is not moving any real data, just
> some metadata. Thus a real mv would be slow (only when moving between
> different pools) because it would copy the data to the new pool and,
> when successful, delete the old copy. This will of course take a lot
> more time, but you would at least be able to access the cephfs in both
> locations during this time and could fix things in your client access.
>
> My problem with mv now is that if you accidentally use it between data
> pools, it does not really move the data.
>
>
>
> -Original Message-
> From: Robert LeBlanc [mailto:rob...@leblancnet.us]
> Sent: vrijdag 28 juni 2019 18:30
> To: Marc Roos
> Cc: ceph-users; jgarcia
> Subject: Re: [ceph-users] Migrating a cephfs data pool
>
> Given that the MDS knows everything, it seems trivial to add a ceph 'mv'
> command to do this. I looked at using tiering to try and do the move,
> but I don't know how to tell cephfs that the data is now on the new pool
> instead of under the old pool name. Since we can't take a long enough
> downtime to move hundreds of terabytes, we need something that can be
> done online; a minute or two of downtime would be okay.
>
> 
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
>
> On Fri, Jun 28, 2019 at 9:02 AM Marc Roos 
> wrote:
>
>
>
>
> 1.
> change data pool for a folder on the file system:
> setfattr -n ceph.dir.layout.pool -v fs_data.ec21 foldername
>
> 2.
> cp /oldlocation /foldername
> Remember that you would preferably use mv, but this leaves (meta)data
> on the old pool, which is not what you want when you want to delete
> that pool.
>
> 3. When everything is copied-removed, you should end up with an empty
> datapool with zero objects.
>
> 4. Verify here with others whether you can just remove this one.
>
> I think this is a reliable technique to switch, because you use the
> basic cephfs functionality that is supposed to work. I prefer that the
> ceph guys implement a mv that does what you expect from it. Now it acts
> more or less like linking.
>
>
>
>
>     -Original Message-
> From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu]
> Sent: vrijdag 28 juni 2019 17:52
> To: Marc Roos; ceph-users
> Subject: Re: [ceph-users] Migrating a cephfs data pool
>
> Are you talking about adding the new data pool to the current
> filesystem? Like:
>
>    $ ceph fs add_data_pool my_ceph_fs new_ec_pool
>
> I have done that, and now the filesystem shows up as having two data
> pools:
>
>    $ ceph fs ls
>    name: my_ceph_fs, metadata pool: cephfs_meta, data pools:
> [cephfs_data new_ec_pool ]
>
> but then I run into two issues:
>
> 1. How do I actually copy/move/migrate the data from the old pool to
> the new pool?
> 2. When I'm done moving the data, how do I get rid of the old data
> pool?
>
> I know there's a rm_data_pool option, 

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Marc Roos
 
Afaik the mv is fast now because it is not moving any real data, just 
some metadata. Thus a real mv would be slow (only when moving between 
different pools) because it would copy the data to the new pool and, 
when successful, delete the old copy. This will of course take a lot more 
time, but you would at least be able to access the cephfs in both 
locations during this time and could fix things in your client access.

My problem with mv now is that if you accidentally use it between data 
pools, it does not really move the data. 



-Original Message-
From: Robert LeBlanc [mailto:rob...@leblancnet.us] 
Sent: vrijdag 28 juni 2019 18:30
To: Marc Roos
Cc: ceph-users; jgarcia
Subject: Re: [ceph-users] Migrating a cephfs data pool

Given that the MDS knows everything, it seems trivial to add a ceph 'mv' 
command to do this. I looked at using tiering to try and do the move, 
but I don't know how to tell cephfs that the data is now on the new pool 
instead of under the old pool name. Since we can't take a long enough 
downtime to move hundreds of terabytes, we need something that can be 
done online; a minute or two of downtime would be okay.


Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Jun 28, 2019 at 9:02 AM Marc Roos  
wrote:


 

1.
change data pool for a folder on the file system:
setfattr -n ceph.dir.layout.pool -v fs_data.ec21 foldername

2. 
cp /oldlocation /foldername
Remember that you would preferably use mv, but this leaves (meta)data 
on the old pool, which is not what you want when you want to delete that 
pool.

3. When everything is copied-removed, you should end up with an empty 
datapool with zero objects. 

4. Verify here with others whether you can just remove this one.

I think this is a reliable technique to switch, because you use the 
basic cephfs functionality that is supposed to work. I prefer that the 
ceph guys implement a mv that does what you expect from it. Now it acts 
more or less like linking.




-Original Message-
From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu] 
Sent: vrijdag 28 juni 2019 17:52
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Migrating a cephfs data pool

Are you talking about adding the new data pool to the current 
filesystem? Like:

   $ ceph fs add_data_pool my_ceph_fs new_ec_pool

I have done that, and now the filesystem shows up as having two data 
pools:

   $ ceph fs ls
   name: my_ceph_fs, metadata pool: cephfs_meta, data pools: 
[cephfs_data new_ec_pool ]

but then I run into two issues:

1. How do I actually copy/move/migrate the data from the old pool to the 
new pool?
2. When I'm done moving the data, how do I get rid of the old data 
pool? 

I know there's a rm_data_pool option, but I have read on the mailing 
list that you can't remove the original data pool from a cephfs 
filesystem.

The other option is to create a whole new cephfs with a new metadata 
pool and the new data pool, but creating multiple filesystems is still 
experimental and not allowed by default...
experimental and not allowed by default...

On 6/28/19 8:28 AM, Marc Roos wrote:
>   
> What about adding the new data pool, mounting it and then moving the 
> files? (read: copy, because a move between data pools does not do what 
> you expect)
>
>
> -Original Message-
> From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu]
> Sent: vrijdag 28 juni 2019 17:26
> To: ceph-users
    > Subject: *SPAM* [ceph-users] Migrating a cephfs data pool
>
> This seems to be an issue that gets brought up repeatedly, but I 
> haven't seen a definitive answer yet. So, at the risk of repeating a 
> question that has already been asked:
>
> How do you migrate a cephfs data pool to a new data pool? The obvious 
> case would be somebody that has set up a replicated pool for their 
> cephfs data and then wants to convert it to an erasure code pool. Is 
> there a simple way to do this, other than creating a whole new ceph 
> cluster and copying the data using rsync?
>
> Thanks for any clues
>
> Jorge
>
> ___
> ceph-users mailing list
> cep

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Robert LeBlanc
Given that the MDS knows everything, it seems trivial to add a ceph 'mv'
command to do this. I looked at using tiering to try and do the move, but I
don't know how to tell cephfs that the data is now on the new pool instead
of under the old pool name. Since we can't take a long enough downtime to
move hundreds of terabytes, we need something that can be done online; a
minute or two of downtime would be okay.

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Jun 28, 2019 at 9:02 AM Marc Roos  wrote:

>
>
> 1.
> change data pool for a folder on the file system:
> setfattr -n ceph.dir.layout.pool -v fs_data.ec21 foldername
>
> 2.
> cp /oldlocation /foldername
> Remember that you would preferably use mv, but this leaves (meta)data
> on the old pool, which is not what you want when you want to delete that
> pool.
>
> 3. When everything is copied-removed, you should end up with an empty
> datapool with zero objects.
>
> 4. Verify here with others whether you can just remove this one.
>
> I think this is a reliable technique to switch, because you use the
> basic cephfs functionality that is supposed to work. I prefer that the ceph
> guys implement a mv that does what you expect from it. Now it acts more
> or less like linking.
>
>
>
>
> -Original Message-
> From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu]
> Sent: vrijdag 28 juni 2019 17:52
> To: Marc Roos; ceph-users
> Subject: Re: [ceph-users] Migrating a cephfs data pool
>
> Are you talking about adding the new data pool to the current
> filesystem? Like:
>
>    $ ceph fs add_data_pool my_ceph_fs new_ec_pool
>
> I have done that, and now the filesystem shows up as having two data
> pools:
>
>    $ ceph fs ls
>    name: my_ceph_fs, metadata pool: cephfs_meta, data pools:
> [cephfs_data new_ec_pool ]
>
> but then I run into two issues:
>
> 1. How do I actually copy/move/migrate the data from the old pool to the
> new pool?
> 2. When I'm done moving the data, how do I get rid of the old data pool?
>
> I know there's a rm_data_pool option, but I have read on the mailing
> list that you can't remove the original data pool from a cephfs
> filesystem.
>
> The other option is to create a whole new cephfs with a new metadata
> pool and the new data pool, but creating multiple filesystems is still
> experimental and not allowed by default...
>
> On 6/28/19 8:28 AM, Marc Roos wrote:
> >
> > What about adding the new data pool, mounting it and then moving the
> > files? (read: copy, because a move between data pools does not do what
> > you expect)
> >
> >
> > -Original Message-
> > From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu]
> > Sent: vrijdag 28 juni 2019 17:26
> > To: ceph-users
> > Subject: *SPAM* [ceph-users] Migrating a cephfs data pool
> >
> > This seems to be an issue that gets brought up repeatedly, but I
> > haven't seen a definitive answer yet. So, at the risk of repeating a
> > question that has already been asked:
> >
> > How do you migrate a cephfs data pool to a new data pool? The obvious
> > case would be somebody that has set up a replicated pool for their
> > cephfs data and then wants to convert it to an erasure code pool. Is
> > there a simple way to do this, other than creating a whole new ceph
> > cluster and copying the data using rsync?
> >
> > Thanks for any clues
> >
> > Jorge
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Marc Roos
 

1.
change data pool for a folder on the file system:
setfattr -n ceph.dir.layout.pool -v fs_data.ec21 foldername

2. 
cp /oldlocation /foldername
Remember that you would preferably use mv, but this leaves (meta)data 
on the old pool, which is not what you want when you want to delete that 
pool.

3. When everything is copied-removed, you should end up with an empty 
datapool with zero objects. 

4. Verify here with others whether you can just remove this one.

I think this is a reliable technique to switch, because you use the 
basic cephfs functionality that is supposed to work. I prefer that the ceph 
guys implement a mv that does what you expect from it. Now it acts more 
or less like linking.
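
As a concrete, hedged illustration of steps 1-3 (the directory paths and the
filesystem name my_ceph_fs are just examples, /mnt/cephfs/newdir is assumed
to be a new empty directory, and the EC pool is assumed to already exist):

  # ceph osd pool set fs_data.ec21 allow_ec_overwrites true
  # ceph fs add_data_pool my_ceph_fs fs_data.ec21
  # setfattr -n ceph.dir.layout.pool -v fs_data.ec21 /mnt/cephfs/newdir
  # getfattr -n ceph.dir.layout.pool /mnt/cephfs/newdir
  # cp -a /mnt/cephfs/olddir/. /mnt/cephfs/newdir/
  # rados df | grep fs_data.ec21

allow_ec_overwrites is needed before an erasure-coded pool can be used as a
cephfs data pool, and the rados df line just confirms that objects are
landing in the new pool as the copy runs.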




-Original Message-
From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu] 
Sent: vrijdag 28 juni 2019 17:52
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Migrating a cephfs data pool

Are you talking about adding the new data pool to the current 
filesystem? Like:

   $ ceph fs add_data_pool my_ceph_fs new_ec_pool

I have done that, and now the filesystem shows up as having two data 
pools:

   $ ceph fs ls
   name: my_ceph_fs, metadata pool: cephfs_meta, data pools: 
[cephfs_data new_ec_pool ]

but then I run into two issues:

1. How do I actually copy/move/migrate the data from the old pool to the 
new pool?
2. When I'm done moving the data, how do I get rid of the old data pool? 

I know there's a rm_data_pool option, but I have read on the mailing 
list that you can't remove the original data pool from a cephfs 
filesystem.

The other option is to create a whole new cephfs with a new metadata 
pool and the new data pool, but creating multiple filesystems is still 
experimental and not allowed by default...

On 6/28/19 8:28 AM, Marc Roos wrote:
>   
> What about adding the new data pool, mounting it and then moving the 
> files? (read: copy, because a move between data pools does not do what 
> you expect)
>
>
> -Original Message-
> From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu]
> Sent: vrijdag 28 juni 2019 17:26
> To: ceph-users
> Subject: *SPAM* [ceph-users] Migrating a cephfs data pool
>
> This seems to be an issue that gets brought up repeatedly, but I 
> haven't seen a definitive answer yet. So, at the risk of repeating a 
> question that has already been asked:
>
> How do you migrate a cephfs data pool to a new data pool? The obvious 
> case would be somebody that has set up a replicated pool for their 
> cephfs data and then wants to convert it to an erasure code pool. Is 
> there a simple way to do this, other than creating a whole new ceph 
> cluster and copying the data using rsync?
>
> Thanks for any clues
>
> Jorge
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
Are you talking about adding the new data pool to the current 
filesystem? Like:


  $ ceph fs add_data_pool my_ceph_fs new_ec_pool

I have done that, and now the filesystem shows up as having two data pools:

  $ ceph fs ls
  name: my_ceph_fs, metadata pool: cephfs_meta, data pools: 
[cephfs_data new_ec_pool ]


but then I run into two issues:

1. How do I actually copy/move/migrate the data from the old pool to the 
new pool?
2. When I'm done moving the data, how do I get rid of the old data pool? 
I know there's a rm_data_pool option, but I have read on the mailing 
list that you can't remove the original data pool from a cephfs filesystem.


The other option is to create a whole new cephfs with a new metadata 
pool and the new data pool, but creating multiple filesystems is still 
experimental and not allowed by default...
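
For completeness: if you really did want the second-filesystem route,
multiple filesystems can be switched on explicitly. A hedged sketch (the
pool names here are only examples, and the experimental caveat still
applied at the time):

  # ceph fs flag set enable_multiple true --yes-i-really-mean-it
  # ceph fs new my_second_fs cephfs_meta2 new_ec_pool

That said, the add_data_pool plus directory-layout approach discussed
elsewhere in the thread avoids the need for a second filesystem entirely.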


On 6/28/19 8:28 AM, Marc Roos wrote:
  
What about adding the new data pool, mounting it and then moving the
files? (read: copy, because a move between data pools does not do what
you expect)


-Original Message-
From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu]
Sent: vrijdag 28 juni 2019 17:26
To: ceph-users
Subject: *SPAM* [ceph-users] Migrating a cephfs data pool

This seems to be an issue that gets brought up repeatedly, but I haven't
seen a definitive answer yet. So, at the risk of repeating a question
that has already been asked:

How do you migrate a cephfs data pool to a new data pool? The obvious
case would be somebody that has set up a replicated pool for their
cephfs data and then wants to convert it to an erasure code pool. Is
there a simple way to do this, other than creating a whole new ceph
cluster and copying the data using rsync?

Thanks for any clues

Jorge

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Marc Roos
 
What about adding the new data pool, mounting it and then moving the 
files? (read: copy, because a move between data pools does not do what 
you expect)


-Original Message-
From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu] 
Sent: vrijdag 28 juni 2019 17:26
To: ceph-users
Subject: *SPAM* [ceph-users] Migrating a cephfs data pool

This seems to be an issue that gets brought up repeatedly, but I haven't 
seen a definitive answer yet. So, at the risk of repeating a question 
that has already been asked:

How do you migrate a cephfs data pool to a new data pool? The obvious 
case would be somebody that has set up a replicated pool for their 
cephfs data and then wants to convert it to an erasure code pool. Is 
there a simple way to do this, other than creating a whole new ceph 
cluster and copying the data using rsync?

Thanks for any clues

Jorge

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
This seems to be an issue that gets brought up repeatedly, but I haven't 
seen a definitive answer yet. So, at the risk of repeating a question 
that has already been asked:


How do you migrate a cephfs data pool to a new data pool? The obvious 
case would be somebody that has set up a replicated pool for their 
cephfs data and then wants to convert it to an erasure code pool. Is 
there a simple way to do this, other than creating a whole new ceph 
cluster and copying the data using rsync?


Thanks for any clues

Jorge

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com