On Mon, Feb 26, 2018 at 9:56 AM, Eugen Block wrote:
> I'm following up on the rbd export/import option with a little delay.
>
> The fact that the snapshot is not protected after the image is
> reimported is not a big problem; you could deal with that or wait for
> a fix. But there's one major problem with this method: the VMs lose
> their rbd_children relationships.
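(For reference: "rbd_children" here means the parent/child links of layered
clones, which an export/import round trip does not preserve. A rough sketch
of how one might inspect and, where acceptable, sever those links before
migrating -- the pool, image, and snapshot names are placeholders:)

  # list the clones that depend on a parent snapshot
  $ rbd children rbd/base-image@snap1

  # flatten a clone so it no longer references its parent
  # (copies all parent data into the clone, so it costs space)
  $ rbd flatten rbd/vm-disk-1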
Cumulative followup to various insightful replies.

I wrote:
>>> No, it's not really possible currently and we have no plans to add
>>> such support since it would not be of any long-term value.
>>
>> The long-term value would be the ability to migrate volumes from, say, a
>> replicated pool ...
"Anthony D'Atri" <a...@dreamsnake.net>
To: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Thursday, February 22, 2018 01:27:23
Subject: Re: [ceph-users] Migrating to new pools
>> I was thinking we might be able to configure/hack rbd mirroring to mirror to
>> a pool on the same cluster but I gather from the OP and your post that this
>> is not really possible?
>
> No, it's not really possible currently and we have no plans to add
> such support since it would not be of any long-term value.
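(For context: rbd mirroring is configured between two clusters, and a peer is
identified by a cluster name rather than a pool, which is presumably why a
same-cluster target isn't supported. A rough sketch of the normal two-cluster
setup -- pool, cluster, and client names are placeholders:)

  # on both clusters: enable pool-level mirroring for pool "rbd"
  $ rbd mirror pool enable rbd pool

  # on the backup cluster: register the primary cluster as a peer,
  # then run the rbd-mirror daemon there to pull the changes
  $ rbd mirror pool peer add rbd client.admin@primary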
On Tue, Feb 20, 2018 at 8:35 PM, Rafael Lopez wrote:
> Hi Jason,
>
>> There is also work-in-progress for online
>> image migration [1] that will allow you to keep using the image while
>> it's being migrated to a new destination image.
>
> Is there any recommended method/workaround for live rbd migration in
> Luminous? E.g. snapshot/copy or export/import?
>
> Thanks!
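(A rough sketch of the snapshot/copy variant being asked about, with
placeholder pool and image names; note that "rbd cp" copies only the given
image or snapshot and does not carry the image's other snapshots along:)

  # take a snapshot to get a consistent source, then copy it
  $ rbd snap create rbd/vm-disk-1@migrate
  $ rbd cp rbd/vm-disk-1@migrate newpool/vm-disk-1

The deep export/import described further down preserves snapshots as well.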
On Mon, Feb 19, 2018 at 10:33 AM, Eugen Block wrote:
> Hi,
>
> I created a ticket for the rbd import issue:
>
> https://tracker.ceph.com/issues/23038
>
> Regards,
> Eugen
On Fri, Feb 16, 2018 at 11:20 AM, Eugen Block wrote:
> Hi Jason,
>
>> ... also forgot to mention "rbd export --export-format 2" / "rbd
>> import --export-format 2" that will also deeply export/import all
>> snapshots associated with an image and that feature is available in
>> the Luminous release.
>
> Thanks for that information, this could be very valuable for ...
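(A minimal sketch of that deep export/import, piping one image together with
all of its snapshots into a new pool; pool and image names are placeholders,
and both ends need Luminous:)

  $ rbd export --export-format 2 rbd/vm-disk-1 - \
      | rbd import --export-format 2 - newpool/vm-disk-1

Per the followup at the top of the thread, be aware that the snapshots arrive
unprotected and clone relationships are not preserved.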
On Fri, Feb 16, 2018 at 5:36 AM, Jens-U. Mozdzen wrote:
> Dear list, hello Jason,
>
> you may have seen my message on the Ceph mailing list about RBD pool
> migration - it's a common problem that pools were created in a
> sub-optimal fashion and, e.g., pg_num is not (yet) reducible, so we're
> looking into means to "clone" an RBD pool into a new pool within ...
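(A rough sketch of such a pool-level "clone" built from the per-image deep
export/import above; "oldpool"/"newpool" are placeholders, and the images
must be idle while they are copied:)

  # copy every image, snapshots included, into the new pool
  for img in $(rbd ls oldpool); do
      rbd export --export-format 2 "oldpool/$img" - \
          | rbd import --export-format 2 - "newpool/$img"
  done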
Hi,

If the problem is not severe and you can wait, then according to this:
http://ceph.com/community/new-luminous-pg-overdose-protection/
there is a PG merge feature coming.

Regards,
Denes.

On 12/18/2017 02:18 PM, Jens-U. Mozdzen wrote:
> Hi *,
>
> facing the problem to reduce the number of ...
A possible option. They do not recommend using cppool:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011460.html

**COMPLETELY UNTESTED AND DANGEROUS**
- stop all MDS daemons
- delete your filesystem (but leave the pools)
- use "rados export" and "rados import" to do a full copy of the ...
Hi *,

facing the problem to reduce the number of PGs for a pool, I've found
various information and suggestions, but no "definitive guide" to
handling pool migration with Ceph 12.2.x. This seems to be a fairly
common problem when having to deal with "teen-age clusters", so consolidated ...
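(For context on why migration comes up at all: in Ceph 12.2.x a pool's pg_num
can be increased but not decreased, so shrinking PGs means moving the data to
a freshly created pool. Pool names and the PG count are placeholders:)

  # inspect the current PG count of the oversized pool
  $ ceph osd pool get oldpool pg_num

  # create the migration target with the desired, smaller PG count
  $ ceph osd pool create newpool 64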