Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-24 Thread Craig Lewis
Yehuda, are there any potential problems there? I'm wondering if duplicate bucket names that don't have the same contents might cause problems? Would the second cluster be read-only while replication is running? Robin, are the mtimes in Cluster B's S3 data important? Just wondering if it would
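Craig's duplicate-bucket concern can be checked up front by diffing the bucket name lists from the two clusters. A minimal sketch, assuming each cluster's bucket names have been dumped one per line to a text file (e.g. with `radosgw-admin bucket list` on each side); the file names and sample bucket names here are hypothetical:

```shell
# Hypothetical inputs: one bucket name per line from each cluster,
# e.g. dumped via `radosgw-admin bucket list` on each side.
printf 'alpha\nbeta\nshared\n' > buckets-a.txt
printf 'gamma\nshared\n' > buckets-b.txt

# comm requires sorted input.
sort buckets-a.txt > a.sorted
sort buckets-b.txt > b.sorted

# Names present in BOTH clusters: these are the potential merge
# conflicts whose contents must be reconciled before replication.
comm -12 a.sorted b.sorted    # -> shared
```

Any name printed exists in both clusters; if the two buckets' contents differ, that bucket has to be renamed or reconciled before the merge.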

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-24 Thread Yehuda Sadeh
On Wed, Sep 24, 2014 at 11:17 AM, Craig Lewis cle...@centraldesktop.com wrote: Yehuda, are there any potential problems there? I'm wondering if duplicate bucket names that don't have the same contents might cause problems? Would the second cluster be read-only while replication is running? I

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-24 Thread Robin H. Johnson
On Wed, Sep 24, 2014 at 11:31:29AM -0700, Yehuda Sadeh wrote: On Wed, Sep 24, 2014 at 11:17 AM, Craig Lewis cle...@centraldesktop.com wrote: Yehuda, are there any potential problems there? I'm wondering if duplicate bucket names that don't have the same contents might cause problems?

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-24 Thread Yehuda Sadeh
On Wed, Sep 24, 2014 at 2:12 PM, Robin H. Johnson robb...@gentoo.org wrote: On Wed, Sep 24, 2014 at 11:31:29AM -0700, Yehuda Sadeh wrote: On Wed, Sep 24, 2014 at 11:17 AM, Craig Lewis cle...@centraldesktop.com wrote: Yehuda, are there any potential problems there? I'm wondering if duplicate

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-23 Thread Mikaël Cluseau
On 09/22/2014 05:17 AM, Robin H. Johnson wrote: Can somebody else make comments about migrating S3 buckets with preserved mtime data (and all of the ACLs/CORS) then? I don't know how radosgw objects are stored, but have you considered a lower-level rados export/import? IMPORT AND EXPORT
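Mikaël's lower-level suggestion would look roughly like the following. This is a sketch only, under stated assumptions: the default RGW data pool name `.rgw.buckets`, hypothetical conf paths and dump directory. A rados-level export/import copies raw objects together with their xattrs/omap (so mtime-bearing metadata survives), but it does not cover the separate RGW index/metadata pools and cannot resolve bucket- or object-name collisions between the clusters.

```shell
# Sketch: dump the raw RGW data pool from cluster B, then load it into
# cluster A. Conf paths, pool name, and dump directory are assumptions.
rados -c /etc/ceph/cluster-b.conf export .rgw.buckets /dump/rgw-buckets
rados -c /etc/ceph/cluster-a.conf import /dump/rgw-buckets .rgw.buckets
```

The same export/import would have to be repeated for each RGW pool (index, user metadata, etc.) for the result to be coherent.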

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-23 Thread John Nielsen
I would: Keep Cluster A intact and migrate it to your new hardware. You can do this with no downtime, assuming you have enough IOPS to support data migration and normal usage simultaneously. Bring up the new OSDs and let everything rebalance, then remove the old OSDs one at a time. Replace the
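The drain-and-remove half of John's procedure can be sketched for one old OSD as follows; the id `3` and the sysvinit-style service call are illustrative, not taken from the thread:

```shell
# Drain one old OSD, then remove it. Repeat per OSD, waiting for
# HEALTH_OK / all PGs active+clean between steps.
ceph osd out 3                # mark it out; data migrates off
# ...wait until `ceph -s` reports active+clean...
sudo service ceph stop osd.3  # stop the daemon on its host
ceph osd crush remove osd.3   # remove it from the CRUSH map
ceph auth del osd.3           # drop its cephx key
ceph osd rm 3                 # finally delete the OSD id
```

Removing old OSDs one at a time, as John suggests, keeps each rebalance small and bounds the IOPS cost of the migration.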

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-23 Thread Robin H. Johnson
On Tue, Sep 23, 2014 at 03:12:53PM -0600, John Nielsen wrote: Keep Cluster A intact and migrate it to your new hardware. You can do this with no downtime, assuming you have enough IOPS to support data migration and normal usage simultaneously. Bring up the new OSDs and let everything

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-23 Thread Yehuda Sadeh
On Tue, Sep 23, 2014 at 7:23 PM, Robin H. Johnson robb...@gentoo.org wrote: On Tue, Sep 23, 2014 at 03:12:53PM -0600, John Nielsen wrote: Keep Cluster A intact and migrate it to your new hardware. You can do this with no downtime, assuming you have enough IOPS to support data migration and

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-21 Thread Robin H. Johnson
On Sun, Sep 21, 2014 at 02:33:09PM +0900, Christian Balzer wrote: For a variety of reasons, none good anymore, we have two separate Ceph clusters. I would like to merge them onto the newer hardware, with as little downtime and data loss as possible; then discard the old hardware.

[ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-20 Thread Robin H. Johnson
For a variety of reasons, none good anymore, we have two separate Ceph clusters. I would like to merge them onto the newer hardware, with as little downtime and data loss as possible; then discard the old hardware. Cluster A (2 hosts): - 3TB of S3 content, 100k files, file mtimes important -

Re: [ceph-users] Merging two active ceph clusters: suggestions needed

2014-09-20 Thread Christian Balzer
On Sun, 21 Sep 2014 05:15:32 + Robin H. Johnson wrote: For a variety of reasons, none good anymore, we have two separate Ceph clusters. I would like to merge them onto the newer hardware, with as little downtime and data loss as possible; then discard the old hardware. Cluster A (2