through the geo-replication blueprint [1], I am thinking that we
can leverage the effort and, instead of replicating the data into another ceph
cluster, make it replicate to another data store. At the same time, I have a
couple of questions which need your help:
1) How does the radosgw-agent scale?
Dear all,
Geo-Replication and Disaster Recovery data replication
https://wiki.ceph.com/Planning/Blueprints/Dumpling/RGW_Geo-Replication_and_Disaster_Recovery
has been said to be 'currently slated for the Dumpling release and
implementation is currently underway'.
Is there any news about this, or any more information on an ETA?
storing in rbd (type, volume)?
Neil
On Wed, Jan 30, 2013 at 10:42 PM, Skowron Sławomir
slawomir.skow...@grupaonet.pl wrote:
I make a new thread, because I think it's a different case.
We have managed async geo-replication of the s3 service, between two ceph
clusters in two DCs
Hi, now I can respond, after I was sick.
Nginx is compiled with perl or lua support. Inside the nginx configuration
there is a hook for perl code, or lua code, as you prefer. This code runs
inline. We have tested this from logs, but it's not a
good idea. Now, with the inline option, we have
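For anyone curious what the hook boils down to, here is a rough Python
sketch of the same idea using boto over the plain s3 API. This is not the
actual nginx perl/lua code; the hosts, bucket names and credentials below
are made up:

    import boto
    import boto.s3.connection

    def connect(host, access_key, secret_key):
        # Plain S3 connection to a radosgw endpoint (or to amazon s3).
        # A plain-http radosgw would also need is_secure=False.
        return boto.connect_s3(
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
            host=host,
            calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

    # Hypothetical endpoints and keys, just for the sketch.
    src = connect('rgw.dc1.example.com', 'SRC_KEY', 'SRC_SECRET')
    replicas = [connect('rgw.dc2.example.com', 'DST_KEY', 'DST_SECRET'),
                connect('s3.amazonaws.com', 'AWS_KEY', 'AWS_SECRET')]

    def mirror_object(bucket_name, key_name):
        # Read the object once from the primary and push it to every replica.
        data = src.get_bucket(bucket_name).get_key(key_name).get_contents_as_string()
        for conn in replicas:
            bucket = conn.lookup(bucket_name) or conn.create_bucket(bucket_name)
            bucket.new_key(key_name).set_contents_from_string(data)

In their setup the hook fires per request inside nginx and the replication
is asynchronous; the sketch only shows the copy step itself.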
On Thu, Jan 31, 2013 at 9:25 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/31 Skowron Sławomir slawomir.skow...@grupaonet.pl:
We have managed async geo-replication of the s3 service, between two ceph
clusters in two DCs, and to amazon s3 as a third. All this via the s3 API. I would love
to see native RGW geo-replication with the described features in another thread.
There is another case. What about RBD replication? It's much more
complicated
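(Not native support, but for completeness: block devices can be shipped
between clusters by hand with incremental snapshots. A rough sketch with
made-up pool, image and host names; note that export-diff/import-diff only
appeared in later rbd releases:)

    # On the primary cluster: take a new snapshot of the image.
    rbd snap create rbd/myimage@rep-2
    # Ship only the delta since the previously replicated snapshot to the
    # other DC and apply it to the remote copy of the image.
    rbd export-diff --from-snap rep-1 rbd/myimage@rep-2 - | \
        ssh backup-dc rbd import-diff - rbd/myimage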
(which may or
may not be DC-local).
They aren't going to do this, though — each gateway will communicate with the
primaries directly.
I don't know what the timeline is, but Yehuda recently proposed the idea of
master and slave zones (subsets of a cluster) and other changes to facilitate
rgw geo-replication.
On Monday, January 28, 2013 at 9:54 AM, Ben Rowland wrote:
Hi,
I'm considering using Ceph to create a cluster across several data
centres, with the strict requirement that writes should go to both
DCs. This seems possible by specifying rules in the CRUSH map, with
an understood latency hit
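For what it's worth, a minimal sketch of such a CRUSH rule, assuming the
CRUSH map defines a 'datacenter' bucket type and both DCs sit under the
default root (the names here are illustrative):

    rule replicate_across_dcs {
        ruleset 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step chooseleaf firstn 0 type datacenter
        step emit
    }

With pool size 2 this places one replica in each DC, and a write is only
acknowledged once both replicas are on disk, which is where the latency
hit comes from.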
Currently it is assumed that Ceph clusters are concentrated in a
single geographical location. Thus, the rados gateway, which leverages
the Ceph object store (RADOS) as its backend, is limited to a
single location. There are two main issues that we would like to
solve:
- Disaster recovery
On Wed, Jan 9, 2013 at 1:33 PM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/9 Mark Kampe mark.ka...@inktank.com:
Asynchronous RADOS replication is definitely on our list,
but more complex and farther out.
Do you have any ETA?
1 month? 6 months? 1 year?
No, but