Thank you!
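For anyone finding this thread later, the quorum arithmetic Camille describes below (why an even number of data centers can't survive a DC loss) can be sketched like this. This is my own back-of-envelope check, assuming equal server counts per DC, not anything from ZooKeeper itself:

```python
# Majority-quorum math for a ZooKeeper ensemble spread across data centers.
# With an even number of DCs holding equal server counts, losing one DC can
# leave exactly half the ensemble, which is NOT a majority, so quorum is lost.

def survives_dc_loss(servers_per_dc, num_dcs):
    """True if the ensemble keeps a majority quorum after losing any one DC."""
    total = servers_per_dc * num_dcs
    remaining = total - servers_per_dc  # worst case: one whole DC goes dark
    majority = total // 2 + 1
    return remaining >= majority

# 3 servers in each of 2 DCs: 6 total, majority is 4, only 3 survive. Lost.
print(survives_dc_loss(3, 2))  # False
# 3 servers in each of 3 DCs: 9 total, majority is 5, 6 survive. OK.
print(survives_dc_loss(3, 3))  # True
```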

On Wed, May 9, 2012 at 1:46 PM, Camille Fournier <[email protected]> wrote:

> You can't do ZK guaranteed live at all times across an even number of
> data centers. If you want to guarantee quorum even if you lose a
> datacenter you need an odd number of datacenters for your quorum, in
> your case, that would be 3.
> I don't have load numbers available to share unfortunately, and really
> ZK load across DCs depends quite a bit on the hardware setup and
> network, but I suspect that you will be totally fine. 1000 locks a
> minute is not very high load, and 10 clients is pretty minimal.
>
> C
>
> On Wed, May 9, 2012 at 4:37 PM, Narayanan A R
> <[email protected]> wrote:
> > It is between the data centers. So the BCP requirement is to keep
> > offering locks reliably for all data centers (about 2 to 4 data
> > centers) even if the network connectivity between the data centers
> > goes down or servers die in one DC.
> >
> > The load is spiky and not constant. The worst-case peak could be
> > about 1000 locks every minute or so. There will be about 10 clients
> > in total. The ping time between data centers will be on the order of
> > milliseconds.
> >
> > Could you share your numbers, if that's ok?
> >
> > I believe there will be 3 servers per data center, with one leader
> > whose locality depends on who wins the election; all the write
> > requests go to that leader. So potentially all write requests travel
> > across data centers to get to the leader, and then the replication
> > data is spread out to all followers in all the data centers as well.
> >
> > ARN
> >
> > On Wed, May 9, 2012 at 7:16 AM, Camille Fournier <[email protected]>
> > wrote:
> >
> >> What are your BCP requirements? Do you need to span clusters because
> >> you need continued availability if one cluster goes down? What write
> >> throughput do you expect to need, how many clients do you anticipate
> >> serving, and how many locks will they need? Write throughput does go
> >> down when you span clusters, but it's not as bad as you might think,
> >> unless your ping time between clusters is very long. I supported
> >> cross-datacenter clusters doing quite respectable write throughput
> >> (sorry, I don't have any numbers handy, but it was much more capacity
> >> than my service needed), so I wouldn't overdesign your system before
> >> checking the throughput you could get using a simple setup.
> >>
> >> C
> >>
> >> On Tue, May 8, 2012 at 11:27 PM, Narayanan A R
> >> <[email protected]> wrote:
> >> > Imagine the locks recipe needs to be used to synchronize resources
> >> > across data centers. One option is to span the ensemble across all
> >> > the data centers, but I am afraid this will significantly reduce
> >> > the write throughput. The alternative is to set up ZK in one data
> >> > center and have all the clients talk to the same cluster. Even with
> >> > this approach the clients need to keep a connection open to a
> >> > different data center. What I have in mind is to make the requests
> >> > stateless and have a service offer locks.
> >> >
> >> > On Tue, May 8, 2012 at 6:42 AM, Camille Fournier <[email protected]>
> >> > wrote:
> >> >
> >> >> It can, but it depends on what you're doing. If you want to give us
> >> >> some more information on your proposed use case we can maybe help you
> >> >> more.
> >> >>
> >> >> C
> >> >>
> >> >> On Tue, May 8, 2012 at 3:21 AM, Narayanan A R
> >> >> <[email protected]> wrote:
> >> >> > Does ZK fit well for coordination across data centers?
> >> >>
> >>
>
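The write path described in the thread (every write forwarded to the leader, then proposed to followers, committing once a majority has ACKed) can be roughly modeled to see why cross-DC latency isn't always as bad as feared. This is my own simplification, ignoring local processing and the final commit notification:

```python
# Rough model of one ZooKeeper write when the leader may be in a remote DC:
# the request hops to the leader, the leader proposes to followers and waits
# for ACKs from a majority of the ensemble (counting itself), then commits.

def write_latency_ms(client_to_leader_rtt, leader_to_follower_rtts):
    """Approximate latency (ms) of one write, ignoring processing time.

    The leader counts toward the majority, so it only waits for the
    fastest (majority - 1) follower round trips.
    """
    ensemble = len(leader_to_follower_rtts) + 1  # followers plus the leader
    majority = ensemble // 2 + 1
    needed_acks = majority - 1
    slowest_needed = sorted(leader_to_follower_rtts)[needed_acks - 1]
    return client_to_leader_rtt + slowest_needed

# 5-node ensemble: two followers in the leader's DC (2 ms RTT), two remote
# (40 ms). The majority can be assembled locally, so cross-DC RTT barely
# shows up in the commit path.
print(write_latency_ms(40, [2, 2, 40, 40]))  # 42
```

The point this illustrates matches Camille's observation: if a majority of voters is reachable over fast links, the slow cross-DC legs mostly affect replication lag, not commit latency.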