Hi oo4load,
I have a question that has confused me for quite a long time.
As we all know, ZooKeeper servers frequently take snapshots while
processing requests.
When a ZK server replays a snapshot that contains a transaction that was
already executed before the snapshot, will the transaction be executed again?
Please reply to my private mail address from now on.
On Thu, Oct 10, 2019 at 5:01 AM Gao,Wei wrote:
Hi Chris,
I received your code for the ZooKeeper balancer. It seems that a few Java
class files are missing:
nl.ing.profileha.util.EventCreator;
nl.ing.profileha.util.FailsafeTriggeredException;
nl.ing.profileha.util.StringUtils;
nl.ing.profileha.util.Validator;
I sent it again; please check.
On Wed, Oct 9, 2019 at 6:31 AM Gao,Wei wrote:
Hi oo4load,
Where did you send it to? Through this site, or directly to my email?
I received your pseudocode last week; it looks like this:

buildDatacenterAndServerModel(configurationFile) {
    enum zookeeperRole { PARTICIPANT, OBSERVER, NONE, DOWN }
    object datacenter has servers
I sent it 1 week ago.
On Tue, Oct 8, 2019 at 10:08 AM Gao,Wei wrote:
Hi oo4load,
If it is convenient for you, I would like to get the actual code for the
ZooKeeper cluster balancer implementation from you. My email address is:
wei@arcserve.com
Thank you again.
--
Sent from: http://zookeeper-user.578899.n2.nabble.com/
Hi oo4load,
Would you please send me the actual code of the implementation?
Thank you very much!
No problem, I will send it to you on Monday.
On 29 September 2019 04:30:28 "Gao,Wei" wrote:
Hi oo4load,
If it is convenient for you, I would like to get the actual code for the
ZooKeeper cluster balancer implementation from you. My email address is:
wei@arcserve.com
Thank you again.
Hi oo4load,
Got it. Thanks a lot!
No, you have to build a ZooKeeper cluster manager client using my code. It's
a ZooKeeper client.
On 27 September 2019 10:44:51 "Gao,Wei" wrote:
Hi oo4load,
How could we integrate this implementation with ZooKeeper 3.5.5? Does it
mean we have to mix the implementation code into the already released
ZooKeeper 3.5.5, rebuild it into another ZooKeeper, and re-install it?
Thanks.
Hi oo4load,
Thank you so much for your reply!
I would really like to study your design together with the actual code.
I look forward to hearing from you.
Let me write this from memory. :)
We have the following:
- A running ZooKeeper cluster with the AdminServer enabled
- One or more balancer client processes (one per datacenter), of which one
holds the master role through leader election. The master does the work;
the others do nothing.
- In our case,
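The master's core decision can be sketched as follows. This is my reconstruction, not oo4load's actual code: the class and method names are hypothetical, and I assume a status poll that tells us which servers are reachable. Any participant that has stopped responding is scheduled for demotion to observer in the next dynamic reconfig, matching the behaviour described elsewhere in the thread.

```java
// Hypothetical sketch of the master balancer's decision step.
// Role values follow the pseudocode quoted earlier in the thread.
import java.util.*;

public class BalancerSketch {
    enum Role { PARTICIPANT, OBSERVER, NONE, DOWN }

    // Given each server's last known role and the set of servers that still
    // answer the status poll, return the ids of dead participants to demote.
    static List<Integer> participantsToDemote(Map<Integer, Role> observed,
                                              Set<Integer> reachable) {
        List<Integer> demote = new ArrayList<>();
        for (Map.Entry<Integer, Role> e : observed.entrySet()) {
            if (e.getValue() == Role.PARTICIPANT && !reachable.contains(e.getKey())) {
                demote.add(e.getKey());  // dead participant: demote before quorum is at risk
            }
        }
        Collections.sort(demote);
        return demote;
    }

    public static void main(String[] args) {
        Map<Integer, Role> observed = new HashMap<>();
        for (int id = 1; id <= 5; id++) observed.put(id, Role.PARTICIPANT);
        // Servers 4 and 5 stopped answering the status call.
        Set<Integer> reachable = new HashSet<>(Arrays.asList(1, 2, 3));
        System.out.println(participantsToDemote(observed, reachable)); // [4, 5]
    }
}
```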
Hi oo4load,
Could you please tell me how to implement this to avoid the problem above?
Thanks
We have 3+3 servers, of which 1 is a floating observer in the non-target
datacenter, with automatic reconfiguration to more observers if we are
losing participants.
If the target datacenter blows up this doesn't work, but our main
application will be able to serve customers in a read-only state until
operators
if there was some way for it
to be a voting member only and not bear any data (similar to MongoDB's arbiter).
-Original Message-
From: Cee Tee [mailto:c.turks...@gmail.com]
Sent: Wednesday, August 21, 2019 1:27 PM
To: Alexander Shraer
Cc: user@zookeeper.apache.org
Subject: Re: About ZooKeeper
On Wed, 21 Aug 2019, 20:27 Cee Tee wrote:
Yes, one side loses quorum and the other remains active. However, we
actively control which side that is, because our main application is
active/passive across 2 datacenters. We need ZooKeeper to remain active in
the application's active datacenter.
On Wed, 21 Aug 2019, 17:22 Alexander Shraer wrote:
That's great! Thanks for sharing.

> Added benefit is that we can also control which data center gets the
> quorum in case of a network outage between the two.

Can you explain how this works? In case of a network outage between two
DCs, one of them has a quorum of participants and the other
We have solved this by implementing a 'zookeeper cluster balancer': it
calls the AdminServer API of each ZooKeeper server to get the current
status, and issues dynamic reconfigure commands to change dead servers into
observers so the quorum is not in danger. Once the dead servers reconnect,
they
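For reference, the member strings handed to such a dynamic reconfigure (the `reconfig` command in zkCli, or `ZooKeeperAdmin.reconfigure` in the Java client API) use the `server.<id>=` syntax. A minimal sketch of building one; the host and the quorum/election/client ports (2888/3888/2181) here are assumptions for illustration, not values from the thread:

```java
// Hypothetical helper: build one member line for a dynamic reconfig, e.g. to
// demote a dead server to observer. Syntax:
//   server.<id>=<host>:<quorumPort>:<electionPort>:<role>;<clientPort>
public class ReconfigSpec {
    static String spec(int id, String host, String role) {
        return "server." + id + "=" + host + ":2888:3888:" + role + ";2181";
    }

    public static void main(String[] args) {
        // Demote dead server 4 to observer in the new membership.
        System.out.println(spec(4, "10.0.0.4", "observer"));
        // server.4=10.0.0.4:2888:3888:observer;2181
    }
}
```

A list of such strings is what gets passed as the new membership (or as the joining/leaving sets) to the reconfigure call.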
Hi,
Reconfiguration, as implemented, is not automatic. In your case, when
failures happen, this doesn't change the ensemble membership.
When 2 of 5 fail, this is still a minority, so everything should work
normally; you just won't be able to handle an additional failure. If you'd
like to remove