Jun Rao:
Hi,
ZkClient (http://github.com/sgroschupf/zkclient) provides a nice wrapper
around the ZooKeeper client and handles things like retry during
ConnectionLoss events and auto-reconnect. Does anyone (other than Katta)
use it? Would people recommend using it? Thanks,
Jun
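For readers unfamiliar with ZkClient, the retry behavior Jun describes can be sketched roughly as follows. This is a minimal illustration only, not ZkClient's actual API; `ConnectionLossException` here is a stand-in for ZooKeeper's `KeeperException.ConnectionLossException`:

```java
import java.util.concurrent.Callable;

public class RetryLoop {
    // Stand-in for org.apache.zookeeper.KeeperException.ConnectionLossException.
    static class ConnectionLossException extends Exception {}

    // Retry an operation a fixed number of times, sleeping between attempts
    // to give the client a chance to reconnect. ConnectionLoss is transient,
    // so idempotent operations can simply be re-issued.
    static <T> T withRetries(Callable<T> op, int maxRetries, long sleepMs)
            throws Exception {
        ConnectionLossException last = null;
        for (int i = 0; i < maxRetries; i++) {
            try {
                return op.call();
            } catch (ConnectionLossException e) {
                last = e;               // transient: wait and try again
                Thread.sleep(sleepMs);
            }
        }
        throw last;                     // retries exhausted
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        // Simulated op: fails twice with ConnectionLoss, then succeeds.
        String result = withRetries(() -> {
            if (attempts[0]++ < 2) throw new ConnectionLossException();
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + attempts[0] + " attempts");
        // -> ok after 3 attempts
    }
}
```

The appeal of a wrapper like ZkClient is that this loop, plus re-registration of watches after reconnect, lives in one place instead of being repeated at every call site.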
Hi Jun,
I
Hi Andrei,
I needed to install the following:
apt-get install libtool autoconf libcppunit-dev
There could well be other packages that were already installed on my machine
(automake, gcc etc), but my build works now.
I have since found that zookeeper is already packaged in debian testing,
Hi Mahadev,
The suggestions from Sergey and Andrei have fixed this for me.
regards,
Martin
On 13 July 2010 19:11, Mahadev Konar maha...@yahoo-inc.com wrote:
Hi Martin,
There is a list of tools, i.e. cppunit. That is the only tool required to
build the ZooKeeper C library. The README says
Hi,
I am attempting to build the C client on debian lenny.
autoconf, configure, make and make install all appear to work cleanly.
I ran:
autoreconf -if
./configure
make
make install
make run-check
However, the unit tests fail:
$ make run-check
make zktest-st zktest-mt
make[1]: Entering
Hi Martin,
Can you check if you have a stale java process (ZooKeeperServer) running
on your machine? That might cause some issues with the tests.
Thanks
mahadev
On 7/14/10 8:03 AM, Martin Waite waite@gmail.com wrote:
Hi,
I am attempting to build the C client on debian lenny.
Thomas -
I like the ideas of your proposal, it seems very natural to use
Callable/Future for zk operations rather than something with more
opaque semantics (does this method block? etc.). Let's discuss this
more, I'd be more than happy to help out.
We're still using 3.2.1 so I'll probably have
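The Callable/Future shape under discussion might look like the sketch below. This is an illustration of the idea, not a proposed API; `zkRead` is a placeholder for a blocking ZooKeeper call such as `getData()`:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncOps {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    // Placeholder for a blocking ZooKeeper read such as getData().
    byte[] zkRead(String path) {
        return ("data@" + path).getBytes();
    }

    // Submitting the blocking call as a Callable makes the blocking
    // explicit at the call site: the caller holds a Future and decides
    // when (or whether) to wait on it.
    public Future<byte[]> getDataAsync(String path) {
        return pool.submit(() -> zkRead(path));
    }

    public static void main(String[] args) throws Exception {
        AsyncOps ops = new AsyncOps();
        Future<byte[]> f = ops.getDataAsync("/Config/Stats/count");
        System.out.println(new String(f.get()));
        ops.pool.shutdown();
    }
}
```

Compared with an API where some methods silently block and others don't, `Future` puts the "does this block?" question in the type.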
Hi All
I run into this periodically. I am curious to know what this means, why
it happens, and how I should react to it programmatically.
org.apache.thrift.TException:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /Config/Stats/count
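The usual programmatic reaction hinges on which error you got. ConnectionLoss is transient (the client may reconnect within the session, but the op may or may not have been applied), whereas SessionExpired is fatal to the session: ephemeral nodes and watches are gone and state must be rebuilt. A simplified sketch, using stand-in enums rather than the real `org.apache.zookeeper.KeeperException` codes:

```java
public class ErrorHandling {
    // Simplified stand-ins for org.apache.zookeeper.KeeperException codes.
    enum Code { CONNECTIONLOSS, SESSIONEXPIRED, NONODE }

    // ConnectionLoss: transient, so idempotent operations can be retried
    // once the client reconnects (non-idempotent ones must first check
    // whether the op actually took effect).
    // SessionExpired: the session is dead; reconnect with a new session
    // and re-create ephemeral nodes and watches.
    static String reactTo(Code c) {
        switch (c) {
            case CONNECTIONLOSS: return "retry";
            case SESSIONEXPIRED: return "rebuild-session";
            default:             return "application-error";
        }
    }

    public static void main(String[] args) {
        System.out.println(reactTo(Code.CONNECTIONLOSS)); // -> retry
    }
}
```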
Hi,
We are currently evaluating use of ZK in our infrastructure. In our
setup we have a set of servers running from two different power feeds.
If one power feed goes away, so does half of the servers. This makes it
problematic to configure a ZK ensemble that would tolerate such an outage.
The network
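For reference, the hierarchical quorum scheme discussed later in this thread is configured in zoo.cfg roughly like this (illustrative server ids; see the zookeeperHierarchicalQuorums doc linked below for exact syntax). Note that a quorum must include votes from a majority of groups, so two groups alone (one per feed) do not solve the half-the-servers problem; a small third group in a separate failure domain is what makes it workable:

```
# Three groups across three failure domains. A quorum needs a majority
# of groups, so with a third (even single-node) group on a separate
# feed, the ensemble can survive the loss of either main feed.
group.1=1:2:3
group.2=4:5:6
group.3=7
weight.1=1
weight.2=1
weight.3=1
weight.4=1
weight.5=1
weight.6=1
weight.7=1
```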
I'm running a Tornado webserver and using ZooKeeper to store some metadata and
occasionally the ZooKeeper connection will error out irrevocably. Any
subsequent calls to ZooKeeper from this process will result in a SystemError.
Here is the relevant portion of the Python traceback:
snip...
by custom QuorumVerifier are you referring to
http://hadoop.apache.org/zookeeper/docs/r3.3.1/zookeeperHierarchicalQuorums.html
?
ben
On 07/14/2010 12:43 PM, Sergei Babovich wrote:
Hi,
We are currently evaluating use of ZK in our infrastructure. In our
setup we have a set of servers running
Hi Sergei, I'm not sure what the implementation of QuorumVerifier you have in mind would look like to make your setting work. Even if you don't have partitions, variation in message delays can cause inconsistencies in your ZooKeeper cluster. Keep in mind that we make the assumption that quorums
Just another implementation of QuorumVerifier (based on an existing
implementation: either majority or hierarchical quorums). Probably the
hierarchical quorum is the simplest to adjust - it already has a notion of
groups, etc.
On 07/14/2010 04:46 PM, Benjamin Reed wrote:
by custom QuorumVerifier are you
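To make the problem concrete: the majority rule that the built-in verifier applies is essentially the check below (a sketch of the rule only, not ZooKeeper's actual QuorumVerifier interface), and it shows why an even split across two power feeds cannot survive the loss of either feed:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class MajorityCheck {
    // Sketch of the majority rule: a set of servers forms a quorum
    // iff it contains strictly more than half of the ensemble.
    static boolean containsQuorum(Set<Long> ack, int ensembleSize) {
        return ack.size() > ensembleSize / 2;
    }

    public static void main(String[] args) {
        // 4 servers split evenly across two feeds: losing one feed
        // leaves exactly half, which is not a quorum.
        Set<Long> oneFeed = new HashSet<>(Arrays.asList(1L, 2L));
        System.out.println(containsQuorum(oneFeed, 4)); // -> false
    }
}
```

Any custom verifier that declared exactly half the servers a quorum would risk two disjoint "quorums" making progress independently, which is the inconsistency Flavio warns about.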
Thanks, Flavio,
Yep... I see. This is a problem. Any better idea?
As an alternative option we could probably consider running a single ZK
node on EC2 - only in order to handle this specific case. Does it make
sense to you? Is it feasible? Would it result in considerable
performance impact due to
On Wed, Jul 14, 2010 at 2:16 PM, Sergei Babovich
sbabov...@demandware.comwrote:
Yep... I see. This is a problem. Any better idea?
I think that producing slightly elaborate quorum rules to handle
specific failure modes isn't a reasonable thing. What you need to do in
conjunction is to