Thiago Borges wrote:
I read the documentation on the ZooKeeper site and can't find anything about sharing or limits of ZooKeeper client connections.

There are no limits particular to ZK itself (given enough memory) - usually the limitation is the maximum number of file descriptors the host OS allows. Often this is on the order of 1-8k; check your ulimit.
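If you'd rather check the limit from inside the JVM than from the shell, here's a minimal sketch - it assumes a Unix-like host where the com.sun.management MXBean extensions are available, and it reports the same ceiling "ulimit -n" shows for the process:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdLimits {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                // Per-process file descriptor ceiling and current usage.
                System.out.println("max fds:  " + unix.getMaxFileDescriptorCount());
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
            } else {
                System.out.println("FD counts not exposed on this platform");
            }
        }
    }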

I only see the parameter in the .conf file about the max number of connections per client.

This is to limit "DOS" attacks - it was added after we saw issues with buggy client implementations that would create an unbounded number of sessions with the ZK service, eventually running into the FD limit problem I mentioned.
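For reference, the per-client-IP cap is the maxClientCnxns setting in zoo.cfg - something like the following (the value 60 is just an example, not a recommendation):

    # zoo.cfg - cap on concurrent connections from a single client IP
    # (0 disables the limit)
    maxClientCnxns=60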

Can someone point me to some documentation about sharing ZooKeeper connections? Can I share one among different threads?

The API docs have those details:
http://hadoop.apache.org/zookeeper/docs/current/api/index.html
Generally, though, the client interface is thread safe.
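Here's a minimal sketch of sharing one handle (one session) across threads - the connect string localhost:2181 and the znode /some/node are just placeholders for this example:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class SharedHandle {
        public static void main(String[] args) throws Exception {
            final CountDownLatch connected = new CountDownLatch(1);

            // One session (one ZooKeeper handle) for the whole process.
            final ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, new Watcher() {
                public void process(WatchedEvent event) {
                    if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                }
            });
            connected.await();

            // The same handle is used from several threads; each call goes
            // over the single underlying connection.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 4; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        try {
                            byte[] data = zk.getData("/some/node", false, null);
                            System.out.println(Thread.currentThread().getName() + " read "
                                    + (data == null ? 0 : data.length) + " bytes");
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);
            zk.close();
        }
    }

One caveat: watcher callbacks are delivered on the client's single event thread, so keep them short and hand real work off to your own threads.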

And what about client connection limits, and how much throughput decreases as the number of connections increases?

This test has 910 clients (sessions) involved:
http://hadoop.apache.org/zookeeper/docs/current/zookeeperOver.html#Performance

We have users with 10k sessions accessing a single 5 node ZK ensemble. That's the largest I know of that's in production. I've personally tested up to 20k sessions attaching to a 3 node ensemble with a 10 second session timeout and it was fine (although I didn't do much other than test session establishment and teardown).

Also see this: http://bit.ly/4ekN8G

Patrick
