I see this happening in the logs on my observer nodes. The observers are
running in a different data center from the ZK non-observers. The only
way to fix this seems to be restarting. How can I start addressing this?
Here is the stack trace.
Too many connections from
Thanks Patrick. But what does this mean? I see the log on server A telling
me "Too many connections from A - default is 10". Too many connections from
A to whom? I do not see who the other end of the connection is.
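For context, that message comes from ZooKeeper's server-side per-IP connection limit, maxClientCnxns (10 by default, per the log line); the server writing the log is the endpoint receiving the connections. A sketch of raising the limit in zoo.cfg, with illustrative values rather than recommendations:

```
# zoo.cfg -- illustrative values, not a recommendation
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
# Maximum concurrent client connections allowed from a single IP.
# Setting it to 0 disables the limit entirely.
maxClientCnxns=60
```

Note the limit is per source IP per server, so many clients behind one NAT address can hit it even when each client holds only a single session.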
Cheers
Avinash
On Tue, Oct 5, 2010 at 9:27 AM, Patrick Hunt ph...@apache.org wrote:
So shouldn't all servers in another DC just have one session? So even if I
have 50 observers in another DC, that should be 50 sessions established,
since the IP doesn't change, correct? Am I missing something? In some ZK
clients I see the following exception even though they are in the same DC.
WARN
Hi,
I just wondered: has anybody ever run ZooKeeper to the max on a 68GB
quadruple extra large high memory EC2 instance? With, say, 60GB allocated or so?
Because EC2 with EBS is a nice way to grow your ZooKeeper cluster (data on the
EBS volumes, upgrade as your memory utilization grows) -
Hi Maarten,
I definitely know of a group which uses around 3GB of heap memory for
ZooKeeper, but I have never heard of someone with such huge requirements. I
would say it would definitely be a learning experience with such high memory,
which I think would be very useful for others in the
I have run it with over 5 GB of heap and over 10M znodes. We will definitely
run it with over 64 GB of heap. Technically I do not see any limitation.
However, I will let the experts chime in.
Avinash
On Tue, Oct 5, 2010 at 11:14 AM, Mahadev Konar maha...@yahoo-inc.comwrote:
Hi Maarten,
I definitely
That would be an interesting experiment although it is way outside normal
usage as a coordination store.
I have used ZK as a session store for PHP with OK results. I never
implemented an expiration mechanism so things
had to be cleared out manually sometimes. It worked pretty well until
things
You will need to time how long it takes to read all that state back in
and adjust initLimit accordingly. It will probably take a while to
pull all that data into memory.
ben
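A sketch of what that adjustment might look like in zoo.cfg; the numbers are illustrative assumptions for a snapshot that takes several minutes to reload, not measured values:

```
# zoo.cfg -- illustrative values
tickTime=2000
# initLimit is measured in ticks: 300 * 2000 ms = 10 minutes for a
# follower to connect to the leader and sync a very large snapshot
initLimit=300
# syncLimit bounds how far a follower may lag in steady state
syncLimit=10
```

The point of Ben's advice is that initLimit must cover the worst-case time to pull the full data set into memory, so it should be set from a measurement, not a default.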
On 10/05/2010 11:36 AM, Avinash Lakshman wrote:
I have run it over 5 GB of heap with over 10M znodes. We will
Tuning GC is going to be critical, otherwise all the sessions will time out
(and potentially expire) during GC pauses.
Patrick
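A sketch of the kind of JVM tuning Patrick means, placed in conf/java.env (which bin/zkEnv.sh sources if the file exists); the heap sizes and CMS flags are assumptions for a 2010-era JVM and a large-heap experiment, not tested recommendations:

```
# conf/java.env -- illustrative, not a tested recommendation
# A fixed heap (-Xms == -Xmx) avoids resize pauses; CMS aims to keep
# stop-the-world pauses shorter than the session timeout. GC logging
# lets you compare observed pause times against the session timeout.
export JVMFLAGS="-Xms60g -Xmx60g \
  -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled \
  -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/zk-gc.log"
```

The thing to verify from the GC log is that no single pause approaches the session timeout, since a pause longer than that expires every session at once.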
On Tue, Oct 5, 2010 at 1:18 PM, Maarten Koopmans maar...@vrijheid.netwrote:
Yes, and syncing after a crash will be interesting as well. Of note: I am
running it with a 6GB
I think the issue of having to write a full ~60GB snapshot file at
intervals would make this prohibitive, particularly on EC2 via EBS. At
a scale like that I think you'd be better off with a traditional
database or a nosql database like Cassandra, possibly using Zookeeper
for transaction
Yup, and that's ironic, isn't it? The GC tuning is so specialized, as is the
profiling, that automated memory management (to me) hasn't brought what I hoped
it would. I had some conversations about this topic a few years back with a
well-respected OS designer, and his point is that we
Good point. And Cassandra is a no-go for me for now. I get the model, but I
don't like (make that: dislike) things like Thrift.
On 5 Oct 2010, at 23:54, Dave Wright wrig...@gmail.com wrote:
I think the issue of having to write a full ~60GB snapshot file at
intervals
The ZK create method explicitly states in the documentation: "If the
parent node does not exist in the ZooKeeper, a KeeperException with
error code KeeperException.NoNode will be thrown." (
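Since create is not recursive, a client has to pre-create any missing ancestors itself (or catch NoNode and retry). A minimal sketch of the path-splitting step, in plain Python so it runs standalone; the function name is made up for illustration:

```python
def ancestor_paths(path):
    """Return the ancestor znode paths of `path`, shallowest first.

    e.g. "/app/locks/l1" -> ["/app", "/app/locks"], which is the order
    you would create missing parents in before creating the leaf node.
    """
    parts = [p for p in path.split("/") if p]
    return ["/" + "/".join(parts[:i]) for i in range(1, len(parts))]

print(ancestor_paths("/app/locks/l1"))  # -> ['/app', '/app/locks']
```

Some client libraries ship a recursive-create helper that does exactly this loop (creating each ancestor and ignoring "already exists" errors), so check yours before writing it by hand.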
Hi,
In the Hedwig talk (http://vimeo.com/13282102), it was mentioned that the primary
use case for Hedwig comes from the distributed key-value store PNUTS at Yahoo!,
but it was also said that the work is new.
Could you please comment on the following:
Production readiness / Deployment
1. What is the