A 1.3 GB serialized datatree is quite large. Not "too large", just large in the sense that it means more time for things like bringing up a cluster (serializing/deserializing the snapshots, transferring the data, etc.). This is typically why (among other reasons) we say not to store your data in ZK itself, but rather just the coordination details.
How much heap did you assign? Review the GC activity (and also rule out swapping), which might be slowing things down.

You might try copying the datadir and bringing up a standalone server. That would give you some insight into how long it takes to deserialize the data.

Patrick

On Wed, May 30, 2012 at 2:03 PM, Jordan Zimmerman <[email protected]> wrote:
> What's the latest snapshot size look like?
>
> 1,378,363,003
>
> What's the size of the ensemble. How many znodes. etc...
>
> 3 nodes - AWS m1.xlarge
>
>
>> ________________________________
>> From: Patrick Hunt <[email protected]>
>> To: [email protected]; Jordan Zimmerman <[email protected]>
>> Sent: Wednesday, May 30, 2012 1:59 PM
>> Subject: Re: initLimit/syncLimit
>>
>> On Wed, May 30, 2012 at 1:52 PM, Jordan Zimmerman
>> <[email protected]> wrote:
>>> Our ZK data size is currently around 116GB on disk. I find that I'm needing
>>> to set initLimit and syncLimit to very big numbers. Currently, the cluster
>>> cannot get a quorum on restart unless I have these values:
>>>
>>> initLimit=300
>>> syncLimit=300
>>> tickTime=2000
>>>
>>> Does that seem normal?
>>
>> What's gc/swap activity look like?
>>
>> Hard to say if this is "normal". "116gb on disk" means what, the
>> datadir? What's the latest snapshot size look like? What's the size of
>> the ensemble. How many znodes. etc...
>>
>> Patrick
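[Editor's note: for readers following along, initLimit and syncLimit are expressed in ticks, and tickTime is in milliseconds, so the config quoted in this thread implies concrete wall-clock timeouts. A quick sketch of the arithmetic, using the values from the quoted zoo.cfg:]

```python
# ZooKeeper config values quoted in the thread above.
# initLimit/syncLimit are in ticks; tickTime is in milliseconds.
tick_time_ms = 2000
init_limit_ticks = 300
sync_limit_ticks = 300

# Time followers get to connect to the leader and sync on startup:
init_timeout_s = init_limit_ticks * tick_time_ms / 1000

# Time a follower may fall behind the leader before being dropped:
sync_timeout_s = sync_limit_ticks * tick_time_ms / 1000

print(init_timeout_s, sync_timeout_s)  # 600.0 600.0 -> ten minutes each
```

Ten minutes for the initial sync is what it takes here to ship and deserialize the ~1.3 GB snapshot, which is why Patrick characterizes the datatree as "large" rather than the limits as abnormal.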
