Hi Patrick,

I tried setting the max heap size to 2G, but ZooKeeper still throws error code "-5" after creating 1027 x 480 nodes. The following is the exception I'm seeing:

-----------------------------------------------------------------------------------------------------------------------------------------------
2011-11-09 23:42:49,484 - ERROR [ProcessThread:-1:PrepRequestProcessor@415] - Failed to process sessionid:0x1338bb71d340000 type:create cxid:0x4ec3599e zxid:0xfffffffffffffffe txntype:unknown reqpath:/4/158:1096
java.nio.BufferOverflowException
        at java.nio.charset.CoderResult.throwException(Unknown Source)
        at java.lang.StringCoding$StringDecoder.decode(Unknown Source)
        at java.lang.StringCoding.decode(Unknown Source)
        at java.lang.String.<init>(Unknown Source)
        at java.lang.String.<init>(Unknown Source)
        at org.apache.jute.BinaryInputArchive.readString(BinaryInputArchive.java:83)
        at org.apache.zookeeper.data.Id.deserialize(Id.java:55)
        at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:108)
        at org.apache.zookeeper.data.ACL.deserialize(ACL.java:57)
        at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:108)
        at org.apache.zookeeper.proto.CreateRequest.deserialize(CreateRequest.java:92)
        at org.apache.zookeeper.server.ZooKeeperServer.byteBuffer2Record(ZooKeeperServer.java:599)
        at org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:216)
        at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:114)
2011-11-09 23:42:49,485 - ERROR [ProcessThread:-1:PrepRequestProcessor@428] - Dumping request buffer: 0x000b2f342f3135383a3130393600013100010001f0005776f726c640006616e796f6e650000
-----------------------------------------------------------------------------------------------------------------------------

Please advise.

Thanks,
Aniket

On 11/9/2011 6:00 PM, Patrick Hunt wrote:
er, make that JVMFLAGS=-Xmx<mem in gig>g bin/zkServer.sh

(no -D)

Patrick

On Wed, Nov 9, 2011 at 2:31 PM, Patrick Hunt <[email protected]> wrote:
On Wed, Nov 9, 2011 at 1:14 PM, Aniket Chakrabarti
<[email protected]> wrote:
I am trying to load a huge matrix (100,000 by 500) into my ZooKeeper
instance. Each element of the matrix is a znode, and the value of each
element is a digit (0-9).

But I'm only able to load around 1000 x 500 nodes; ZooKeeper throws an
error after that. Mostly it is a "-5" error code, which is a
marshalling/unmarshalling error. I'm using the Perl interface to
ZooKeeper.

My question is: is there a limit to the maximum number of znodes a ZooKeeper
instance can hold, or is this limited only by system memory?

Any pointers on how to avoid the error would be very helpful.
Available heap memory is really the only limit. Try

$ JVMFLAGS=-D-Xmx<mem in gig>g bin/zkServer.sh

also

$ sudo jmap -heap <jvm pid>

will give you some insight into whether it was set correctly or not
(i.e., check MaxHeapSize).

The most I've tried is 5 million znodes with 25 million watches (using
zkpython, but zkperl should be fine). IIRC that was an 8-gig heap, but
YMMV depending on data size.
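Extrapolating linearly from that data point (a crude assumption; the real per-znode cost depends on path length, data size, and watch count), the full 100,000 x 500 matrix would need a much larger heap:

```python
# Crude linear extrapolation from the 5M-znode / 8-gig data point
# mentioned above; the per-znode cost is an assumption, not a measurement.
known_znodes = 5_000_000        # Patrick's largest test
known_heap_gb = 8               # heap used in that test
target_znodes = 100_000 * 500   # the full matrix
est_heap_gb = known_heap_gb * target_znodes / known_znodes
print(f"~{est_heap_gb:.0f} GB heap")  # roughly 80 GB
```

By that rough estimate, a 2G heap would fall far short of the full matrix.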

You may also need to tune the GC at some point (I'd suggest turning on
the CMS and parallel collectors) to limit stop-the-world pauses.
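One way to combine the heap setting with those collectors is via JVMFLAGS. This is a sketch for a JDK 6/7-era JVM (as used in 2011); the exact flag set is an assumption and should be tuned for your workload:

```shell
# Hypothetical JVMFLAGS: 8-gig heap plus CMS with the parallel
# young-generation collector, to shorten stop-the-world pauses.
# Flag choices are a sketch, not a tested recipe.
JVMFLAGS="-Xmx8g -XX:+UseConcMarkSweepGC -XX:+UseParNewGC" bin/zkServer.sh start
```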

Regards,

Patrick
