Extending KeeperException could be a problem if current code expects an
IOException (I don't know) or a specific KeeperException. Extending the
current exception with a new type should be fine.
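To illustrate why a new subtype should be safe: a minimal sketch, assuming existing handlers catch the base KeeperException. The classes here (including this stand-in KeeperException and the hypothetical RequestTooLargeException) are illustrative only, not ZooKeeper's actual API.

```java
// Stand-in for org.apache.zookeeper.KeeperException, NOT the real class.
class KeeperException extends Exception {
    KeeperException(String msg) { super(msg); }
}

// Hypothetical new subtype. Existing code that catches KeeperException
// still catches it, so adding the subtype should not break callers.
class RequestTooLargeException extends KeeperException {
    RequestTooLargeException(int len, int limit) {
        super("request length " + len + " exceeds limit " + limit);
    }
}

public class ExceptionSketch {
    public static void main(String[] args) {
        try {
            throw new RequestTooLargeException(2_000_000, 1_048_576);
        } catch (KeeperException e) { // old-style catch clause still works
            assert e instanceof RequestTooLargeException;
            assert e.getMessage().contains("2000000");
        }
    }
}
```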
On Fri, Jan 8, 2021 at 3:01 PM Michael Han wrote:
> Server should really check the length of incoming
Thank you very much for the helpful input, Ted, Norbert and Michael!
I believe I've got a very good answer to my question. So I've created
2 tickets for further discussion and patch submissions.
https://issues.apache.org/jira/browse/ZOOKEEPER-4053
Server should really check the length of the incoming buffer against the
sum of jute.maxbuffer and an extra configurable padding (reserved for
packet / request headers). The default padding value is 1024 bytes, and it
is now configurable through a Java system property. I believe we do use this
combined
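The padded length check described above could look roughly like this. A minimal sketch under stated assumptions: the method and class names are illustrative, not ZooKeeper's actual identifiers, and 1024 is the default padding value mentioned in the ticket.

```java
public class LenCheckSketch {
    static final int DEFAULT_MAX_BUFFER = 1024 * 1024; // jute.maxbuffer default (1 MB)
    static final int DEFAULT_PADDING = 1024;           // extra room for request headers

    // Accept an incoming buffer only if its length fits within
    // jute.maxbuffer plus the configurable padding.
    static boolean checkRequestSize(int len, int maxBuffer, int padding) {
        return len >= 0 && len <= maxBuffer + padding;
    }

    public static void main(String[] args) {
        // A ~1 MB payload plus tens of bytes of serialization metadata
        // still fits within the padded limit.
        assert checkRequestSize(DEFAULT_MAX_BUFFER + 50, DEFAULT_MAX_BUFFER, DEFAULT_PADDING);
        // Anything beyond maxBuffer + padding is rejected ("Len error").
        assert !checkRequestSize(DEFAULT_MAX_BUFFER + DEFAULT_PADDING + 1,
                                 DEFAULT_MAX_BUFFER, DEFAULT_PADDING);
    }
}
```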
We see a lot of issues (even on prod systems) around jute.maxbuffer. I
agree it is not the "cleanest" of errors. If ZK is involved in some issue,
we usually check first for signs of requests being too big (a.k.a. a
jute.maxbuffer issue).
But if we want to improve on this, we have to make
OK, I think I get it. The rough sanity check is applied only when
deserializing, when the length of the incoming buffer is read. There is no
check for outgoing data when serializing. And there are tens of bytes of
serialization metadata, so if a client is writing just below 1 MB
(1024 * 1024 - 1 bytes),
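The arithmetic behind that observation, as a quick sketch. The 40-byte metadata figure is an assumption standing in for "tens of bytes"; the exact overhead depends on the request.

```java
public class OverheadSketch {
    // Total length of the serialized request: payload plus assumed
    // serialization metadata (request header, path, etc.).
    static int wireLength(int payload, int metadata) {
        return payload + metadata;
    }

    public static void main(String[] args) {
        int juteMaxBuffer = 1024 * 1024;   // default server-side limit
        int payload = juteMaxBuffer - 1;   // client writes just below 1 MB
        int metadata = 40;                 // assumed "tens of bytes" of overhead
        // The client-side write succeeds (no check when serializing), but the
        // serialized request exceeds jute.maxbuffer, so the server's
        // deserialization check fails even though the payload itself fit.
        assert wireLength(payload, metadata) > juteMaxBuffer;
    }
}
```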
From what I've learned and also the doc:
"jute.maxbuffer : (Java system property: jute.maxbuffer).
When jute.maxbuffer on the client side is greater than on the server
side, and the client writes data exceeding jute.maxbuffer on the
server side, the server side will get java.io.IOException:
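Given that mismatch behavior, the limit needs to be raised consistently on both sides. A sketch of how that might look, assuming the standard zkEnv.sh JVM-flag variables; the 4 MB value is purely illustrative.

```shell
# Raise jute.maxbuffer to 4 MB (4194304 bytes) on BOTH sides so the
# client and server limits agree. Values here are illustrative.

# Server side (e.g. in conf/zookeeper-env.sh):
export SERVER_JVMFLAGS="-Djute.maxbuffer=4194304"

# Client side:
export CLIENT_JVMFLAGS="-Djute.maxbuffer=4194304"
```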
Hi Ted,
Really appreciate your prompt response and detailed explanation!
For some reason, ZK can end up being abused for writing large data objects.
I understand we should use ZK correctly, for the coordination tasks ZK is best at.
It's definitely something we could improve in how we use ZK. But maybe
it'd be a
Let's be clear from the start: storing large data objects in ZooKeeper is
strongly discouraged. If you want to store large objects with good
consistency models, store the data in something else (like a distributed
file system or key-value store), commit the data, and then use ZK to provide
a
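The pattern described above can be sketched as follows. The blob store and the znode map are in-memory stand-ins for a real external store and a real ZooKeeper ensemble; all names here are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch: keep the bulk data in an external blob/KV store, and have
// ZooKeeper hold only a small "pointer" znode naming the committed object.
public class PointerPatternSketch {
    static Map<String, byte[]> blobStore = new HashMap<>(); // stands in for S3/HDFS/KV store
    static Map<String, byte[]> znodes = new HashMap<>();    // stands in for ZooKeeper

    static void commitLargeObject(String key, byte[] largeData) {
        blobStore.put(key, largeData);               // 1. commit the bulk data elsewhere
        znodes.put("/pointers/" + key,               // 2. znode stores only the tiny pointer
                   key.getBytes(StandardCharsets.UTF_8));
    }

    static byte[] resolve(String key) {
        String pointer = new String(znodes.get("/pointers/" + key), StandardCharsets.UTF_8);
        return blobStore.get(pointer);
    }

    public static void main(String[] args) {
        byte[] big = new byte[4 * 1024 * 1024]; // 4 MB: far too large for a znode
        commitLargeObject("snapshot-0001", big);
        assert znodes.get("/pointers/snapshot-0001").length < 1024; // pointer stays tiny
        assert resolve("snapshot-0001").length == big.length;
    }
}
```

In a real deployment the znode write would go through the ZooKeeper client, so readers get ZK's ordering and watch semantics on the small pointer while the heavy bytes never pass through the ensemble.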
Hi ZK Experts,
I would like to ask a quick question. Assume we are using the default
1 MB jute.maxbuffer: if a ZK client tries to write a znode larger than
1 MB, the server will fail the request, log "Len error", and close the
connection. The client will see a connection loss. In a