Extending KeeperException could be a problem if current code expects an
IOException (I don't know) or a specific KeeperException. Extending the
current exception with a new subtype should be fine.
On Fri, Jan 8, 2021 at 3:01 PM Michael Han wrote:
> Server should really check the length of incoming
HeZhangJian created ZOOKEEPER-4054:
--
Summary: Make Prometheus listen host configurable
Key: ZOOKEEPER-4054
URL: https://issues.apache.org/jira/browse/ZOOKEEPER-4054
Project: ZooKeeper
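If the change lands, the zoo.cfg setup might look something like this sketch; the `metricsProvider.httpHost` property name is assumed here from what the ticket asks for, not a confirmed released API:

```properties
# Enable the built-in Prometheus metrics provider
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=7000
# Proposed by ZOOKEEPER-4054: bind the exporter to one interface
# instead of listening on all interfaces
metricsProvider.httpHost=127.0.0.1
```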
Thank you very much for the helpful input, Ted, Norbert and Michael!
I believe I've got a very good answer to my question, so I've created
2 tickets for further discussion and patch submissions.
https://issues.apache.org/jira/browse/ZOOKEEPER-4053
Huizhi Lu created ZOOKEEPER-4053:
Summary: ConnectionLossException is vague for failing to
read/write large znode
Key: ZOOKEEPER-4053
URL: https://issues.apache.org/jira/browse/ZOOKEEPER-4053
Server should really check the length of the incoming buffer against the
sum of jute.maxbuffer and an extra configurable padding (reserved for
packet/request headers). The default padding value is 1024 bytes and it's
now configurable through a Java property. I believe we do use this
combined
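A minimal sketch of the combined check described above, assuming hypothetical constant and method names (the real server performs this inside its deserialization path):

```java
import java.io.IOException;

public class BufferLengthCheck {
    // Defaults mirror the values mentioned above; the property names are
    // assumptions for illustration.
    static final int JUTE_MAX_BUFFER =
            Integer.getInteger("jute.maxbuffer", 1024 * 1024 - 1);
    static final int EXTRA_PADDING =
            Integer.getInteger("zookeeper.jute.maxbuffer.extrasize", 1024);

    /** Reject an incoming buffer whose declared length exceeds maxbuffer + padding. */
    static void checkIncomingLength(int len) throws IOException {
        if (len < 0 || len > JUTE_MAX_BUFFER + EXTRA_PADDING) {
            throw new IOException("Unreasonable length = " + len);
        }
    }
}
```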
Hi All,
Sorry for being slow in answering. The JUnit tests in zk-contrib and zk-it
are never run and they are not working.
Based on this I would not call them blockers for 3.7.0.
I tried to fix them and convert them to JUnit 5, but didn't have time to
finish.
Sadly I'm also very busy nowadays.
I'd say there are quite a few tasks aimed at 4.0. I just answered a thread
about the jute.maxbuffer error, which could be improved, for example. Or
better yet, throw jute out and use a standardized serialization library.
But there's also the issue of separating client and server code. And I'm
sure
We see a lot of issues (even on prod systems) around jute.maxbuffer. I
agree it is not the "cleanest" of errors. If ZK is involved in some issue,
we usually check first for signs of requests being too big (a.k.a. the
jute.maxbuffer issue).
But if we want to improve on this, we have to make
Okay, let’s stay on JDK 8 for the 3.7.0 release and do the transition in 4.0.
Not sure if we want a 3.8 release or make master 4.0 from now on.
Andor
> On 2021. Jan 6., at 22:35, Christopher wrote:
>
> I agree with Enrico on this point. If the ZK PMC is considering a 3.7
> release, now would
Huizhi Lu created ZOOKEEPER-4052:
Summary: Failed to read large znode that is written successfully
Key: ZOOKEEPER-4052
URL: https://issues.apache.org/jira/browse/ZOOKEEPER-4052
Project: ZooKeeper
OK, I think I get it. The rough sanity check is applied only when
deserializing, when the length of the incoming buffer is read. There is
no check for outgoing data when serializing. And there are tens of bytes
of serialization metadata, so if a client writes just below 1 MB
(1024 * 1024 - 1 bytes),
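Illustrative arithmetic for that failure mode; the overhead figure is a rough assumption, not an exact count:

```java
public class SerializationOverhead {
    public static void main(String[] args) {
        // Default jute.maxbuffer is 1024 * 1024 - 1 bytes (just under 1 MB).
        int juteMaxBuffer = 1024 * 1024 - 1;
        int dataLen = juteMaxBuffer;  // write accepted: payload alone fits
        int overhead = 40;            // rough assumption: Stat + reply header bytes
        int readResponseLen = dataLen + overhead;
        // The read response now trips the deserialization length check,
        // surfacing on the client as a vague ConnectionLossException.
        System.out.println(readResponseLen > juteMaxBuffer); // true
    }
}
```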
From what I've learned and also the doc:
"jute.maxbuffer : (Java system property: jute.maxbuffer).
When jute.maxbuffer on the client side is greater than on the server
side, and the client writes data exceeding the server-side jute.maxbuffer,
the server side will get java.io.IOException:
Hi Ted,
Really appreciate your prompt response and detailed explanation!
For some reason, ZK can end up being abused for storing large data objects.
I understand we should use ZK correctly, for the coordination it is best
at, and how we use ZK is definitely something we could improve. But maybe
it'd be a
Let's be clear from the start: storing large data objects in ZooKeeper is
strongly discouraged. If you want to store large objects with good
consistency models, store the data in something else (like a distributed
file system or key-value store), commit the data, and then use ZK to provide
a