ports 2888 or 3888 went down unexpectedly
Hi team,

I have a ZooKeeper cluster with three nodes running version 3.5.5. zoo.cfg looks like this, with the exception that server.61 has an additional line extendedTypesEnabled=true:

maxClientCnxns=300
reconfigEnabled=false
4lw.commands.whitelist=*
snapCount=50
initLimit=10
syncLimit=5
tickTime=2000
clientPort=2181
dataDir=/var/zookeeperdata
server.61=10.xxx.130.61:2888:3888:participant
server.222=10.xxx.130.222:2888:3888:participant
server.21=10.xxx.131.21:2888:3888:participant

After the cluster launched, ports 2888 and 3888 went down unexpectedly. For example, I have observed these scenarios:

* 3888 went down on server.222. I believe it had been up at some point, since at that time 222 was the leader and 61 was a follower, but 21 failed to start up because port 3888 on 222 was down (21 started tens of minutes later than 222 and 61). How can I find out what caused it to go down? I searched zookeeper.out and zookeeper.log but didn't find anything suspicious. Does anyone have a keyword to search for?

* In another scenario, 61 was the leader while 21 and 222 were followers, and 2888 was down on both 21 and 222. Yet when I ran zkCli.sh on node 222, I could still change a key to a different value. Shouldn't a follower forward a write operation to the leader through port 2888, so that the write would be expected to fail because 2888 was down?

BRs
Fengtao
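To narrow this down, it can help to probe from a peer node whether the quorum port (2888) and election port (3888) are actually accepting connections on each server. Below is a minimal sketch using a generic TCP connect check (plain Python sockets, not a ZooKeeper API; the function name and the commented host list are illustrative, taken from the zoo.cfg above). For log searching, the election port 3888 is owned by the QuorumCnxManager class, so "QuorumCnxManager", "Exception", and "BindException" are reasonable grep keywords to try in zookeeper.out / zookeeper.log:

```python
import socket

def tcp_listening(host, port, timeout=1.0):
    """Return True if host:port accepts a TCP connection (something is
    listening), False on refusal or timeout. Run from a peer node against
    ports 2888 and 3888 of each server to see which are really up."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (addresses copied from the zoo.cfg above; adjust as needed):
# for host in ("10.xxx.130.61", "10.xxx.130.222", "10.xxx.131.21"):
#     for port in (2888, 3888):
#         print(host, port, tcp_listening(host, port))
```

Running this periodically from each node gives a timeline of when a port stopped listening, which can then be correlated with timestamps in the logs.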
CVE-2023-44981: Apache ZooKeeper: Authorization bypass in SASL Quorum Peer Authentication
Severity: critical

Affected versions:
- Apache ZooKeeper 3.9.0
- Apache ZooKeeper 3.8.0 through 3.8.2
- Apache ZooKeeper 3.7.0 through 3.7.1
- Apache ZooKeeper before 3.7.0

Description:
Authorization Bypass Through User-Controlled Key vulnerability in Apache ZooKeeper. If SASL Quorum Peer authentication is enabled in ZooKeeper (quorum.auth.enableSasl=true), authorization is done by verifying that the instance part of the SASL authentication ID is listed in the zoo.cfg server list. The instance part of the SASL auth ID is optional, and if it is missing, as in 'e...@example.com', the authorization check will be skipped. As a result, an arbitrary endpoint could join the cluster and begin propagating counterfeit changes to the leader, essentially giving it complete read-write access to the data tree. Quorum Peer authentication is not enabled by default.

Users are recommended to upgrade to version 3.9.1, 3.8.3, or 3.7.2, which fixes the issue. Alternatively, ensure the ensemble election/quorum communication is protected by a firewall, as this will mitigate the issue. See the documentation for more details on correct cluster administration.

Credit: Damien Diederen (reporter)

References:
https://zookeeper.apache.org/
https://www.cve.org/CVERecord?id=CVE-2023-44981
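For reference, quorum SASL authentication is controlled from zoo.cfg. A sketch of the relevant properties (quorum.auth.enableSasl, quorum.auth.learnerRequireSasl, and quorum.auth.serverRequireSasl are real ZooKeeper properties; the values shown are illustrative, and the JAAS login configuration that must accompany them is omitted):

```properties
# Enable SASL authentication on quorum/election connections.
quorum.auth.enableSasl=true
# Require learners (followers/observers) to authenticate when connecting.
quorum.auth.learnerRequireSasl=true
# Require the accepting server to enforce SASL from connecting peers.
quorum.auth.serverRequireSasl=true
```

Clusters with quorum.auth.enableSasl=true on an affected version are the ones exposed by this CVE; clusters with the default (disabled) setting are not, though firewalling the quorum ports is advised either way.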
Re: API to get a whole subtree
The last time I heard a discussion along these lines, such an API was frowned upon a bit because it is susceptible to returning a very large amount of data, and thus has strong potential to cause disruption for other users, particularly if the entire returned result has to be as of a single moment in transactional time. The same argument can be applied to getting all of the first-level children of a single znode, but getting an atomic view of that is considerably more important, since many algorithms depend on seeing a consistent view of the children of a node.

On Mon, Oct 9, 2023 at 9:25 PM Enrico Olivelli wrote:
> Hello,
> today I was discussing with a friend from the Solr community, they
> would need to read a whole subtree in one shot.
>
> I can't remember if we have something like that, do you have any pointers ?
>
> Cheers
> Enrico
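Absent a server-side subtree call, the usual workaround is a client-side recursive read, which illustrates exactly the trade-off above: it is not atomic, since the tree can change between the individual calls. A minimal sketch, assuming a kazoo-style client (get(path) returning (data, stat) and get_children(path) returning child names are the standard kazoo method shapes; the read_subtree wrapper itself is hypothetical):

```python
def read_subtree(zk, path):
    """Recursively read a znode and all of its descendants.

    `zk` is any client exposing kazoo-style get(path) -> (data, stat)
    and get_children(path) -> [name, ...].  NOTE: this is NOT atomic --
    nodes may appear or vanish between the individual calls, which is
    the consistency concern raised in the thread above.
    """
    data, _stat = zk.get(path)
    children = {}
    for name in zk.get_children(path):
        child_path = path.rstrip("/") + "/" + name
        children[name] = read_subtree(zk, child_path)
    return {"path": path, "data": data, "children": children}
```

For large subtrees this can also pull a very large amount of data to the client, so bounding the depth or fan-out before recursing is worth considering.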
[jira] [Created] (ZOOKEEPER-4758) Upgrade snappy-java to 1.1.10.4 to fix CVE-2023-43642
Dhoka Pramod created ZOOKEEPER-4758:
---

Summary: Upgrade snappy-java to 1.1.10.4 to fix CVE-2023-43642
Key: ZOOKEEPER-4758
URL: https://issues.apache.org/jira/browse/ZOOKEEPER-4758
Project: ZooKeeper
Issue Type: Bug
Affects Versions: 3.8.3
Reporter: Dhoka Pramod
Fix For: 3.8.4

The SnappyInputStream was found to be vulnerable to denial-of-service (DoS) attacks when decompressing data with a too-large chunk size. Due to a missing upper-bound check on chunk length, an unrecoverable fatal error can occur. All versions of snappy-java, including the latest released version 1.1.10.3, are vulnerable to this issue. A fix has been introduced in commit `9f8c3cf74`, which will be included in the 1.1.10.4 release. Users are advised to upgrade.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
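Until a ZooKeeper release carrying this bump is available, downstream Maven projects can pin the transitive snappy-java dependency themselves. A sketch (dependencyManagement is the standard Maven mechanism for overriding transitive versions; org.xerial.snappy:snappy-java is the coordinate of the library named in this issue):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin the patched snappy-java over ZooKeeper's transitive version. -->
    <dependency>
      <groupId>org.xerial.snappy</groupId>
      <artifactId>snappy-java</artifactId>
      <version>1.1.10.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Verify the override took effect with `mvn dependency:tree` before relying on it.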