> but we could easily improve this behaviour by going through all the resolved hostnames and if anything matches, the verification succeeds.
Yes, exactly.

> Why would you add more hostnames to a single host without adding them
> to the certificate? The certificate certifies the actor (machine in
> this case), not just the hostname.

I think this is a question of mentality: whether you use DNS for
service discovery or for what I'm going to call machine discovery (not
sure if there's a better name for it).

In a machine discovery framework, you assign the host a name (let's say
"kåbdalis"), and configure all peers and clients that zookeeper.1 is
hosted on kåbdalis:2888:3888. You set the SAN to kåbdalis. Everyone is
happy. If kåbdalis is running some other service, you do the same thing
there.

In a service discovery framework, you assign the *role* a name
("zookeeper-1"), and configure all peers and clients that zookeeper.1
is hosted on zookeeper-1, which might be on any server. You issue
kåbdalis a certificate with the SAN zookeeper-1, to prove that it is
authorized to host it. kåbdalis will likely have other DNS names too,
both for "itself" (for ssh access, for example) and for any other roles
it might fulfil (kafka-1?). Those will have their own certificates,
since zookeeper-1 shouldn't be able to pretend to be kafka-1.

Kubernetes generally follows the latter framework: a Pod gets one IP
address, but a DNS hostname for every Service that it matches. A
primary Service can be selected (via *Pod.spec.subdomain* and
*Pod.spec.hostname*), but every other Service that contains it will
still be included in the reverse DNS reply. Those secondary hostnames
are effectively impossible to predict, since they include the Pod IP
(*1-2-3-4.service.namespace.svc.cluster.local* for the IP *1.2.3.4*),
which is considered transient in Kubernetes (it is regenerated every
time the Pod is restarted). This IP is also assigned far later than
most reasonable hooks for provisioning certificates.

> On the flipside, turning off this check doesn't make sense to me,
> because owning a certificate alone should not be enough to authorize
> yourself (showing the ID of a random guy doesn't work, right, you have
> to show your own ID), though in most scenarios Quorum is SASL
> authenticated, so the authenticity of peers is already guaranteed.

We really have two different properties that we want to verify during
the connection process:

1. Do you control the key signed by the certificate? (managed by the
standard TLS handshake)
2. Is this certificate authorized to perform this action? (does the SAN
match the name expected by zoo.cfg?)

Validating the connecting IP address is a weaker form of 1 (the IP
header and DNS traffic are both cleartext and much more susceptible to
tampering than the TLS handshake itself), and it doesn't replace 2 at
all (your HTTP server might be trusted by the same PKI and *does* have
a fully valid hostname, but it still probably shouldn't be able to
connect as a ZK quorum peer).
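To make the distinction concrete, here is a minimal sketch of what
check 2 (authorization) could look like once check 1 (key possession)
has already been handled by the TLS handshake. The class and method
names are made up for illustration; this is not the actual ZooKeeper
code:

{code:java}
import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

final class PeerAuthorizationSketch {
    // RFC 5280 GeneralName tag for dNSName entries in the SAN extension.
    private static final int DNS_NAME = 2;

    /**
     * Check 2: is this certificate authorized to act as the given peer?
     * True iff any dNSName SAN equals the hostname that zoo.cfg lists
     * for that peer (server.N=host:2888:3888).
     */
    static boolean authorizedForPeer(X509Certificate cert, String expectedHost)
            throws CertificateParsingException {
        Collection<List<?>> sans = cert.getSubjectAlternativeNames();
        if (sans == null) {
            return false; // no SAN extension at all, nothing to match
        }
        for (List<?> san : sans) {
            if ((Integer) san.get(0) == DNS_NAME
                    && expectedHost.equalsIgnoreCase((String) san.get(1))) {
                return true;
            }
        }
        return false;
    }
}
{code}

Note that nothing in it depends on reverse DNS or on the connecting IP
address, which is exactly why it avoids the problems described above.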
On Thu, Nov 21, 2024 at 10:37 PM Andor Molnar <an...@apache.org> wrote:

> Hi Natalie,
>
> On Thu, 2024-11-21 at 16:19 +0100, Natalie Klestrup Röijezon wrote:
> > I think Andor's suggestion mostly makes sense, though I have a few
> > notes:
> >
> > 1. The current reverse DNS situation (it picks a random hostname
> > from the DNS response and validates that) is just flat out *wrong*.
> > PR#2 doesn't need to solve that, but I don't think there is a use
> > case for keeping it around together with a fixed reverse DNS
> > implementation that respects the full DNS response.
>
> IMHO returning multiple hostnames for a reverse DNS query is bad
> config, but we could easily improve this behaviour by going through
> all the resolved hostnames and if anything matches, the verification
> succeeds.
>
> Let's put it this way from a security perspective: I want to certify
> a machine (VM or on-prem) to be able to connect to my quorum, so I
> issue a certificate. The host has a single IP address and one or more
> associated hostnames. All hostnames and IP address(es) should
> identify one single host in the network, otherwise there's no way to
> distinguish them.
>
> So what I do is, I issue a cert and put every piece of information
> about the machine into the SAN: the IP address and all hostnames. Now
> ZooKeeper is able to verify the hostname of the peer by picking any
> resolved hostname from the reverse DNS query, because all of them
> have been added to the cert. This approach works out pretty well in
> our existing production systems, though none of them is in
> Kubernetes.
>
> Why would you add more hostnames to a single host without adding them
> to the certificate? The certificate certifies the actor (machine in
> this case), not just the hostname.
>
> On the flipside, turning off this check doesn't make sense to me,
> because owning a certificate alone should not be enough to authorize
> yourself (showing the ID of a random guy doesn't work, right, you
> have to show your own ID), though in most scenarios Quorum is SASL
> authenticated, so the authenticity of peers is already guaranteed.
>
> > 2. I'm not quite sure what you mean by "_and_ if it fails, will do
> > the same with the IP address as a last resort.". Does this mean
> > that it would also check the SAN against IP addresses in the
> > zoo.cfg? Falling back to the current reverse DNS implementation?
> > Something else entirely?
>
> Yes, the IP address can be added to the SAN, but it's quite
> inconvenient, because if the IP address changes, a new certificate
> must be issued.
>
> So, quorum hostname verification should work like this:
>
> 1. (if enabled) Check hostname in SAN with reverse DNS lookup (check
> all)
> 2. Check hostnames listed in zoo.cfg in SAN
> 3. Check IP address in SAN
>
> If any of these methods finds a match, the hostname verification
> succeeds.
>
> Andor
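Spelled out as code, that lookup order would look roughly like the
sketch below. Everything here is hypothetical: the class and helper
names, the zoo.cfg plumbing, and the single-result reverse lookup all
stand in for the real implementations:

{code:java}
import java.net.InetAddress;
import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

final class QuorumVerificationOrderSketch {
    private static final int DNS_NAME = 2;   // RFC 5280 GeneralName tags
    private static final int IP_ADDRESS = 7;

    static boolean verifyPeer(X509Certificate cert,
                              InetAddress peerAddress,
                              List<String> zooCfgPeerHostnames,
                              boolean reverseDnsLookupEnabled)
            throws CertificateParsingException {
        // 1. (if enabled) reverse DNS, accepting a match on ANY resolved
        //    hostname instead of picking a single arbitrary one.
        if (reverseDnsLookupEnabled) {
            for (String resolved : reverseLookupAll(peerAddress)) {
                if (sanContains(cert, DNS_NAME, resolved)) {
                    return true;
                }
            }
        }
        // 2. The hostnames listed for the peers in zoo.cfg (server.N=...).
        for (String host : zooCfgPeerHostnames) {
            if (sanContains(cert, DNS_NAME, host)) {
                return true;
            }
        }
        // 3. Last resort: the peer's IP address against the SAN.
        return sanContains(cert, IP_ADDRESS, peerAddress.getHostAddress());
    }

    private static boolean sanContains(X509Certificate cert, int type, String value)
            throws CertificateParsingException {
        Collection<List<?>> sans = cert.getSubjectAlternativeNames();
        if (sans == null) {
            return false;
        }
        for (List<?> san : sans) {
            if ((Integer) san.get(0) == type
                    && value.equalsIgnoreCase((String) san.get(1))) {
                return true;
            }
        }
        return false;
    }

    // Placeholder: plain java.net can only return one name per address, so
    // a real implementation would have to fetch every PTR record instead.
    private static List<String> reverseLookupAll(InetAddress address) {
        return List.of(address.getCanonicalHostName());
    }
}
{code}

Step 1 is where the current pick-one-hostname behaviour gets fixed:
every resolved name is tried before giving up and moving on.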
> > On Wed, Nov 20, 2024 at 10:06 PM Andor Molnar <an...@apache.org>
> > wrote:
> >
> > > Okay, let's discuss it here.
> > >
> > > So, currently we have 2 clientHostnameVerification options: quorum
> > > and client. They're implemented in the QuorumX509Util and
> > > ClientX509Util classes respectively and share common code in the
> > > X509Util base class. The 3 different options that you described
> > > make sense to me, but it's only meaningful in quorum
> > > communication, so it would be difficult to implement. Separating
> > > the 2 properties and changing the type of one of them will be
> > > very cumbersome.
> > >
> > > Actually we only have 1 setting currently, which is
> > > "hostnameVerification", so the properties look like:
> > >
> > > - zookeeper.ssl.hostnameVerification (client-server)
> > > - zookeeper.ssl.quorum.hostnameVerification (quorum)
> > >
> > > Both control the server hostname verification directly and client
> > > hostname verification indirectly. Your current patch introduces 2
> > > new properties:
> > >
> > > - zookeeper.ssl.clientHostnameVerification (client-server)
> > > - zookeeper.ssl.quorum.clientHostnameVerification (quorum)
> > >
> > > They will directly control the client hostname verification, with
> > > the restriction that the server side (hostnameVerification) must
> > > be enabled in order for them to work. Default values are 'false'
> > > for client-server and 'true' for quorum.
> > >
> > > This is backward compatible and can be committed as it is (tweak
> > > the docs first).
> > >
> > > The next patch will introduce another 2 parameters:
> > >
> > > - zookeeper.ssl.enableReverseDnsLookup (client-server)
> > > - zookeeper.ssl.quorum.enableReverseDnsLookup (quorum)
> > >
> > > Both of them will be true by default, which keeps backward
> > > compatibility. If the option is disabled for client-server, the
> > > hostname verification won't try a reverse DNS lookup to get the
> > > hostname, but still checks if the IP address is listed in the
> > > SAN.
> > >
> > > If the option is disabled for quorum, hostname verification won't
> > > do a reverse lookup, but checks the hostnames listed in zoo.cfg
> > > against the SAN _and_, if that fails, will do the same with the
> > > IP address as a last resort.
> > >
> > > We also have the zookeeper.fips-mode property, which removes the
> > > entire ZK hostname verification logic and uses the built-in JDK
> > > mechanism.
> > >
> > > Wdyt?
> > >
> > > Andor
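If I read the proposal right, a quorum deployment that wants the new
zoo.cfg-based check would then only need to flip the reverse-DNS
toggle, since the other quorum defaults already point the right way. A
sketch, using the property names proposed above (whether these exact
names and defaults ship depends on the final patches):

{code:java}
// Hypothetical wiring of the proposed toggles, e.g. from a launcher or
// test. Only the reverse-DNS property deviates from the proposed
// quorum-side defaults.
public class QuorumTlsToggles {
    public static void main(String[] args) {
        // existing property, enables server-side hostname verification
        System.setProperty("zookeeper.ssl.quorum.hostnameVerification", "true");
        // patch 1, proposed default 'true' for quorum
        System.setProperty("zookeeper.ssl.quorum.clientHostnameVerification", "true");
        // patch 2, proposed default 'true'; disable on Kubernetes
        System.setProperty("zookeeper.ssl.quorum.enableReverseDnsLookup", "false");
    }
}
{code}

With that combination a peer would be verified against the server.N
hostnames (and, as a last resort, the IP address) instead of against
whatever reverse DNS happens to return.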
> > >
> > > On Wed, 2024-11-20 at 09:21 +0100, Sönke Liebau wrote:
> > > > Happy to split this into two patches, my main worry was that we
> > > > notice in the second patch that we are not compatible with the
> > > > first one from a config perspective any more.
> > > >
> > > > The proposed `clientHostnameVerification` parameters take a
> > > > boolean value I think. One could argue that this parameter
> > > > would also be appropriate for the new "hostname from list of
> > > > peers" logic, as that is also a hostname verification, so
> > > > instead of it being a bool it could become a list of allowed
> > > > values ["none", "peer", "full"], with 'peer' being the new and
> > > > improved thing, 'none' disabling all checks and 'full' being
> > > > the current behavior. Full is probably not the correct term for
> > > > this, maybe 'legacy' or 'dns' or something along these lines
> > > > could be appropriate.
> > > >
> > > > It might be worth deciding on this up front, to make sure we
> > > > don't have to backtrack at some point.
> > > >
> > > > Best regards,
> > > > Sönke
> > > >
> > > > On Wed, Nov 20, 2024 at 2:31 AM Andor Molnar <an...@apache.org>
> > > > wrote:
> > > >
> > > > > Sounds like a plan, but I would do the 2 things in separate
> > > > > patches: the current patch covers the toggle for client
> > > > > hostname verification and I think it's already ready for
> > > > > submit.
> > > > >
> > > > > Another patch should cover the alternate way of hostname
> > > > > verification based on what we discussed. Could you please
> > > > > update the Jira ticket(s) accordingly to make sure we're on
> > > > > the same page?
> > > > >
> > > > > Thanks,
> > > > > Andor
> > > > >
> > > > >
> > > > > > On Nov 19, 2024, at 15:19, Sönke Liebau
> > > > > > <soenke.lie...@stackable.tech.INVALID> wrote:
> > > > > >
> > > > > > It would be nice being able to find out which peer is
> > > > > > trying to connect, but as you say, we won't have the id
> > > > > > available before the handshake is done (unless we have
> > > > > > users stick it in the cert, and I don't think we want
> > > > > > that :) )
> > > > > > Anything we could do to find out would probably need to
> > > > > > involve DNS and some dirty compromise... so probably best
> > > > > > to just not go there.
> > > > > >
> > > > > > Your idea for backwards compatible config sounds sensible
> > > > > > to me. It would probably make sense to get this done as
> > > > > > part of https://github.com/apache/zookeeper/pull/2173 and
> > > > > > discuss possible config changes up front for everything?
> > > > > >
> > > > > > Best, Sönke
> > > > > >
> > > > > >
> > > > > > On Tue, 19 Nov 2024, 16:46 Andor Molnar, <an...@kazinczy.hu>
> > > > > > wrote:
> > > > > >
> > > > > > > Correction, sorry:
> > > > > > >
> > > > > > > > Not necessarily one SAN matches, but _the one_ matches
> > > > > > > > which corresponds to the peer.
> > > > > > >
> > > > > > > I meant: not that any of the server names listed in
> > > > > > > zoo.cfg matches, but that we might be able to identify
> > > > > > > which peer is trying to connect. This is probably not
> > > > > > > feasible, because at the time of the TLS handshake the
> > > > > > > peer id is not yet available.
> > > > > > >
> > > > > > > We should do this in a backward compatible way: client
> > > > > > > hostname verification, by default, works as is, but we
> > > > > > > introduce an option to disable reverse DNS lookups. That
> > > > > > > switches the behavior to check against the list of
> > > > > > > hostnames in zoo.cfg. In addition we enable client
> > > > > > > hostname verification to be completely disabled too.
> > > > > > >
> > > > > > > Wdyt?
> > > > > > >
> > > > > > > Andor
> > > > > > >
> > > > > > >
> > > > > > > > On Nov 19, 2024, at 09:38, Andor Molnar
> > > > > > > > <an...@kazinczy.hu> wrote:
> > > > > > > >
> > > > > > > > Hi Sönke,
> > > > > > > >
> > > > > > > > Are you still working on the patch btw?
> > > > > > > > https://github.com/apache/zookeeper/pull/2173
> > > > > > > >
> > > > > > > > (Though I'm not sure if it was you who opened it)
> > > > > > > >
> > > > > > > > Thanks for the pointer, ZooKeeper support for K8s is
> > > > > > > > unfortunately still far from perfect. Introducing the
> > > > > > > > FIPS mode option might not have been the best choice,
> > > > > > > > but we first faced the issue during a FIPS deployment,
> > > > > > > > where you cannot use a custom hostname verifier, so we
> > > > > > > > had to disable it completely.
> > > > > > > >
> > > > > > > > I like the idea of:
> > > > > > > >
> > > > > > > > "The ZK server could verify the SAN against the list of
> > > > > > > > servers (servers.N in the config).
> > > > > > > > A peer should be able to connect on the quorum port if
> > > > > > > > and only if at least one SAN matches at least one of
> > > > > > > > the listed servers."
> > > > > > > >
> > > > > > > > Not necessarily that one SAN matches, but that _the
> > > > > > > > one_ matches which corresponds to the peer. Does that
> > > > > > > > make sense? Not sure if it's technically feasible, but
> > > > > > > > are you willing to create a patch?
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Andor
> > > > > > > >
> > > > > > > >
> > > > > > > > > On Nov 19, 2024, at 03:11, Sönke Liebau
> > > > > > > > > <soenke.lie...@stackable.tech.INVALID> wrote:
> > > > > > > > >
> > > > > > > > > You are probably running into ZOOKEEPER-4790 [1]
> > > > > > > > > here.
> > > > > > > > >
> > > > > > > > > When we encountered this back in the day [2] we
> > > > > > > > > figured out that enabling FIPS mode bypasses all the
> > > > > > > > > ZK-specific TLS checks and makes it work. In the ZK
> > > > > > > > > version you are on it is not yet enabled by default;
> > > > > > > > > you could either update or set *zookeeper.fips-mode*
> > > > > > > > > and this error _should_ go away.
> > > > > > > > >
> > > > > > > > > Good luck :)
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Sönke
> > > > > > > > >
> > > > > > > > > [1] https://issues.apache.org/jira/browse/ZOOKEEPER-4790
> > > > > > > > > [2] https://github.com/stackabletech/zookeeper-operator/issues/760
> > > > > > > > >
> > > > > > > > > On Tue, Nov 19, 2024 at 9:08 AM Dharani (Jira)
> > > > > > > > > <j...@apache.org> wrote:
> > > > > > > > >
> > > > > > > > > > Dharani created ZOOKEEPER-4887:
> > > > > > > > > > ----------------------------------
> > > > > > > > > >
> > > > > > > > > >              Summary: Zookeeper quorum formation
> > > > > > > > > > fails when TLS is enabled in k8s env
> > > > > > > > > >                  Key: ZOOKEEPER-4887
> > > > > > > > > >                  URL: https://issues.apache.org/jira/browse/ZOOKEEPER-4887
> > > > > > > > > >              Project: ZooKeeper
> > > > > > > > > >           Issue Type: Bug
> > > > > > > > > >     Affects Versions: 3.8.3
> > > > > > > > > >             Reporter: Dharani
> > > > > > > > > >
> > > > > > > > > > We have a three (3) node zookeeper cluster running
> > > > > > > > > > as pods on a Kubernetes cluster. Zookeeper quorum
> > > > > > > > > > formation fails with a TLS handshake error, as the
> > > > > > > > > > server name in the request does not match any of
> > > > > > > > > > the SANs in the certificate configured for the
> > > > > > > > > > zookeeper server. The server name in the request is
> > > > > > > > > > of the form
> > > > > > > > > > "x-x-x-x.kubernetes.default.svc.cluster.local"
> > > > > > > > > > (where x-x-x-x is the IP address of the POD), and I
> > > > > > > > > > am unable to understand the reason behind
> > > > > > > > > > pre-pending the FQDN with an IP address.
> > > > > > > > > >
> > > > > > > > > > Please find below an extract of the error logs from
> > > > > > > > > > the zookeeper POD:
> > > > > > > > > >
> > > > > > > > > > {code:java}
> > > > > > > > > > [myid:] - ERROR [LearnerHandler-/192.168.220.10:46516:o.a.z.c.ZKTrustManager@191] - Failed to verify host address: 192.168.220.10
> > > > > > > > > > javax.net.ssl.SSLPeerUnverifiedException: Certificate for <192.168.220.10> doesn't match any of the subject alternative names: [eric-data-coordinator-zk, eric-data-coordinator-zk.zdhagxx1, eric-data-coordinator-zk.zdhagxx1.svc, eric-data-coordinator-zk.zdhagxx1.svc.cluster.local, *.eric-data-coordinator-zk-ensemble-service.zdhagxx1.svc.cluster.local, certified-scrape-target]
> > > > > > > > > > org.apache.zookeeper.common.ZKHostnameVerifier.matchIPAddress(ZKHostnameVerifier.java:197)
> > > > > > > > > > org.apache.zookeeper.common.ZKHostnameVerifier.verify(ZKHostnameVerifier.java:165)
> > > > > > > > > > org.apache.zookeeper.common.ZKTrustManager.performHostVerification(ZKTrustManager.java:180)
> > > > > > > > > > org.apache.zookeeper.common.ZKTrustManager.checkClientTrusted(ZKTrustManager.java:93)
> > > > > > > > > > java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkClientCerts(CertificateMessage.java:1285)
> > > > > > > > > > java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1204)
> > > > > > > > > > java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1181)
> > > > > > > > > > java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
> > > > > > > > > > java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
> > > > > > > > > > java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:421)
> > > > > > > > > > java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:183)
> > > > > > > > > > java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1511)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1421)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:456)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:926)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.getSession(SSLSocketImpl.java:372)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedSocket.detectMode(UnifiedServerSocket.java:269)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedSocket.getSocket(UnifiedServerSocket.java:298)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedSocket.access$400(UnifiedServerSocket.java:172)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedInputStream.getRealInputStream(UnifiedServerSocket.java:699)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedInputStream.read(UnifiedServerSocket.java:693)
> > > > > > > > > > java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
> > > > > > > > > > java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:271)
> > > > > > > > > > java.base/java.io.DataInputStream.readInt(DataInputStream.java:392)
> > > > > > > > > > org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:96)
> > > > > > > > > > org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:86)
> > > > > > > > > > org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:134)
> > > > > > > > > > org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:472)
> > > > > > > > > > [myid:] - ERROR [LearnerHandler-/192.168.220.10:46516:o.a.z.c.ZKTrustManager@192] - Failed to verify hostname: 192-168-220-10.eric-data-coordinator-zk.zdhagxx1.svc.cluster.local
> > > > > > > > > > javax.net.ssl.SSLPeerUnverifiedException: Certificate for <192-168-220-10.eric-data-coordinator-zk.zdhagxx1.svc.cluster.local> doesn't match any of the subject alternative names: [eric-data-coordinator-zk, eric-data-coordinator-zk.zdhagxx1, eric-data-coordinator-zk.zdhagxx1.svc, eric-data-coordinator-zk.zdhagxx1.svc.cluster.local, *.eric-data-coordinator-zk-ensemble-service.zdhagxx1.svc.cluster.local, certified-scrape-target]
> > > > > > > > > > org.apache.zookeeper.common.ZKHostnameVerifier.matchDNSName(ZKHostnameVerifier.java:230)
> > > > > > > > > > org.apache.zookeeper.common.ZKHostnameVerifier.verify(ZKHostnameVerifier.java:171)
> > > > > > > > > > org.apache.zookeeper.common.ZKTrustManager.performHostVerification(ZKTrustManager.java:189)
> > > > > > > > > > org.apache.zookeeper.common.ZKTrustManager.checkClientTrusted(ZKTrustManager.java:93)
> > > > > > > > > > java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkClientCerts(CertificateMessage.java:1285)
> > > > > > > > > > java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1204)
> > > > > > > > > > java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1181)
> > > > > > > > > > java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
> > > > > > > > > > java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
> > > > > > > > > > java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:421)
> > > > > > > > > > java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:183)
> > > > > > > > > > java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1511)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1421)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:456)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:926)
> > > > > > > > > > java.base/sun.security.ssl.SSLSocketImpl.getSession(SSLSocketImpl.java:372)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedSocket.detectMode(UnifiedServerSocket.java:269)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedSocket.getSocket(UnifiedServerSocket.java:298)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedSocket.access$400(UnifiedServerSocket.java:172)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedInputStream.getRealInputStream(UnifiedServerSocket.java:699)
> > > > > > > > > > org.apache.zookeeper.server.quorum.UnifiedServerSocket$UnifiedInputStream.read(UnifiedServerSocket.java:693)
> > > > > > > > > > java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
> > > > > > > > > > java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:271)
> > > > > > > > > > java.base/java.io.DataInputStream.readInt(DataInputStream.java:392)
> > > > > > > > > > org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:96)
> > > > > > > > > > org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:86)
> > > > > > > > > > org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:134)
> > > > > > > > > > org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:472)
> > > > > > > > > > {code}
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > This message was sent by Atlassian Jira
> > > > > > > > > > (v8.20.10#820010)