Re: [jira] [Commented] (ZOOKEEPER-3556) Dynamic configuration file can not be updated automatically after some zookeeper servers of zk cluster are down

2019-09-25 Thread Michael Han
>> There was recently a post here from someone who has implemented this

Maybe this one?
http://zookeeper-user.578899.n2.nabble.com/About-ZooKeeper-Dynamic-Reconfiguration-td7584271.html
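
The monitor-and-reconfig loop Alex describes in the quoted message below can
be sketched roughly as follows. This is only a sketch under assumptions (a
majority-of-the-old-ensemble safety check, zkCli-style command output); it is
not an implementation from this thread:

```python
def plan_reconfig(statuses):
    """Turn per-server health booleans into a zkCli 'reconfig' command.

    `statuses` maps server id -> True (healthy) / False (suspected down),
    however health was determined (4lw probes, test reads/writes, ...).
    Returns None when nothing should be done: either no server is suspected,
    or removing the suspects would leave fewer than a majority of the
    current ensemble, in which case no command (reconfig included) can
    commit anyway.
    """
    down = sorted(sid for sid, healthy in statuses.items() if not healthy)
    if not down:
        return None
    survivors = len(statuses) - len(down)
    if survivors < len(statuses) // 2 + 1:  # must keep a quorum to commit
        return None
    return "reconfig -remove " + ",".join(str(sid) for sid in down)
```

A monitoring daemon would periodically rebuild `statuses` and, whenever
`plan_reconfig` returns a command, issue it through `zkCli.sh` or the client
`reconfig` API; the symmetric `reconfig -add` path for recovered servers is
analogous.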

On Wed, Sep 25, 2019 at 9:19 PM Alexander Shraer  wrote:

> There was recently a post here from someone who has implemented this, but
> I couldn't find it for some reason.
>
> Essentially I think that you'd need to monitor the "health" and
> connectivity of servers to the leader, and issue reconfig commands to
> remove them when you suspect that they're down or add them back when you
> think they're up.
> Notice that you always have to have at least a quorum of the ensemble, so
> if a quorum is lost, issuing a reconfig command (or any other command)
> won't work.
> You could use the information exposed in ZK's 4-letter commands to decide
> whether you think a server is up and connected to the quorum or down.
> Ideally we could also use the leader's view of who is connected,
> but it doesn't look like this is being exposed right now. You can also
> periodically issue test read/write operations on various servers to check
> whether they're really operational.
>
> https://github.com/apache/zookeeper/blob/1ca627b5a3105d80ed4d851c6e9f1a1e2ac7d64a/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md#sc_4lw
>
> As accurate failure detection is impossible in asynchronous systems, you'll
> need to decide how sensitive you are to potential failures vs. false
> suspicions.
>
> Hope this helps...
>
> Alex
>
> On Wed, Sep 25, 2019 at 6:00 PM Gao,Wei  wrote:
>
> > Hi Alexander Shraer,
> >  Could you please tell me how to implement automation on top?
> > Thank you very much!
> >
> > -Original Message-
> > From: Alexander Shraer (Jira) 
> > Sent: Thursday, September 26, 2019 1:27 AM
> > To: issues@zookeeper.apache.org
> > Subject: [jira] [Commented] (ZOOKEEPER-3556) Dynamic configuration file
> > can not be updated automatically after some zookeeper servers of zk
> cluster
> > are down
> >
> >
> > [
> > https://issues.apache.org/jira/browse/ZOOKEEPER-3556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937925#comment-16937925
> > ]
> >
> > Alexander Shraer commented on ZOOKEEPER-3556:
> > -
> >
> > The described behavior is not a bug – currently reconfiguration requires
> > explicit action by an operator. One could implement automation on top. We
> > should consider this as a feature, since it sounds like several adopters
> > have implemented such automation. Perhaps one of them could contribute
> this
> > upstream.
> >
> > > Dynamic configuration file can not be updated automatically after some
> > > zookeeper servers of zk cluster are down
> > > --
> > > -
> > >
> > > Key: ZOOKEEPER-3556
> > > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3556
> > > Project: ZooKeeper
> > >  Issue Type: Wish
> > >  Components: java client
> > >Affects Versions: 3.5.5
> > >Reporter: Steven Chan
> > >Priority: Major
> > >   Original Estimate: 12h
> > >  Remaining Estimate: 12h
> > >
> > > I encountered a problem which blocks my development of load balancing
> > > using ZooKeeper 3.5.5.
> > > I have a ZooKeeper cluster which comprises five zk servers, and the
> > > dynamic configuration file is as follows:
> > > server.1=zk1:2888:3888:participant;0.0.0.0:2181
> > > server.2=zk2:2888:3888:participant;0.0.0.0:2181
> > > server.3=zk3:2888:3888:participant;0.0.0.0:2181
> > > server.4=zk4:2888:3888:participant;0.0.0.0:2181
> > > server.5=zk5:2888:3888:participant;0.0.0.0:2181
> > > The zk cluster works fine while every member is healthy. However, if,
> > > say, two of them suddenly go down without notice, the dynamic
> > > configuration file shown above will not be synchronized dynamically,
> > > which causes the zk cluster to stop working normally.
> > > In my view, if server 1 and server 5 go down suddenly, the dynamic
> > > configuration file should be modified as follows:
> > > server.2=zk2:2888:3888:participant;0.0.0.0:2181
> > > 

[jira] [Commented] (ZOOKEEPER-3558) Support authentication enforcement

2019-09-25 Thread Michael Han (Jira)


[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938245#comment-16938245
 ] 

Michael Han commented on ZOOKEEPER-3558:


FYI: ZOOKEEPER-1634 introduced an option that lets the user enforce 
authentication, so that unauthenticated clients are unable to connect to 
ZooKeeper; this solves problem 2. 

> Support authentication enforcement
> --
>
> Key: ZOOKEEPER-3558
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3558
> Project: ZooKeeper
>  Issue Type: New Feature
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
> Fix For: 3.5.7
>
> Attachments: ZOOKEEPER-3558-01.patch
>
>
> Provide authentication enforcement in ZooKeeper that is backward compatible 
> and can work with any authentication scheme, even custom ones.
> *Problems:*
> 1. Currently the server starts with the default authentication providers 
> (DigestAuthenticationProvider, IPAuthenticationProvider). These default 
> authentication providers are not really secure.
> 2. The ZooKeeper server does not check whether authentication has been 
> completed before performing any user operation.
> *Solutions:*
> 1. We should not start any authentication provider by default. But since 
> this would be a backward-incompatible change, we can add a configuration 
> option that controls whether the default authentication providers are 
> started, and keep starting them by default.
> 2. Before any user operation, the server should check whether 
> authentication has happened; the client must be authenticated with at 
> least one authentication scheme.
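
For context, the enforcement described here amounts to a couple of
server-side settings. The sketch below is illustrative only: the property
names are assumptions for the sake of the example and are not taken from the
attached patch.

```properties
# zoo.cfg sketch (property names are hypothetical illustrations)

# Solution 1: default providers become an explicit, configurable choice
# instead of always being registered.
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

# Solution 2: reject user operations from sessions that have not
# authenticated with at least one of the listed schemes.
enforce.auth.enabled=true
enforce.auth.schemes=sasl
```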



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ZOOKEEPER-3557) Towards a testable codebase

2019-09-25 Thread Michael Han (Jira)


[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938241#comment-16938241
 ] 

Michael Han commented on ZOOKEEPER-3557:


This sounds like a useful improvement.

On a side note, we have improved TestableZooKeeper internally so it can serve 
as a testable ZooKeeper cluster for the various client teams that integrate 
with ZooKeeper - the TestableZooKeeper accepts various commands to simulate 
miscellaneous server conditions (e.g. triggering a leader election, or 
rejecting requests). This utility currently depends on some of our internal 
codebase, and we are rewriting part of it so it can be contributed upstream. 
I think this can address part of this umbrella issue. 

> Towards a testable codebase
> ---
>
> Key: ZOOKEEPER-3557
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3557
> Project: ZooKeeper
>  Issue Type: Task
>  Components: tests
>Reporter: Zili Chen
>Priority: Major
>
> This is an umbrella issue tracking all efforts towards a testable ZooKeeper 
> codebase.
> *Motivation*
> On the one hand, many of our adopters, such as HBase and Curator, maintain 
> their own test kits for ZooKeeper [1][2]; on the other hand, ZooKeeper 
> itself doesn't have a well-designed testkit. Here are some of the issues in 
> our testing "framework":
> 1. {{ZooKeeperTestable}} is a production-scope class while it should be in 
> testing scope.
> 2. {{ZooKeeperTestable}} is only used in {{SessionTimeoutTest}}, while its 
> name suggests a complete testing utility.
> 3. {{ClientBase}} is the superclass of many ZooKeeper tests, yet it carries 
> so many orthogonal functions that its subclasses inherit burdens they do 
> not need.
> 4. Testing logic is injected casually, so we suffer from visibility chaos.
> ...
> Because ZooKeeper doesn't provide a testkit, our adopters have to write 
> ZK-related tests against quite internal concepts. For example, HBase waits 
> for the ZK server to launch by using 4-letter words, which causes issues 
> when upgrading from ZK 3.4.x to ZK 3.5.5, where 4-letter words are disabled 
> by default.
> [1] 
> https://github.com/apache/hbase/blob/master/hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java
> [2] 
> https://github.com/apache/curator/blob/master/curator-test/src/main/java/org/apache/curator/test/TestingCluster.java
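
The 4-letter-word probe mentioned above (the way HBase waits for a ZK
server) boils down to a tiny socket exchange. Below is a stdlib-only sketch,
not code from HBase or ZooKeeper; the function names, timeouts, and the
choice of `ruok` are illustrative assumptions:

```python
import socket
import time

def ruok(host, port, timeout=1.0):
    """Send the 'ruok' 4-letter word; True only on the exact 'imok' reply.

    Note: on ZK 3.5.x the default 4lw whitelist does not include 'ruok',
    so against an out-of-the-box 3.5.5 server this probe never succeeds --
    exactly the upgrade problem described above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"ruok")
            s.shutdown(socket.SHUT_WR)  # tell the server we're done sending
            return s.recv(64).decode("ascii", "replace").startswith("imok")
    except OSError:
        return False  # not listening / connection refused / timed out

def wait_for_server(host, port, deadline_s=30.0, interval_s=0.5):
    """Poll the server until it answers 'imok' or the deadline passes."""
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        if ruok(host, port):
            return True
        time.sleep(interval_s)
    return False
```

A testkit shipped by ZooKeeper itself could expose a readiness check like
this (ideally one not tied to the 4lw whitelist), so adopters would not need
to encode such internals themselves.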





[jira] [Created] (ZOOKEEPER-3556) Dynamic configuration file can not be updated automatically

2019-09-25 Thread Steven Chan (Jira)
Steven Chan created ZOOKEEPER-3556:
--

 Summary: Dynamic configuration file can not be updated 
automatically
 Key: ZOOKEEPER-3556
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3556
 Project: ZooKeeper
  Issue Type: Bug
  Components: java client
Affects Versions: 3.5.5
Reporter: Steven Chan


I encountered a problem which blocks my development of load balancing using 
ZooKeeper 3.5.5.

I have a ZooKeeper cluster which comprises five zk servers, and the dynamic 
configuration file is as follows:

server.1=zk1:2888:3888:participant;0.0.0.0:2181
server.2=zk2:2888:3888:participant;0.0.0.0:2181
server.3=zk3:2888:3888:participant;0.0.0.0:2181
server.4=zk4:2888:3888:participant;0.0.0.0:2181
server.5=zk5:2888:3888:participant;0.0.0.0:2181

The zk cluster works fine while every member is healthy. However, if, say, 
two of them suddenly go down without notice, the dynamic configuration file 
shown above will not be synchronized dynamically, which causes the zk cluster 
to stop working normally.

In my view, if server 1 and server 5 go down suddenly, the dynamic 
configuration file should be modified as follows:

server.2=zk2:2888:3888:participant;0.0.0.0:2181
server.3=zk3:2888:3888:participant;0.0.0.0:2181
server.4=zk4:2888:3888:participant;0.0.0.0:2181

But in this case, the dynamic configuration file will never change 
automatically unless you revise it manually.

I think this is a very common case which may happen at any time. If so, how 
can we handle it?
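
For reference, the manual remedy in this situation is a reconfig call issued
against a surviving server; a rough zkCli session for the example above might
look like the following (paths and the exact prompt are illustrative):

```
# Connect to any surviving participant, e.g. zk2 from the list above
bin/zkCli.sh -server zk2:2181

# Remove the suspected-down servers 1 and 5 from the ensemble;
# on success, the dynamic configuration file is rewritten on all
# remaining members automatically.
reconfig -remove 1,5
```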





[jira] [Updated] (ZOOKEEPER-3556) Dynamic configuration file can not be updated automatically after some zookeeper servers of zk cluster are down

2019-09-25 Thread Steven Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Chan updated ZOOKEEPER-3556:
---
Summary: Dynamic configuration file can not be updated automatically after 
some zookeeper servers of zk cluster are down  (was: Dynamic configuration file 
can not be updated automatically)

> Dynamic configuration file can not be updated automatically after some 
> zookeeper servers of zk cluster are down
> ---
>
> Key: ZOOKEEPER-3556
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3556
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: java client
>Affects Versions: 3.5.5
>Reporter: Steven Chan
>Priority: Blocker
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> I encountered a problem which blocks my development of load balancing using 
> ZooKeeper 3.5.5.
> I have a ZooKeeper cluster which comprises five zk servers, and the dynamic 
> configuration file is as follows:
> server.1=zk1:2888:3888:participant;0.0.0.0:2181
> server.2=zk2:2888:3888:participant;0.0.0.0:2181
> server.3=zk3:2888:3888:participant;0.0.0.0:2181
> server.4=zk4:2888:3888:participant;0.0.0.0:2181
> server.5=zk5:2888:3888:participant;0.0.0.0:2181
> The zk cluster works fine while every member is healthy. However, if, say, 
> two of them suddenly go down without notice, the dynamic configuration file 
> shown above will not be synchronized dynamically, which causes the zk 
> cluster to stop working normally.
> In my view, if server 1 and server 5 go down suddenly, the dynamic 
> configuration file should be modified as follows:
> server.2=zk2:2888:3888:participant;0.0.0.0:2181
> server.3=zk3:2888:3888:participant;0.0.0.0:2181
> server.4=zk4:2888:3888:participant;0.0.0.0:2181
> But in this case, the dynamic configuration file will never change 
> automatically unless you revise it manually.
> I think this is a very common case which may happen at any time. If so, how 
> can we handle it?





[jira] [Updated] (ZOOKEEPER-3509) Revisit log format

2019-09-25 Thread tison (Jira)


 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tison updated ZOOKEEPER-3509:
-
Fix Version/s: 3.6.0

> Revisit log format
> --
>
> Key: ZOOKEEPER-3509
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3509
> Project: ZooKeeper
>  Issue Type: Improvement
>  Components: server
>Reporter: tison
>Assignee: tison
>Priority: Major
> Fix For: 3.6.0
>
>
> Currently ZooKeeper mixes up different log formats, and a number of log 
> statements are even buggy. This is an opportunity to revisit the log format 
> in ZooKeeper and do a pass to fix all log-format-related issues.





[jira] [Commented] (ZOOKEEPER-3509) Revisit log format

2019-09-25 Thread tison (Jira)


[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937871#comment-16937871
 ] 

tison commented on ZOOKEEPER-3509:
--

I'd like to work on this issue this Saturday.

> Revisit log format
> --
>
> Key: ZOOKEEPER-3509
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3509
> Project: ZooKeeper
>  Issue Type: Improvement
>  Components: server
>Reporter: tison
>Assignee: tison
>Priority: Major
>
> Currently ZooKeeper mixes up different log formats, and a number of log 
> statements are even buggy. This is an opportunity to revisit the log format 
> in ZooKeeper and do a pass to fix all log-format-related issues.





[jira] [Commented] (ZOOKEEPER-3558) Support authentication enforcement

2019-09-25 Thread Mohammad Arshad (Jira)


[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937881#comment-16937881
 ] 

Mohammad Arshad commented on ZOOKEEPER-3558:


Attached ZOOKEEPER-3558-01.patch for reference; will create a PR later.

> Support authentication enforcement
> --
>
> Key: ZOOKEEPER-3558
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3558
> Project: ZooKeeper
>  Issue Type: New Feature
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
> Fix For: 3.5.7
>
> Attachments: ZOOKEEPER-3558-01.patch
>
>
> Provide authentication enforcement in ZooKeeper that is backward compatible 
> and can work with any authentication scheme, even custom ones.
> *Problems:*
> 1. Currently the server starts with the default authentication providers 
> (DigestAuthenticationProvider, IPAuthenticationProvider). These default 
> authentication providers are not really secure.
> 2. The ZooKeeper server does not check whether authentication has been 
> completed before performing any user operation.
> *Solutions:*
> 1. We should not start any authentication provider by default. But since 
> this would be a backward-incompatible change, we can add a configuration 
> option that controls whether the default authentication providers are 
> started, and keep starting them by default.
> 2. Before any user operation, the server should check whether 
> authentication has happened; the client must be authenticated with at 
> least one authentication scheme.





[jira] [Updated] (ZOOKEEPER-3556) Dynamic configuration file can not be updated automatically after some zookeeper servers of zk cluster are down

2019-09-25 Thread Alexander Shraer (Jira)


 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Shraer updated ZOOKEEPER-3556:

Issue Type: Wish  (was: Bug)

> Dynamic configuration file can not be updated automatically after some 
> zookeeper servers of zk cluster are down
> ---
>
> Key: ZOOKEEPER-3556
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3556
> Project: ZooKeeper
>  Issue Type: Wish
>  Components: java client
>Affects Versions: 3.5.5
>Reporter: Steven Chan
>Priority: Blocker
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> I encountered a problem which blocks my development of load balancing using 
> ZooKeeper 3.5.5.
> I have a ZooKeeper cluster which comprises five zk servers, and the dynamic 
> configuration file is as follows:
> server.1=zk1:2888:3888:participant;0.0.0.0:2181
> server.2=zk2:2888:3888:participant;0.0.0.0:2181
> server.3=zk3:2888:3888:participant;0.0.0.0:2181
> server.4=zk4:2888:3888:participant;0.0.0.0:2181
> server.5=zk5:2888:3888:participant;0.0.0.0:2181
> The zk cluster works fine while every member is healthy. However, if, say, 
> two of them suddenly go down without notice, the dynamic configuration file 
> shown above will not be synchronized dynamically, which causes the zk 
> cluster to stop working normally.
> In my view, if server 1 and server 5 go down suddenly, the dynamic 
> configuration file should be modified as follows:
> server.2=zk2:2888:3888:participant;0.0.0.0:2181
> server.3=zk3:2888:3888:participant;0.0.0.0:2181
> server.4=zk4:2888:3888:participant;0.0.0.0:2181
> But in this case, the dynamic configuration file will never change 
> automatically unless you revise it manually.
> I think this is a very common case which may happen at any time. If so, how 
> can we handle it?





[jira] [Commented] (ZOOKEEPER-2230) Connections to ZooKeeper server become slow over time with native GSSAPI

2019-09-25 Thread Rajkiran Sura (Jira)


[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937930#comment-16937930
 ] 

Rajkiran Sura commented on ZOOKEEPER-2230:
--

Hi [~fittey], apologies that Deepesh couldn't test out your patch. I am a 
colleague of Deepesh and am now working on upgrading ZooKeeper to the 3.5.5 
branch. As you said, this bug exists in v3.5.5 too. Could you please provide 
your patch against the v3.5.5 branch? I would be happy to test it out in our 
environment. Thanks!

> Connections to ZooKeeper server become slow over time with native GSSAPI
> -------------------------------------------------------------------------
>
> Key: ZOOKEEPER-2230
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2230
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.4.6, 3.4.7, 3.4.8, 3.5.0
> Environment: OS: RHEL6
> Java: 1.8.0_40
> Configuration:
> java.env:
> {noformat}
> SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Xmx5120m"
> SERVER_JVMFLAGS="$SERVER_JVMFLAGS 
> -Djava.security.auth.login.config=/local/apps/zookeeper-test1/conf/jaas-server.conf"
> SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dsun.security.jgss.native=true"
> {noformat}
> jaas-server.conf:
> {noformat}
> Server {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> isInitiator=false
> principal="zookeeper/@";
> };
> {noformat}
> Process environment:
> {noformat}
> KRB5_KTNAME=/local/apps/zookeeper-test1/conf/keytab
> ZOO_LOG_DIR=/local/apps/zookeeper-test1/log
> ZOOCFGDIR=/local/apps/zookeeper-test1/conf
> {noformat}
>Reporter: Deepesh Reja
>Assignee: Enis Soztutar
>Priority: Major
>  Labels: patch, pull-request-available
> Fix For: 3.4.6, 3.4.7, 3.4.8, 3.5.2
>
> Attachments: ZOOKEEPER-2230.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ZooKeeper server becomes slow over time when native GSSAPI is used. The 
> connection to the server starts taking up to 10 seconds.
> This is happening with ZooKeeper 3.4.6 and is fairly reproducible.
> Debug logs:
> {noformat}
> 2015-07-02 00:58:49,318 [myid:] - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:NIOServerCnxnFactory@197] - 
> Accepted socket connection from /:47942
> 2015-07-02 00:58:49,318 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@78] - 
> serviceHostname is ''
> 2015-07-02 00:58:49,318 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@79] - 
> servicePrincipalName is 'zookeeper'
> 2015-07-02 00:58:49,318 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@80] - SASL 
> mechanism(mech) is 'GSSAPI'
> 2015-07-02 00:58:49,324 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@106] - Added 
> private credential to subject: [GSSCredential: 
> zookeeper@ 1.2.840.113554.1.2.2 Accept [class 
> sun.security.jgss.wrapper.GSSCredElement]]
> 2015-07-02 00:58:59,441 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@810] - Session 
> establishment request from client /:47942 client's lastZxid is 0x0
> 2015-07-02 00:58:59,441 [myid:] - INFO  
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@868] - Client 
> attempting to establish new session at /:47942
> 2015-07-02 00:58:59,448 [myid:] - DEBUG 
> [SyncThread:0:FinalRequestProcessor@88] - Processing request:: 
> sessionid:0x14e486028785c81 type:createSession cxid:0x0 zxid:0x110e79 
> txntype:-10 reqpath:n/a
> 2015-07-02 00:58:59,448 [myid:] - DEBUG 
> [SyncThread:0:FinalRequestProcessor@160] - sessionid:0x14e486028785c81 
> type:createSession cxid:0x0 zxid:0x110e79 txntype:-10 reqpath:n/a
> 2015-07-02 00:58:59,448 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - 
> Established session 0x14e486028785c81 with negotiated timeout 1 for 
> client /:47942
> 2015-07-02 00:58:59,452 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@949] - Responding 
> to client SASL token.
> 2015-07-02 00:58:59,452 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@953] - Size of 
> client SASL token: 706
> 2015-07-02 00:58:59,460 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@984] - Size of 
> server SASL response: 161
> 2015-07-02 00:58:59,462 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@949] - Responding 
> to client SASL token.
> 2015-07-02 00:58:59,462 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@953] - Size of 
> client SASL token: 0
> 2015-07-02 00:58:59,462 [myid:] - DEBUG 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@984] - Size of 
> server SASL response: 32
> 2015-07-02 00:58:59,463 [myid:] - DEBUG 
> 

[jira] [Updated] (ZOOKEEPER-3558) Support authentication enforcement

2019-09-25 Thread Mohammad Arshad (Jira)


 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Arshad updated ZOOKEEPER-3558:
---
Attachment: ZOOKEEPER-3558-01.patch

> Support authentication enforcement
> --
>
> Key: ZOOKEEPER-3558
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3558
> Project: ZooKeeper
>  Issue Type: New Feature
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
> Fix For: 3.5.7
>
> Attachments: ZOOKEEPER-3558-01.patch
>
>
> Provide authentication enforcement in ZooKeeper that is backward compatible 
> and can work with any authentication scheme, even custom ones.
> *Problems:*
> 1. Currently the server starts with the default authentication providers 
> (DigestAuthenticationProvider, IPAuthenticationProvider). These default 
> authentication providers are not really secure.
> 2. The ZooKeeper server does not check whether authentication has been 
> completed before performing any user operation.
> *Solutions:*
> 1. We should not start any authentication provider by default. But since 
> this would be a backward-incompatible change, we can add a configuration 
> option that controls whether the default authentication providers are 
> started, and keep starting them by default.
> 2. Before any user operation, the server should check whether 
> authentication has happened; the client must be authenticated with at 
> least one authentication scheme.





[jira] [Created] (ZOOKEEPER-3557) Towards a testable codebase

2019-09-25 Thread tison (Jira)
tison created ZOOKEEPER-3557:


 Summary: Towards a testable codebase
 Key: ZOOKEEPER-3557
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3557
 Project: ZooKeeper
  Issue Type: Task
  Components: tests
Reporter: tison


This is an umbrella issue tracking all efforts towards a testable ZooKeeper 
codebase.

*Motivation*

On the one hand, many of our adopters, such as HBase and Curator, maintain 
their own test kits for ZooKeeper [1][2]; on the other hand, ZooKeeper itself 
doesn't have a well-designed testkit. Here are some of the issues in our 
testing "framework":

1. {{ZooKeeperTestable}} is a production-scope class while it should be in 
testing scope.
2. {{ZooKeeperTestable}} is only used in {{SessionTimeoutTest}}, while its 
name suggests a complete testing utility.
3. {{ClientBase}} is the superclass of many ZooKeeper tests, yet it carries 
so many orthogonal functions that its subclasses inherit burdens they do not 
need.
4. Testing logic is injected casually, so we suffer from visibility chaos.
...

Because ZooKeeper doesn't provide a testkit, our adopters have to write 
ZK-related tests against quite internal concepts. For example, HBase waits 
for the ZK server to launch by using 4-letter words, which causes issues when 
upgrading from ZK 3.4.x to ZK 3.5.5, where 4-letter words are disabled by 
default.

[1] 
https://github.com/apache/hbase/blob/master/hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java
[2] 
https://github.com/apache/curator/blob/master/curator-test/src/main/java/org/apache/curator/test/TestingCluster.java





[jira] [Commented] (ZOOKEEPER-3556) Dynamic configuration file can not be updated automatically after some zookeeper servers of zk cluster are down

2019-09-25 Thread Alexander Shraer (Jira)


[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937925#comment-16937925
 ] 

Alexander Shraer commented on ZOOKEEPER-3556:
-

The described behavior is not a bug – currently reconfiguration requires 
explicit action by an operator. One could implement automation on top. We 
should consider this as a feature, since it sounds like several adopters have 
implemented such automation. Perhaps one of them could contribute this upstream.

> Dynamic configuration file can not be updated automatically after some 
> zookeeper servers of zk cluster are down
> ---
>
> Key: ZOOKEEPER-3556
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3556
> Project: ZooKeeper
>  Issue Type: Wish
>  Components: java client
>Affects Versions: 3.5.5
>Reporter: Steven Chan
>Priority: Major
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> I encountered a problem which blocks my development of load balancing using 
> ZooKeeper 3.5.5.
> I have a ZooKeeper cluster which comprises five zk servers, and the dynamic 
> configuration file is as follows:
> server.1=zk1:2888:3888:participant;0.0.0.0:2181
> server.2=zk2:2888:3888:participant;0.0.0.0:2181
> server.3=zk3:2888:3888:participant;0.0.0.0:2181
> server.4=zk4:2888:3888:participant;0.0.0.0:2181
> server.5=zk5:2888:3888:participant;0.0.0.0:2181
> The zk cluster works fine while every member is healthy. However, if, say, 
> two of them suddenly go down without notice, the dynamic configuration file 
> shown above will not be synchronized dynamically, which causes the zk 
> cluster to stop working normally.
> In my view, if server 1 and server 5 go down suddenly, the dynamic 
> configuration file should be modified as follows:
> server.2=zk2:2888:3888:participant;0.0.0.0:2181
> server.3=zk3:2888:3888:participant;0.0.0.0:2181
> server.4=zk4:2888:3888:participant;0.0.0.0:2181
> But in this case, the dynamic configuration file will never change 
> automatically unless you revise it manually.
> I think this is a very common case which may happen at any time. If so, how 
> can we handle it?





[jira] [Updated] (ZOOKEEPER-3559) Update Jackson to 2.9.10

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated ZOOKEEPER-3559:
--
Labels: pull-request-available  (was: )

> Update Jackson to 2.9.10
> 
>
> Key: ZOOKEEPER-3559
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3559
> Project: ZooKeeper
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
>Priority: Major
>  Labels: pull-request-available
>
> Jackson should be updated to the latest version to pick up the fix for 
> CVE-2019-14540.


