[jira] [Commented] (ZOOKEEPER-2556) peerType remains as "observer" in zoo.cfg even though we change the node from observer to participant runtime

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515520#comment-15515520
 ] 

Rakesh Kumar Singh commented on ZOOKEEPER-2556:
---

[~shralex] Thanks for the feedback. 

> peerType remains as "observer" in zoo.cfg even though we change the node from 
> observer to participant runtime
> -
>
> Key: ZOOKEEPER-2556
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2556
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> peerType remains as "observer" in zoo.cfg even though we change the node from 
> observer to participant runtime
> Steps to reproduce:
> 1. Start ZooKeeper in cluster mode with one node as an observer by configuring 
> peerType=observer in zoo.cfg and server.2=10.18.219.50:2888:3888:observer;2181
> 2. Start the cluster
> 3. Start a client and change the node from observer to participant; the 
> peerType entry remains the same even though other entries such as clientPort 
> are removed from zoo.cfg
> >reconfig -remove 2 -add 2=10.18.219.50:2888:3888:participant;2181
> We should either remove this parameter or update it with the correct node type 
> at run time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2556) peerType remains as "observer" in zoo.cfg even though we change the node from observer to participant runtime

2016-09-22 Thread Alexander Shraer (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515492#comment-15515492
 ] 

Alexander Shraer commented on ZOOKEEPER-2556:
-

Thanks for reporting this! Removing this from static config seems like a 
one-line change in editStaticConfig (QuorumPeerConfig.java). There's a bunch of 
parameters that are already being removed, so you can just add it there.
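
To illustrate the suggestion, here is a minimal sketch in plain Java of filtering peerType out along with the other keys that get dropped when the static config is rewritten. The class and the exact key list are assumptions for illustration; this is not the actual QuorumPeerConfig.editStaticConfig code.
{noformat}
// Illustrative sketch only (not the actual QuorumPeerConfig.editStaticConfig code):
// when rewriting zoo.cfg after a reconfig, skip keys that are now maintained
// dynamically -- adding "peerType" to such a filter is the one-line change above.
import java.io.*;
import java.util.*;

public class StaticConfigFilter {
    // Keys assumed to be dropped from the static config after reconfig;
    // "peerType" is the proposed addition.
    private static final Set<String> DYNAMIC_KEYS =
            new HashSet<>(Arrays.asList("clientPort", "clientPortAddress", "peerType"));

    public static void rewrite(File in, File out) throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader(in));
             PrintWriter w = new PrintWriter(new FileWriter(out))) {
            String line;
            while ((line = r.readLine()) != null) {
                String key = line.split("=", 2)[0].trim();
                // Drop dynamic keys and server.* entries; keep everything else.
                if (DYNAMIC_KEYS.contains(key) || key.startsWith("server.")) {
                    continue;
                }
                w.println(line);
            }
        }
    }
}
{noformat}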

> peerType remains as "observer" in zoo.cfg even though we change the node from 
> observer to participant runtime
> -
>
> Key: ZOOKEEPER-2556
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2556
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> peerType remains as "observer" in zoo.cfg even though we change the node from 
> observer to participant runtime
> Steps to reproduce:
> 1. Start ZooKeeper in cluster mode with one node as an observer by configuring 
> peerType=observer in zoo.cfg and server.2=10.18.219.50:2888:3888:observer;2181
> 2. Start the cluster
> 3. Start a client and change the node from observer to participant; the 
> peerType entry remains the same even though other entries such as clientPort 
> are removed from zoo.cfg
> >reconfig -remove 2 -add 2=10.18.219.50:2888:3888:participant;2181
> We should either remove this parameter or update it with the correct node type 
> at run time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2533) Close the zkCli using "close" command and then connect using "connect" then provide some invalid input, it closing the channel and connecting again

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2533:
-

Assignee: Rakesh Kumar Singh

> Close the zkCli using "close" command and then connect using "connect" then 
> provide some invalid input, it closing the channel and connecting again
> ---
>
> Key: ZOOKEEPER-2533
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2533
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: java client
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> After closing zkCli with the "close" command and reconnecting with "connect", 
> providing some invalid input causes the client to close the channel and connect again.
> Steps to reproduce:
> 1. Connect to the ZooKeeper server using zkCli
> 2. Close the connection using "close"
> 3. Connect again using "connect host"
> 4. Once connected, type a space " " and hit enter
> The client closes the channel and establishes it again.
> The console log is as below:
> [zk: localhost:2181(CONNECTED) 5] close
> 2016-08-25 16:59:04,854 [myid:] - INFO  [main:ClientCnxnSocketNetty@201] - 
> channel is told closing
> 2016-08-25 16:59:04,855 [myid:] - INFO  [main:ZooKeeper@1110] - Session: 
> 0x101a00305cc0008 closed
> [zk: localhost:2181(CLOSED) 6] 2016-08-25 16:59:04,855 [myid:] - INFO  
> [main-EventThread:ClientCnxn$EventThread@542] - EventThread shut down for 
> session: 0x101a00305cc0008
> 2016-08-25 16:59:04,856 [myid:] - INFO  [New I/O worker 
> #1:ClientCnxnSocketNetty$ZKClientHandler@377] - channel is disconnected: [id: 
> 0xd9735868, /0:0:0:0:0:0:0:1:44595 :> localhost/0:0:0:0:0:0:0:1:2181]
> 2016-08-25 16:59:04,856 [myid:] - INFO  [New I/O worker 
> #1:ClientCnxnSocketNetty@201] - channel is told closing
> connect 10.18.101.80
> 2016-08-25 16:59:14,410 [myid:] - INFO  [main:ZooKeeper@716] - Initiating 
> client connection, connectString=10.18.101.80 sessionTimeout=3 
> watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@19c50523
> [zk: 10.18.101.80(CONNECTING) 7] 2016-08-25 16:59:14,417 [myid:] - INFO  
> [main-SendThread(10.18.101.80:2181):ClientCnxn$SendThread@1138] - Opening 
> socket connection to server 10.18.101.80/10.18.101.80:2181. Will not attempt 
> to authenticate using SASL (unknown error)
> 2016-08-25 16:59:14,426 [myid:] - INFO  
> [main-SendThread(10.18.101.80:2181):ClientCnxnSocketNetty$ZKClientPipelineFactory@363]
>  - SSL handler added for channel: null
> 2016-08-25 16:59:14,428 [myid:] - INFO  [New I/O worker 
> #10:ClientCnxn$SendThread@980] - Socket connection established, initiating 
> session, client: /10.18.101.80:58871, server: 10.18.101.80/10.18.101.80:2181
> 2016-08-25 16:59:14,428 [myid:] - INFO  [New I/O worker 
> #10:ClientCnxnSocketNetty$1@146] - channel is connected: [id: 0xa8f6b724, 
> /10.18.101.80:58871 => 10.18.101.80/10.18.101.80:2181]
> 2016-08-25 16:59:14,473 [myid:] - INFO  [New I/O worker 
> #10:ClientCnxn$SendThread@1400] - Session establishment complete on server 
> 10.18.101.80/10.18.101.80:2181, sessionid = 0x101a00305cc0009, negotiated 
> timeout = 3
> WATCHER::
> WatchedEvent state:SyncConnected type:None path:null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2545) Keep maintaining the old zoo.cfg.dynamic* files which will keep eating system memory which is getting generated as part of reconfig execution

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2545:
-

Assignee: Rakesh Kumar Singh

> Keep maintaining the old zoo.cfg.dynamic* files which will keep eating system 
> memory which is getting generated as part of reconfig execution
> -
>
> Key: ZOOKEEPER-2545
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2545
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>
> The old zoo.cfg.dynamic* files, which are created every time "reconfig" is 
> executed, are kept around indefinitely.
> Steps to reproduce:
> 1. Set up ZooKeeper in cluster mode and start it
> 2. Run a reconfig command such as 
> >reconfig -remove 3 -add 1=10.18.101.80:2888:3888;2181
> 3. It will create a new zoo.cfg.dynamic file in the conf folder
> The problem is that the old zoo.cfg.dynamic* files are not deleted, so they 
> keep consuming disk space
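
As a rough illustration of the cleanup the report asks for, here is a hypothetical Java sketch that removes stale zoo.cfg.dynamic.* files from the conf directory while keeping the one the static config currently points to. All names are assumptions; this is not part of ZooKeeper.
{noformat}
// Hypothetical cleanup sketch (not part of ZooKeeper): delete stale
// zoo.cfg.dynamic.<version> files from the conf directory, keeping the file
// that the static config currently references via dynamicConfigFile.
import java.io.File;

public class DynamicConfigCleanup {
    public static void cleanup(File confDir, String currentDynamicFileName) {
        File[] files = confDir.listFiles(
                (dir, name) -> name.startsWith("zoo.cfg.dynamic."));
        if (files == null) {
            return;
        }
        for (File f : files) {
            if (!f.getName().equals(currentDynamicFileName)) {
                // Best effort: ignore failures, a later run can retry.
                f.delete();
            }
        }
    }
}
{noformat}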



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2540) When start zookeeper server by configuring the server details in dynamic configuration with passing the client port, wrong log info is logged

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2540:
-

Assignee: Rakesh Kumar Singh

> When start zookeeper server by configuring the server details in dynamic 
> configuration with passing the client port, wrong log info is logged
> -
>
> Key: ZOOKEEPER-2540
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2540
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> When the ZooKeeper server is started with the server details (including the 
> client port) specified in the dynamic configuration, wrong log info is logged.
> Configure the server details as below, which include the client port, and 
> remove the clientPort entry from zoo.cfg (as it would be a duplicate):
> server.1=10.18.101.80:2888:3888:participant;2181
> server.2=10.18.219.50:2888:3888:participant;2181
> server.3=10.18.221.194:2888:3888:participant;2181
> Start the cluster; we can see the following message: 
> 2016-08-30 17:00:33,984 [myid:] - INFO  [main:QuorumPeerConfig@306] - 
> clientPort is not set
> which is not correct



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2556) peerType remains as "observer" in zoo.cfg even though we change the node from observer to participant runtime

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2556:
-

Assignee: Rakesh Kumar Singh

> peerType remains as "observer" in zoo.cfg even though we change the node from 
> observer to participant runtime
> -
>
> Key: ZOOKEEPER-2556
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2556
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> peerType remains as "observer" in zoo.cfg even though we change the node from 
> observer to participant runtime
> Steps to reproduce:
> 1. Start ZooKeeper in cluster mode with one node as an observer by configuring 
> peerType=observer in zoo.cfg and server.2=10.18.219.50:2888:3888:observer;2181
> 2. Start the cluster
> 3. Start a client and change the node from observer to participant; the 
> peerType entry remains the same even though other entries such as clientPort 
> are removed from zoo.cfg
> >reconfig -remove 2 -add 2=10.18.219.50:2888:3888:participant;2181
> We should either remove this parameter or update it with the correct node type 
> at run time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2564) No message is prompted when trying to delete quota with different quota option

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2564:
-

Assignee: Rakesh Kumar Singh

> No message is prompted when trying to delete quota with different quota option
> --
>
> Key: ZOOKEEPER-2564
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2564
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> No message is shown when trying to delete a quota using a different quota 
> option than the one that was set.
> Steps to reproduce:
> 1. Start ZooKeeper in cluster mode 
> 2. Create a node and set a quota, e.g.
> setquota -n 10 /test
> 3. Now try to delete it as below:
> delquota -b /test
> No message or exception is shown here. We should print a message like 
> "Byte Quota does not exist for /test"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2563) delquota -[n|b] is not deleting the set quota properly

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2563:
-

Assignee: Rakesh Kumar Singh

> delquota -[n|b] is not deleting the set quota properly
> --
>
> Key: ZOOKEEPER-2563
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2563
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>
> delquota -[n|b] is not deleting the set quota properly
> Steps to reproduce:
> 1. Start ZooKeeper in cluster mode (ssl)
> 2. Create some node, say /test
> 3. Run listquota; it says (as expected)
> quota for /test does not exist
> 4. Set a quota, say
> setquota -n 10 /test
> 5. Now try to delete it as below
> delquota -n /test
> 6. Now check the quota
> [zk: localhost:2181(CONNECTED) 1] listquota /test
> absolute path is /zookeeper/quota/test/zookeeper_limits
> Output quota for /test count=-1,bytes=-1
> Output stat for /test count=1,bytes=5
> 7. The quota node for /test has not been deleted
> 8. Now try to set some new quota
> It fails because the old quota was not deleted correctly:
> [zk: localhost:2181(CONNECTED) 3] setquota -n 11 /test
> Command failed: java.lang.IllegalArgumentException: /test has a parent 
> /zookeeper/quota/test which has a quota
> But through delquota it is able to delete



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2559) Failed to delete the set quota for ephemeral node when the node is deleted because of client session closed

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2559:
-

Assignee: Rakesh Kumar Singh

> Failed to delete the set quota for ephemeral node when the node is deleted 
> because of client session closed
> ---
>
> Key: ZOOKEEPER-2559
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2559
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1, 3.5.2
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>
> The quota set on an ephemeral node is not deleted when the node itself is 
> removed because the client session was closed.
> [zk: localhost:2181(CONNECTED) 0] create -e /e_test hello
> Created /e_test
> [zk: localhost:2181(CONNECTED) 1] setquota -n 10 /e_test
> [zk: localhost:2181(CONNECTED) 2] listquota /e_test
> absolute path is /zookeeper/quota/e_test/zookeeper_limits
> Output quota for /e_test count=10,bytes=-1
> Output stat for /e_test count=1,bytes=5
> Now close the client connection so that the ephemeral node gets deleted. But 
> the corresponding quota is not deleted, as shown below:
> [zk: localhost:2181(CONNECTED) 0] ls /
> [test, test1, test3, zookeeper]
> [zk: localhost:2181(CONNECTED) 1] listquota /e_test
> absolute path is /zookeeper/quota/e_test/zookeeper_limits
> Output quota for /e_test count=10,bytes=-1
> Output stat for /e_test count=0,bytes=0
> [zk: localhost:2181(CONNECTED) 2] 
> Now create the ephemeral node again with the same path and try to set the 
> quota; it fails.
> [zk: localhost:2181(CONNECTED) 2] create -e /e_test hello
> Created /e_test
> [zk: localhost:2181(CONNECTED) 3] setquota -n 10 /e_test
> Command failed: java.lang.IllegalArgumentException: /e_test has a parent 
> /zookeeper/quota/e_test which has a quota
> [zk: localhost:2181(CONNECTED) 4] 
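
Until this is fixed server-side, one possible manual workaround is to remove the leftover quota nodes directly with the standard ZooKeeper Java client. The sketch below assumes the child names shown by listquota (zookeeper_limits, and the matching zookeeper_stats node); it is not the proposed fix itself.
{noformat}
// Manual cleanup sketch (assumption: run against a live ensemble with the
// standard ZooKeeper Java client). Deletes the leftover quota nodes under
// /zookeeper/quota so that setquota can succeed again.
import org.apache.zookeeper.ZooKeeper;

public class QuotaCleanup {
    public static void removeStaleQuota(ZooKeeper zk, String path) throws Exception {
        String quotaRoot = "/zookeeper/quota" + path;   // e.g. /zookeeper/quota/e_test
        // The limit and stat children that listquota reports.
        for (String child : new String[] {"zookeeper_limits", "zookeeper_stats"}) {
            String node = quotaRoot + "/" + child;
            if (zk.exists(node, false) != null) {
                zk.delete(node, -1);                    // -1 matches any version
            }
        }
        if (zk.exists(quotaRoot, false) != null) {
            zk.delete(quotaRoot, -1);                   // finally remove the parent
        }
    }
}
{noformat}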



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2565) listquota should display the quota even it is set on parent/child node

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2565:
-

Assignee: Rakesh Kumar Singh

> listquota  should display the quota even it is set on parent/child node
> -
>
> Key: ZOOKEEPER-2565
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2565
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> listquota should display the quota even when it is set on a parent/child 
> node. As of now, if we have a parent-child hierarchy, for example n1->n2->n3, 
> and a quota is set for n2, then trying to get quota details on n1 or n3 using 
> listquota says no quota is set, but trying to set a quota on those nodes 
> fails saying a quota is already set on a parent node.
> So listquota should fetch the quota set on any node in the hierarchy, along 
> with the exact path on which it is set, even when the command is called on 
> another node in that hierarchy.
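
For the ancestor direction of that lookup, here is an illustrative Java sketch (not the actual ZooKeeperMain code) that walks from the given path up to the root and returns the first path that has a zookeeper_limits node under /zookeeper/quota.
{noformat}
// Illustrative sketch: find the nearest ancestor (or the node itself) that has
// a quota set, by probing /zookeeper/quota/<path>/zookeeper_limits upwards.
import org.apache.zookeeper.ZooKeeper;

public class QuotaLookup {
    public static String findQuotaPath(ZooKeeper zk, String path) throws Exception {
        String current = path;
        while (!current.equals("/")) {
            String limits = "/zookeeper/quota" + current + "/zookeeper_limits";
            if (zk.exists(limits, false) != null) {
                return current;              // exact path on which the quota is set
            }
            int idx = current.lastIndexOf('/');
            current = (idx == 0) ? "/" : current.substring(0, idx);
        }
        return null;                         // no quota on the ancestor chain
    }
}
{noformat}
The opposite direction (reporting a quota set on a descendant, e.g. n2's quota when called on n1) would additionally need a scan of the /zookeeper/quota subtree.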



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2567) Error message is not correct when wrong argument is passed for "reconfig" cmd

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2567:
-

Assignee: Rakesh Kumar Singh

> Error message is not correct when wrong argument is passed for "reconfig" cmd
> -
>
> Key: ZOOKEEPER-2567
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2567
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: java client
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> Error message is not correct when wrong argument is passed for "reconfig" cmd
> Steps to reproduce:
> 1. Start ZooKeeper in cluster mode
> 2. Use the reconfig cmd with a wrong argument (pass : instead of ;)
> [zk: localhost:2181(CONNECTED) 10] reconfig -remove 3 -add 
> 3=10.18.221.194:2888:3888:2181
> KeeperErrorCode = BadArguments for 
> Here the error message on the client console is incomplete and uninformative.
> The log is as below:
> 2016-09-08 18:54:08,701 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@512] - Incremental reconfig
> 2016-09-08 18:54:08,702 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@843] - Got user-level KeeperException when 
> processing sessionid:0x100299b7eac type:reconfig cxid:0x7 
> zxid:0x40004 txntype:-1 reqpath:n/a Error Path:Reconfiguration failed 
> Error:KeeperErrorCode = BadArguments for Reconfiguration failed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2580) ErrorMessage is not correct when set IP acl and try to set again from another machine

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2580:
-

Assignee: Rakesh Kumar Singh

> ErrorMessage is not correct when set IP acl and try to set again from another 
> machine
> -
>
> Key: ZOOKEEPER-2580
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2580
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: java client
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> Set an IP ACL and then try to set it again from another machine:
> [zk: localhost:2181(CONNECTED) 11] setAcl /ip_test ip:10.18.101.80:crdwa
> KeeperErrorCode = NoAuth for /ip_test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2569) plain password is stored when set individual ACL using digest scheme

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2569:
-

Assignee: Rakesh Kumar Singh

> plain password is stored when set individual ACL using digest scheme
> 
>
> Key: ZOOKEEPER-2569
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2569
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>
> The plain password is stored when setting an individual ACL using the digest 
> scheme, instead of storing the username and the encoded hash string of the password:
> [zk: localhost:2181(CONNECTED) 13] addauth digest user:pass
> [zk: localhost:2181(CONNECTED) 14] setAcl /newNode digest:user:pass:crdwa
> [zk: localhost:2181(CONNECTED) 15] getAcl /newNode
> 'digest,'user:pass
> : cdrwa
> [zk: localhost:2181(CONNECTED) 16]
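
A possible workaround, sketched below, is to compute the hashed credential yourself and set the ACL with it, so only user:BASE64(SHA1(user:pass)) is stored. It uses the DigestAuthenticationProvider.generateDigest helper that ships with the ZooKeeper server classes; the printed hash in the comment is illustrative, and this does not fix the CLI behavior itself.
{noformat}
// Workaround sketch: set the digest ACL with the hashed form so the plain
// password never ends up in the ACL.
import java.util.Collections;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;

public class DigestAclExample {
    public static void setHashedAcl(ZooKeeper zk, String path) throws Exception {
        // "user:pass" -> "user:<BASE64(SHA1(user:pass))>" (hash value illustrative)
        String hashed = DigestAuthenticationProvider.generateDigest("user:pass");
        ACL acl = new ACL(ZooDefs.Perms.ALL, new Id("digest", hashed));
        zk.setACL(path, Collections.singletonList(acl), -1);   // -1 = any ACL version
    }
}
{noformat}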



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2583) Using one client able to access the znode with localhost but fails from another client when IP ACL is set for znode using 127.0.0.1

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2583:
-

Assignee: Rakesh Kumar Singh

> Using one client able to access the znode with localhost but fails from 
> another client when IP ACL is set for znode using 127.0.0.1
> ---
>
> Key: ZOOKEEPER-2583
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2583
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>Priority: Minor
>
> One client is able to access the znode via localhost, but access fails from 
> another client, when an IP ACL is set on the znode using 127.0.0.1.
> Start ZooKeeper in cluster mode.
> Client 1 :-
> [zk: localhost:2181(CONNECTED) 11] create /ip_test hello
> Created /ip_test
> [zk: localhost:2181(CONNECTED) 12] setAcl /ip_test
> ip_testip_test4   
> [zk: localhost:2181(CONNECTED) 12] setAcl /ip_test ip:127.0.0.1:crdwa
> [zk: localhost:2181(CONNECTED) 13] get /ip_test
> hello
> [zk: localhost:2181(CONNECTED) 14] set /ip_test hi
> [zk: localhost:2181(CONNECTED) 15] 
> Client 2 :-
> [zk: localhost:2181(CONNECTED) 0] get /ip_test
> Authentication is not valid : /ip_test
> [zk: localhost:2181(CONNECTED) 1] getAcl /ip_test
> 'ip,'127.0.0.1
> : cdrwa
> [zk: localhost:2181(CONNECTED) 2] quit
> Now quit the client connection and connect again using 127.0.0.1 (e.g. 
> ./zkCli.sh -server 127.0.0.1:2181):
> [zk: 127.0.0.1:2181(CONNECTED) 0] get /ip_test
> hi



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ZOOKEEPER-2584) when setquota for a znode and set ip/user ACL on /zookeeper/quota, still able to delete the quota from client with another ip though it says "Authentication is not v

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Kumar Singh reassigned ZOOKEEPER-2584:
-

Assignee: Rakesh Kumar Singh

> when setquota for a znode and set ip/user ACL on /zookeeper/quota, still able 
> to delete the quota from client with another ip though it says 
> "Authentication is not valid"
> --
>
> Key: ZOOKEEPER-2584
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2584
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Assignee: Rakesh Kumar Singh
>
> When a quota is set for a znode and an IP/user ACL is set on /zookeeper/quota, 
> the quota can still be deleted from a client with another IP, even though it 
> says "Authentication is not valid".
> >> Set quota and ip ACL from one client (with IP 10.18.101.80)
> [zk: 10.18.101.80:2181(CONNECTED) 9] setquota -n 10 /test
> [zk: 10.18.101.80:2181(CONNECTED) 10] setAcl /zookeeper/quota 
> ip:10.18.101.80:crdwa
> [zk: 10.18.101.80:2181(CONNECTED) 11] 
> >> Try to delete the set quota using a different client (with IP 10.18.219.50)
> [zk: 10.18.219.50:2181(CONNECTED) 22] listquota /test
> absolute path is /zookeeper/quota/test/zookeeper_limits
> Output quota for /test count=10,bytes=-1
> Output stat for /test count=1,bytes=5
> [zk: 10.18.219.50:2181(CONNECTED) 23] delquota /test
> Authentication is not valid : /zookeeper/quota/test
> [zk: 10.18.219.50:2181(CONNECTED) 24] listquota /test
> absolute path is /zookeeper/quota/test/zookeeper_limits
> quota for /test does not exist.
> >> Here the quota has been deleted even though it says "Authentication is not 
> >> valid..", which is not correct.
> Now try to set the quota from the other IP; it fails, which is as expected:
> [zk: 10.18.219.50:2181(CONNECTED) 25] setquota -n 10 /test
> Authentication is not valid : /zookeeper/quota/test
> [zk: 10.18.219.50:2181(CONNECTED) 26] listquota /test
> absolute path is /zookeeper/quota/test/zookeeper_limits
> quota for /test does not exist.
> >> The same happens when we set a user ACL...
> [zk: 10.18.101.80:2181(CONNECTED) 26] addauth digest user:pass
> [zk: 10.18.101.80:2181(CONNECTED) 27] create /test hello
> Node already exists: /test
> [zk: 10.18.101.80:2181(CONNECTED) 28] delete /test
> [zk: 10.18.101.80:2181(CONNECTED) 29] create /test hello
> Created /test
> [zk: 10.18.101.80:2181(CONNECTED) 30] 
> [zk: 10.18.101.80:2181(CONNECTED) 30] setquota -n 10 /test
> [zk: 10.18.101.80:2181(CONNECTED) 31] setAcl /zookeeper/quota 
> auth:user:pass:crdwa
> [zk: 10.18.101.80:2181(CONNECTED) 32] 
> [zk: 10.18.219.50:2181(CONNECTED) 27] listquota /test
> absolute path is /zookeeper/quota/test/zookeeper_limits
> Output quota for /test count=10,bytes=-1
> Output stat for /test count=1,bytes=5
> [zk: 10.18.219.50:2181(CONNECTED) 28] delquota /test
> Authentication is not valid : /zookeeper/quota/test
> [zk: 10.18.219.50:2181(CONNECTED) 29] listquota /test
> absolute path is /zookeeper/quota/test/zookeeper_limits
> quota for /test does not exist.
> [zk: 10.18.219.50:2181(CONNECTED) 30]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2509) Secure mode leaks memory

2016-09-22 Thread Yuliya Feldman (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514864#comment-15514864
 ] 

Yuliya Feldman commented on ZOOKEEPER-2509:
---

[~phunt], [~rgs] - Could you please review the patch, as it seems you are the 
most familiar with the code in question.

> Secure mode leaks memory
> 
>
> Key: ZOOKEEPER-2509
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2509
> Project: ZooKeeper
>  Issue Type: Bug
>Affects Versions: 3.5.1, 3.5.2
>Reporter: Ted Dunning
>Assignee: Ted Dunning
> Fix For: 3.5.3, 3.6.0
>
> Attachments: 
> 0001-Updated-patch-for-Netty-leak-testing-to-trunk.patch, 
> ZOOKEEPER-2509-1.patch, ZOOKEEPER-2509.patch, ZOOKEEPER-2509.patch, 
> ZOOKPEEPER-2509.patch, leak-patch.patch
>
>
> The Netty connection handling logic fails to clean up watches on connection 
> close. This causes memory to leak.
> I will have a repro script available soon and a fix. I am not sure how to 
> build a unit test since we would need to build an entire server and generate 
> keys and such. Advice on that appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2549) As NettyServerCnxn.sendResponse() allows all the exception to bubble up it can stop main ZK requests processing thread

2016-09-22 Thread Yuliya Feldman (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514862#comment-15514862
 ] 

Yuliya Feldman commented on ZOOKEEPER-2549:
---

[~phunt], [~rgs] - Could you please review the patch, as it seems you are the 
most familiar with the code in question.

> As NettyServerCnxn.sendResponse() allows all the exception to bubble up it 
> can stop main ZK requests processing thread
> --
>
> Key: ZOOKEEPER-2549
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2549
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Yuliya Feldman
>Assignee: Yuliya Feldman
> Attachments: ZOOKEEPER-2549-2.patch, ZOOKEEPER-2549.patch, 
> ZOOKEEPER-2549.patch, zookeeper-2549-1.patch
>
>
> Because NettyServerCnxn.sendResponse() allows all exceptions to bubble up, it 
> can stop the main ZK request-processing thread and make the ZooKeeper server 
> look like it is hanging, while it simply cannot process any requests anymore.
> The idea is to catch all exceptions in NettyServerCnxn.sendResponse(), 
> convert them to IOException, and allow that to propagate up
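
A minimal, self-contained sketch of that wrapping pattern is below. The names are illustrative and the response-writing step is abstracted away; this is not the actual NettyServerCnxn code or the attached patch.
{noformat}
// Sketch of the proposed behavior: any exception thrown while writing a
// response is converted to an IOException, so the caller's normal
// connection-error handling runs instead of the request thread dying.
import java.io.IOException;

public class SendResponseWrapper {
    interface ResponseWriter {
        void write() throws Exception;      // stands in for serialize + channel write
    }

    public static void sendResponse(ResponseWriter writer) throws IOException {
        try {
            writer.write();
        } catch (IOException ioe) {
            throw ioe;                      // already the expected failure type
        } catch (Exception e) {
            // RuntimeExceptions from serialization or the channel would
            // otherwise bubble up into the request-processing thread.
            throw new IOException("Error sending response to client", e);
        }
    }
}
{noformat}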



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Michael Han (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514664#comment-15514664
 ] 

Michael Han commented on ZOOKEEPER-2282:


Looks like the failed C test in build 3450 is related to the change in the patch:
{noformat}
 [exec]  [exec] terminate called after throwing an instance of 
'CppUnit::Exception'
 [exec]  [exec]   what():  equality assertion failed
 [exec]  [exec] - Expected: 0
 [exec]  [exec] - Actual  : -110
 [exec]  [exec] 
 [exec]  [exec] Zookeeper_simpleSystem::testChrootFAIL: zktest-mt
{noformat}

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Assignee: Andrew Grasso
>Priority: Critical
> Attachments: ZOOKEEPER-2282.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk
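
The bug is in the C client, but the idea of the fix can be shown in Java terms: before handing a server-returned path to the user's asynchronous callback, strip the session's chroot prefix so the callback sees client-relative paths. The sketch below is conceptual only and is not the attached patch.
{noformat}
// Conceptual sketch: strip the configured chroot prefix from a path returned
// by the server before invoking the user's completion callback.
public class ChrootUtil {
    // e.g. stripChroot("/app/root", "/app/root/foo") -> "/foo"
    public static String stripChroot(String chroot, String serverPath) {
        if (chroot == null || chroot.isEmpty() || serverPath == null) {
            return serverPath;
        }
        if (serverPath.equals(chroot)) {
            return "/";                         // the chroot itself maps to "/"
        }
        if (serverPath.startsWith(chroot + "/")) {
            return serverPath.substring(chroot.length());
        }
        return serverPath;                      // path outside the chroot: leave as-is
    }
}
{noformat}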



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ZOOKEEPER-2600) dangling ephemerals on overloaded server with local sessions

2016-09-22 Thread Benjamin Reed (JIRA)
Benjamin Reed created ZOOKEEPER-2600:


 Summary: dangling ephemerals on overloaded server with local 
sessions
 Key: ZOOKEEPER-2600
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2600
 Project: ZooKeeper
  Issue Type: Bug
  Components: quorum
Reporter: Benjamin Reed


we had the following strange production bug:

there was an ephemeral znode for a session that was no longer active.  it 
happened even in the absence of failures.

we are running with local sessions enabled and slightly different logic than 
the open source zookeeper, but code inspection shows that the problem is also 
in open source.

the triggering condition was server overload. we had a traffic burst and we were 
seeing commit latencies of over 30 seconds.

after digging through the logs/code we realized that the create session txn for 
the ephemeral node started (in the PrepRequestProcessor) at 11:23:04 and 
committed at 11:23:38 (the "Adding global session" message is output in the 
commit processor). it took 34 seconds to commit the createSession, and during 
that time the session expired. due to the delays it appears that the interleave 
was as follows:

1) create session hits prep request processor and create session txn generated 
11:23:04
2) time passes as the create session is going through zab
3) the session expires, close session is generated, and close session txn 
generated 11:23:23
4) the create session gets committed and the session gets re-added to the 
sessionTracker 11:23:38
5) the create ephemeral node hits prep request processor and a create txn 
generated 11:23:40
6) the close session gets committed (all ephemeral nodes for the session are 
deleted) and the session is deleted from sessionTracker
7) the create ephemeral node gets committed

the root cause seems to be that the global sessions are managed by both the 
PrepRequestProcessor and the CommitProcessor. also, with local session 
upgrading we can have changes in flight before our session commits. i think 
there are probably two places to fix:

1) changes to session tracker should not happen in prep request processor.
2) we should not have requests in flight while create session is in process. 
there are two options to prevent this:
a) when a create session is generated in makeUpgradeRequest, we need to start 
queuing the requests from the clients and only submit them once the create 
session is committed
b) the client should explicitly detect that it needs to change from local 
session to global session and explicitly open a global session and get the 
commit before it sends an ephemeral create request

option 2a) is a more transparent fix, but architecturally and in the long term 
i think 2b) might be better.
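
A rough sketch of option 2a, with all names hypothetical (this is not ZooKeeper code): while a session's upgrade createSession is in flight, queue that session's subsequent requests and submit them only once the createSession has committed, so no ephemeral create can race ahead of the session commit.
{noformat}
// Hypothetical per-session queue for option 2a: hold requests while an
// upgrade createSession is pending, drain them when it commits.
import java.util.ArrayDeque;
import java.util.Queue;

public class UpgradePendingQueue<R> {
    private final Queue<R> pending = new ArrayDeque<>();
    private boolean upgradeInFlight;

    // Called when makeUpgradeRequest generates the createSession txn.
    public synchronized void upgradeStarted() {
        upgradeInFlight = true;
    }

    // Returns true if the request was queued and must not be submitted yet.
    public synchronized boolean holdIfUpgrading(R request) {
        if (upgradeInFlight) {
            pending.add(request);
            return true;
        }
        return false;
    }

    // Called when the createSession commits; drains the queue to the submitter.
    public synchronized void upgradeCommitted(java.util.function.Consumer<R> submitter) {
        upgradeInFlight = false;
        R r;
        while ((r = pending.poll()) != null) {
            submitter.accept(r);
        }
    }
}
{noformat}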



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514515#comment-15514515
 ] 

Hadoop QA commented on ZOOKEEPER-2282:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12829924/ZOOKEEPER-2282.patch
  against trunk revision ec20c5434cc8a334b3fd25e27d26dccf4793c8f3.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 2.0.3) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3450//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3450//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3450//console

This message is automatically generated.

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Assignee: Andrew Grasso
>Priority: Critical
> Attachments: ZOOKEEPER-2282.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Failed: ZOOKEEPER-2282 PreCommit Build #3450

2016-09-22 Thread Apache Jenkins Server
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3450/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 453998 lines...]
 [exec] +1 tests included.  The patch appears to include 3 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
(version 2.0.3) warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] -1 core tests.  The patch failed core unit tests.
 [exec] 
 [exec] +1 contrib tests.  The patch passed contrib unit tests.
 [exec] 
 [exec] Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3450//testReport/
 [exec] Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3450//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
 [exec] Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3450//console
 [exec] 
 [exec] This message is automatically generated.
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Adding comment to Jira.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] Comment added.
 [exec] 6e184d63a86e3843c881e3ca1522306bc6cf1906 logged out
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] mv: 
‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess’ 
and 
‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess’ 
are the same file

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build.xml:1605: 
exec returned: 1

Total time: 16 minutes 40 seconds
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Recording test results
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
[description-setter] Description set: ZOOKEEPER-2282
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7



###
## FAILED TESTS (if any) 
##
All tests passed

[jira] [Updated] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Andrew Grasso (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Grasso updated ZOOKEEPER-2282:
-
Attachment: ZOOKEEPER-2282.patch

Combine fix and tests

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Assignee: Andrew Grasso
>Priority: Critical
> Attachments: ZOOKEEPER-2282.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Andrew Grasso (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Grasso updated ZOOKEEPER-2282:
-
Attachment: (was: ZOOKEEPER-2282.patch)

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Assignee: Andrew Grasso
>Priority: Critical
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Andrew Grasso (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Grasso updated ZOOKEEPER-2282:
-
Attachment: (was: ZOOKEEPER-2282-TEST.patch)

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Assignee: Andrew Grasso
>Priority: Critical
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2549) As NettyServerCnxn.sendResponse() allows all the exception to bubble up it can stop main ZK requests processing thread

2016-09-22 Thread Michael Han (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514323#comment-15514323
 ] 

Michael Han commented on ZOOKEEPER-2549:


You are welcome - I like doing code reviews :)

> As NettyServerCnxn.sendResponse() allows all the exception to bubble up it 
> can stop main ZK requests processing thread
> --
>
> Key: ZOOKEEPER-2549
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2549
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Yuliya Feldman
>Assignee: Yuliya Feldman
> Attachments: ZOOKEEPER-2549-2.patch, ZOOKEEPER-2549.patch, 
> ZOOKEEPER-2549.patch, zookeeper-2549-1.patch
>
>
> Because NettyServerCnxn.sendResponse() allows all exceptions to bubble up, it 
> can stop the main ZK request-processing thread and make the ZooKeeper server 
> look like it is hanging, while it simply cannot process any requests anymore.
> The idea is to catch all exceptions in NettyServerCnxn.sendResponse(), 
> convert them to IOException, and allow that to propagate up



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514300#comment-15514300
 ] 

Hadoop QA commented on ZOOKEEPER-2282:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12829901/ZOOKEEPER-2282.patch
  against trunk revision ec20c5434cc8a334b3fd25e27d26dccf4793c8f3.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 2.0.3) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3449//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3449//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3449//console

This message is automatically generated.

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Assignee: Andrew Grasso
>Priority: Critical
> Attachments: ZOOKEEPER-2282-TEST.patch, ZOOKEEPER-2282.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Failed: ZOOKEEPER-2282 PreCommit Build #3449

2016-09-22 Thread Apache Jenkins Server
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3449/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 456434 lines...]
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
(version 2.0.3) warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] -1 core tests.  The patch failed core unit tests.
 [exec] 
 [exec] +1 contrib tests.  The patch passed contrib unit tests.
 [exec] 
 [exec] Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3449//testReport/
 [exec] Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3449//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
 [exec] Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3449//console
 [exec] 
 [exec] This message is automatically generated.
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Adding comment to Jira.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] Comment added.
 [exec] d16f01461dc491c2901c75df4e6c7161050f9d98 logged out
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] mv: 
'/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess' 
and 
'/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess' 
are the same file

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build.xml:1605: 
exec returned: 2

Total time: 13 minutes 10 seconds
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Compressed 548.94 KB of artifacts by 40.8% relative to #3441
Recording test results
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
[description-setter] Description set: ZOOKEEPER-2282
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync

Error Message:
zoo.cfg.dynamic.next is not deleted.

Stack Trace:
junit.framework.AssertionFailedError: zoo.cfg.dynamic.next is not deleted.
at 
org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync(ReconfigDuringLeaderSyncTest.java:155)
at 
org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79)




[jira] [Commented] (ZOOKEEPER-1045) Support Quorum Peer mutual authentication via SASL

2016-09-22 Thread Michael Han (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514168#comment-15514168
 ] 

Michael Han commented on ZOOKEEPER-1045:


I've done another review pass on the patch. The authorization logic added 
between revision 8 and 9 looks good to me. I've left some notes on the review 
board and I think all those issues are non-blocking. 

> Support Quorum Peer mutual authentication via SASL
> --
>
> Key: ZOOKEEPER-1045
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1045
> Project: ZooKeeper
>  Issue Type: New Feature
>  Components: server
>Reporter: Eugene Koontz
>Assignee: Rakesh R
>Priority: Critical
> Fix For: 3.4.10, 3.5.3
>
> Attachments: 0001-ZOOKEEPER-1045-br-3-4.patch, 
> 1045_failing_phunt.tar.gz, HOST_RESOLVER-ZK-1045.patch, 
> TEST-org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.txt, 
> ZK-1045-test-case-failure-logs.zip, ZOOKEEPER-1045-00.patch, 
> ZOOKEEPER-1045-Rolling Upgrade Design Proposal.pdf, 
> ZOOKEEPER-1045-br-3-4.patch, ZOOKEEPER-1045-br-3-4.patch, 
> ZOOKEEPER-1045-br-3-4.patch, ZOOKEEPER-1045-br-3-4.patch, 
> ZOOKEEPER-1045-br-3-4.patch, ZOOKEEPER-1045-br-3-4.patch, 
> ZOOKEEPER-1045-br-3-4.patch, ZOOKEEPER-1045-br-3-4.patch, 
> ZOOKEEPER-1045TestValidationDesign.pdf, 
> org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade.log
>
>
> ZOOKEEPER-938 addresses mutual authentication between clients and servers. 
> This bug, on the other hand, is for authentication among quorum peers. 
> Hopefully much of the work done on SASL integration with Zookeeper for 
> ZOOKEEPER-938 can be used as a foundation for this enhancement.
> Review board: https://reviews.apache.org/r/47354/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2598) Data Inconsistency after power off/on of some nodes

2016-09-22 Thread Srinivas Neginhal (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514087#comment-15514087
 ] 

Srinivas Neginhal commented on ZOOKEEPER-2598:
--

Once the ensemble is in this state:
1. Running "sync /moot/gmle/ActiveControllerCluster"  or "sync  
/moot/gmle/ActiveControllerCluster/member25" via zkCli.sh on ZK3 does 
not help.
2. ZKNodes created on ZK1 or ZK2 show up on ZK3. ZNodes created on ZK3 show up 
on ZK1 and 2. 
3. Ephemeral ZKNodes created on ZK3 show up on ZK1 and 2.

None of the above got rid of the following stale ephemeral nodes still showing 
on ZK3:
/moot/gmle/ActiveControllerCluster/member25
/moot/gmle/ActiveControllerCluster/member26
/moot/gmle/ActiveControllerCluster/member27
/moot/gmle/ServiceDirectory/ActiveNodes/member25
/moot/gmle/ServiceDirectory/ActiveNodes/member26
/moot/gmle/ServiceDirectory/ActiveNodes/member27


Ephemeral node created on ZK 3:

[zk: 10.0.0.3:1300(CONNECTED) 11] create -e /testEphemeral 
create -e /testEphemeral
Created /testEphemeral
[zk: 10.0.0.3:1300(CONNECTED) 12] ls /
ls /
[bar, f, foo, moot, testEphemeral, transport-nodes, vmware, vnet-sharding, 
zookeeper]
[zk: 10.0.0.3:1300(CONNECTED) 13] ls /moot/gmle/ActiveControllerCluster
ls /moot/gmle/ActiveControllerCluster
[member25, member26, member27, member65, 
member67]
[zk: 10.0.0.3:1300(CONNECTED) 14] 

Shows up on ZK1 and ZK2:

[zk: 10.0.0.1:1300(CONNECTED) 9] ls /
ls /
[bar, f, foo, moot, testEphemeral, transport-nodes, vmware, vnet-sharding, 
zookeeper]
[zk: 10.0.0.1:1300(CONNECTED) 10] ls /moot/gmle/ActiveControllerCluster
ls /moot/gmle/ActiveControllerCluster
[member65, member67]

[zk: 10.0.0.2:1300(CONNECTED) 3] ls /
ls /
[bar, f, foo, moot, testEphemeral, transport-nodes, vmware, vnet-sharding, 
zookeeper]
[zk: 10.0.0.2:1300(CONNECTED) 4] ls /moot/gmle/ActiveControllerCluster
ls /moot/gmle/ActiveControllerCluster
[member65, member67]





> Data Inconsistency after power off/on of some nodes
> ---
>
> Key: ZOOKEEPER-2598
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2598
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: quorum
>Affects Versions: 3.5.1
> Environment: ZK is running in a docker container on a Ubuntu 14.04 VM
>Reporter: Srinivas Neginhal
> Attachments: zk1.tgz, zk2.tgz, zk3.tgz
>
>
> Steps to reproduce:
> 1. Create a three node cluster: Node1, Node2 and Node3.
> Each node is a VM that runs:
> a. ZK in a docker container
> b. Two clients, A and B, that use ZK for group membership and leader 
> election. The clients create sequential ephemeral nodes when they come up. 
> 2. The three ZKs running in the containers form an ensemble.
> 3. Power off/on Node 2 and Node 3 in a loop
> 4. After a few times, the ephemeral nodes seen by the three nodes are 
> different.
> Here is the output of some four letter commands with the ensemble in the 
> state:
> 1. conf:
> ZK 1:
> # echo conf| nc 10.0.0.1 1300
> clientPort=1300
> secureClientPort=-1
> dataDir=/moot/persistentStore/zkWorkspace/version-2
> dataDirSize=67293721
> dataLogDir=/moot/persistentStore/zkWorkspace/version-2
> dataLogSize=67293721
> tickTime=2000
> maxClientCnxns=60
> minSessionTimeout=4000
> maxSessionTimeout=4
> serverId=1
> initLimit=100
> syncLimit=20
> electionAlg=3
> electionPort=1200
> quorumPort=1100
> peerType=0
> membership: 
> server.1=10.0.0.1:1100:1200:participant;10.0.0.1:1300;8e64c644-d0fa-414f-bab2-3c8c80364410
> server.2=10.0.0.2:1100:1200:participant;10.0.0.2:1300;38bf19b8-d4cb-4dac-b328-7bbf0ee1e2c4
> server.3=10.0.0.3:1100:1200:participant;10.0.0.3:1300;e1415d59-e857-43e6-ba9b-01daeb31a434
> ZK 2:
> # echo conf| nc 10.0.0.2 1300
> clientPort=1300
> secureClientPort=-1
> dataDir=/moot/persistentStore/zkWorkspace/version-2
> dataDirSize=1409480873
> dataLogDir=/moot/persistentStore/zkWorkspace/version-2
> dataLogSize=1409480873
> tickTime=2000
> maxClientCnxns=60
> minSessionTimeout=4000
> maxSessionTimeout=4
> serverId=2
> initLimit=100
> syncLimit=20
> electionAlg=3
> electionPort=1200
> quorumPort=1100
> peerType=0
> membership: 
> server.1=10.0.0.1:1100:1200:participant;10.0.0.1:1300;8e64c644-d0fa-414f-bab2-3c8c80364410
> server.2=10.0.0.2:1100:1200:participant;10.0.0.2:1300;38bf19b8-d4cb-4dac-b328-7bbf0ee1e2c4
> server.3=10.0.0.3:1100:1200:participant;10.0.0.3:1300;e1415d59-e857-43e6-ba9b-01daeb31a434
> ZK 3:
> # echo conf| nc 10.0.0.3 1300
> clientPort=1300
> secureClientPort=-1
> dataDir=/moot/persistentStore/zkWorkspace/version-2
> dataDirSize=1409505467
> dataLogDir=/moot/persistentStore/zkWorkspace/version-2
> dataLogSize=1409505467
> tickTime=2000
> maxClientCnxns=60
> minSessionTimeout=

[jira] [Commented] (ZOOKEEPER-2519) zh->state should not be 0 while handle is active

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514083#comment-15514083
 ] 

Hadoop QA commented on ZOOKEEPER-2519:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12825998/ZOOKEEPER-2519.patch
  against trunk revision ec20c5434cc8a334b3fd25e27d26dccf4793c8f3.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3448//console

This message is automatically generated.

> zh->state should not be 0 while handle is active
> 
>
> Key: ZOOKEEPER-2519
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2519
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6
>Reporter: Andrew Grasso
>Assignee: Andrew Grasso
> Attachments: ZOOKEEPER-2519.patch, ZOOKEEPER-2519.patch
>
>
> 0 does not correspond to any of the defined states for the zookeeper handle, 
> so a client should not expect to see this value. But in the function 
> {{handle_error}}, we set {{zh->state = 0}}, which a client may then see. 
> Instead, we should set our state to be {{ZOO_CONNECTING_STATE}}. 
> At some point the code moved away from 0 as a valid state and introduced the 
> defined states. This broke the fix to ZOOKEEPER-800, which checks if state is 
> 0 to know if the handle has been created but has not yet connected. We now 
> use {{ZOO_NOTCONNECTED_STATE}} to mean this, so the check for this in 
> {{zoo_add_auth}} must be changed.
> We saw this error in 3.4.6, but I believe it remains present in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2325) Data inconsistency if all snapshots empty or missing

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514078#comment-15514078
 ] 

Hadoop QA commented on ZOOKEEPER-2325:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12823508/zk.patch
  against trunk revision ec20c5434cc8a334b3fd25e27d26dccf4793c8f3.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3447//console

This message is automatically generated.

> Data inconsistency if all snapshots empty or missing
> 
>
> Key: ZOOKEEPER-2325
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2325
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.4.6
>Reporter: Andrew Grasso
>Assignee: Andrew Grasso
>Priority: Critical
> Attachments: ZOOKEEPER-2325-test.patch, ZOOKEEPER-2325.001.patch, 
> zk.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When loading state from snapshots on startup, FileTxnSnapLog.java ignores the 
> result of FileSnap.deserialize, which is -1L if no valid snapshots are found. 
> Recovery proceeds with dt.lastProcessed == 0, its initial value.
> The result is that ZooKeeper will process the transaction logs and then begin 
> serving requests with a different state than the rest of the ensemble.
> To reproduce:
> In a healthy zookeeper cluster of size >= 3, shut down one node.
> Either delete all snapshots for this node or change all to be empty files.
> Restart the node.
> We believe this can happen organically if a node runs out of disk space.
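
As an aside for readers of this thread, the guard the description implies is small: treat a 
deserialize result of -1L as "no snapshot" rather than silently recovering from zxid 0. The 
following is only a minimal Java sketch under that assumption (simplified names, not the 
attached patch or the real FileTxnSnapLog code):

{code}
import java.io.IOException;

// Minimal sketch, not the actual FileTxnSnapLog code: refuse to fall back to
// zxid 0 when no valid snapshot was deserialized but transaction logs exist.
class SnapshotRestoreSketch {
    interface SnapSource {
        // mirrors the described contract: returns -1L when no valid snapshot is found
        long deserialize() throws IOException;
    }

    static long restore(SnapSource snap, boolean txnLogsExist) throws IOException {
        long deserializedZxid = snap.deserialize();
        if (deserializedZxid == -1L && txnLogsExist) {
            // Continuing here is what produces a state that diverges from the ensemble.
            throw new IOException("No valid snapshot found but txn logs exist; refusing to restore");
        }
        return Math.max(deserializedZxid, 0L);
    }
}
{code}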



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Andrew Grasso (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Grasso updated ZOOKEEPER-2282:
-
Attachment: (was: strip_chroot.patch)

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Priority: Critical
> Attachments: ZOOKEEPER-2282-TEST.patch, ZOOKEEPER-2282.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Failed: ZOOKEEPER-2519 PreCommit Build #3448

2016-09-22 Thread Apache Jenkins Server
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2519
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3448/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 110 lines...]
 [exec] 
 [exec] 
 [exec] 
 [exec] -1 overall.  Here are the results of testing the latest attachment 
 [exec]   
http://issues.apache.org/jira/secure/attachment/12825998/ZOOKEEPER-2519.patch
 [exec]   against trunk revision ec20c5434cc8a334b3fd25e27d26dccf4793c8f3.
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no new tests are needed 
for this patch.
 [exec] Also please list what manual steps were 
performed to verify this patch.
 [exec] 
 [exec] -1 patch.  The patch command could not apply the patch.
 [exec] 
 [exec] Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3448//console
 [exec] 
 [exec] This message is automatically generated.
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Adding comment to Jira.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] Comment added.
 [exec] f72eabcb98aa7745582f6339d1d2d436eb06b413 logged out
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] mv: 
‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess’ 
and 
‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess’ 
are the same file

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build.xml:1605: 
exec returned: 1

Total time: 46 seconds
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Recording test results
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
[description-setter] Description set: ZOOKEEPER-2519
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7



###
## FAILED TESTS (if any) 
##
No tests ran.

[jira] [Updated] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Andrew Grasso (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Grasso updated ZOOKEEPER-2282:
-
Attachment: (was: ZOOKEEPER-2282-TEST.patch)

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Priority: Critical
> Attachments: ZOOKEEPER-2282-TEST.patch, ZOOKEEPER-2282.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ZOOKEEPER-2282) chroot not stripped from path in asynchronous callbacks

2016-09-22 Thread Andrew Grasso (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Grasso updated ZOOKEEPER-2282:
-
Attachment: ZOOKEEPER-2282.patch
ZOOKEEPER-2282-TEST.patch

Fixed patches to apply cleanly

> chroot not stripped from path in asynchronous callbacks
> ---
>
> Key: ZOOKEEPER-2282
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2282
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: c client
>Affects Versions: 3.4.6, 3.5.0
> Environment: Centos 6.6
>Reporter: Andrew Grasso
>Priority: Critical
> Attachments: ZOOKEEPER-2282-TEST.patch, ZOOKEEPER-2282.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create 
> ops) are called on paths that include the chroot. This is analogous to issue 
> 1027, which fixed this bug for synchronous calls.
> I've created a patch to fix this in trunk.
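
For readers following the thread, the shape of the fix is the same as for the synchronous 
calls: strip the configured chroot prefix before handing the path to the user's callback. A 
minimal illustration of that stripping rule, written in Java for brevity (the attached patch 
itself is against the C client, and the helper below is hypothetical):

{code}
// Illustrative only: the chroot-stripping rule async callbacks need, not the C client code.
public final class ChrootStrip {
    static String stripChroot(String serverPath, String chroot) {
        if (chroot == null || chroot.isEmpty()) {
            return serverPath;                            // no chroot configured
        }
        if (serverPath.equals(chroot)) {
            return "/";                                   // the chroot itself is the client root
        }
        if (serverPath.startsWith(chroot + "/")) {
            return serverPath.substring(chroot.length()); // drop the chroot prefix
        }
        return serverPath;                                // path outside the chroot: leave as-is
    }

    public static void main(String[] args) {
        // e.g. a create callback receiving "/app/node1" while connected with chroot "/app"
        System.out.println(stripChroot("/app/node1", "/app")); // prints /node1
    }
}
{code}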



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Failed: ZOOKEEPER-2325 PreCommit Build #3447

2016-09-22 Thread Apache Jenkins Server
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2325
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3447/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 116 lines...]
 [exec] 
 [exec] 
 [exec] 
 [exec] -1 overall.  Here are the results of testing the latest attachment 
 [exec]   http://issues.apache.org/jira/secure/attachment/12823508/zk.patch
 [exec]   against trunk revision ec20c5434cc8a334b3fd25e27d26dccf4793c8f3.
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no new tests are needed 
for this patch.
 [exec] Also please list what manual steps were 
performed to verify this patch.
 [exec] 
 [exec] -1 patch.  The patch command could not apply the patch.
 [exec] 
 [exec] Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3447//console
 [exec] 
 [exec] This message is automatically generated.
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Adding comment to Jira.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] Comment added.
 [exec] 96cacaf03d5fbb7d485c0fd2c1c65fc72adef30e logged out
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] mv: 
‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess’ 
and 
‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess’ 
are the same file

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build.xml:1605: 
exec returned: 1

Total time: 51 seconds
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Recording test results
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
[description-setter] Description set: ZOOKEEPER-2325
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7



###
## FAILED TESTS (if any) 
##
No tests ran.

[jira] [Commented] (ZOOKEEPER-2509) Secure mode leaks memory

2016-09-22 Thread Yuliya Feldman (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514066#comment-15514066
 ] 

Yuliya Feldman commented on ZOOKEEPER-2509:
---

Thank you very much [~hanm] for the review. I thought you were the one :). 

> Secure mode leaks memory
> 
>
> Key: ZOOKEEPER-2509
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2509
> Project: ZooKeeper
>  Issue Type: Bug
>Affects Versions: 3.5.1, 3.5.2
>Reporter: Ted Dunning
>Assignee: Ted Dunning
> Fix For: 3.5.3, 3.6.0
>
> Attachments: 
> 0001-Updated-patch-for-Netty-leak-testing-to-trunk.patch, 
> ZOOKEEPER-2509-1.patch, ZOOKEEPER-2509.patch, ZOOKEEPER-2509.patch, 
> ZOOKPEEPER-2509.patch, leak-patch.patch
>
>
> The Netty connection handling logic fails to clean up watches on connection 
> close. This causes memory to leak.
> I will have a repro script available soon and a fix. I am not sure how to 
> build a unit test since we would need to build an entire server and generate 
> keys and such. Advice on that appreciated.
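
For context on why watches leak, the pattern is the usual one: a registry keyed by connection 
keeps a strong reference to the connection after the channel has closed. The sketch below is 
schematic only (hypothetical names, not the NettyServerCnxn/Netty code); the fix amounts to 
detaching the connection's watches in the close path:

{code}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Schematic sketch of the leak pattern, with hypothetical names.
class WatchRegistrySketch {
    private final Map<Object, Set<String>> watchesByCnxn = new ConcurrentHashMap<>();

    void registerWatch(Object cnxn, String path) {
        watchesByCnxn.computeIfAbsent(cnxn, c -> ConcurrentHashMap.newKeySet()).add(path);
    }

    // The step the report says is missing: call this from the channel-close handler,
    // otherwise entries (and the closed connection they reference) accumulate forever.
    void onConnectionClosed(Object cnxn) {
        watchesByCnxn.remove(cnxn);
    }
}
{code}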



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2549) As NettyServerCnxn.sendResponse() allows all the exception to bubble up it can stop main ZK requests processing thread

2016-09-22 Thread Yuliya Feldman (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514063#comment-15514063
 ] 

Yuliya Feldman commented on ZOOKEEPER-2549:
---

Thank you very much [~hanm] for the review. I thought you were the one :). 

> As NettyServerCnxn.sendResponse() allows all the exception to bubble up it 
> can stop main ZK requests processing thread
> --
>
> Key: ZOOKEEPER-2549
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2549
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Yuliya Feldman
>Assignee: Yuliya Feldman
> Attachments: ZOOKEEPER-2549-2.patch, ZOOKEEPER-2549.patch, 
> ZOOKEEPER-2549.patch, zookeeper-2549-1.patch
>
>
> As NettyServerCnxn.sendResponse() allows all exceptions to bubble up, it 
> can stop the main ZK request-processing thread and make the ZooKeeper server 
> look like it is hanging, while it simply cannot process any request anymore.
> The idea is to catch all exceptions in NettyServerCnxn.sendResponse(), 
> convert them to IOException, and let that propagate up.
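
To make the proposal concrete, here is a minimal sketch of the approach described above 
(simplified signature, not the exact NettyServerCnxn API): everything unexpected inside 
sendResponse() is converted to IOException so the caller's existing IOException handling 
applies instead of the request-processing thread dying.

{code}
import java.io.IOException;

// Minimal sketch of the proposed approach; the real method signature differs.
class SendResponseSketch {
    void sendResponse(byte[] serialized) throws IOException {
        try {
            writeToChannel(serialized);
        } catch (IOException ioe) {
            throw ioe;                        // already the expected failure type
        } catch (Exception e) {
            // Wrap anything unexpected so it propagates as an IOException.
            throw new IOException("Unexpected exception in sendResponse", e);
        }
    }

    private void writeToChannel(byte[] serialized) throws IOException {
        // placeholder for the actual Netty channel write
    }
}
{code}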



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2549) As NettyServerCnxn.sendResponse() allows all the exception to bubble up it can stop main ZK requests processing thread

2016-09-22 Thread Michael Han (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514044#comment-15514044
 ] 

Michael Han commented on ZOOKEEPER-2549:


Hi [~yufeldman], latest patch LGTM. We need +1 from at least one committer for 
this to be committed.

> As NettyServerCnxn.sendResponse() allows all the exception to bubble up it 
> can stop main ZK requests processing thread
> --
>
> Key: ZOOKEEPER-2549
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2549
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Yuliya Feldman
>Assignee: Yuliya Feldman
> Attachments: ZOOKEEPER-2549-2.patch, ZOOKEEPER-2549.patch, 
> ZOOKEEPER-2549.patch, zookeeper-2549-1.patch
>
>
> As NettyServerCnxn.sendResponse() allows all exceptions to bubble up, it 
> can stop the main ZK request-processing thread and make the ZooKeeper server 
> look like it is hanging, while it simply cannot process any request anymore.
> The idea is to catch all exceptions in NettyServerCnxn.sendResponse(), 
> convert them to IOException, and let that propagate up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2509) Secure mode leaks memory

2016-09-22 Thread Michael Han (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514046#comment-15514046
 ] 

Michael Han commented on ZOOKEEPER-2509:


Hi [~yufeldman], latest patch LGTM. We need +1 from at least one committer for 
this to be committed.

> Secure mode leaks memory
> 
>
> Key: ZOOKEEPER-2509
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2509
> Project: ZooKeeper
>  Issue Type: Bug
>Affects Versions: 3.5.1, 3.5.2
>Reporter: Ted Dunning
>Assignee: Ted Dunning
> Fix For: 3.5.3, 3.6.0
>
> Attachments: 
> 0001-Updated-patch-for-Netty-leak-testing-to-trunk.patch, 
> ZOOKEEPER-2509-1.patch, ZOOKEEPER-2509.patch, ZOOKEEPER-2509.patch, 
> ZOOKPEEPER-2509.patch, leak-patch.patch
>
>
> The Netty connection handling logic fails to clean up watches on connection 
> close. This causes memory to leak.
> I will have a repro script available soon and a fix. I am not sure how to 
> build a unit test since we would need to build an entire server and generate 
> keys and such. Advice on that appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


ZooKeeper_branch35_solaris - Build # 259 - Failure

2016-09-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper_branch35_solaris/259/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 438461 lines...]
[junit] 2016-09-22 17:19:43,601 [myid:] - INFO  [main:ClientBase@386] - 
CREATING server instance 127.0.0.1:11222
[junit] 2016-09-22 17:19:43,602 [myid:] - INFO  
[main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 
kB direct buffers.
[junit] 2016-09-22 17:19:43,602 [myid:] - INFO  
[main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222
[junit] 2016-09-22 17:19:43,603 [myid:] - INFO  [main:ClientBase@361] - 
STARTING server instance 127.0.0.1:11222
[junit] 2016-09-22 17:19:43,603 [myid:] - INFO  [main:ZooKeeperServer@889] 
- minSessionTimeout set to 6000
[junit] 2016-09-22 17:19:43,603 [myid:] - INFO  [main:ZooKeeperServer@898] 
- maxSessionTimeout set to 6
[junit] 2016-09-22 17:19:43,604 [myid:] - INFO  [main:ZooKeeperServer@159] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test1637664655478049704.junit.dir/version-2
 snapdir 
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test1637664655478049704.junit.dir/version-2
[junit] 2016-09-22 17:19:43,604 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test1637664655478049704.junit.dir/version-2/snapshot.b
[junit] 2016-09-22 17:19:43,606 [myid:] - INFO  [main:FileTxnSnapLog@306] - 
Snapshotting: 0xb to 
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test1637664655478049704.junit.dir/version-2/snapshot.b
[junit] 2016-09-22 17:19:43,608 [myid:] - ERROR [main:ZooKeeperServer@501] 
- ZKShutdownHandler is not registered, so ZooKeeper server won't take any 
action on ERROR or SHUTDOWN server state changes
[junit] 2016-09-22 17:19:43,608 [myid:] - INFO  
[main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222
[junit] 2016-09-22 17:19:43,608 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:64511
[junit] 2016-09-22 17:19:43,609 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from 
/127.0.0.1:64511
[junit] 2016-09-22 17:19:43,609 [myid:] - INFO  
[NIOWorkerThread-1:StatCommand@49] - Stat command output
[junit] 2016-09-22 17:19:43,609 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client 
/127.0.0.1:64511 (no session established for client)
[junit] 2016-09-22 17:19:43,610 [myid:] - INFO  [main:JMXEnv@228] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2016-09-22 17:19:43,611 [myid:] - INFO  [main:JMXEnv@245] - 
expect:InMemoryDataTree
[junit] 2016-09-22 17:19:43,611 [myid:] - INFO  [main:JMXEnv@249] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree
[junit] 2016-09-22 17:19:43,611 [myid:] - INFO  [main:JMXEnv@245] - 
expect:StandaloneServer_port
[junit] 2016-09-22 17:19:43,611 [myid:] - INFO  [main:JMXEnv@249] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port11222
[junit] 2016-09-22 17:19:43,612 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 17790
[junit] 2016-09-22 17:19:43,612 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 24
[junit] 2016-09-22 17:19:43,612 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD 
testQuota
[junit] 2016-09-22 17:19:43,612 [myid:] - INFO  [main:ClientBase@543] - 
tearDown starting
[junit] 2016-09-22 17:19:43,692 [myid:] - INFO  [main:ZooKeeper@1313] - 
Session: 0x123cfe43625 closed
[junit] 2016-09-22 17:19:43,692 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for 
session: 0x123cfe43625
[junit] 2016-09-22 17:19:43,692 [myid:] - INFO  [main:ClientBase@513] - 
STOPPING server
[junit] 2016-09-22 17:19:43,692 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2016-09-22 17:19:43,693 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2016-09-22 17:19:43,692 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@4

[jira] [Commented] (ZOOKEEPER-2549) As NettyServerCnxn.sendResponse() allows all the exception to bubble up it can stop main ZK requests processing thread

2016-09-22 Thread Yuliya Feldman (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513854#comment-15513854
 ] 

Yuliya Feldman commented on ZOOKEEPER-2549:
---

[~hanm] - just wondering if there is anything else we can do here, or is it good to go?

> As NettyServerCnxn.sendResponse() allows all the exception to bubble up it 
> can stop main ZK requests processing thread
> --
>
> Key: ZOOKEEPER-2549
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2549
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.5.1
>Reporter: Yuliya Feldman
>Assignee: Yuliya Feldman
> Attachments: ZOOKEEPER-2549-2.patch, ZOOKEEPER-2549.patch, 
> ZOOKEEPER-2549.patch, zookeeper-2549-1.patch
>
>
> As NettyServerCnxn.sendResponse() allows all exceptions to bubble up, it 
> can stop the main ZK request-processing thread and make the ZooKeeper server 
> look like it is hanging, while it simply cannot process any request anymore.
> The idea is to catch all exceptions in NettyServerCnxn.sendResponse(), 
> convert them to IOException, and let that propagate up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2509) Secure mode leaks memory

2016-09-22 Thread Yuliya Feldman (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513850#comment-15513850
 ] 

Yuliya Feldman commented on ZOOKEEPER-2509:
---

[~hanm] - just wondering if there is anything else we can do here, or is it good to go?

> Secure mode leaks memory
> 
>
> Key: ZOOKEEPER-2509
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2509
> Project: ZooKeeper
>  Issue Type: Bug
>Affects Versions: 3.5.1, 3.5.2
>Reporter: Ted Dunning
>Assignee: Ted Dunning
> Fix For: 3.5.3, 3.6.0
>
> Attachments: 
> 0001-Updated-patch-for-Netty-leak-testing-to-trunk.patch, 
> ZOOKEEPER-2509-1.patch, ZOOKEEPER-2509.patch, ZOOKEEPER-2509.patch, 
> ZOOKPEEPER-2509.patch, leak-patch.patch
>
>
> The Netty connection handling logic fails to clean up watches on connection 
> close. This causes memory to leak.
> I will have a repro script available soon and a fix. I am not sure how to 
> build a unit test since we would need to build an entire server and generate 
> keys and such. Advice on that appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2593) Enforce the quota limit

2016-09-22 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513486#comment-15513486
 ] 

Edward Ribeiro commented on ZOOKEEPER-2593:
---


{quote}
1. Enforce the quota limit at a more granular level. Can we add 
enforce.number.quota and enforce.byte.quota?
I think there is a need to change the implementation as well.
{quote}
Agree.

{quote}
We should check the limit at a centralized place such as 
{{PrepRequestProcessor}}. If we check the limit in DataTree, which runs on every 
server, then the exceptions cannot be passed to clients.
{quote}
Agree. 



> Enforce the quota limit
> ---
>
> Key: ZOOKEEPER-2593
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2593
> Project: ZooKeeper
>  Issue Type: New Feature
>  Components: java client, server
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
>
> Currently in ZooKeeper, when a quota limit is exceeded, a warning is logged. There 
> are many user scenarios where it is desirable to throw an exception when quota 
> limits are exceeded.
> We should make it configurable whether to throw an exception or just log a 
> warning when quota limits are exceeded.
> *Implementation:*
> add new properties
> {code}
> enforce.number.quota
> enforce.byte.quota
> {code}
> add new error codes
> {code}
> KeeperException.Code.NUMBERQUOTAEXCEED
> KeeperException.Code.BYTEQUOTAEXCEED
> {code}
> add new exception
> {code}
> KeeperException.NumberQuotaExceedException
> KeeperException.ByteQuotaExceedException
> {code}
> 
> *Basic Scenarios:*
> # If enforce.number.quota=true and the number quota is exceeded, then the server 
> should send the NUMBERQUOTAEXCEED error code and the client should throw 
> NumberQuotaExceedException
> # If enforce.byte.quota=true and the byte quota is exceeded, then the server 
> should send the BYTEQUOTAEXCEED error code and the client should throw 
> ByteQuotaExceedException
> *Impacted APIs:*
> create 
> setData



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ZOOKEEPER-2599) Quorum with 3 nodes, stop 2 nodes and let one running, install and configure new quorum where one node details is common but now has configuration of new quorum. comm

2016-09-22 Thread Rakesh Kumar Singh (JIRA)
Rakesh Kumar Singh created ZOOKEEPER-2599:
-

 Summary: Quorum with 3 nodes, stop 2 nodes and let one running, 
install and configure new quorum where one node details is common but now has 
configuration of new quorum. common node getting synced the configuration with 
previous quorum
 Key: ZOOKEEPER-2599
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2599
 Project: ZooKeeper
  Issue Type: Bug
Affects Versions: 3.5.1
Reporter: Rakesh Kumar Singh
Priority: Critical


Start a quorum with 3 ZooKeeper nodes (say A, B, C), stop 2 of them and leave 
one running, then install and configure a new quorum (A, A2, A3, A4, A5) where A 
is common but now has the configuration of the new quorum. When A is started, it 
gets its configuration synced from the previous quorum.

Steps to reproduce:-
1. Configure and start a quorum of 3 nodes (A, B, C) -> 1st quorum
2. Stop 2 nodes and leave the 3rd node (say C) running
3. Create a new quorum of 5 nodes (A, A2, A3, A4, A5) where A has the same IP and 
port that was used in the 1st quorum, but A's configuration is as per the new 
quorum (details of A, A2, A3, A4, A5 are present, not B & C).
4. Now start the 2nd quorum. Here A's dynamic configuration gets changed 
according to the 1st quorum.

Problems:-
1. Node A now does not sync all its data with either the 1st quorum or the 2nd quorum
2. This is a big security flaw and the whole quorum can be compromised



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2593) Enforce the quota limit

2016-09-22 Thread Arshad Mohammad (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513419#comment-15513419
 ] 

Arshad Mohammad commented on ZOOKEEPER-2593:


Thanks [~eribeiro]
Yes, this is the same as ZOOKEEPER-451, but that JIRA needs a few improvements:
# Enforce the quota limit at a more granular level. Can we add 
enforce.number.quota and enforce.byte.quota?
# I think there is a need to change the implementation as well. 
We should check the limit at a centralized place such as 
{{PrepRequestProcessor}} (see the sketch below). 
If we check the limit in DataTree, which runs on every server, then the exceptions 
cannot be passed to clients.
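
A rough sketch of what such a centralized check could look like, using the property and 
exception names proposed in this JIRA (none of these exist in ZooKeeper yet; this is an 
illustration of the proposal, not an implementation):

{code}
// Sketch only: names come from the proposal in this JIRA and are not part of ZooKeeper.
class QuotaEnforcementSketch {
    static final boolean ENFORCE_NUMBER_QUOTA = Boolean.getBoolean("enforce.number.quota");
    static final boolean ENFORCE_BYTE_QUOTA = Boolean.getBoolean("enforce.byte.quota");

    static class NumberQuotaExceedException extends Exception { }
    static class ByteQuotaExceedException extends Exception { }

    // Intended to be called from a single request-processing point (the proposed
    // PrepRequestProcessor hook) so the error can be returned to the client.
    static void checkQuota(long childCount, long childLimit,
                           long byteCount, long byteLimit)
            throws NumberQuotaExceedException, ByteQuotaExceedException {
        if (ENFORCE_NUMBER_QUOTA && childLimit > 0 && childCount > childLimit) {
            throw new NumberQuotaExceedException();
        }
        if (ENFORCE_BYTE_QUOTA && byteLimit > 0 && byteCount > byteLimit) {
            throw new ByteQuotaExceedException();
        }
        // Otherwise fall through: today ZooKeeper would only log a warning here.
    }
}
{code}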

> Enforce the quota limit
> ---
>
> Key: ZOOKEEPER-2593
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2593
> Project: ZooKeeper
>  Issue Type: New Feature
>  Components: java client, server
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
>
> Currently in ZooKeeper, when a quota limit is exceeded, a warning is logged. There 
> are many user scenarios where it is desirable to throw an exception when quota 
> limits are exceeded.
> We should make it configurable whether to throw an exception or just log a 
> warning when quota limits are exceeded.
> *Implementation:*
> add new properties
> {code}
> enforce.number.quota
> enforce.byte.quota
> {code}
> add new error codes
> {code}
> KeeperException.Code.NUMBERQUOTAEXCEED
> KeeperException.Code.BYTEQUOTAEXCEED
> {code}
> add new exception
> {code}
> KeeperException.NumberQuotaExceedException
> KeeperException.ByteQuotaExceedException
> {code}
> 
> *Basic Scenarios:*
> # If enforce.number.quota=true and the number quota is exceeded, then the server 
> should send the NUMBERQUOTAEXCEED error code and the client should throw 
> NumberQuotaExceedException
> # If enforce.byte.quota=true and the byte quota is exceeded, then the server 
> should send the BYTEQUOTAEXCEED error code and the client should throw 
> ByteQuotaExceedException
> *Impacted APIs:*
> create 
> setData



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


ZooKeeper_branch35_jdk8 - Build # 241 - Still Failing

2016-09-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper_branch35_jdk8/241/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 480310 lines...]
[junit] 2016-09-22 13:16:06,535 [myid:127.0.0.1:16659] - INFO  
[main-SendThread(127.0.0.1:16659):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:16659. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 13:16:06,535 [myid:127.0.0.1:16659] - WARN  
[main-SendThread(127.0.0.1:16659):ClientCnxn$SendThread@1235] - Session 
0x200fbd2fd93 for server 127.0.0.1/127.0.0.1:16659, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit] at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
[junit] at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit] at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2016-09-22 13:16:06,870 [myid:127.0.0.1:16662] - INFO  
[main-SendThread(127.0.0.1:16662):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:16662. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 13:16:06,871 [myid:127.0.0.1:16662] - WARN  
[main-SendThread(127.0.0.1:16662):ClientCnxn$SendThread@1235] - Session 
0x300fbd2fcc4 for server 127.0.0.1/127.0.0.1:16662, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit] at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
[junit] at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit] at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2016-09-22 13:16:07,364 [myid:127.0.0.1:16680] - INFO  
[main-SendThread(127.0.0.1:16680):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:16680. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 13:16:07,365 [myid:127.0.0.1:16680] - INFO  
[main-SendThread(127.0.0.1:16680):ClientCnxn$SendThread@948] - Socket 
connection established, initiating session, client: /127.0.0.1:60994, server: 
127.0.0.1/127.0.0.1:16680
[junit] 2016-09-22 13:16:07,365 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:/127.0.0.1:16680:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:60994
[junit] 2016-09-22 13:16:07,365 [myid:] - WARN  
[NIOWorkerThread-22:NIOServerCnxn@369] - Exception causing close of session 
0x0: ZooKeeperServer not running
[junit] 2016-09-22 13:16:07,365 [myid:] - INFO  
[NIOWorkerThread-22:NIOServerCnxn@607] - Closed socket connection for client 
/127.0.0.1:60994 (no session established for client)
[junit] 2016-09-22 13:16:07,365 [myid:127.0.0.1:16680] - INFO  
[main-SendThread(127.0.0.1:16680):ClientCnxn$SendThread@1231] - Unable to read 
additional data from server sessionid 0x0, likely server has closed socket, 
closing socket connection and attempting reconnect
[junit] 2016-09-22 13:16:07,724 [myid:127.0.0.1:16656] - INFO  
[main-SendThread(127.0.0.1:16656):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:16656. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 13:16:07,725 [myid:127.0.0.1:16656] - WARN  
[main-SendThread(127.0.0.1:16656):ClientCnxn$SendThread@1235] - Session 
0x100fbd2fbe2 for server 127.0.0.1/127.0.0.1:16656, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit] at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
[junit] at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit] at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2016-09-22 13:16:08,166 [myid:127.0.0.1:16662] - INFO  
[main-SendThread(127.0.0.1:16662):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:16662. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 13:16:08,167 [myid:127.0.0.1:16662] - WARN  
[main-SendThread(127.0.0.1:16662):ClientCnxn$SendThread@1235] - Session 
0x300fbd2fcc4 for server 127.0.0.1/127.0.0.1:16662, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit]

[jira] [Commented] (ZOOKEEPER-2496) When inside a transaction, some exceptions do not have path information set.

2016-09-22 Thread Cyrille Artho (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513124#comment-15513124
 ] 

Cyrille Artho commented on ZOOKEEPER-2496:
--

Looks like the same bug to me.

> When inside a transaction, some exceptions do not have path information set.
> 
>
> Key: ZOOKEEPER-2496
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2496
> Project: ZooKeeper
>  Issue Type: Bug
>Affects Versions: 3.4.8, 3.5.1
>Reporter: Kazuaki Banzai
> Attachments: transactionException.patch
>
>
> If a client tries to execute some illegal operations inside a transaction, 
> ZooKeeper throws an exception.
> Some exceptions such as NodeExistsException should have a path to indicate 
> where the exception occurred.
> ZooKeeper clients can get the path by calling the getPath method.
> However, this method returns null if the exception occurs inside a 
> transaction.
> For example, when a client calls create /a and create /a in a transaction,
> ZooKeeper throws NodeExistsException but getPath returns null.
> In normal operation (outside transactions), the path information is set 
> correctly.
> The patch only shows that this bug occurs with the NoNode and NodeExists 
> exceptions,
> but it seems to occur with any exception that needs path information:
> When an error occurs in a transaction, ZooKeeper creates an ErrorResult 
> instance to represent the error result.
> However, the ErrorResult class doesn't have a field for the path where the error 
> occurred (see src/java/main/org/apache/zookeeper/OpResult.java for more 
> details).
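
A small reproduction of the reported symptom with the Java client (the connection string is a 
placeholder; adjust for your ensemble):

{code}
import java.util.Arrays;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class TxnPathRepro {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
        try {
            // Two creates of the same path inside one transaction.
            zk.multi(Arrays.asList(
                    Op.create("/a", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT),
                    Op.create("/a", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT)));
        } catch (KeeperException e) {
            // Outside a transaction getPath() would return "/a"; inside a multi
            // it comes back null, which is the behavior this report describes.
            System.out.println("code=" + e.code() + " path=" + e.getPath());
        } finally {
            zk.close();
        }
    }
}
{code}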



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2461) There is no difference between the observer and the participants in the leader election algorithm

2016-09-22 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512847#comment-15512847
 ] 

Flavio Junqueira commented on ZOOKEEPER-2461:
-

If we end up making any improvement here, let's not change the 3.4 branch.

> There is no difference between the observer and the participants in the 
> leader election algorithm
> -
>
> Key: ZOOKEEPER-2461
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2461
> Project: ZooKeeper
>  Issue Type: Improvement
>  Components: quorum
>Affects Versions: 3.5.0
>Reporter: Ryan Zhang
>Assignee: Ryan Zhang
> Fix For: 3.5.3, 3.6.0
>
>
> We have observed a case in which, when a leader machine crashes hard, non-voting 
> learners take a long time to detect the new leader. After looking at the 
> details more carefully, we identified one potential improvement (and one bug 
> fixed in 3.5).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ZOOKEEPER-2461) There is no difference between the observer and the participants in the leader election algorithm

2016-09-22 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated ZOOKEEPER-2461:

Fix Version/s: (was: 3.4.10)

> There is no difference between the observer and the participants in the 
> leader election algorithm
> -
>
> Key: ZOOKEEPER-2461
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2461
> Project: ZooKeeper
>  Issue Type: Improvement
>  Components: quorum
>Affects Versions: 3.5.0
>Reporter: Ryan Zhang
>Assignee: Ryan Zhang
> Fix For: 3.5.3, 3.6.0
>
>
> We have observed a case in which, when a leader machine crashes hard, non-voting 
> learners take a long time to detect the new leader. After looking at the 
> details more carefully, we identified one potential improvement (and one bug 
> fixed in 3.5).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2461) There is no difference between the observer and the participants in the leader election algorithm

2016-09-22 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512803#comment-15512803
 ] 

Flavio Junqueira commented on ZOOKEEPER-2461:
-

I'm sorry for taking some time to respond, [~nerdyyatrice].

bq. By that time, all the participants haven't agreed on the new leader so they 
all replied with the old leader.

This observation isn't correct. When a server transitions into LOOKING, it sets 
its vote to itself, so those notifications indicate that they are supporting 
4 in this round. It is possible that they are just electing the same leader 
because of some glitch in your instance.

> There is no difference between the observer and the participants in the 
> leader election algorithm
> -
>
> Key: ZOOKEEPER-2461
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2461
> Project: ZooKeeper
>  Issue Type: Improvement
>  Components: quorum
>Affects Versions: 3.5.0
>Reporter: Ryan Zhang
>Assignee: Ryan Zhang
> Fix For: 3.4.10, 3.5.3, 3.6.0
>
>
> We have observed a case in which, when a leader machine crashes hard, non-voting 
> learners take a long time to detect the new leader. After looking at the 
> details more carefully, we identified one potential improvement (and one bug 
> fixed in 3.5).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


ZooKeeper_branch35_jdk7 - Build # 673 - Still Failing

2016-09-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper_branch35_jdk7/673/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 436067 lines...]
[junit] java.net.ConnectException: Connection refused
[junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit] at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
[junit] at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit] at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2016-09-22 09:48:26,394 [myid:127.0.0.1:13915] - INFO  
[main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:13915. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 09:48:26,395 [myid:127.0.0.1:13915] - WARN  
[main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1235] - Session 
0x100e3b9a440 for server 127.0.0.1/127.0.0.1:13915, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit] at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
[junit] at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit] at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2016-09-22 09:48:26,412 [myid:127.0.0.1:14039] - INFO  
[main-SendThread(127.0.0.1:14039):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:14039. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 09:48:26,412 [myid:127.0.0.1:14039] - WARN  
[main-SendThread(127.0.0.1:14039):ClientCnxn$SendThread@1235] - Session 
0x200e3c09f6e for server 127.0.0.1/127.0.0.1:14039, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit] at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
[junit] at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit] at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2016-09-22 09:48:26,664 [myid:127.0.0.1:14042] - INFO  
[main-SendThread(127.0.0.1:14042):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:14042. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 09:48:26,664 [myid:127.0.0.1:14042] - WARN  
[main-SendThread(127.0.0.1:14042):ClientCnxn$SendThread@1235] - Session 
0x300e3c09e5e for server 127.0.0.1/127.0.0.1:14042, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit] at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
[junit] at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit] at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2016-09-22 09:48:27,297 [myid:127.0.0.1:14036] - INFO  
[main-SendThread(127.0.0.1:14036):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:14036. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 09:48:27,298 [myid:127.0.0.1:14036] - WARN  
[main-SendThread(127.0.0.1:14036):ClientCnxn$SendThread@1235] - Session 
0x100e3c09d50 for server 127.0.0.1/127.0.0.1:14036, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit] at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
[junit] at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit] at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2016-09-22 09:48:27,561 [myid:127.0.0.1:13915] - INFO  
[main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1113] - Opening socket 
connection to server 127.0.0.1/127.0.0.1:13915. Will not attempt to 
authenticate using SASL (unknown error)
[junit] 2016-09-22 09:48:27,562 [myid:127.0.0.1:13915] - WARN  
[main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1235] - Session 
0x100e3b9a440 for server 127.0.0.1/127.0.0.1:13915, unexpected error, 
closing socket connection and attempting reconnect
[junit] java.net.ConnectEx

[jira] [Commented] (ZOOKEEPER-2454) Limit Connection Count based on User

2016-09-22 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512763#comment-15512763
 ] 

Flavio Junqueira commented on ZOOKEEPER-2454:
-

I have done a first pass of the patch for ZOOKEEPER-2080. [~botond.hejj], would 
you be interested in having a look at it as well? Please do so if you have a 
chance and leave your comments. This way we can make progress faster with this 
issue.

> Limit Connection Count based on User
> 
>
> Key: ZOOKEEPER-2454
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2454
> Project: ZooKeeper
>  Issue Type: New Feature
>  Components: server
>Reporter: Botond Hejj
>Assignee: Botond Hejj
>Priority: Minor
> Attachments: ZOOKEEPER-2454-br-3-4.patch, ZOOKEEPER-2454.patch, 
> ZOOKEEPER-2454.patch
>
>
> ZooKeeper can currently limit the connection count from clients coming from the 
> same IP. It is a great feature for preventing malfunctioning clients from 
> DOS-ing the server with many requests.
> I propose additional safeguards for ZooKeeper. 
> It would be great if, optionally, the connection count could be limited for a 
> specific user or for a specific user on an IP.
> This is useful in cases where a ZooKeeper ensemble is shared by multiple users 
> and these users share the same client IPs. This can be common in container-based 
> cloud deployments where the external IP of multiple clients can be the same.
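
To make the proposal concrete, here is a minimal sketch of the kind of per-user safeguard 
being suggested (hypothetical class, no such option exists in ZooKeeper today), analogous to 
the existing per-IP maxClientCnxns limit:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only of the proposed per-user connection cap; not existing ZooKeeper code.
class PerUserConnectionLimiter {
    private final int maxPerUser;
    private final Map<String, AtomicInteger> countsByUser = new ConcurrentHashMap<>();

    PerUserConnectionLimiter(int maxPerUser) {
        this.maxPerUser = maxPerUser;
    }

    /** Returns true if the connection is admitted, false if the user is over the limit. */
    boolean tryAcquire(String user) {
        AtomicInteger count = countsByUser.computeIfAbsent(user, u -> new AtomicInteger());
        if (count.incrementAndGet() > maxPerUser) {
            count.decrementAndGet();
            return false;                     // reject: would exceed the per-user cap
        }
        return true;
    }

    void release(String user) {
        AtomicInteger count = countsByUser.get(user);
        if (count != null) {
            count.decrementAndGet();
        }
    }
}
{code}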



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2580) ErrorMessage is not correct when set IP acl and try to set again from another machine

2016-09-22 Thread Rakesh Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512695#comment-15512695
 ] 

Rakesh Kumar Singh commented on ZOOKEEPER-2580:
---

[~arshad.mohammad] thanks, but I think you need to improve your English to 
understand the problem, which will help you understand it at the first level 
itself. Read this issue again.

> ErrorMessage is not correct when set IP acl and try to set again from another 
> machine
> -
>
> Key: ZOOKEEPER-2580
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2580
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: java client
>Affects Versions: 3.5.1
>Reporter: Rakesh Kumar Singh
>Priority: Minor
>
> Set an IP ACL and then try to set it again from another machine:-
> [zk: localhost:2181(CONNECTED) 11] setAcl /ip_test ip:10.18.101.80:crdwa
> KeeperErrorCode = NoAuth for /ip_test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


ZooKeeper-trunk-solaris - Build # 1322 - Still Failing

2016-09-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk-solaris/1322/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 472335 lines...]
[junit] 2016-09-22 09:02:18,382 [myid:] - INFO  [main:ClientBase@386] - 
CREATING server instance 127.0.0.1:11222
[junit] 2016-09-22 09:02:18,382 [myid:] - INFO  
[main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 
kB direct buffers.
[junit] 2016-09-22 09:02:18,383 [myid:] - INFO  
[main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222
[junit] 2016-09-22 09:02:18,384 [myid:] - INFO  [main:ClientBase@361] - 
STARTING server instance 127.0.0.1:11222
[junit] 2016-09-22 09:02:18,384 [myid:] - INFO  [main:ZooKeeperServer@889] 
- minSessionTimeout set to 6000
[junit] 2016-09-22 09:02:18,384 [myid:] - INFO  [main:ZooKeeperServer@898] 
- maxSessionTimeout set to 6
[junit] 2016-09-22 09:02:18,384 [myid:] - INFO  [main:ZooKeeperServer@159] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper-trunk-solaris/build/test/tmp/test3619071954939701295.junit.dir/version-2
 snapdir 
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper-trunk-solaris/build/test/tmp/test3619071954939701295.junit.dir/version-2
[junit] 2016-09-22 09:02:18,385 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper-trunk-solaris/build/test/tmp/test3619071954939701295.junit.dir/version-2/snapshot.b
[junit] 2016-09-22 09:02:18,387 [myid:] - INFO  [main:FileTxnSnapLog@306] - 
Snapshotting: 0xb to 
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper-trunk-solaris/build/test/tmp/test3619071954939701295.junit.dir/version-2/snapshot.b
[junit] 2016-09-22 09:02:18,388 [myid:] - ERROR [main:ZooKeeperServer@501] 
- ZKShutdownHandler is not registered, so ZooKeeper server won't take any 
action on ERROR or SHUTDOWN server state changes
[junit] 2016-09-22 09:02:18,388 [myid:] - INFO  
[main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222
[junit] 2016-09-22 09:02:18,389 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:38493
[junit] 2016-09-22 09:02:18,389 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from 
/127.0.0.1:38493
[junit] 2016-09-22 09:02:18,390 [myid:] - INFO  
[NIOWorkerThread-1:StatCommand@49] - Stat command output
[junit] 2016-09-22 09:02:18,390 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client 
/127.0.0.1:38493 (no session established for client)
[junit] 2016-09-22 09:02:18,390 [myid:] - INFO  [main:JMXEnv@228] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2016-09-22 09:02:18,391 [myid:] - INFO  [main:JMXEnv@245] - 
expect:InMemoryDataTree
[junit] 2016-09-22 09:02:18,392 [myid:] - INFO  [main:JMXEnv@249] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree
[junit] 2016-09-22 09:02:18,392 [myid:] - INFO  [main:JMXEnv@245] - 
expect:StandaloneServer_port
[junit] 2016-09-22 09:02:18,392 [myid:] - INFO  [main:JMXEnv@249] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port11222
[junit] 2016-09-22 09:02:18,392 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 17771
[junit] 2016-09-22 09:02:18,392 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 24
[junit] 2016-09-22 09:02:18,392 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD 
testQuota
[junit] 2016-09-22 09:02:18,393 [myid:] - INFO  [main:ClientBase@543] - 
tearDown starting
[junit] 2016-09-22 09:02:18,913 [myid:] - INFO  [main:ZooKeeper@1313] - 
Session: 0x123ce1ccfb5 closed
[junit] 2016-09-22 09:02:18,913 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for 
session: 0x123ce1ccfb5
[junit] 2016-09-22 09:02:18,913 [myid:] - INFO  [main:ClientBase@513] - 
STOPPING server
[junit] 2016-09-22 09:02:18,914 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2016-09-22 09:02:18,914 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2016-09-22 09:02:18,914 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorT