[jira] [Commented] (ZOOKEEPER-1659) Add JMX support for dynamic reconfiguration

2014-05-31 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014600#comment-14014600
 ] 

Rakesh R commented on ZOOKEEPER-1659:
-

[~michim] As per the discussion with [~shralex], it's decided to keep the localbean. 
I've attached a new patch; could you please review it when you get some time. 
Thanks!

 Add JMX support for dynamic reconfiguration
 ---

 Key: ZOOKEEPER-1659
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1659
 Project: ZooKeeper
  Issue Type: Bug
  Components: server
Affects Versions: 3.5.0
Reporter: Alexander Shraer
Assignee: Rakesh R
Priority: Blocker
 Fix For: 3.5.0

 Attachments: ZOOKEEPER-1659.patch, ZOOKEEPER-1659.patch, 
 ZOOKEEPER-1659.patch, ZOOKEEPER-1659.patch


 We need to update JMX during reconfigurations. Currently, reconfiguration 
 changes are not reflected in JConsole.
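The fix amounts to re-registering MBeans when the ensemble configuration changes, so that JMX clients such as JConsole see the new membership. A minimal, hypothetical sketch using only the JDK's javax.management API (the class and bean names below are illustrative, not ZooKeeper's actual JMX bean classes):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Illustrative sketch only: on a reconfiguration, the stale server bean must
// be unregistered and a bean reflecting the new configuration registered,
// otherwise JConsole keeps showing the old view. Names are hypothetical,
// not ZooKeeper's real bean classes.
public class ReconfigJmxSketch {

    public interface ServerStubMBean {
        String getState();
    }

    public static class ServerStub implements ServerStubMBean {
        private final String state;
        public ServerStub(String state) { this.state = state; }
        public String getState() { return state; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(
            "org.apache.ZooKeeperService:name0=ReplicatedServer_id1");

        mbs.registerMBean(new ServerStub("following"), name);

        // Simulated reconfiguration: swap the bean so JMX reflects the change.
        mbs.unregisterMBean(name);
        mbs.registerMBean(new ServerStub("leading"), name);

        System.out.println(mbs.getAttribute(name, "State"));  // prints "leading"
    }
}
```

The key point is the unregister/register pair: standard MBeans are immutable snapshots from the MBeanServer's point of view, so a membership change that is not re-registered is simply invisible to JMX clients.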



--
This message was sent by Atlassian JIRA
(v6.2#6252)


ZooKeeper_branch34_openjdk7 - Build # 538 - Failure

2014-05-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper_branch34_openjdk7/538/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 171934 lines...]
[junit] 2014-05-31 10:41:42,124 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-05-31 10:41:42,124 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@224] - 
NIOServerCnxn factory exited run method
[junit] 2014-05-31 10:41:42,125 [myid:] - INFO  [main:ZooKeeperServer@441] 
- shutting down
[junit] 2014-05-31 10:41:42,125 [myid:] - INFO  
[main:SessionTrackerImpl@225] - Shutting down
[junit] 2014-05-31 10:41:42,125 [myid:] - INFO  
[main:PrepRequestProcessor@761] - Shutting down
[junit] 2014-05-31 10:41:42,125 [myid:] - INFO  
[main:SyncRequestProcessor@209] - Shutting down
[junit] 2014-05-31 10:41:42,125 [myid:] - INFO  [ProcessThread(sid:0 
cport:-1)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
[junit] 2014-05-31 10:41:42,125 [myid:] - INFO  
[SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
[junit] 2014-05-31 10:41:42,126 [myid:] - INFO  
[main:FinalRequestProcessor@415] - shutdown of request processor complete
[junit] 2014-05-31 10:41:42,126 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-05-31 10:41:42,127 [myid:] - INFO  [main:JMXEnv@146] - 
ensureOnly:[]
[junit] 2014-05-31 10:41:42,128 [myid:] - INFO  [main:ClientBase@443] - 
STARTING server
[junit] 2014-05-31 10:41:42,128 [myid:] - INFO  [main:ClientBase@364] - 
CREATING server instance 127.0.0.1:11221
[junit] 2014-05-31 10:41:42,129 [myid:] - INFO  
[main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-05-31 10:41:42,129 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-05-31 10:41:42,129 [myid:] - INFO  [main:ZooKeeperServer@162] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_openjdk7/branch-3.4/build/test/tmp/test7883727489500993221.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_openjdk7/branch-3.4/build/test/tmp/test7883727489500993221.junit.dir/version-2
[junit] 2014-05-31 10:41:42,133 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-05-31 10:41:42,133 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@197] - 
Accepted socket connection from /127.0.0.1:59705
[junit] 2014-05-31 10:41:42,134 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@827] - Processing 
stat command from /127.0.0.1:59705
[junit] 2014-05-31 10:41:42,134 [myid:] - INFO  
[Thread-4:NIOServerCnxn$StatCommand@663] - Stat command output
[junit] 2014-05-31 10:41:42,135 [myid:] - INFO  
[Thread-4:NIOServerCnxn@1007] - Closed socket connection for client 
/127.0.0.1:59705 (no session established for client)
[junit] 2014-05-31 10:41:42,135 [myid:] - INFO  [main:JMXEnv@229] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-05-31 10:41:42,137 [myid:] - INFO  [main:JMXEnv@246] - 
expect:InMemoryDataTree
[junit] 2014-05-31 10:41:42,137 [myid:] - INFO  [main:JMXEnv@250] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-05-31 10:41:42,137 [myid:] - INFO  [main:JMXEnv@246] - 
expect:StandaloneServer_port
[junit] 2014-05-31 10:41:42,137 [myid:] - INFO  [main:JMXEnv@250] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-05-31 10:41:42,138 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 19618
[junit] 2014-05-31 10:41:42,138 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 20
[junit] 2014-05-31 10:41:42,138 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-05-31 10:41:42,138 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-05-31 10:41:42,209 [myid:] - INFO  [main:ZooKeeper@684] - 
Session: 0x14651dee975 closed
[junit] 2014-05-31 10:41:42,209 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@512] - EventThread shut down
[junit] 2014-05-31 10:41:42,210 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-05-31 10:41:42,210 [myid:] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@224] - 
NIOServerCnxn factory exited run method
[junit] 2014-05-31 10:41:42,210 [myid:] - INFO  [main:ZooKeeperServer@441] 
- shutting down
[junit] 2014-05-31 10:41:42,210 [myid:] - INFO  
[main:SessionTrackerImpl@225] - Shutting 

ZooKeeper-trunk - Build # 2319 - Failure

2014-05-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk/2319/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 238479 lines...]
[junit] 2014-05-31 11:07:16,240 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-05-31 11:07:16,240 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-05-31 11:07:16,240 [myid:] - INFO  [main:ZooKeeperServer@766] 
- minSessionTimeout set to 6000
[junit] 2014-05-31 11:07:16,240 [myid:] - INFO  [main:ZooKeeperServer@775] 
- maxSessionTimeout set to 6
[junit] 2014-05-31 11:07:16,241 [myid:] - INFO  [main:ZooKeeperServer@149] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test2882118990173405084.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test2882118990173405084.junit.dir/version-2
[junit] 2014-05-31 11:07:16,242 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test2882118990173405084.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 11:07:16,244 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test2882118990173405084.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 11:07:16,246 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-05-31 11:07:16,247 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:36958
[junit] 2014-05-31 11:07:16,247 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:36958
[junit] 2014-05-31 11:07:16,248 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-05-31 11:07:16,248 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:36958 (no session established for client)
[junit] 2014-05-31 11:07:16,248 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-05-31 11:07:16,249 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-05-31 11:07:16,250 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-05-31 11:07:16,250 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-05-31 11:07:16,250 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-05-31 11:07:16,250 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 14757
[junit] 2014-05-31 11:07:16,251 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-05-31 11:07:16,251 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-05-31 11:07:16,251 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-05-31 11:07:16,314 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x14651f6528a closed
[junit] 2014-05-31 11:07:16,314 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-05-31 11:07:16,315 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-05-31 11:07:16,315 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-05-31 11:07:16,315 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-05-31 11:07:16,315 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-05-31 11:07:16,315 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-05-31 11:07:16,316 [myid:] - INFO  [main:ZooKeeperServer@428] 
- shutting down
[junit] 2014-05-31 11:07:16,316 [myid:] - INFO  
[main:SessionTrackerImpl@184] - Shutting down
[junit] 2014-05-31 11:07:16,316 [myid:] - INFO  
[main:PrepRequestProcessor@981] - Shutting down
[junit] 2014-05-31 11:07:16,316 [myid:] - INFO  
[main:SyncRequestProcessor@191] - Shutting down
[junit] 2014-05-31 11:07:16,316 [myid:] - INFO  [ProcessThread(sid:0 

ZooKeeper-trunk-jdk7 - Build # 867 - Failure

2014-05-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk-jdk7/867/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 238740 lines...]
[junit] 2014-05-31 11:15:17,520 [myid:] - INFO  [main:ClientBase@443] - 
STARTING server
[junit] 2014-05-31 11:15:17,520 [myid:] - INFO  [main:ClientBase@364] - 
CREATING server instance 127.0.0.1:11221
[junit] 2014-05-31 11:15:17,520 [myid:] - INFO  
[main:NIOServerCnxnFactory@670] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 
kB direct buffers.
[junit] 2014-05-31 11:15:17,521 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-05-31 11:15:17,521 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-05-31 11:15:17,521 [myid:] - INFO  [main:ZooKeeperServer@766] 
- minSessionTimeout set to 6000
[junit] 2014-05-31 11:15:17,521 [myid:] - INFO  [main:ZooKeeperServer@775] 
- maxSessionTimeout set to 6
[junit] 2014-05-31 11:15:17,522 [myid:] - INFO  [main:ZooKeeperServer@149] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk7/trunk/build/test/tmp/test1637586911021993580.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk7/trunk/build/test/tmp/test1637586911021993580.junit.dir/version-2
[junit] 2014-05-31 11:15:17,522 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk7/trunk/build/test/tmp/test1637586911021993580.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 11:15:17,524 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk7/trunk/build/test/tmp/test1637586911021993580.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 11:15:17,526 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-05-31 11:15:17,526 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:54103
[junit] 2014-05-31 11:15:17,527 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:54103
[junit] 2014-05-31 11:15:17,527 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-05-31 11:15:17,527 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:54103 (no session established for client)
[junit] 2014-05-31 11:15:17,527 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-05-31 11:15:17,528 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-05-31 11:15:17,528 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-05-31 11:15:17,529 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-05-31 11:15:17,529 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-05-31 11:15:17,529 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 15047
[junit] 2014-05-31 11:15:17,529 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-05-31 11:15:17,529 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-05-31 11:15:17,529 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-05-31 11:15:17,603 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x14651fdaafd closed
[junit] 2014-05-31 11:15:17,603 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-05-31 11:15:17,603 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-05-31 11:15:17,603 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-05-31 11:15:17,603 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-05-31 11:15:17,603 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-05-31 11:15:17,603 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 

ZooKeeper-trunk-jdk8 - Build # 30 - Still Failing

2014-05-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk-jdk8/30/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 246417 lines...]
[junit] 2014-05-31 12:01:07,108 [myid:] - INFO  [main:ClientBase@443] - 
STARTING server
[junit] 2014-05-31 12:01:07,109 [myid:] - INFO  [main:ClientBase@364] - 
CREATING server instance 127.0.0.1:11221
[junit] 2014-05-31 12:01:07,109 [myid:] - INFO  
[main:NIOServerCnxnFactory@670] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 
kB direct buffers.
[junit] 2014-05-31 12:01:07,109 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-05-31 12:01:07,110 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-05-31 12:01:07,110 [myid:] - INFO  [main:ZooKeeperServer@766] 
- minSessionTimeout set to 6000
[junit] 2014-05-31 12:01:07,110 [myid:] - INFO  [main:ZooKeeperServer@775] 
- maxSessionTimeout set to 6
[junit] 2014-05-31 12:01:07,110 [myid:] - INFO  [main:ZooKeeperServer@149] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk8/trunk/build/test/tmp/test7042027428110551122.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk8/trunk/build/test/tmp/test7042027428110551122.junit.dir/version-2
[junit] 2014-05-31 12:01:07,111 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk8/trunk/build/test/tmp/test7042027428110551122.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 12:01:07,113 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk8/trunk/build/test/tmp/test7042027428110551122.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 12:01:07,114 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-05-31 12:01:07,114 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:42002
[junit] 2014-05-31 12:01:07,115 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:42002
[junit] 2014-05-31 12:01:07,115 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-05-31 12:01:07,116 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:42002 (no session established for client)
[junit] 2014-05-31 12:01:07,116 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-05-31 12:01:07,117 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-05-31 12:01:07,118 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-05-31 12:01:07,118 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-05-31 12:01:07,118 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-05-31 12:01:07,118 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 4299
[junit] 2014-05-31 12:01:07,118 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-05-31 12:01:07,119 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-05-31 12:01:07,119 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-05-31 12:01:07,184 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x14652279e98 closed
[junit] 2014-05-31 12:01:07,184 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-05-31 12:01:07,184 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-05-31 12:01:07,184 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-05-31 12:01:07,184 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-05-31 12:01:07,184 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-05-31 12:01:07,185 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 

[jira] [Commented] (ZOOKEEPER-102) Need to replace Jute with supported code

2014-05-31 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014709#comment-14014709
 ] 

Flavio Junqueira commented on ZOOKEEPER-102:


I think we should leave this open. It does seem like a tough one at this point, 
but I still have hope that it can happen some day. We should consider doing a 
4.0.0 version so that we can break compatibility.

 Need to replace Jute with supported code
 

 Key: ZOOKEEPER-102
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-102
 Project: ZooKeeper
  Issue Type: Improvement
Reporter: Benjamin Reed

 ZooKeeper currently uses Jute to serialize objects to put on the wire and on 
 disk. We pulled Jute out of Hadoop and added a C binding. Both versions of 
 Jute have evolved (although Hadoop still doesn't have a C binding). It would 
 be nice to use a more standard serialization library. Some options include 
 Thrift or Google's protocol buffers.
 Our main requirements would be Java and C bindings and good performance. (For 
 example, serializing to XML would give us incredibly bad performance and 
 would not be acceptable!)
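The performance requirement comes down to compact, length-prefixed binary encoding, which is what Jute (and Thrift and protocol buffers) produce. A stdlib-only sketch of that encoding style, assuming a toy record with one fixed-width field and one buffer field (this is not the org.apache.jute API, just an illustration of the format family):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Illustrative only: mimics the length-prefixed binary style that Jute,
// Thrift, and protobuf share, using plain java.io streams. This is NOT
// the actual org.apache.jute API.
public class BinaryRecordSketch {

    // Serialize a toy record: a fixed-width zxid plus a length-prefixed path.
    public static byte[] serialize(long zxid, String path) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeLong(zxid);                       // fixed-width field
        byte[] p = path.getBytes(StandardCharsets.UTF_8);
        out.writeInt(p.length);                    // length prefix for the buffer
        out.write(p);
        return bos.toByteArray();
    }

    // Read the record back, skipping the zxid and recovering the path.
    public static String deserializePath(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        in.readLong();                             // skip zxid
        byte[] p = new byte[in.readInt()];
        in.readFully(p);
        return new String(p, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = serialize(0xbL, "/zookeeper/quota");
        System.out.println(deserializePath(wire)); // prints "/zookeeper/quota"
    }
}
```

Any replacement library would have to preserve this wire compactness in both the Java and C bindings, which is why text formats like XML are ruled out in the description above.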





[jira] [Updated] (ZOOKEEPER-102) Need to replace Jute with supported code

2014-05-31 Thread Michi Mutsuzaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michi Mutsuzaki updated ZOOKEEPER-102:
--

Fix Version/s: 4.0.0

 Need to replace Jute with supported code
 

 Key: ZOOKEEPER-102
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-102
 Project: ZooKeeper
  Issue Type: Improvement
Reporter: Benjamin Reed
 Fix For: 4.0.0


 ZooKeeper currently uses Jute to serialize objects to put on the wire and on 
 disk. We pulled Jute out of Hadoop and added a C binding. Both versions of 
 Jute have evolved (although Hadoop still doesn't have a C binding). It would 
 be nice to use a more standard serialization library. Some options include 
 Thrift or Google's protocol buffers.
 Our main requirements would be Java and C bindings and good performance. (For 
 example, serializing to XML would give us incredibly bad performance and 
 would not be acceptable!)





[jira] [Commented] (ZOOKEEPER-102) Need to replace Jute with supported code

2014-05-31 Thread Michi Mutsuzaki (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014742#comment-14014742
 ] 

Michi Mutsuzaki commented on ZOOKEEPER-102:
---

Reopened the issue for 4.0.0. I am worried about breaking compatibility 
though, given that it'll break third-party client libraries like kazoo.

 Need to replace Jute with supported code
 

 Key: ZOOKEEPER-102
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-102
 Project: ZooKeeper
  Issue Type: Improvement
Reporter: Benjamin Reed
 Fix For: 4.0.0


 ZooKeeper currently uses Jute to serialize objects to put on the wire and on 
 disk. We pulled Jute out of Hadoop and added a C binding. Both versions of 
 Jute have evolved (although Hadoop still doesn't have a C binding). It would 
 be nice to use a more standard serialization library. Some options include 
 Thrift or Google's protocol buffers.
 Our main requirements would be Java and C bindings and good performance. (For 
 example, serializing to XML would give us incredibly bad performance and 
 would not be acceptable!)





[jira] [Commented] (ZOOKEEPER-1659) Add JMX support for dynamic reconfiguration

2014-05-31 Thread Michi Mutsuzaki (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014746#comment-14014746
 ] 

Michi Mutsuzaki commented on ZOOKEEPER-1659:


+1. Alex, could you also take a look at the latest patch?

 Add JMX support for dynamic reconfiguration
 ---

 Key: ZOOKEEPER-1659
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1659
 Project: ZooKeeper
  Issue Type: Bug
  Components: server
Affects Versions: 3.5.0
Reporter: Alexander Shraer
Assignee: Rakesh R
Priority: Blocker
 Fix For: 3.5.0

 Attachments: ZOOKEEPER-1659.patch, ZOOKEEPER-1659.patch, 
 ZOOKEEPER-1659.patch, ZOOKEEPER-1659.patch


 We need to update JMX during reconfigurations. Currently, reconfiguration 
 changes are not reflected in JConsole.





[jira] [Commented] (ZOOKEEPER-1810) Add version to FLE notifications for trunk

2014-05-31 Thread Michi Mutsuzaki (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014761#comment-14014761
 ] 

Michi Mutsuzaki commented on ZOOKEEPER-1810:


I'll check this in after running the build one more time. Right now Jenkins 
seems to be down...

 Add version to FLE notifications for trunk
 --

 Key: ZOOKEEPER-1810
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1810
 Project: ZooKeeper
  Issue Type: Bug
  Components: leaderElection
Affects Versions: 3.5.0
Reporter: Flavio Junqueira
Assignee: Germán Blanco
 Fix For: 3.5.0

 Attachments: ZOOKEEPER-1810.patch, ZOOKEEPER-1810.patch, 
 ZOOKEEPER-1810.patch, ZOOKEEPER-1810.patch, ZOOKEEPER-1810.patch, 
 ZOOKEEPER-1810.patch, ZOOKEEPER-1810.patch


 The same as ZOOKEEPER-1808 but for trunk.





[jira] [Commented] (ZOOKEEPER-1810) Add version to FLE notifications for trunk

2014-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014796#comment-14014796
 ] 

Hadoop QA commented on ZOOKEEPER-1810:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12643983/ZOOKEEPER-1810.patch
  against trunk revision 1598087.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 33 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2118//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2118//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2118//console

This message is automatically generated.

 Add version to FLE notifications for trunk
 --

 Key: ZOOKEEPER-1810
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1810
 Project: ZooKeeper
  Issue Type: Bug
  Components: leaderElection
Affects Versions: 3.5.0
Reporter: Flavio Junqueira
Assignee: Germán Blanco
 Fix For: 3.5.0

 Attachments: ZOOKEEPER-1810.patch, ZOOKEEPER-1810.patch, 
 ZOOKEEPER-1810.patch, ZOOKEEPER-1810.patch, ZOOKEEPER-1810.patch, 
 ZOOKEEPER-1810.patch, ZOOKEEPER-1810.patch


 The same as ZOOKEEPER-1808 but for trunk.





Failed: ZOOKEEPER-1810 PreCommit Build #2118

2014-05-31 Thread Apache Jenkins Server
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-1810
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2118/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 236185 lines...]
 [exec] 
 [exec] 
 [exec] 
 [exec] -1 overall.  Here are the results of testing the latest attachment 
 [exec]   
http://issues.apache.org/jira/secure/attachment/12643983/ZOOKEEPER-1810.patch
 [exec]   against trunk revision 1598087.
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 33 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
(version 1.3.9) warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] -1 core tests.  The patch failed core unit tests.
 [exec] 
 [exec] +1 contrib tests.  The patch passed contrib unit tests.
 [exec] 
 [exec] Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2118//testReport/
 [exec] Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2118//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
 [exec] Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2118//console
 [exec] 
 [exec] This message is automatically generated.
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Adding comment to Jira.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] Comment added.
 [exec] 12711d6ff39046e18b49a96e3cb5aeaf13eef43e logged out
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build.xml:1696:
 exec returned: 1

Total time: 38 minutes 54 seconds
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Description set: ZOOKEEPER-1810
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  
org.apache.zookeeper.server.quorum.StandaloneDisabledTest.startSingleServerTest

Error Message:
client could not connect to reestablished quorum: giving up after 30+ seconds.

Stack Trace:
junit.framework.AssertionFailedError: client could not connect to reestablished 
quorum: giving up after 30+ seconds.
at 
org.apache.zookeeper.test.ReconfigTest.testNormalOperation(ReconfigTest.java:154)
at 
org.apache.zookeeper.server.quorum.StandaloneDisabledTest.startSingleServerTest(StandaloneDisabledTest.java:75)
at 
org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52)


FAILED:  org.apache.zookeeper.test.LETest.testLE

Error Message:
Threads didn't join

Stack Trace:
junit.framework.AssertionFailedError: Threads didn't join
at org.apache.zookeeper.test.LETest.testLE(LETest.java:123)
at 
org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52)




ZooKeeper-trunk - Build # 2320 - Still Failing

2014-05-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk/2320/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 235051 lines...]
[junit] 2014-05-31 20:24:01,427 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-05-31 20:24:01,428 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-05-31 20:24:01,428 [myid:] - INFO  [main:ZooKeeperServer@766] 
- minSessionTimeout set to 6000
[junit] 2014-05-31 20:24:01,428 [myid:] - INFO  [main:ZooKeeperServer@775] 
- maxSessionTimeout set to 6
[junit] 2014-05-31 20:24:01,428 [myid:] - INFO  [main:ZooKeeperServer@149] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test8258013134571286208.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test8258013134571286208.junit.dir/version-2
[junit] 2014-05-31 20:24:01,429 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test8258013134571286208.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 20:24:01,432 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test8258013134571286208.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 20:24:01,434 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-05-31 20:24:01,434 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:55249
[junit] 2014-05-31 20:24:01,435 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:55249
[junit] 2014-05-31 20:24:01,435 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-05-31 20:24:01,436 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:55249 (no session established for client)
[junit] 2014-05-31 20:24:01,436 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-05-31 20:24:01,437 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-05-31 20:24:01,437 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-05-31 20:24:01,438 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-05-31 20:24:01,438 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-05-31 20:24:01,438 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 14757
[junit] 2014-05-31 20:24:01,438 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-05-31 20:24:01,438 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-05-31 20:24:01,439 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-05-31 20:24:01,503 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x14653f40b7b closed
[junit] 2014-05-31 20:24:01,503 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-05-31 20:24:01,503 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-05-31 20:24:01,504 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-05-31 20:24:01,504 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-05-31 20:24:01,504 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-05-31 20:24:01,504 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-05-31 20:24:01,504 [myid:] - INFO  [main:ZooKeeperServer@428] 
- shutting down
[junit] 2014-05-31 20:24:01,504 [myid:] - INFO  
[main:SessionTrackerImpl@184] - Shutting down
[junit] 2014-05-31 20:24:01,505 [myid:] - INFO  
[main:PrepRequestProcessor@981] - Shutting down
[junit] 2014-05-31 20:24:01,505 [myid:] - INFO  
[main:SyncRequestProcessor@191] - Shutting down
[junit] 2014-05-31 20:24:01,505 [myid:] - INFO  [ProcessThread(sid:0 

ZooKeeper-trunk-openjdk7 - Build # 472 - Failure

2014-05-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk-openjdk7/472/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 243027 lines...]
[junit] 2014-05-31 20:42:07,854 [myid:] - INFO  [main:JMXEnv@142] - 
ensureOnly:[]
[junit] 2014-05-31 20:42:07,855 [myid:] - INFO  [main:ClientBase@443] - 
STARTING server
[junit] 2014-05-31 20:42:07,855 [myid:] - INFO  [main:ClientBase@364] - 
CREATING server instance 127.0.0.1:11221
[junit] 2014-05-31 20:42:07,855 [myid:] - INFO  
[main:NIOServerCnxnFactory@670] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 
kB direct buffers.
[junit] 2014-05-31 20:42:07,856 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-05-31 20:42:07,856 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-05-31 20:42:07,857 [myid:] - INFO  [main:ZooKeeperServer@766] 
- minSessionTimeout set to 6000
[junit] 2014-05-31 20:42:07,857 [myid:] - INFO  [main:ZooKeeperServer@775] 
- maxSessionTimeout set to 6
[junit] 2014-05-31 20:42:07,857 [myid:] - INFO  [main:ZooKeeperServer@149] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test363606040929653769.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test363606040929653769.junit.dir/version-2
[junit] 2014-05-31 20:42:07,858 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test363606040929653769.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 20:42:07,861 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test363606040929653769.junit.dir/version-2/snapshot.b
[junit] 2014-05-31 20:42:07,863 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-05-31 20:42:07,863 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:57567
[junit] 2014-05-31 20:42:07,864 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:57567
[junit] 2014-05-31 20:42:07,864 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-05-31 20:42:07,865 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:57567 (no session established for client)
[junit] 2014-05-31 20:42:07,865 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-05-31 20:42:07,867 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-05-31 20:42:07,867 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-05-31 20:42:07,867 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-05-31 20:42:07,867 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-05-31 20:42:07,868 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 35153
[junit] 2014-05-31 20:42:07,868 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-05-31 20:42:07,868 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-05-31 20:42:07,868 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-05-31 20:42:07,931 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x14654049eca closed
[junit] 2014-05-31 20:42:07,931 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-05-31 20:42:07,931 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-05-31 20:42:07,931 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-05-31 20:42:07,931 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-05-31 20:42:07,931 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-05-31 20:42:07,932 [myid:] - INFO  

intern project idea: decouple zab from zookeeper

2014-05-31 Thread Michi Mutsuzaki
Hi,

I'm hosting an intern this summer. One project I've been thinking
about is to decouple zab from zookeeper. There are many use cases
where you need quorum-based replication, but the hierarchical data
model doesn't work well. A smallish (~1GB?) replicated key-value store
with millions of entries is one such example. The goal of the project
is to decouple the consensus algorithm (zab) from the data model
(zookeeper) more cleanly so that the users can define their own data
models and use zab to replicate the data.

I have 2 questions:

1. Are there any caveats that I should be aware of? For example,
transactions need to be idempotent to allow fuzzy snapshotting.
2. Is this useful? Personally I've seen many use cases where this
would be very useful, but I'd like to hear what you guys think.
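A toy sketch of the idempotence caveat in point 1 (illustrative Java, not
ZooKeeper code): a fuzzy snapshot may already contain the effect of a txn that
is later replayed from the log, so applying a txn twice must be harmless.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration: fuzzy snapshots may already reflect a txn that is
// replayed from the log afterwards, so txns must be safe to apply twice.
public class Idempotence {
    public static void main(String[] args) {
        Map<String, Integer> state = new HashMap<>();
        state.put("/counter", 5); // snapshot already reflects the txn

        // Idempotent txn: "set /counter = 5" -- replay leaves state unchanged.
        state.put("/counter", 5);
        System.out.println(state.get("/counter"));

        // Non-idempotent txn: "add 1 to /counter" -- replay double-applies,
        // and replicas replaying different prefixes would diverge.
        state.put("/counter", state.get("/counter") + 1);
        System.out.println(state.get("/counter"));
    }
}
```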

Thanks!
--Michi


Re: intern project idea: decouple zab from zookeeper

2014-05-31 Thread Flavio Junqueira
I can see two reasons for decoupling Zab:

1- You'd like to be able to plug in new algorithms or at least make a clear 
separation of the replication protocol and the logic of the service. 
2- You'd like to have an implementation of Zab that you could use for other 
things, like a kv store.

I think you're focusing more on 2. You can definitely use Zab for other things, 
and I'm all for it. It would probably be better to just implement the protocol 
from scratch rather than extract it from ZooKeeper. In fact, it might be worth 
having a look at ZK-30 (old one, huh?).

In the case of reimplementing it, it might be worth doing it outside ZooKeeper, 
as a separate project. It could be an incubated project.

Hope it helps!

-Flavio


On 31 May 2014, at 22:29, Michi Mutsuzaki mi...@cs.stanford.edu wrote:

 Hi,
 
 I'm hosting an intern this summer. One project I've been thinking
 about is to decouple zab from zookeeper. There are many use cases
 where you need quorum-based replication, but the hierarchical data
 model doesn't work well. A smallish (~1GB?) replicated key-value store
 with millions of entries is one such example. The goal of the project
 is to decouple the consensus algorithm (zab) from the data model
 (zookeeper) more cleanly so that the users can define their own data
 models and use zab to replicate the data.
 
 I have 2 questions:
 
 1. Are there any caveats that I should be aware of? For example,
 transactions need to be idempotent to allow fuzzy snapshotting.
 2. Is this useful? Personally I've seen many use cases where this
 would be very useful, but I'd like to hear what you guys think.
 
 Thanks!
 --Michi



Re: intern project idea: decouple zab from zookeeper

2014-05-31 Thread Raúl Gutiérrez Segalés
Hi Michi,

On 31 May 2014 14:29, Michi Mutsuzaki mi...@cs.stanford.edu wrote:

 Hi,

 I'm hosting an intern this summer. One project I've been thinking
 about is to decouple zab from zookeeper. There are many use cases
 where you need quorum-based replication, but the hierarchical data
 model doesn't work well. A smallish (~1GB?) replicated key-value store
 with millions of entries is one such example. The goal of the project
 is to decouple the consensus algorithm (zab) from the data model
 (zookeeper) more cleanly so that the users can define their own data
 models and use zab to replicate the data.

 I have 2 questions:

 1. Are there any caveats that I should be aware of? For example,
 transactions need to be idempotent to allow fuzzy snapshotting.
 2. Is this useful? Personally I've seen many use cases where this
 would be very useful, but I'd like to hear what you guys think.


I think this is super useful. As Flavio said, I think there are two
approaches: having ZAB as a library first or
carving out the ZAB bits and having a generic interface to plug in other
protocols.

From the ZooKeeper project's PoV, I think the latter would be awesome,
because we could clean up a lot of code in the process.

From an intern project's PoV, it sounds like working on an independent ZAB
implementation (libzab?) from scratch is easier to target (and has no
impedance: getting huge changes merged into ZooKeeper takes time...).


-rgs


[jira] [Commented] (ZOOKEEPER-1659) Add JMX support for dynamic reconfiguration

2014-05-31 Thread Alexander Shraer (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14014857#comment-14014857
 ] 

Alexander Shraer commented on ZOOKEEPER-1659:
-

Thanks for adding more tests! Looking at the new tests, verifyRemoteBean(String 
remoteBean) seems to verify that the bean is not registered. Please change the 
function name to reflect that. 

I don't fully understand the remote and local bean naming convention. Can you 
please explain?
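For what it's worth, a minimal sketch of how a test could assert bean
(un)registration via the platform MBeanServer. The ObjectName below is
illustrative only, not necessarily the name the patch registers:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class BeanCheck {
    // Hypothetical helper: query the platform MBeanServer for a bean name.
    // A reconfig test would expect this to flip to false once a removed
    // peer's bean is unregistered.
    static boolean isRegistered(String name) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        return mbs.isRegistered(new ObjectName(name));
    }

    public static void main(String[] args) throws Exception {
        // Illustrative name; prints false in a fresh JVM with no ZK beans.
        System.out.println(
            isRegistered("org.apache.ZooKeeperService:name0=ReplicatedServer_id1"));
    }
}
```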



Re: intern project idea: decouple zab from zookeeper

2014-05-31 Thread Michi Mutsuzaki
Thank you Flavio and Raul.

 1- You'd like to be able to plug in new algorithms or at least make a clear 
 separation of the replication protocol and the logic of the service.
 2- You'd like to have an implementation of Zab that you could use for other 
 things, like a kv store.

Thank you for pointing me to ZOOKEEPER-30. Yes, I was focused more on
2, but it's definitely a good idea to have a generic interface for
atomic broadcast so that you can plug in different algorithms. It
seems like the project can be broken into 3 pieces:

1. Define an interface for atomic broadcast. I'm not sure how things
like the session tracker and dynamic reconfig fit into this.
2. Add a ZAB implementation of the interface.
3. Create a simple reference implementation of a service (maybe a
simple key-value store or a benchmark tool).
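The interface in piece 1 could be sketched roughly like this (names are
illustrative, not an actual ZooKeeper API; the in-memory "replica" stands in
for a real quorum protocol just to make the contract concrete):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of a minimal atomic-broadcast interface. A ZAB implementation
// would deliver committed proposals to every replica in one total order.
interface AtomicBroadcast {
    CompletableFuture<Long> broadcast(byte[] payload);
    void register(DeliveryCallback cb);
    interface DeliveryCallback { void deliver(long zxid, byte[] payload); }
}

// Trivial single-replica stand-in: a real implementation would run the
// quorum protocol; here we just assign zxids and deliver in order.
class LocalBroadcast implements AtomicBroadcast {
    private final List<DeliveryCallback> cbs = new ArrayList<>();
    private long zxid = 0;

    public CompletableFuture<Long> broadcast(byte[] payload) {
        long z = ++zxid;
        for (DeliveryCallback cb : cbs) cb.deliver(z, payload);
        return CompletableFuture.completedFuture(z);
    }

    public void register(DeliveryCallback cb) { cbs.add(cb); }

    public static void main(String[] args) throws Exception {
        LocalBroadcast ab = new LocalBroadcast();
        StringBuilder log = new StringBuilder();
        // The data model (e.g. a kv store) lives entirely in the callback.
        ab.register((z, p) -> log.append(z).append(':').append(new String(p)).append(' '));
        ab.broadcast("a".getBytes()).get();
        ab.broadcast("b".getBytes()).get();
        System.out.println(log.toString().trim());
    }
}
```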

I agree with both of you that it's better to do this as a separate
project. Also, it might be better to do this as an incubator project
from the beginning. I think it makes it easier for people from
different organizations to collaborate. I'm willing to champion the
project.

I'll open a JIRA once the intern is committed to the project.

Thanks!
--Michi


[jira] [Commented] (BOOKKEEPER-745) Fix for false reports of ledger unreplication during rolling restarts.

2014-05-31 Thread Ivan Kelly (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14014593#comment-14014593
 ] 

Ivan Kelly commented on BOOKKEEPER-745:
---

 To avoid this, I think a simple approach is just to reverse these statements, 
 or we could find some other way?
Ah I see. This is a good point. But the other case also stands. How about if I 
reverse the statements, but then if ledger replication is disabled, abort the 
current invocation and schedule a new one?

 Fix for false reports of ledger unreplication during rolling restarts.
 --

 Key: BOOKKEEPER-745
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-745
 Project: Bookkeeper
  Issue Type: Bug
  Components: bookkeeper-auto-recovery
Reporter: Ivan Kelly
Assignee: Ivan Kelly
 Fix For: 4.3.0, 4.2.3

 Attachments: 
 0001-Fix-for-false-reports-of-ledger-unreplication-.trunk.patch, 
 0001-Fix-for-false-reports-of-ledger-unreplication-.trunk.patch, 
 0002-Fix-for-false-reports-of-ledger-unreplication-.trunk.patch, 
 0004-Fix-for-false-reports-of-ledger-unreplication-.trunk.patch, 
 0006-Fix-for-false-reports-of-ledger-unreplicat.branch4.2.patch


 The bug occurred because there was no check if rereplication was enabled or 
 not when the auditor came online. When the auditor comes online it does a 
 check of which bookies are up and marks the ledgers on missing bookies as 
 underreplicated. In the false report case, the auditor was running after each 
 bookie was bounced due to the way leader election for the auditor works. And 
 since one bookie was down since you're bouncing the server, all ledgers on 
 that bookie will get marked as underreplicated.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (BOOKKEEPER-745) Fix for false reports of ledger unreplication during rolling restarts.

2014-05-31 Thread Ivan Kelly (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14014594#comment-14014594
 ] 

Ivan Kelly commented on BOOKKEEPER-745:
---

To clarify, if we discover it is disabled, after building the index, abort and 
reschedule.
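That ordering might look roughly like the following (method names are
hypothetical, not the actual BookKeeper auditor API): check the enable flag
after building the index, and if replication was disabled meanwhile, abort
and reschedule instead of reporting.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AuditorSketch {
    // Hypothetical ledger-manager surface, for illustration only.
    interface Ledgers {
        List<Long> buildMissingBookieIndex();
        boolean isReplicationEnabled();
        void markUnderreplicated(long ledgerId);
    }

    static void audit(Ledgers lm, ScheduledExecutorService scheduler) {
        List<Long> suspects = lm.buildMissingBookieIndex();
        if (!lm.isReplicationEnabled()) {
            // Disabled during the scan (e.g. a rolling restart): avoid false
            // reports; abort this invocation and try again later.
            scheduler.schedule(() -> audit(lm, scheduler), 1, TimeUnit.MINUTES);
            return;
        }
        for (long id : suspects) lm.markUnderreplicated(id);
    }

    public static void main(String[] args) {
        List<Long> marked = new ArrayList<>();
        Ledgers fake = new Ledgers() {  // in-memory fake with replication enabled
            public List<Long> buildMissingBookieIndex() { return List.of(1L, 2L); }
            public boolean isReplicationEnabled() { return true; }
            public void markUnderreplicated(long id) { marked.add(id); }
        };
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        audit(fake, ses);
        ses.shutdown();
        System.out.println(marked);
    }
}
```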



[jira] [Commented] (BOOKKEEPER-745) Fix for false reports of ledger unreplication during rolling restarts.

2014-05-31 Thread Ivan Kelly (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14014595#comment-14014595
 ] 

Ivan Kelly commented on BOOKKEEPER-745:
---

 Adding one more point. In the build, the test took more than 60 seconds; 
 please use a bigger value, 2 or 2.5 minutes.
Sure. This highlights another issue we have. The test is defined as a junit1 
test, so all these timeouts we have are useless. I'll up the timeout, but we 
need to fix this other issue too (though only in trunk); for 4.2.3 it's not 
important. I'm not sure if we have a jira for it.



[jira] [Commented] (BOOKKEEPER-745) Fix for false reports of ledger unreplication during rolling restarts.

2014-05-31 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14014599#comment-14014599
 ] 

Rakesh R commented on BOOKKEEPER-745:
-

bq.To clarify, if we discover it is disabled, after building the index, abort 
and reschedule.
Yup, this looks fine.

bq.but we need to fix this other issue also (though in trunk), for 4.2.3 it's 
not important. I'm not sure if we have a jira for it.
I could see BOOKKEEPER-739




[jira] [Updated] (BOOKKEEPER-745) Fix for false reports of ledger unreplication during rolling restarts.

2014-05-31 Thread Ivan Kelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/BOOKKEEPER-745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Kelly updated BOOKKEEPER-745:
--

Attachment: 0001-BOOKKEEPER-745-Fix-for-false-reports-of-ledger-unrep.patch

The new patch addresses the final comment, and also puts the timeout at 10 minutes.



[jira] [Commented] (BOOKKEEPER-745) Fix for false reports of ledger unreplication during rolling restarts.

2014-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14014628#comment-14014628
 ] 

Hadoop QA commented on BOOKKEEPER-745:
--

Testing JIRA BOOKKEEPER-745


Patch 
[0001-BOOKKEEPER-745-Fix-for-false-reports-of-ledger-unrep.patch|https://issues.apache.org/jira/secure/attachment/12647769/0001-BOOKKEEPER-745-Fix-for-false-reports-of-ledger-unrep.patch]
 downloaded at Sat May 31 11:11:41 UTC 2014



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:green}+1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any line longer than 
120
.{color:green}+1{color} the patch does adds/modifies 4 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warnings
.{color:red}WARNING{color}: the current HEAD has 23 Javadoc warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:green}+1 FINDBUGS{color}
.{color:green}+1{color} the patch does not seem to introduce new Findbugs 
warnings
{color:green}+1 TESTS{color}
.Tests run: 920
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 


{color:green}*+1 Overall result, good!, no -1s*{color}

{color:red}.   There is at least one warning, please check{color}

The full output of the test-patch run is available at

.   https://builds.apache.org/job/bookkeeper-trunk-precommit-build/646/



[jira] [Commented] (BOOKKEEPER-745) Fix for false reports of ledger unreplication during rolling restarts.

2014-05-31 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14014822#comment-14014822
 ] 

Flavio Junqueira commented on BOOKKEEPER-745:
-

The patch looks good to me. The latest patch does not apply to the 4.2 branch, 
but I noticed that there is an older patch for 4.2. Did you mean to update it?

I also don't quite understand the javadoc warning in the latest QA build. Do 
you know why it started showing up now?  
