[jira] [Commented] (ZOOKEEPER-2251) Add Client side packet response timeout to avoid infinite wait.
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820232#comment-15820232 ] Michael Han commented on ZOOKEEPER-2251: Thanks! > Add Client side packet response timeout to avoid infinite wait. > --- > > Key: ZOOKEEPER-2251 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2251 > Project: ZooKeeper > Issue Type: Bug > Components: java client >Affects Versions: 3.4.9, 3.5.2 >Reporter: nijel >Assignee: Arshad Mohammad >Priority: Critical > Labels: fault > Fix For: 3.4.10, 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-2251-01.patch, ZOOKEEPER-2251-02.patch, > ZOOKEEPER-2251-03.patch, ZOOKEEPER-2251-04.patch > > > I came across an issue related to the client-side packet response timeout: in my > cluster, many packets were dropped for some time. > One observation is that the ZooKeeper client hung. Per the thread dump, it > was waiting for the response/ACK of the operation performed (a synchronous API was > used here). > I am using > zookeeper.serverCnxnFactory=org.apache.zookeeper.server.NIOServerCnxnFactory > Since only a few packets were lost, no DISCONNECTED event occurred. > We need to add a "response timeout" for the operations or packets. > *Comments from [~rakeshr]* > My observations about the problem: > * Tools like 'Wireshark' can be used to simulate artificial packet loss. > * Assume there is only one packet in the 'outgoingQueue' and, unfortunately, > the server's response packet is lost. The client will then enter an infinite > wait. > https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/ClientCnxn.java#L1515 > * We can discuss this problem and possible solutions (add a > packet ACK timeout or another, better approach) in this jira. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
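The infinite wait described above comes from blocking on the request packet with no bound. A minimal sketch of what a client-side response timeout looks like (this is not the actual ZOOKEEPER-2251 patch; `TimedResponseWait`, `awaitResponse`, and `requestTimeoutMs` are illustrative names):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch only: a bounded wait for the server's response to a request.
// If the response packet is dropped, the caller gets `false` back after
// requestTimeoutMs instead of blocking forever.
public class TimedResponseWait {
    static final long requestTimeoutMs = 100; // illustrative default

    static boolean awaitResponse(CountDownLatch responseArrived)
            throws InterruptedException {
        // A timed await() replaces the unbounded wait: returns true if the
        // response arrived in time, false on timeout.
        return responseArrived.await(requestTimeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch lostResponse = new CountDownLatch(1); // never counted down
        System.out.println(awaitResponse(lostResponse) ? "completed" : "timed out");
    }
}
```

On timeout, a real client would mark the request failed (e.g. surface a connection-loss error) so callers of the synchronous API are unblocked.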
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820065#comment-15820065 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user eribeiro commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95721046 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -175,11 +193,20 @@ public long restore(DataTree dt, Map<Long, Integer> sessions, "No snapshot found, but there are log entries. " + "Something is broken!"); } -/* TODO: (br33d) we should either put a ConcurrentHashMap on restore() - * or use Map on save() */ -save(dt, (ConcurrentHashMap<Long, Integer>)sessions); -/* return a zxid of zero, since we the database is empty */ -return 0; + +if (suspectEmptyDB) { +/* return a zxid of -1, since we are possibly missing data */ +LOG.warn("Unexpected empty data tree, setting zxid to -1"); --- End diff -- Are we 100% sure the data tree is empty? Couldn't it be only partially complete? I mean, the machine recorded up to transaction n but lost transactions n+1, n+2, n+3, etc.? > Reinitialized servers should not participate in leader election > --- > > Key: ZOOKEEPER-261 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-261 > Project: ZooKeeper > Issue Type: Improvement > Components: leaderElection, quorum >Reporter: Benjamin Reed > > A server that has lost its data should not participate in leader election > until it has resynced with a leader. Our leader election algorithm and > NEW_LEADER commit assume that the followers voting on a leader have not lost > any of their data. We should have a flag in the data directory saying whether > or not the data is preserved, so that the flag will be cleared if the data > is ever cleared. > Here is the problematic scenario: you have an ensemble of machines A, B, > and C. C is down. The last transaction seen by C is z. A transaction, z+1, is > committed on A and B. Now there is a power outage.
B's data gets > reinitialized. When power comes back, B and C come up, but A does not. C > will be elected leader and transaction z+1 is lost. (Note: this can happen > even if all three machines are up and C just responds quickly; in that case C > would tell A to truncate z+1 from its log.) In theory we haven't violated our > 2f+1 guarantee, since A has failed and B still hasn't recovered from failure, > but it would be nice if, when we don't have a quorum, the system stopped working > rather than working incorrectly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820066#comment-15820066 ] Edward Ribeiro commented on ZOOKEEPER-261: -- I wrote the comment below on GH, but for whatever reason it was not posted here, so I am duplicating it just to see where/if I am mistaken. :) "Hi @enixon, I think your approach is very cool, for real. I only had time to give a first pass over your patch now (I hope to look closer soon, esp. at the tests), but I would like to ask a dumb question. What if we change the approach and, instead of the initialize file being used for normal execution, we use a recover (or rejoin) file whose presence denotes an exceptional restart of a ZK node? That way, if and only if this file is present, we delete it and return -1L, so that the node cannot take part in elections until it catches up with the ensemble, etc. If this file is not present, then we proceed as usual (i.e., return 0L). This way, we handle the exceptional case via the initialize/recover file. For example: node C (of a 3-node ensemble) crashes due to disk-full exceptions. The operator then deletes the data/ directory and puts the recover file there. In my humble (and naive) opinion, this would avoid some headaches for ops people who might forget to include the initialize file on a node or two during rolling upgrades, or in other cases I can't think of right now. Requiring the file for normal execution changes the normal operation of a ZK node; with a recover file, we don't have to change the standard way of starting a ZK node. The recover file is for exceptional cases, where we want to make sure the restarting node cannot take part in an election. PS: I didn't get the autocreateDB stuff either. But it's late at night here. Wdyt?
/cc [~hanm] [~breed] [~fpj] " PS2: The scenario described in the JIRA is a good point in favor of an {{initialize}} file: when B & C came back **automatically**, the {{initialize}} file would be missing from both nodes, and the ensemble would grind to a halt because no one would be leader, right? Conversely, if there were a script to **automatically** create those files on each node once the machine was brought up, then B & C would have the file created and we would be back to square one, right? Does what I am writing make any sense? Please, lecture me. :)
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819918#comment-15819918 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user enixon commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95714487 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -167,6 +175,16 @@ public long restore(DataTree dt, Map<Long, Integer> sessions, PlayBackListener listener) throws IOException { long deserializeResult = snapLog.deserialize(dt, sessions); FileTxnLog txnLog = new FileTxnLog(dataDir); +boolean suspectEmptyDB; +File initFile = new File(dataDir.getParent(), "initialize"); +if (initFile.exists()) { +if (!initFile.delete()) { +throw new IOException("Unable to delete initialization file " + initFile.toString()); +} +suspectEmptyDB = false; +} else { +suspectEmptyDB = !autoCreateDB; --- End diff -- I'm tempted to put the log line on the other side of the conditional, since this side is the expected case. We should only delete an initialize file once in the lifecycle of a given server, while the check against `autoCreateDB` will happen every other time the server is restarted.
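For readers following along, the control flow in the diff above can be sketched as a standalone method (simplified: `suspectEmptyDB` becomes a return value rather than a local, and this is not the exact FileTxnSnapLog code):

```java
import java.io.File;
import java.io.IOException;

// Sketch of the one-shot "initialize" marker logic from the diff above.
// The marker is consumed (deleted) on first restore; its presence means an
// empty database is expected rather than a sign of lost data.
public class InitMarkerCheck {
    static boolean suspectEmptyDB(File dataDir, boolean autoCreateDB)
            throws IOException {
        File initFile = new File(dataDir.getParent(), "initialize");
        if (initFile.exists()) {
            if (!initFile.delete()) {
                throw new IOException(
                        "Unable to delete initialization file " + initFile);
            }
            return false; // marker present: empty DB was provisioned on purpose
        }
        // No marker: an empty DB is suspect unless auto-creation is allowed.
        return !autoCreateDB;
    }

    public static void main(String[] args) throws IOException {
        File dataDir = new File(
                java.nio.file.Files.createTempDirectory("zk").toFile(),
                "version-2");
        dataDir.mkdirs();
        // No marker and autoCreateDB=false: the empty DB is suspect.
        System.out.println("suspect=" + suspectEmptyDB(dataDir, false));
    }
}
```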
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819907#comment-15819907 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user enixon commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95714170 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -167,6 +175,16 @@ public long restore(DataTree dt, Map<Long, Integer> sessions, PlayBackListener listener) throws IOException { long deserializeResult = snapLog.deserialize(dt, sessions); FileTxnLog txnLog = new FileTxnLog(dataDir); +boolean suspectEmptyDB; +File initFile = new File(dataDir.getParent(), "initialize"); +if (initFile.exists()) { --- End diff -- Nice optimization, I like it!
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819905#comment-15819905 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user enixon commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95714094 --- Diff: bin/zkServer-initialize.sh --- @@ -113,6 +113,8 @@ initialize() { else echo "No myid provided, be sure to specify it in $ZOO_DATADIR/myid if using non-standalone" fi + +date > "$ZOO_DATADIR/initialize" --- End diff -- True enough, `touch` is sufficient. Using `date` is a trick I've included in other scripts in the past as a way of sneaking a bit more information into an otherwise meaningless file, but in this context it's probably just confusing.
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819849#comment-15819849 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user eribeiro commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95711916 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -167,6 +175,16 @@ public long restore(DataTree dt, Map<Long, Integer> sessions, PlayBackListener listener) throws IOException { long deserializeResult = snapLog.deserialize(dt, sessions); FileTxnLog txnLog = new FileTxnLog(dataDir); +boolean suspectEmptyDB; +File initFile = new File(dataDir.getParent(), "initialize"); +if (initFile.exists()) { --- End diff -- Disclaimer: I am not used to the `Files` class, so you may have to make sure it doesn't alter the current behaviour if you decide to use it.
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819823#comment-15819823 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user eribeiro commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95710948 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -167,6 +175,16 @@ public long restore(DataTree dt, Map<Long, Integer> sessions, PlayBackListener listener) throws IOException { long deserializeResult = snapLog.deserialize(dt, sessions); FileTxnLog txnLog = new FileTxnLog(dataDir); +boolean suspectEmptyDB; +File initFile = new File(dataDir.getParent(), "initialize"); +if (initFile.exists()) { +if (!initFile.delete()) { +throw new IOException("Unable to delete initialization file " + initFile.toString()); +} +suspectEmptyDB = false; +} else { +suspectEmptyDB = !autoCreateDB; --- End diff -- IMO, it would be nice to put a `debug` (warn?) log message here. Something along the lines of "Initialize file not found. Falling back to the autoCreateDB attribute."
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819801#comment-15819801 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user eribeiro commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95707659 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -132,6 +137,9 @@ public FileTxnSnapLog(File dataDir, File snapDir) throws IOException { txnLog = new FileTxnLog(this.dataDir); snapLog = new FileSnap(this.snapDir); + +autoCreateDB = Boolean.parseBoolean(System.getProperty(ZOOKEEPER_DB_AUTOCREATE, --- End diff -- +1 with @hanm
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819800#comment-15819800 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user eribeiro commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95707294 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -167,6 +175,16 @@ public long restore(DataTree dt, Map<Long, Integer> sessions, PlayBackListener listener) throws IOException { long deserializeResult = snapLog.deserialize(dt, sessions); FileTxnLog txnLog = new FileTxnLog(dataDir); +boolean suspectEmptyDB; +File initFile = new File(dataDir.getParent(), "initialize"); +if (initFile.exists()) { --- End diff -- Since Java 7 is the default, could we use the code below? The benefit is that it automatically throws an `IOException` if an I/O error happens, or returns `false` if the file doesn't exist.
```
if (Files.deleteIfExists(initFile.toPath())) {
    suspectEmptyDB = false;
} else {
    (...)
```
> Reinitialized servers should not participate in leader election > --- > > Key: ZOOKEEPER-261 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-261 > Project: ZooKeeper > Issue Type: Improvement > Components: leaderElection, quorum >Reporter: Benjamin Reed > > A server that has lost its data should not participate in leader election > until it has resynced with a leader. Our leader election algorithm and > NEW_LEADER commit assume that the followers voting on a leader have not lost > any of their data. We should have a flag in the data directory saying whether > or not the data is preserved, so that the flag will be cleared if the data > is ever cleared. > Here is the problematic scenario: you have an ensemble of machines A, B, > and C. C is down. The last transaction seen by C is z. A transaction, z+1, is > committed on A and B. Now there is a power outage. B's data gets > reinitialized. When power comes back up, B and C come up, but A does not. C > will be elected leader and transaction z+1 is lost. (Note: this can happen > even if all three machines are up and C just responds quickly; in that case C > would tell A to truncate z+1 from its log.) In theory we haven't violated our > 2f+1 guarantee, since A has failed and B still hasn't recovered from failure, > but it would be nice if, when we lose quorum, the system stopped working > rather than working incorrectly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
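The `Files.deleteIfExists` suggestion above can be illustrated with a minimal, self-contained sketch. The class name, the helper method, and the `main` driver below are illustrative assumptions, not the actual ZooKeeper patch; only the `deleteIfExists` call mirrors the review suggestion.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class InitMarkerSketch {
    // Consume the marker file in one call: returns true if it existed (and is
    // now deleted), false if it was absent; I/O errors surface as IOException,
    // which is the behavior the review comment highlights.
    static boolean consumeInitMarker(File dataDir) throws IOException {
        File initFile = new File(dataDir, "initialize");
        return Files.deleteIfExists(initFile.toPath());
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("zk-data").toFile();
        System.out.println(consumeInitMarker(dir)); // marker not created yet
        Files.createFile(new File(dir, "initialize").toPath());
        System.out.println(consumeInitMarker(dir)); // marker found and deleted
        System.out.println(consumeInitMarker(dir)); // already consumed
    }
}
```

The check-then-delete in the original diff (`initFile.exists()` followed by a separate delete) has a small race window; `deleteIfExists` makes the test and the removal a single operation.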
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819803#comment-15819803 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user eribeiro commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95709489 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -167,6 +175,16 @@ public long restore(DataTree dt, Map<Long, Integer> sessions, PlayBackListener listener) throws IOException { long deserializeResult = snapLog.deserialize(dt, sessions); FileTxnLog txnLog = new FileTxnLog(dataDir); +boolean suspectEmptyDB; --- End diff -- Could we rename this to `recoveringDB` or `recoveringNode`? My rationale is: `suspectEmptyDB` looks vague to me, **plus** __if I understood it right__ a node could have been shut down and restarted after some time. So its DB will not necessarily be empty, but it is in a recovery process, and we want to avoid it becoming the leader and messing up transactions performed while it was offline, right?
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819802#comment-15819802 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user eribeiro commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95703179 --- Diff: bin/zkServer-initialize.sh --- @@ -113,6 +113,8 @@ initialize() { else echo "No myid provided, be sure to specify it in $ZOO_DATADIR/myid if using non-standalone" fi + +date > "$ZOO_DATADIR/initialize" --- End diff -- Nit: If the sole purpose of this file is to act as a marker, regardless of its content, then a ```touch "$ZOO_DATADIR/initialize"``` would be enough, wouldn't it? Of course, `date` is fine as well, no problem.
[GitHub] zookeeper pull request #120: ZOOKEEPER-261
Github user eribeiro commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/120#discussion_r95707659 --- Diff: src/java/main/org/apache/zookeeper/server/persistence/FileTxnSnapLog.java --- @@ -132,6 +137,9 @@ public FileTxnSnapLog(File dataDir, File snapDir) throws IOException { txnLog = new FileTxnLog(this.dataDir); snapLog = new FileSnap(this.snapDir); + +autoCreateDB = Boolean.parseBoolean(System.getProperty(ZOOKEEPER_DB_AUTOCREATE, --- End diff -- +1 with @hanm --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
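The `autoCreateDB` line under discussion reads a boolean flag from a system property. Here is a hedged, self-contained sketch of that pattern; the property string `"zookeeper.db.autocreate"`, the `"true"` default, and the class/method names are assumptions for illustration, not taken from the patch.

```java
public class AutoCreateFlagSketch {
    // Assumed property name, mirroring the ZOOKEEPER_DB_AUTOCREATE constant
    // shown in the diff; the actual string in the patch may differ.
    static final String ZOOKEEPER_DB_AUTOCREATE = "zookeeper.db.autocreate";

    static boolean readAutoCreateDB() {
        // Boolean.parseBoolean(null) would yield false, so supply an explicit
        // default via System.getProperty's two-argument form.
        return Boolean.parseBoolean(
                System.getProperty(ZOOKEEPER_DB_AUTOCREATE, "true"));
    }

    public static void main(String[] args) {
        System.out.println(readAutoCreateDB()); // default applies
        System.setProperty(ZOOKEEPER_DB_AUTOCREATE, "false");
        System.out.println(readAutoCreateDB()); // overridden by the property
    }
}
```

Parsing the property once in the constructor, as the diff does, fixes the flag for the lifetime of the `FileTxnSnapLog` instance rather than re-reading it on every restore.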
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819633#comment-15819633 ] Hadoop QA commented on ZOOKEEPER-261: - +1 overall. GitHub Pull Request Build +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 20 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 3.0.1) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed core unit tests. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/205//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/205//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/205//console This message is automatically generated.
Success: ZOOKEEPER- PreCommit Build #205
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/205/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 475805 lines...] [exec] [exec] +1 @author. The patch does not contain any @author tags. [exec] [exec] +1 tests included. The patch appears to include 20 new or modified tests. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 3.0.1) warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] [exec] +1 core tests. The patch passed core unit tests. [exec] [exec] +1 contrib tests. The patch passed contrib unit tests. [exec] [exec] Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/205//testReport/ [exec] Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/205//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/205//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] f957c9d37b759aa42b8951df0fb487160905f779 logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. 
[exec] == [exec] == [exec] [exec] [exec] mv: ‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess’ and ‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess’ are the same file BUILD SUCCESSFUL Total time: 18 minutes 22 seconds Archiving artifacts Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Recording test results Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 [description-setter] Description set: ZOOKEEPER-261 Putting comment on the pull request Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Email was triggered for: Success Sending email for trigger: Success Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 ### ## FAILED TESTS (if any) ## All tests passed
[jira] [Commented] (ZOOKEEPER-2251) Add Client side packet response timeout to avoid infinite wait.
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819610#comment-15819610 ] Mel Martinez commented on ZOOKEEPER-2251: - I'll have to go through Lab protocols for approval to submit a patch. I'll look into it. I'll need to set up to build with Ivy against our internal Nexus as well since I'm behind a firewall. Sigh ... busywork. :D
[jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
[ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819558#comment-15819558 ] ASF GitHub Bot commented on ZOOKEEPER-261: -- Github user enixon commented on the issue: https://github.com/apache/zookeeper/pull/120 Rebased onto latest master to avoid any potential conflicts with @breed 's changes for 2325.
Failed: ZOOKEEPER- PreCommit Build #204
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/204/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 75 lines...] [exec] Pull request id: 120 [exec] % Total% Received % Xferd Average Speed TimeTime Time CurrentPull request title: ZOOKEEPER-261 [exec] [exec] Defect number: ZOOKEEPER-261 [exec] - Parsed args, going to checkout - [exec] [exec] [exec] == [exec] == [exec] Testing patch for pull request 120. [exec] == [exec] == [exec] [exec] [exec] [exec] Dload Upload Total Spent Left Speed [exec] [exec] 0 00 00 0 0 0 --:--:-- --:--:-- --:--:-- 0100 1410 1410 0347 0 --:--:-- --:--:-- --:--:-- 348 [exec] [exec] [exec] == [exec] == [exec] Pre-build trunk to verify trunk stability and javac warnings [exec] == [exec] == [exec] [exec] [exec] /home/jenkins/tools/ant/latest/bin/ant -Djavac.args=-Xlint -Xmaxwarns 1000 -Djava5.home=/home/jenkins/tools/java5/latest -Dforrest.home=/home/jenkins/tools/forrest/latest -DZookeeperPatchProcess= clean tar > /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess/trunkJavacWarnings.txt 2>&1 [exec] Trunk compilation is broken? [exec] [exec] [exec] == [exec] == [exec] Finished build. [exec] == [exec] == [exec] [exec] [exec] 0 00 00 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 00 280210 0 25470 0 --:--:-- 0:00:01 --:--:-- 314kmv: ‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess’ and ‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess’ are the same file BUILD FAILED /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/build.xml:1630: exec returned: 1 Total time: 17 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Recording test results Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. 
Configuration error? Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 [description-setter] Description set: ZOOKEEPER-261 Putting comment on the pull request Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 ### ## FAILED TESTS (if any) ## No tests ran.
ZooKeeper-trunk - Build # 3234 - Still Failing
See https://builds.apache.org/job/ZooKeeper-trunk/3234/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 490934 lines...] [junit] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) [junit] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) [junit] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) [junit] 2017-01-11 23:31:26,883 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16608. Will not attempt to authenticate using SASL (unknown error) [junit] 2017-01-11 23:31:26,884 [myid:127.0.0.1:16608] - WARN [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1235] - Session 0x102204817ce for server 127.0.0.1/127.0.0.1:16608, unexpected error, closing socket connection and attempting reconnect [junit] java.net.ConnectException: Connection refused [junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) [junit] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) [junit] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) [junit] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) [junit] 2017-01-11 23:31:26,962 [myid:] - INFO [ProcessThread(sid:0 cport:16854)::PrepRequestProcessor@618] - Processed session termination for sessionid: 0x10220507017 [junit] 2017-01-11 23:31:26,974 [myid:] - INFO [main:ZooKeeper@1324] - Session: 0x10220507017 closed [junit] 2017-01-11 23:31:26,974 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 163711 [junit] 2017-01-11 23:31:26,975 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 1644 [junit] 2017-01-11 23:31:26,975 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testWatcherAutoResetWithLocal [junit] 2017-01-11 23:31:26,975 [myid:] - INFO [main:ClientBase@543] - tearDown 
starting [junit] 2017-01-11 23:31:26,975 [myid:] - INFO [main:ClientBase@513] - STOPPING server [junit] 2017-01-11 23:31:26,974 [myid:] - INFO [SyncThread:0:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16854,name1=Connections,name2=127.0.0.1,name3=0x10220507017] [junit] 2017-01-11 23:31:26,975 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10220507017 [junit] 2017-01-11 23:31:26,975 [myid:] - INFO [main:NettyServerCnxnFactory@464] - shutdown called 0.0.0.0/0.0.0.0:16854 [junit] 2017-01-11 23:31:26,979 [myid:] - INFO [main:ZooKeeperServer@534] - shutting down [junit] 2017-01-11 23:31:26,979 [myid:] - ERROR [main:ZooKeeperServer@506] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes [junit] 2017-01-11 23:31:26,979 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down [junit] 2017-01-11 23:31:26,979 [myid:] - INFO [main:PrepRequestProcessor@1009] - Shutting down [junit] 2017-01-11 23:31:26,979 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down [junit] 2017-01-11 23:31:26,979 [myid:] - INFO [ProcessThread(sid:0 cport:16854)::PrepRequestProcessor@157] - PrepRequestProcessor exited loop! [junit] 2017-01-11 23:31:26,979 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited! 
[junit] 2017-01-11 23:31:26,980 [myid:] - INFO [main:FinalRequestProcessor@481] - shutdown of request processor complete [junit] 2017-01-11 23:31:26,980 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16854,name1=InMemoryDataTree] [junit] 2017-01-11 23:31:26,980 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16854] [junit] 2017-01-11 23:31:26,980 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16854 [junit] 2017-01-11 23:31:26,980 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] [junit] 2017-01-11 23:31:26,985 [myid:] - INFO [main:ClientBase@568] - fdcount after test is: 4835 at start it was 4831 [junit] 2017-01-11 23:31:26,986 [myid:] - INFO [main:ClientBase@570] - sleeping for 20 secs [junit] 2017-01-11 23:31:26,986 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testWatcherAutoResetWithLocal [junit] 2017-01-11 23:31:26,986 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testWatcherAutoResetWithLocal [junit] Tests run: 103, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 554.475 sec, Thread: 3, Class:
ZooKeeper_branch34_jdk8 - Build # 839 - Failure
See https://builds.apache.org/job/ZooKeeper_branch34_jdk8/839/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 239136 lines...] [junit] 2017-01-11 23:25:19,243 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method [junit] 2017-01-11 23:25:19,243 [myid:] - INFO [main:ZooKeeperServer@497] - shutting down [junit] 2017-01-11 23:25:19,244 [myid:] - ERROR [main:ZooKeeperServer@472] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes [junit] 2017-01-11 23:25:19,244 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down [junit] 2017-01-11 23:25:19,244 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down [junit] 2017-01-11 23:25:19,244 [myid:] - INFO [main:SyncRequestProcessor@208] - Shutting down [junit] 2017-01-11 23:25:19,244 [myid:] - INFO [ProcessThread(sid:0 cport:11221)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! [junit] 2017-01-11 23:25:19,244 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@186] - SyncRequestProcessor exited! 
[junit] 2017-01-11 23:25:19,245 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete [junit] 2017-01-11 23:25:19,245 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221 [junit] 2017-01-11 23:25:19,246 [myid:] - INFO [main:JMXEnv@147] - ensureOnly:[] [junit] 2017-01-11 23:25:19,247 [myid:] - INFO [main:ClientBase@445] - STARTING server [junit] 2017-01-11 23:25:19,247 [myid:] - INFO [main:ClientBase@366] - CREATING server instance 127.0.0.1:11221 [junit] 2017-01-11 23:25:19,248 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11221 [junit] 2017-01-11 23:25:19,248 [myid:] - INFO [main:ClientBase@341] - STARTING server instance 127.0.0.1:11221 [junit] 2017-01-11 23:25:19,248 [myid:] - INFO [main:ZooKeeperServer@173] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 6 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/build/test/tmp/test4829376183710831538.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/build/test/tmp/test4829376183710831538.junit.dir/version-2 [junit] 2017-01-11 23:25:19,252 [myid:] - ERROR [main:ZooKeeperServer@472] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes [junit] 2017-01-11 23:25:19,252 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221 [junit] 2017-01-11 23:25:19,253 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:53627 [junit] 2017-01-11 23:25:19,253 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@827] - Processing stat command from /127.0.0.1:53627 [junit] 2017-01-11 23:25:19,253 [myid:] - INFO [Thread-4:NIOServerCnxn$StatCommand@663] - Stat command output [junit] 2017-01-11 23:25:19,254 [myid:] - INFO [Thread-4:NIOServerCnxn@1008] - Closed socket connection for 
client /127.0.0.1:53627 (no session established for client) [junit] 2017-01-11 23:25:19,254 [myid:] - INFO [main:JMXEnv@230] - ensureParent:[InMemoryDataTree, StandaloneServer_port] [junit] 2017-01-11 23:25:19,256 [myid:] - INFO [main:JMXEnv@247] - expect:InMemoryDataTree [junit] 2017-01-11 23:25:19,257 [myid:] - INFO [main:JMXEnv@251] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11221,name1=InMemoryDataTree [junit] 2017-01-11 23:25:19,257 [myid:] - INFO [main:JMXEnv@247] - expect:StandaloneServer_port [junit] 2017-01-11 23:25:19,257 [myid:] - INFO [main:JMXEnv@251] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11221 [junit] 2017-01-11 23:25:19,258 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 33644 [junit] 2017-01-11 23:25:19,258 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 20 [junit] 2017-01-11 23:25:19,258 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testQuota [junit] 2017-01-11 23:25:19,258 [myid:] - INFO [main:ClientBase@522] - tearDown starting [junit] 2017-01-11 23:25:19,329 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1598fd995ec closed [junit] 2017-01-11 23:25:19,329 [myid:] - INFO [main:ClientBase@492] - STOPPING server [junit] 2017-01-11 23:25:19,329 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1598fd995ec [junit] 2017-01-11 23:25:19,330
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819294#comment-15819294 ] ASF GitHub Bot commented on ZOOKEEPER-2642: --- Github user hanm commented on the issue: https://github.com/apache/zookeeper/pull/122 >> when are we going to be removing these deprecated methods, in trunk Maybe when we get a stable release of 3.5? >> it sounds like we don't need to bring back the old reconfig methods. Agree, for trunk the change would be to just rename ZooKeeperAdmin::reconfig to ZooKeeperAdmin::reconfigure so it's consistent with branch-3.5 (with some documentation and test updates). > ZOOKEEPER-2014 breaks existing clients for little benefit > - > > Key: ZOOKEEPER-2642 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 > Project: ZooKeeper > Issue Type: Bug > Components: c client, java client >Affects Versions: 3.5.2 >Reporter: Jordan Zimmerman >Assignee: Jordan Zimmerman >Priority: Blocker > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, > ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, > ZOOKEEPER-2642.patch > > > ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. > It appears this was done to document that these methods have access > restrictions. However, this change breaks Apache Curator (and possibly other > clients). Curator APIs will have to be changed and/or special methods need to > be added. A breaking change of this kind should only be done when the benefit > is overwhelming. In this case, the same information can be conveyed with > documentation and possibly a deprecation notice. > Revert the creation of the ZooKeeperAdmin class and move the reconfig() > methods back to the ZooKeeper class with additional documentation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] zookeeper issue #122: [ZOOKEEPER-2642] Resurrect the reconfig() methods that...
Github user hanm commented on the issue: https://github.com/apache/zookeeper/pull/122 >> when are we going to be removing these deprecated methods, in trunk Maybe when we get a stable release of 3.5? >> it sounds like we don't need to bring back the old reconfig methods. Agree, for trunk the change would be to just rename ZooKeeperAdmin::reconfig to ZooKeeperAdmin::reconfigure so it's consistent with branch-3.5 (with some documentation and test updates). --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
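The rename discussed above (ZooKeeperAdmin::reconfig to ZooKeeperAdmin::reconfigure, with the old name eventually deprecated and removed) follows a standard backward-compatible rename pattern, sketched below. This is a hypothetical, simplified illustration of that pattern only — the `AdminClient` class, method signatures, and `joiningServers` parameter here are invented for the example and are not the actual ZooKeeperAdmin API.

```java
import java.util.List;

// Hypothetical stand-in for an admin client: the preferred method name is
// reconfigure(), and the old reconfig() name is kept as a deprecated alias
// that delegates to it, so existing callers keep compiling until the alias
// is removed in a later release.
class AdminClient {
    /** New, preferred name for the reconfiguration call. */
    public String reconfigure(List<String> joiningServers, long fromConfig) {
        // A real client would send a reconfig request to the ensemble; this
        // sketch just returns a string describing the request it would send.
        return "reconfigure(" + joiningServers + ", " + fromConfig + ")";
    }

    /** Old name, retained only for source compatibility. */
    @Deprecated
    public String reconfig(List<String> joiningServers, long fromConfig) {
        // Delegate so the two names can never drift apart in behavior.
        return reconfigure(joiningServers, fromConfig);
    }
}
```

Because the deprecated alias only delegates, callers migrating from `reconfig` to `reconfigure` see identical behavior, and the compiler's deprecation warning tells them which call sites to update.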
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819230#comment-15819230 ] Hadoop QA commented on ZOOKEEPER-2642: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12847074/ZOOKEEPER-2642.patch against trunk revision 5f60374d060c18ccad322c7f18883284dbac0fed. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 21 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 3.0.1) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed core unit tests. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3560//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3560//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3560//console This message is automatically generated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Failed: ZOOKEEPER-2642 PreCommit Build #3560
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3560/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 485554 lines...] [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 3.0.1) warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] [exec] -1 core tests. The patch failed core unit tests. [exec] [exec] +1 contrib tests. The patch passed contrib unit tests. [exec] [exec] Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3560//testReport/ [exec] Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3560//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3560//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] ec9d0308941ea2d637800093cd6e765d7ca212a4 logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. 
[exec] == [exec] == [exec] [exec] [exec] mv: '/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess' and '/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess' are the same file BUILD FAILED /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build.xml:1609: exec returned: 1 Total time: 17 minutes 44 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Compressed 557.52 KB of artifacts by 45.9% relative to #3558 Recording test results Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 [description-setter] Description set: ZOOKEEPER-2642 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 ### ## FAILED TESTS (if any) ## 2 tests failed. FAILED: org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync Error Message: zoo.cfg.dynamic.next is not deleted. Stack Trace: junit.framework.AssertionFailedError: zoo.cfg.dynamic.next is not deleted. 
at org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync(ReconfigDuringLeaderSyncTest.java:165) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) FAILED: org.apache.zookeeper.test.SSLTest.testSecureQuorumServer Error Message: waiting for server 0 being up Stack Trace: junit.framework.AssertionFailedError: waiting for server 0 being up at org.apache.zookeeper.test.SSLTest.testSecureQuorumServer(SSLTest.java:96) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79)
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819183#comment-15819183 ] Hadoop QA commented on ZOOKEEPER-2642: -- +1 overall. GitHub Pull Request Build +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 21 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 3.0.1) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed core unit tests. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/203//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/203//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/203//console This message is automatically generated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Success: ZOOKEEPER- PreCommit Build #203
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/203/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 479200 lines...] [exec] [exec] +1 @author. The patch does not contain any @author tags. [exec] [exec] +1 tests included. The patch appears to include 21 new or modified tests. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 3.0.1) warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] [exec] +1 core tests. The patch passed core unit tests. [exec] [exec] +1 contrib tests. The patch passed contrib unit tests. [exec] [exec] Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/203//testReport/ [exec] Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/203//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/203//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] 4c707a9fcc89f9ddf757eb524e5f367067e67937 logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. 
[exec] == [exec] == [exec] [exec] [exec] mv: ‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess’ and ‘/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess’ are the same file BUILD SUCCESSFUL Total time: 19 minutes 8 seconds Archiving artifacts Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Recording test results Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 [description-setter] Description set: ZOOKEEPER-2642 Putting comment on the pull request Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Email was triggered for: Success Sending email for trigger: Success Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 ### ## FAILED TESTS (if any) ## All tests passed
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819160#comment-15819160 ] Hadoop QA commented on ZOOKEEPER-2642: -- +1 overall. GitHub Pull Request Build +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 21 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 3.0.1) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed core unit tests. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/202//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/202//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/202//console This message is automatically generated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Success: ZOOKEEPER- PreCommit Build #202
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/202/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 466527 lines...] [exec] [exec] +1 @author. The patch does not contain any @author tags. [exec] [exec] +1 tests included. The patch appears to include 21 new or modified tests. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 3.0.1) warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] [exec] +1 core tests. The patch passed core unit tests. [exec] [exec] +1 contrib tests. The patch passed contrib unit tests. [exec] [exec] Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/202//testReport/ [exec] Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/202//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/202//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] fd27c38931e45d63d3c3aa191a675aac9a59ee19 logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. 
[exec] == [exec] == [exec] [exec] [exec] mv: '/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess' and '/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/patchprocess' are the same file BUILD SUCCESSFUL Total time: 19 minutes 27 seconds Archiving artifacts Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Recording test results Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 [description-setter] Description set: ZOOKEEPER-2642 Putting comment on the pull request Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Email was triggered for: Success Sending email for trigger: Success Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 ### ## FAILED TESTS (if any) ## All tests passed
[jira] [Updated] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2642: Attachment: ZOOKEEPER-2642.patch Rebased against master -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819126#comment-15819126 ] Hadoop QA commented on ZOOKEEPER-2642: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12847073/ZOOKEEPER-2642.patch against trunk revision 5f60374d060c18ccad322c7f18883284dbac0fed. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 40 new or modified tests. -1 patch. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3559//console This message is automatically generated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Failed: ZOOKEEPER-2642 PreCommit Build #3559
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3559/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 216 lines...] [exec] PATCH APPLICATION FAILED [exec] [exec] [exec] [exec] [exec] -1 overall. Here are the results of testing the latest attachment [exec] http://issues.apache.org/jira/secure/attachment/12847073/ZOOKEEPER-2642.patch [exec] against trunk revision 5f60374d060c18ccad322c7f18883284dbac0fed. [exec] [exec] +1 @author. The patch does not contain any @author tags. [exec] [exec] +1 tests included. The patch appears to include 40 new or modified tests. [exec] [exec] -1 patch. The patch command could not apply the patch. [exec] [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3559//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] 456ab55e483a7ed3c9efbdcfcaec55ccd6547e56 logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. [exec] == [exec] == [exec] [exec] [exec] mv: '/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess' and '/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/patchprocess' are the same file BUILD FAILED /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build.xml:1609: exec returned: 1 Total time: 46 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Recording test results Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error? 
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 [description-setter] Description set: ZOOKEEPER-2642 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7 ### ## FAILED TESTS (if any) ## No tests ran.
[jira] [Updated] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2642: Attachment: ZOOKEEPER-2642.patch Fixed doc typo -- This message was sent by Atlassian JIRA (v6.3.4#6332)
ZooKeeper_branch35_solaris - Build # 393 - Still Failing
See https://builds.apache.org/job/ZooKeeper_branch35_solaris/393/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 468377 lines...] [junit] 2017-01-11 17:16:23,433 [myid:] - INFO [main:ClientBase@386] - CREATING server instance 127.0.0.1:11222 [junit] 2017-01-11 17:16:23,433 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. [junit] 2017-01-11 17:16:23,434 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222 [junit] 2017-01-11 17:16:23,435 [myid:] - INFO [main:ClientBase@361] - STARTING server instance 127.0.0.1:11222 [junit] 2017-01-11 17:16:23,435 [myid:] - INFO [main:ZooKeeperServer@893] - minSessionTimeout set to 6000 [junit] 2017-01-11 17:16:23,435 [myid:] - INFO [main:ZooKeeperServer@902] - maxSessionTimeout set to 6 [junit] 2017-01-11 17:16:23,435 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 6 datadir /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test6887420828422298759.junit.dir/version-2 snapdir /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test6887420828422298759.junit.dir/version-2 [junit] 2017-01-11 17:16:23,436 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test6887420828422298759.junit.dir/version-2/snapshot.b [junit] 2017-01-11 17:16:23,438 [myid:] - INFO [main:FileTxnSnapLog@320] - Snapshotting: 0xb to /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test6887420828422298759.junit.dir/version-2/snapshot.b [junit] 2017-01-11 17:16:23,439 [myid:] - ERROR [main:ZooKeeperServer@505] - ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes [junit] 2017-01-11 17:16:23,439 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 [junit] 2017-01-11 17:16:23,440 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:38626 [junit] 2017-01-11 17:16:23,440 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:38626 [junit] 2017-01-11 17:16:23,441 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output [junit] 2017-01-11 17:16:23,441 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@614] - Closed socket connection for client /127.0.0.1:38626 (no session established for client) [junit] 2017-01-11 17:16:23,441 [myid:] - INFO [main:JMXEnv@228] - ensureParent:[InMemoryDataTree, StandaloneServer_port] [junit] 2017-01-11 17:16:23,443 [myid:] - INFO [main:JMXEnv@245] - expect:InMemoryDataTree [junit] 2017-01-11 17:16:23,443 [myid:] - INFO [main:JMXEnv@249] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree [junit] 2017-01-11 17:16:23,443 [myid:] - INFO [main:JMXEnv@245] - expect:StandaloneServer_port [junit] 2017-01-11 17:16:23,443 [myid:] - INFO [main:JMXEnv@249] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11222 [junit] 2017-01-11 17:16:23,443 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 17888 [junit] 2017-01-11 17:16:23,443 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 24 [junit] 2017-01-11 17:16:23,444 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testQuota [junit] 2017-01-11 17:16:23,444 [myid:] - INFO [main:ClientBase@543] - tearDown starting [junit] 2017-01-11 17:16:23,522 [myid:] - INFO [main:ZooKeeper@1322] - Session: 0x1260b8141ba closed [junit] 
2017-01-11 17:16:23,522 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x1260b8141ba [junit] 2017-01-11 17:16:23,522 [myid:] - INFO [main:ClientBase@513] - STOPPING server [junit] 2017-01-11 17:16:23,523 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted [junit] 2017-01-11 17:16:23,523 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method [junit] 2017-01-11 17:16:23,523 [myid:] - INFO
[GitHub] zookeeper issue #122: [ZOOKEEPER-2642] Resurrect the reconfig() methods that...
Github user fpj commented on the issue: https://github.com/apache/zookeeper/pull/122 Just so that I understand, when are we going to be removing these methods, in trunk? I'm asking this for two reasons: 1. So that users know in which version these methods are going away 2. What changes we need to apply to trunk. In trunk, we would need to at least rename `reconfig()` to `reconfigure()` in the admin class, but it sounds like we don't need to bring back the old `reconfig` methods.
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15818534#comment-15818534 ] ASF GitHub Bot commented on ZOOKEEPER-2642:
---
Github user fpj commented on the issue: https://github.com/apache/zookeeper/pull/122
Just so that I understand, when are we going to remove these methods in trunk? I'm asking for two reasons:
1. So that users know in which version these methods are going away.
2. So that we know what changes we need to apply to trunk.
In trunk, we would need to at least change `reconfig()` to `reconfigure()` in the admin class, but it sounds like we don't need to bring back the old `reconfig` methods.
> ZOOKEEPER-2014 breaks existing clients for little benefit
> -
>
> Key: ZOOKEEPER-2642
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642
> Project: ZooKeeper
> Issue Type: Bug
> Components: c client, java client
> Affects Versions: 3.5.2
> Reporter: Jordan Zimmerman
> Assignee: Jordan Zimmerman
> Priority: Blocker
> Fix For: 3.5.3, 3.6.0
>
> Attachments: ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch
>
> ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. It appears this was done to document that these methods have access restrictions. However, this change breaks Apache Curator (and possibly other clients). Curator APIs will have to be changed and/or special methods need to be added. A breaking change of this kind should only be done when the benefit is overwhelming. In this case, the same information can be conveyed with documentation and possibly a deprecation notice.
> Revert the creation of the ZooKeeperAdmin class and move the reconfig() methods back to the ZooKeeper class with additional documentation.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
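For readers following the migration discussed above: a minimal sketch of what a client-side call looks like once the admin operations live on `ZooKeeperAdmin` under the name `reconfigure()`. The `reconfigure(joining, leaving, newMembers, fromConfig, stat)` shape is as discussed in this thread for 3.5.3; the helper methods, server ids, and addresses below are illustrative, not from the ticket.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the call shape after ZOOKEEPER-2642: admin operations live on
// ZooKeeperAdmin as reconfigure(); the old ZooKeeper.reconfig() methods
// remain only as deprecated shims for a few 3.5.x alpha versions.
public class ReconfigSketch {

    // Builds one membership entry in the format shown in the reconfig docs,
    // e.g. "server.3=125.23.63.25:2782:2785:participant". Values here are
    // hypothetical examples.
    static String serverEntry(int id, String host, int quorumPort,
                              int electionPort, String role) {
        return String.format("server.%d=%s:%d:%d:%s",
                id, host, quorumPort, electionPort, role);
    }

    // Joins entries into the comma-separated spec string reconfigure() takes.
    static String joiningSpec(List<String> entries) {
        return entries.stream().collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        String joining = joiningSpec(List.of(
                serverEntry(5, "125.23.63.23", 2782, 2785, "participant")));
        System.out.println(joining);
        // Against a live ensemble (and with the ACL/auth setup the docs
        // require) the call would then be, roughly:
        //   ZooKeeperAdmin admin = new ZooKeeperAdmin(connectString, 30000, watcher);
        //   byte[] newConfig = admin.reconfigure(joining, null, null, -1, null);
    }
}
```

The point of the string helpers is that only the entry format is stable across the rename; the method name and owning class are what ZOOKEEPER-2642 changes.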
ZooKeeper_branch34_openjdk7 - Build # 1345 - Failure
See https://builds.apache.org/job/ZooKeeper_branch34_openjdk7/1345/

### LAST 60 LINES OF THE CONSOLE ###

Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H13 (ubuntu) in workspace /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_openjdk7
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url git://git.apache.org/zookeeper.git # timeout=10
Cleaning workspace
> git rev-parse --verify HEAD # timeout=10
Resetting working tree
> git reset --hard # timeout=10
> git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/zookeeper.git
> git --version # timeout=10
> git -c core.askpass=true fetch --tags --progress git://git.apache.org/zookeeper.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/branch-3.4^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/branch-3.4^{commit} # timeout=10
Checking out Revision cded802708fac417369affbd25bf9ad2016a904d (refs/remotes/origin/branch-3.4)
> git config core.sparsecheckout # timeout=10
> git checkout -f cded802708fac417369affbd25bf9ad2016a904d
> git rev-list cded802708fac417369affbd25bf9ad2016a904d # timeout=10
No emails were triggered.
[ZooKeeper_branch34_openjdk7] $ /home/jenkins/tools/ant/latest/bin/ant -Dtest.output=yes -Dtest.junit.threads=8 -Dtest.junit.output.format=xml -Djavac.target=1.7 clean test-core-java
Error: JAVA_HOME is not defined correctly.
We cannot execute /usr/lib/jvm/java-7-openjdk-amd64//bin/java
Build step 'Invoke Ant' marked build as failure
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

### FAILED TESTS (if any) ###

No tests ran.
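The build above died before running any tests because `$JAVA_HOME/bin/java` could not be executed on the build slave. A hedged sketch of the same sanity check the launcher performs (verify that `$JAVA_HOME/bin/java` exists as a file); the `diagnose` helper is ours, not part of the Jenkins job or the Ant wrapper:

```java
import java.io.File;

// Sketch: reproduce the "JAVA_HOME is not defined correctly" check, i.e.
// verify that <JAVA_HOME>/bin/java points at an actual file before a build
// tool tries to execute it.
public class JavaHomeCheck {

    // Returns null when JAVA_HOME looks usable, otherwise a diagnostic
    // message mirroring the launcher's wording.
    static String diagnose(String javaHome) {
        if (javaHome == null || javaHome.isEmpty()) {
            return "JAVA_HOME is not set";
        }
        File java = new File(new File(javaHome, "bin"), "java");
        if (!java.isFile()) {
            return "We cannot execute " + java.getPath();
        }
        return null; // looks fine
    }

    public static void main(String[] args) {
        String problem = diagnose(System.getenv("JAVA_HOME"));
        System.out.println(problem == null ? "JAVA_HOME ok" : problem);
    }
}
```

Running such a check as an early build step would turn the mid-build Ant failure into an immediate, clearly attributed configuration error.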
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15818527#comment-15818527 ] ASF GitHub Bot commented on ZOOKEEPER-2642:
---
Github user fpj commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/122#discussion_r95591971
--- Diff: src/docs/src/documentation/content/xdocs/zookeeperReconfig.xml ---
@@ -300,6 +300,11 @@ server.3=125.23.63.25:2782:2785:participant
 from ZooKeeper class, and use of this API requires ACL setup and user authentication (see for more information.).
+
+Note: for temporary backward compatibility, the reconfig() APIs will remain in ZooKeeper.java
+ where they were for a few alpha versions of 3.5.x. However, these APIs are deprecated and users
+ should move to the reconfig() APIs in ZooKeeperAdmin.java.
--- End diff --
Small typo: `reconfig()` should be `reconfigure()`.
[GitHub] zookeeper pull request #122: [ZOOKEEPER-2642] Resurrect the reconfig() metho...
Github user fpj commented on a diff in the pull request: https://github.com/apache/zookeeper/pull/122#discussion_r95591971
--- Diff: src/docs/src/documentation/content/xdocs/zookeeperReconfig.xml ---
@@ -300,6 +300,11 @@ server.3=125.23.63.25:2782:2785:participant
 from ZooKeeper class, and use of this API requires ACL setup and user authentication (see for more information.).
+
+Note: for temporary backward compatibility, the reconfig() APIs will remain in ZooKeeper.java
+ where they were for a few alpha versions of 3.5.x. However, these APIs are deprecated and users
+ should move to the reconfig() APIs in ZooKeeperAdmin.java.
--- End diff --
Small typo: `reconfig()` should be `reconfigure()`.
ZooKeeper_branch34_solaris - Build # 1428 - Still Failing
See https://builds.apache.org/job/ZooKeeper_branch34_solaris/1428/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 198358 lines...]
[junit] 2017-01-11 13:54:41,603 [myid:] - INFO [main:ZooKeeperServer@497] - shutting down
[junit] 2017-01-11 13:54:41,603 [myid:] - ERROR [main:ZooKeeperServer@472] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
[junit] 2017-01-11 13:54:41,604 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down
[junit] 2017-01-11 13:54:41,604 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down
[junit] 2017-01-11 13:54:41,604 [myid:] - INFO [main:SyncRequestProcessor@208] - Shutting down
[junit] 2017-01-11 13:54:41,604 [myid:] - INFO [ProcessThread(sid:0 cport:11221)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
[junit] 2017-01-11 13:54:41,604 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@186] - SyncRequestProcessor exited!
[junit] 2017-01-11 13:54:41,604 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
[junit] 2017-01-11 13:54:41,605 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221
[junit] 2017-01-11 13:54:41,605 [myid:] - INFO [main:JMXEnv@147] - ensureOnly:[]
[junit] 2017-01-11 13:54:41,606 [myid:] - INFO [main:ClientBase@445] - STARTING server
[junit] 2017-01-11 13:54:41,606 [myid:] - INFO [main:ClientBase@366] - CREATING server instance 127.0.0.1:11221
[junit] 2017-01-11 13:54:41,607 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2017-01-11 13:54:41,607 [myid:] - INFO [main:ClientBase@341] - STARTING server instance 127.0.0.1:11221
[junit] 2017-01-11 13:54:41,607 [myid:] - INFO [main:ZooKeeperServer@173] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 6 datadir /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch34_solaris/build/test/tmp/test1372076089982539975.junit.dir/version-2 snapdir /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch34_solaris/build/test/tmp/test1372076089982539975.junit.dir/version-2
[junit] 2017-01-11 13:54:41,610 [myid:] - ERROR [main:ZooKeeperServer@472] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
[junit] 2017-01-11 13:54:41,610 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221
[junit] 2017-01-11 13:54:41,610 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52192
[junit] 2017-01-11 13:54:41,610 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@827] - Processing stat command from /127.0.0.1:52192
[junit] 2017-01-11 13:54:41,611 [myid:] - INFO [Thread-5:NIOServerCnxn$StatCommand@663] - Stat command output
[junit] 2017-01-11 13:54:41,611 [myid:] - INFO [Thread-5:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52192 (no session established for client)
[junit] 2017-01-11 13:54:41,611 [myid:] - INFO [main:JMXEnv@230] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2017-01-11 13:54:41,612 [myid:] - INFO [main:JMXEnv@247] - expect:InMemoryDataTree
[junit] 2017-01-11 13:54:41,613 [myid:] - INFO [main:JMXEnv@251] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11221,name1=InMemoryDataTree
[junit] 2017-01-11 13:54:41,613 [myid:] - INFO [main:JMXEnv@247] - expect:StandaloneServer_port
[junit] 2017-01-11 13:54:41,613 [myid:] - INFO [main:JMXEnv@251] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11221
[junit] 2017-01-11 13:54:41,613 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 11380
[junit] 2017-01-11 13:54:41,614 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 20
[junit] 2017-01-11 13:54:41,614 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testQuota
[junit] 2017-01-11 13:54:41,614 [myid:] - INFO [main:ClientBase@522] - tearDown starting
[junit] 2017-01-11 13:54:41,691 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1598dcf27ad closed
[junit] 2017-01-11 13:54:41,691 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1598dcf27ad
[junit] 2017-01-11 13:54:41,692 [myid:] - INFO [main:ClientBase@492] - STOPPING server
[junit] 2017-01-11 13:54:41,693 [myid:] - INFO [main:ZooKeeperServer@497] - shutting down
[junit] 2017-01-11 13:54:41,693 [myid:] -
[jira] [Created] (ZOOKEEPER-2663) Enable remote jmx, zkCli.sh start failed with jmx communication error
linbo.liao created ZOOKEEPER-2663:
-
Summary: Enable remote jmx, zkCli.sh start failed with jmx communication error
Key: ZOOKEEPER-2663
URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2663
Project: ZooKeeper
Issue Type: Bug
Components: jmx
Affects Versions: 3.4.9
Environment: OS: CentOS 6.7 x86_64; ZooKeeper: 3.4.9; Java: HotSpot 1.8.0_65-b17
Reporter: linbo.liao

My laptop is a MacBook Pro with macOS Sierra (IP: 192.168.2.102). A VM (IP: 192.168.2.107) is running on VirtualBox. I deployed zookeeper-3.4.9 on the VM and enabled remote JMX with these options:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=8415
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.rmi.port=8415
-Djava.rmi.server.hostname=192.168.2.107

Testing with jconsole on the Mac, connecting to 192.168.2.107:8415 works fine. Running zkCli.sh fails:

$ bin/zkCli.sh
Error: JMX connector server communication error: service:jmx:rmi://localhost.localdomain:8415

$ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
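The error string in the report names `localhost.localdomain`, which is exactly the name `/etc/hosts` maps to 127.0.0.1 on that VM, suggesting the connector advertised the machine's default hostname rather than the `java.rmi.server.hostname` passed to the server. As a hedged diagnostic aid (the parser below is our sketch; only the hosts-file contents are from the report), one can check which address an `/etc/hosts`-style file maps a given name to:

```java
import java.util.List;
import java.util.Optional;

// Sketch: resolve a hostname against /etc/hosts-style lines, to see which
// address a name such as "localhost.localdomain" actually maps to.
public class HostsLookup {

    // Each line is "<address> <name> [<alias>...]"; '#' starts a comment.
    static Optional<String> lookup(List<String> hostsLines, String name) {
        for (String line : hostsLines) {
            String noComment = line.split("#", 2)[0].trim();
            if (noComment.isEmpty()) continue;
            String[] fields = noComment.split("\\s+");
            for (int i = 1; i < fields.length; i++) {
                if (fields[i].equals(name)) return Optional.of(fields[0]);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // The hosts file from the ZOOKEEPER-2663 report.
        List<String> hosts = List.of(
                "127.0.0.1 localhost.localdomain localhost",
                "::1 localhost6.localdomain6 localhost6");
        System.out.println(lookup(hosts, "localhost.localdomain"));
    }
}
```

If the name the connector advertises resolves to a loopback address, remote clients (and local tools that inherit the JMX options) will build an unusable `service:jmx:rmi://...` URL, which matches the symptom reported here.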
ZooKeeper-trunk-jdk8 - Build # 897 - Still Failing
See https://builds.apache.org/job/ZooKeeper-trunk-jdk8/897/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 464034 lines...]
[junit] 2017-01-11 12:00:29,266 [myid:127.0.0.1:14044] - INFO [main-SendThread(127.0.0.1:14044):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:14044. Will not attempt to authenticate using SASL (unknown error)
[junit] 2017-01-11 12:00:29,266 [myid:127.0.0.1:14044] - WARN [main-SendThread(127.0.0.1:14044):ClientCnxn$SendThread@1235] - Session 0x30221193cce for server 127.0.0.1/127.0.0.1:14044, unexpected error, closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit]     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit]     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
[junit]     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit]     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2017-01-11 12:00:29,294 [myid:127.0.0.1:13915] - INFO [main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:13915. Will not attempt to authenticate using SASL (unknown error)
[junit] 2017-01-11 12:00:29,294 [myid:127.0.0.1:13915] - WARN [main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1235] - Session 0x1022113d492 for server 127.0.0.1/127.0.0.1:13915, unexpected error, closing socket connection and attempting reconnect
[junit] java.net.ConnectException: Connection refused
[junit]     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junit]     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
[junit]     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
[junit]     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
[junit] 2017-01-11 12:01:49,665 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 133351
[junit] 2017-01-11 12:01:49,666 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 55
[junit] 2017-01-11 12:01:49,666 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testManyChildWatchersAutoReset
[junit] 2017-01-11 12:01:49,666 [myid:] - INFO [main:ClientBase@543] - tearDown starting
[junit] 2017-01-11 12:01:49,667 [myid:] - INFO [ProcessThread(sid:0 cport:16611)::PrepRequestProcessor@618] - Processed session termination for sessionid: 0x1022111c767
[junit] 2017-01-11 12:01:49,717 [myid:] - INFO [main:ZooKeeper@1324] - Session: 0x1022111c767 closed
[junit] 2017-01-11 12:01:49,718 [myid:] - INFO [NIOWorkerThread-3:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16611,name1=Connections,name2=127.0.0.1,name3=0x1022111c767]
[junit] 2017-01-11 12:01:49,718 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x1022111c767
[junit] 2017-01-11 12:01:49,718 [myid:] - INFO [ProcessThread(sid:0 cport:16611)::PrepRequestProcessor@618] - Processed session termination for sessionid: 0x1022111c7670001
[junit] 2017-01-11 12:01:49,719 [myid:] - INFO [NIOWorkerThread-3:NIOServerCnxn@614] - Closed socket connection for client /127.0.0.1:52682 which had sessionid 0x1022111c767
[junit] 2017-01-11 12:01:49,750 [myid:] - INFO [NIOWorkerThread-8:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16611,name1=Connections,name2=127.0.0.1,name3=0x1022111c7670001]
[junit] 2017-01-11 12:01:49,750 [myid:] - INFO [main:ZooKeeper@1324] - Session: 0x1022111c7670001 closed
[junit] 2017-01-11 12:01:49,750 [myid:] - INFO [NIOWorkerThread-8:NIOServerCnxn@614] - Closed socket connection for client /127.0.0.1:52689 which had sessionid 0x1022111c7670001
[junit] 2017-01-11 12:01:49,751 [myid:] - INFO [main:ClientBase@513] - STOPPING server
[junit] 2017-01-11 12:01:49,751 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x1022111c7670001
[junit] 2017-01-11 12:01:49,752 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
[junit] 2017-01-11 12:01:49,755 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:16611:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
[junit] 2017-01-11 12:01:49,756 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
[junit]
ZooKeeper_branch35_solaris - Build # 392 - Still Failing
See https://builds.apache.org/job/ZooKeeper_branch35_solaris/392/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 469263 lines...]
[junit] 2017-01-11 10:47:57,204 [myid:] - INFO [main:ClientBase@386] - CREATING server instance 127.0.0.1:11222
[junit] 2017-01-11 10:47:57,204 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers.
[junit] 2017-01-11 10:47:57,205 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222
[junit] 2017-01-11 10:47:57,206 [myid:] - INFO [main:ClientBase@361] - STARTING server instance 127.0.0.1:11222
[junit] 2017-01-11 10:47:57,206 [myid:] - INFO [main:ZooKeeperServer@893] - minSessionTimeout set to 6000
[junit] 2017-01-11 10:47:57,206 [myid:] - INFO [main:ZooKeeperServer@902] - maxSessionTimeout set to 6
[junit] 2017-01-11 10:47:57,207 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 6 datadir /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test4005900108130889269.junit.dir/version-2 snapdir /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test4005900108130889269.junit.dir/version-2
[junit] 2017-01-11 10:47:57,207 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test4005900108130889269.junit.dir/version-2/snapshot.b
[junit] 2017-01-11 10:47:57,209 [myid:] - INFO [main:FileTxnSnapLog@320] - Snapshotting: 0xb to /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper_branch35_solaris/build/test/tmp/test4005900108130889269.junit.dir/version-2/snapshot.b
[junit] 2017-01-11 10:47:57,210 [myid:] - ERROR [main:ZooKeeperServer@505] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
[junit] 2017-01-11 10:47:57,211 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222
[junit] 2017-01-11 10:47:57,211 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:44044
[junit] 2017-01-11 10:47:57,212 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:44044
[junit] 2017-01-11 10:47:57,212 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output
[junit] 2017-01-11 10:47:57,212 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@614] - Closed socket connection for client /127.0.0.1:44044 (no session established for client)
[junit] 2017-01-11 10:47:57,212 [myid:] - INFO [main:JMXEnv@228] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2017-01-11 10:47:57,214 [myid:] - INFO [main:JMXEnv@245] - expect:InMemoryDataTree
[junit] 2017-01-11 10:47:57,214 [myid:] - INFO [main:JMXEnv@249] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree
[junit] 2017-01-11 10:47:57,214 [myid:] - INFO [main:JMXEnv@245] - expect:StandaloneServer_port
[junit] 2017-01-11 10:47:57,214 [myid:] - INFO [main:JMXEnv@249] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11222
[junit] 2017-01-11 10:47:57,214 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 18085
[junit] 2017-01-11 10:47:57,214 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 24
[junit] 2017-01-11 10:47:57,215 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testQuota
[junit] 2017-01-11 10:47:57,215 [myid:] - INFO [main:ClientBase@543] - tearDown starting
[junit] 2017-01-11 10:47:57,292 [myid:] - INFO [main:ZooKeeper@1322] - Session: 0x1260a1daede closed
[junit] 2017-01-11 10:47:57,292 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x1260a1daede
[junit] 2017-01-11 10:47:57,292 [myid:] - INFO [main:ClientBase@513] - STOPPING server
[junit] 2017-01-11 10:47:57,292 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
[junit] 2017-01-11 10:47:57,293 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
[junit] 2017-01-11 10:47:57,293 [myid:] - INFO
ZooKeeper-trunk-solaris - Build # 1462 - Still Failing
See https://builds.apache.org/job/ZooKeeper-trunk-solaris/1462/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 464921 lines...]
[junit] 2017-01-11 09:17:41,214 [myid:] - INFO [main:ClientBase@386] - CREATING server instance 127.0.0.1:11222
[junit] 2017-01-11 09:17:41,214 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers.
[junit] 2017-01-11 09:17:41,215 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222
[junit] 2017-01-11 09:17:41,215 [myid:] - INFO [main:ClientBase@361] - STARTING server instance 127.0.0.1:11222
[junit] 2017-01-11 09:17:41,216 [myid:] - INFO [main:ZooKeeperServer@894] - minSessionTimeout set to 6000
[junit] 2017-01-11 09:17:41,216 [myid:] - INFO [main:ZooKeeperServer@903] - maxSessionTimeout set to 6
[junit] 2017-01-11 09:17:41,216 [myid:] - INFO [main:ZooKeeperServer@160] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 6 datadir /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper-trunk-solaris/build/test/tmp/test7008304457253983285.junit.dir/version-2 snapdir /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper-trunk-solaris/build/test/tmp/test7008304457253983285.junit.dir/version-2
[junit] 2017-01-11 09:17:41,217 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper-trunk-solaris/build/test/tmp/test7008304457253983285.junit.dir/version-2/snapshot.b
[junit] 2017-01-11 09:17:41,218 [myid:] - INFO [main:FileTxnSnapLog@320] - Snapshotting: 0xb to /zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/ZooKeeper-trunk-solaris/build/test/tmp/test7008304457253983285.junit.dir/version-2/snapshot.b
[junit] 2017-01-11 09:17:41,220 [myid:] - ERROR [main:ZooKeeperServer@506] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
[junit] 2017-01-11 09:17:41,220 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222
[junit] 2017-01-11 09:17:41,220 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:57934
[junit] 2017-01-11 09:17:41,221 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:57934
[junit] 2017-01-11 09:17:41,221 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output
[junit] 2017-01-11 09:17:41,222 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@614] - Closed socket connection for client /127.0.0.1:57934 (no session established for client)
[junit] 2017-01-11 09:17:41,222 [myid:] - INFO [main:JMXEnv@228] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2017-01-11 09:17:41,223 [myid:] - INFO [main:JMXEnv@245] - expect:InMemoryDataTree
[junit] 2017-01-11 09:17:41,223 [myid:] - INFO [main:JMXEnv@249] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree
[junit] 2017-01-11 09:17:41,223 [myid:] - INFO [main:JMXEnv@245] - expect:StandaloneServer_port
[junit] 2017-01-11 09:17:41,224 [myid:] - INFO [main:JMXEnv@249] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11222
[junit] 2017-01-11 09:17:41,224 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 17906
[junit] 2017-01-11 09:17:41,224 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 24
[junit] 2017-01-11 09:17:41,224 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testQuota
[junit] 2017-01-11 09:17:41,224 [myid:] - INFO [main:ClientBase@543] - tearDown starting
[junit] 2017-01-11 09:17:41,302 [myid:] - INFO [main:ZooKeeper@1324] - Session: 0x12609cb0ad0 closed
[junit] 2017-01-11 09:17:41,302 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x12609cb0ad0
[junit] 2017-01-11 09:17:41,302 [myid:] - INFO [main:ClientBase@513] - STOPPING server
[junit] 2017-01-11 09:17:41,302 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
[junit] 2017-01-11 09:17:41,302 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
[junit] 2017-01-11 09:17:41,302 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] -
[jira] [Comment Edited] (ZOOKEEPER-2661) It costs about 5055 ms to create Zookeeper object for the first time.
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820232#comment-15820232... ] Yaohui Wu edited comment on ZOOKEEPER-2661 at 1/11/17 9:07 AM:
---
I solve it by adding one line to /etc/hosts:
::1 YaohuideMacBook-Air.local
'YaohuideMacBook-Air.local' is my host name. Now the problem disappears. It only cost 86 ms to create Zookeeper the first time.
16:47:43.106 109 INFO org.example.zk.curator.ZookeeperTest - cost 86 ms
see this article: http://blog.csdn.net/puma_dong/article/details/53096149
was (Author: yaohui):
I solve it by adding one line to /etc/hosts:
::1 YaohuideMacBook-Air.local
'YaohuideMacBook-Air.local' is my host name. Now the problem disappears. It only cost 86 ms to create Zookeeper the first time.
16:47:43.106 109 INFO org.example.zk.curator.ZookeeperTest - cost 86 ms
> It costs about 5055 ms to create Zookeeper object for the first time.
> -
>
> Key: ZOOKEEPER-2661
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2661
> Project: ZooKeeper
> Issue Type: Bug
> Components: java client
> Affects Versions: 3.4.6
> Environment: See the description below.
> Reporter: Yaohui Wu
> Attachments: ZookeeperTest.java, log_output.txt
>
> I create and close ZooKeeper for 10 times. It costs about 5055 ms for the first time.
> See attached files for some test code and output.
[jira] [Issue Comment Deleted] (ZOOKEEPER-2661) It costs about 5055 ms to create Zookeeper object for the first time.
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yaohui Wu updated ZOOKEEPER-2661: - Comment: was deleted (was: I solve it by adding one line to /etc/hosts: ::1 YaohuideMacBook-Air.local 'YaohuideMacBook-Air.local' is my host name. Now the problem disappears. It only cost 86 ms to create Zookeeper the first time. 16:47:43.106 109 INFO org.example.zk.curator.ZookeeperTest - cost 86 ms )
[jira] [Resolved] (ZOOKEEPER-2661) It costs about 5055 ms to create Zookeeper object for the first time.
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yaohui Wu resolved ZOOKEEPER-2661. -- Resolution: Not A Problem I solve it by adding one line to /etc/hosts: ::1 YaohuideMacBook-Air.local 'YaohuideMacBook-Air.local' is my host name. Now the problem disappears. It only cost 86 ms to create Zookeeper the first time. 16:47:43.106 109 INFO org.example.zk.curator.ZookeeperTest - cost 86 ms
[jira] [Commented] (ZOOKEEPER-2661) It costs about 5055 ms to create Zookeeper object for the first time.
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817705#comment-15817705 ] Yaohui Wu commented on ZOOKEEPER-2661: -- I solve it by add one line to /etc/hosts: ::1 YaohuideMacBook-Air.local the 'YaohuideMacBook-Air.local' is my host name. Now the problem disappears. It only cost 86 ms to create Zookeeper the first time. 16:47:43.106 109 INFO org.example.zk.curator.ZookeeperTest - cost 86 ms
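The ~5 s first-object delay and the `/etc/hosts` fix in this comment both point at local hostname resolution (the classic symptom on macOS when the machine's own hostname has no hosts-file entry). A quick way to confirm the hypothesis on a given machine is to time the local-hostname lookup directly; this measurement sketch is ours, not from the ticket or its attachments:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: time InetAddress.getLocalHost(), the kind of local-hostname lookup
// implicated by the /etc/hosts fix in this thread. On a machine where the
// hostname has no hosts-file entry, this lookup is what can stall for seconds.
public class LocalHostTiming {

    // Returns the elapsed wall time of one lookup, in milliseconds.
    static long timeLocalHostLookupMillis() {
        long start = System.nanoTime();
        try {
            InetAddress addr = InetAddress.getLocalHost();
            System.out.println("resolved " + addr.getHostName()
                    + " -> " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            System.out.println("local hostname did not resolve: " + e.getMessage());
        }
        return (System.nanoTime() - start) / 1_000_000L;
    }

    public static void main(String[] args) {
        System.out.println("lookup took " + timeLocalHostLookupMillis() + " ms");
    }
}
```

If the first run of this lookup takes seconds while repeats are fast, the delay is in name resolution rather than in ZooKeeper itself, which matches the "Not A Problem" resolution above.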
[jira] [Commented] (ZOOKEEPER-2661) It costs about 5055 ms to create Zookeeper object for the first time.
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817664#comment-15817664 ] Yaohui Wu commented on ZOOKEEPER-2661: -- It seems that the call to InetAddress.getLocalHost() at line 62 is slow. Code from class org.apache.zookeeper.Environment, lines 56~65:

    public static List list() {
        ArrayList l = new ArrayList();
        put(l, "zookeeper.version", Version.getFullVersion());
        try {
            put(l, "host.name",                                      // line 61
                InetAddress.getLocalHost().getCanonicalHostName());  // line 62
        } catch (UnknownHostException e) {
            put(l, "host.name", "<NA>");
        }

> It costs about 5055 ms to create Zookeeper object for the first time. > --- > > Key: ZOOKEEPER-2661 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2661 > Project: ZooKeeper > Issue Type: Bug > Components: java client > Affects Versions: 3.4.6 > Environment: See the description below. > Reporter: Yaohui Wu > Attachments: ZookeeperTest.java, log_output.txt > > > I create and close a ZooKeeper object 10 times. It costs about 5055 ms the first time. > See the attached files for some test code and output. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
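The suspected slow call can be reproduced in isolation, without a ZooKeeper server, with a small timing harness. This is a sketch: the class name and the timing output format are illustrative, not from the original report; only the resolution call and the "<NA>" fallback mirror ZooKeeper's Environment.list().

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostNameTiming {
    // Resolve the canonical local host name, mirroring the call at
    // Environment.java line 62; fall back to "<NA>" on failure, as
    // ZooKeeper's catch block does.
    static String canonicalHostName() {
        try {
            return InetAddress.getLocalHost().getCanonicalHostName();
        } catch (UnknownHostException e) {
            return "<NA>";
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        String name = canonicalHostName();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // On a host whose name has no /etc/hosts (or DNS) entry, this
        // first lookup can stall for seconds while the resolver times
        // out; with a matching "::1 <hostname>" entry it returns in
        // a few milliseconds.
        System.out.println("host.name = " + name + " resolved in " + elapsedMs + " ms");
    }
}
```

Running this before and after adding the `::1 YaohuideMacBook-Air.local` line to /etc/hosts would confirm whether hostname resolution, rather than ZooKeeper itself, accounts for the ~5 s first-construction cost.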