Re: Recommended session timeout
Those are very interesting results; nice job sleuthing. You might try the concurrent collector: http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html#available_collectors.selecting (specifically item 4, -XX:+UseConcMarkSweepGC). I've never used it myself, but it's supposed to reduce GC pauses to less than a second. It might require some tuning, though...

Patrick

Joey Echeverria wrote:

I've answered the questions you asked previously below, but I thought I would open with the actual culprit now that we've found it. When I said "loading data" before, I meant sending data via Thrift to the machine that was getting disconnected from ZooKeeper. This turned out to be the problem. Too much data was being sent in a short span of time, and this caused memory pressure on the heap, which increased the fraction of the time the GC had to run to keep up. During a 143-second test, the GC was running for 33 seconds.

We found this by running tcpdump on both the machine running the ensemble server and the machine connecting to ZooKeeper as a client. We deduced it wasn't a network (lost packet) issue, as we never saw unmatched packets in our tests. What we did see were long 2-7 second pauses with no packets being sent. We first attempted to raise the priority of the ZooKeeper threads to see if that would help. When it didn't, we started monitoring the GC time.

We don't have a workaround yet, other than sending data in smaller batches and using a longer sessionTimeout.

Thanks for all your help!

-Joey

> As an experiment try increasing the timeout to say 30 seconds and re-run your tests. Any change?

30 seconds and higher works fine.

> loading data - could you explain a bit more about what you mean by this? If you are able to provide enough information for us to replicate we could try it out (also provide info on your ensemble configuration as Mahadev suggested)

The ensemble config file looks as follows:

tickTime=2000
dataDir=/data/zk
clientPort=2181
initLimit=5
syncLimit=2
skipACL=true
server.1=server1:2888:3888
...
server.7=server7:2888:3888

> You are referring to startConnect in SendThread? We randomly sleep up to 1 second to ensure that the clients don't all storm the server(s) after a bounce.

That makes some sense, but it might be worth tweaking that parameter based on sessionTimeout, since 1 second can easily be 10-20% of the sessionTimeout.

> 1) configure your test client to connect to 1 server in the ensemble
> 2) run the srst command on that server
> 3) run your client test
> 4) run the stat command on that server
> 5) if the test takes some time, run the stat a few times during the test to get more data points

The problem doesn't appear to be on the server end, as max latency never went above 5 ms. Also, no messages are shown as queued.
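The srst/stat procedure above uses ZooKeeper's four-letter-word protocol: open a TCP connection to the client port (2181 in the config above), write the four ASCII characters, and read the reply until the server closes the connection. A minimal sketch; the mock server below stands in for a real ensemble member, and its canned reply is invented for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class FourLetterWord {

    // Send a four-letter command (e.g. "srst", "stat") to a ZooKeeper
    // server's client port and return everything it writes back.
    static String send(String host, int port, String cmd) throws IOException {
        try (Socket s = new Socket(host, port)) {
            s.getOutputStream().write(cmd.getBytes(StandardCharsets.US_ASCII));
            s.shutdownOutput();
            ByteArrayOutputStream reply = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = s.getInputStream().read(chunk)) != -1) {
                reply.write(chunk, 0, n);
            }
            return reply.toString("US-ASCII");
        }
    }

    public static void main(String[] args) throws Exception {
        // Mock standing in for server1:2181; answers with an invented stat line.
        try (ServerSocket mock = new ServerSocket(0)) {
            Thread server = new Thread(() -> {
                try (Socket c = mock.accept();
                     OutputStream out = c.getOutputStream()) {
                    out.write("Latency min/avg/max: 0/1/5\n"
                            .getBytes(StandardCharsets.US_ASCII));
                } catch (IOException ignored) {
                }
            });
            server.start();
            System.out.print(send("127.0.0.1", mock.getLocalPort(), "stat"));
            server.join();
        }
    }
}
```

Against a real server you would point send() at server1:2181; srst resets the counters, so a subsequent stat reflects only the test window.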
RE: Recommended session timeout
just a quick sanity check: are you sure your memory is not overcommitted? In other words, you aren't swapping? Since the GC does a bunch of random memory accesses, if you swap at all things will go very slowly.

ben

From: Joey Echeverria [joe...@gmail.com]
Sent: Thursday, February 26, 2009 1:31 PM
To: zookeeper-user@hadoop.apache.org
Subject: Re: Recommended session timeout

> I've answered the questions you asked previously below, but I thought I would open with the actual culprit now that we've found it. When I said "loading data" before, I meant sending data via Thrift to the machine that was getting disconnected from ZooKeeper. This turned out to be the problem. Too much data was being sent in a short span of time, and this caused memory pressure on the heap, which increased the fraction of the time the GC had to run to keep up. During a 143-second test, the GC was running for 33 seconds.
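For scale, 33 seconds of GC in a 143-second test is roughly 23% of wall time, so a single pause can exceed a 5-second session timeout even before any swapping is involved. Ben's overcommit check can be scripted on Linux by reading the swap counters from /proc/meminfo; a rough sketch (Linux-only, and the output format is my own):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SwapCheck {

    public static void main(String[] args) throws IOException {
        Path meminfo = Paths.get("/proc/meminfo");
        if (!Files.exists(meminfo)) {
            System.out.println("Swap stats unavailable (no /proc/meminfo on this platform)");
            return;
        }
        long totalKb = 0, freeKb = 0;
        for (String line : Files.readAllLines(meminfo)) {
            if (line.startsWith("SwapTotal:")) totalKb = kb(line);
            if (line.startsWith("SwapFree:"))  freeKb  = kb(line);
        }
        // Any nonzero "in use" while the JVM is under load is a red flag:
        // GC touching swapped-out pages turns pauses from seconds into minutes.
        System.out.println("Swap in use: " + (totalKb - freeKb) + " kB");
    }

    // Lines look like "SwapTotal:  2097148 kB"; keep only the digits.
    static long kb(String line) {
        return Long.parseLong(line.replaceAll("\\D+", ""));
    }
}
```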
Re: Recommended session timeout
Hi Joey, here is a link to information on session timeouts:

http://hadoop.apache.org/zookeeper/docs/r3.0.1/zookeeperProgrammers.html#ch_zkSessions

The session timeout depends on how sensitive you want your application to be. A very low session timeout (1-2 seconds) might make your application very sensitive to events like minor network problems, while a higher value (say 30 seconds) might lead to slow detection of client failures. For example, if a ZooKeeper client holding an ephemeral node goes down, the ephemeral node will only go away after the session timeout expires. I have seen some users use session timeouts of 10-15 seconds, but you should choose a value based on your application's requirements.

Hope this helps.

mahadev

On 2/22/09 3:09 AM, Joey Echeverria joe...@gmail.com wrote:

> Is there a recommended session timeout? Does it change based on the ensemble size?
>
> Thanks,
> -Joey
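One wrinkle worth adding to the advice above: the server does not necessarily grant the timeout the client asks for. By default it clamps the requested value to the range [2 * tickTime, 20 * tickTime], so with the tickTime=2000 from the config quoted in this thread, anything under 4 seconds or over 40 seconds is silently adjusted. A sketch of that clamping (the method name is mine; the real negotiation happens inside the server):

```java
public class NegotiatedTimeout {

    // Default server-side bounds: [2 * tickTime, 20 * tickTime].
    static int negotiate(int requestedMs, int tickTimeMs) {
        int minMs = 2 * tickTimeMs;
        int maxMs = 20 * tickTimeMs;
        return Math.max(minMs, Math.min(maxMs, requestedMs));
    }

    public static void main(String[] args) {
        int tickTime = 2000; // from the ensemble config in this thread
        System.out.println("requested  1000 ms -> granted " + negotiate(1000, tickTime) + " ms");
        System.out.println("requested 10000 ms -> granted " + negotiate(10000, tickTime) + " ms");
        System.out.println("requested 90000 ms -> granted " + negotiate(90000, tickTime) + " ms");
    }
}
```

This is one reason to check the negotiated timeout rather than assume the requested one: a client asking for 1-2 seconds against this ensemble would actually be running with 4 seconds.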
Re: Recommended session timeout
The latest docs (3.1.0 has some updates to that section) can be found here:

http://hadoop.apache.org/zookeeper/docs/r3.1.0/zookeeperProgrammers.html#ch_zkSessions

Patrick

Mahadev Konar wrote:

> Hi Joey, here is a link to information on session timeouts.
> http://hadoop.apache.org/zookeeper/docs/r3.0.1/zookeeperProgrammers.html#ch_zkSessions
Re: Recommended session timeout
Thanks for the link to the documentation.

I've been running tests with a 5-second session timeout, and disconnect events are frequent. The network they're operating on is generally quiet, but the disconnects do correlate with an increase in activity (e.g. loading data into the system). Does this seem normal to you, or does it imply a potential configuration problem on my network?

On a related topic, I was reading the 3.1 client source code, particularly the reconnect code, and noticed that the client sleeps for up to 1 second before trying to reconnect. This seems excessive, and with a 5-second session timeout it leads to more frequent session expirations. Almost every time it sleeps for more than about 800 ms, a disconnect is followed by an expiration. Is this a bug, or desirable behavior?

Thanks,
-Joey

On Mon, Feb 23, 2009 at 10:37 PM, Patrick Hunt ph...@apache.org wrote:

> The latest docs (3.1.0 has some updates to that section) can be found here:
> http://hadoop.apache.org/zookeeper/docs/r3.1.0/zookeeperProgrammers.html#ch_zkSessions
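To put numbers on the concern above: with a uniform random sleep of up to 1 second, the reconnect delay alone can consume 20% of a 5-second session timeout before any network or handshake latency is added. A back-of-the-envelope sketch (the uniform-sleep model is my reading of the behavior Joey describes, not a transcription of SendThread):

```java
import java.util.Random;

public class ReconnectBudget {

    public static void main(String[] args) {
        int sessionTimeoutMs = 5000; // timeout used in the tests above
        int maxSleepMs = 1000;       // client sleeps up to 1 s before reconnecting

        long worstCasePct = 100L * maxSleepMs / sessionTimeoutMs;
        System.out.println("worst-case sleep = " + worstCasePct + "% of the session timeout");

        // Sampled delays: anything near the top of the range leaves the client
        // only ~4 s to reconnect, re-handshake, and be seen by the server in time.
        Random rnd = new Random();
        for (int i = 0; i < 3; i++) {
            int sleep = rnd.nextInt(maxSleepMs + 1);
            System.out.println("sampled sleep: " + sleep + " ms"
                    + (sleep > 800 ? "  (expiration risk per Joey's observation)" : ""));
        }
    }
}
```

This is why a sleep proportional to the session timeout (e.g. random up to sessionTimeout / 10) would scale better, which is essentially the tweak suggested earlier in the thread.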