Re: cassandra node jvm stall intermittently

2015-03-10 Thread Jason Wee
heh, in the midst of upgrading, Rob ;-)

Jason

On Tue, Mar 10, 2015 at 2:04 AM, Robert Coli rc...@eventbrite.com wrote:
 On Sat, Mar 7, 2015 at 1:44 AM, Jason Wee peich...@gmail.com wrote:

 hey Ali, 1.0.8

 On Sat, Mar 7, 2015 at 5:20 PM, Ali Akhtar ali.rac...@gmail.com wrote:

 What version are you running?


 Upgrade your very old version to at least 1.2.x (via 1.1.x) ASAP.

 =Rob
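
For context, a rolling upgrade along that path is done one node at a time; a single
node's steps look roughly like the sketch below. This is a hedged sketch for a CentOS
package install: the package and service names are assumptions, and NEWS.txt for each
intermediate release is the authority on required steps such as nodetool upgradesstables.

  $ nodetool drain                  # flush memtables; the node stops accepting writes
  $ sudo service cassandra stop
  $ sudo yum upgrade cassandra      # hypothetical package name; adjust to your install
  #   merge cassandra.yaml / cassandra-env.sh changes by hand before restarting
  $ sudo service cassandra start
  $ nodetool ring                   # wait until the node is back Up/Normal
  $ nodetool upgradesstables        # only if NEWS.txt for the target version calls for it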



Re: cassandra node jvm stall intermittently

2015-03-07 Thread Jason Wee
Hi Jan, thanks for taking the time to prepare the questions; my answers are below.


   - How many nodes do you have on the ring ?
   12

   - What is the activity when this occurs - reads / writes / compactions?
   This cluster has a lot of writes and reads. During off-peak periods, OpsCenter
   shows cluster writes at about 5k/sec and reads at about 1k/sec; during peak
   periods, writes can reach about 22k/sec and reads about 10k/sec. This
   particular node hangs constantly, regardless of whether it is a peak or
   off-peak period or whether compaction is running.

   - Is there anything that is unique about this node that makes it
   different from the other nodes?
   Our nodes are identical in terms of operating system (CentOS 6) and cassandra
   configuration settings. Other than that, there are no other resource-intensive
   applications running on the cassandra nodes.


   - Is this a periodic occurrence OR a single occurrence - I am trying to
   determine a pattern about when this shows up.
   It *always* happens, and in fact it is happening right now.

   - What is the load distribution on the ring (ie: is this node carrying more
   load than the others)?
   As of this moment,

   - Address   DC       Rack  Status  State   Load        Owns    Token
   -                                                              155962751505430129087380028406227096910
   - node1     us-east  1e    Up      Normal  498.66 GB   8.33%   0
   - node2     us-east  1e    Up      Normal  503.36 GB   8.33%   14178431955039102644307275309657008810
   - node3     us-east  1e    Up      Normal  492.08 GB   8.33%   28356863910078205288614550619314017619
   - node4     us-east  1e    Up      Normal  499.54 GB   8.33%   42535295865117307932921825928971026430
   - node5     us-east  1e    Up      Normal  523.76 GB   8.33%   56713727820156407428984779325531226109
   - node6     us-east  1e    Up      Normal  515.36 GB   8.33%   70892159775195513221536376548285044050
   - node7     us-east  1e    Up      Normal  588.93 GB   8.33%   85070591730234615865843651857942052860
   - node8     us-east  1e    Up      Normal  498.51 GB   8.33%   99249023685273718510150927167599061670
   - node9     us-east  1e    Up      Normal  531.81 GB   8.33%   113427455640312814857969558651062452221
   - node10    us-east  1e    Up      Normal  501.85 GB   8.33%   127605887595351923798765477786913079290
   - node11    us-east  1e    Up      Normal  501.13 GB   8.33%   141784319550391026443072753096570088100
   - node12    us-east  1e    Up      Normal  508.45 GB   8.33%   155962751505430129087380028406227096910

   The affected node is node5. In this ring output it does carry the second-highest
   load in the ring, but that is unlikely to be the cause.


Jason
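
For reference, the ring output above is what nodetool reports; per-node load and
current activity can be sampled with standard nodetool subcommands along the lines
of the sketch below (<host> is a placeholder, and output formats vary between versions):

  $ nodetool -h <host> ring              # token ownership and load per node
  $ nodetool -h <host> info              # load, heap usage and uptime for one node
  $ nodetool -h <host> tpstats           # per-stage thread pool activity
  $ nodetool -h <host> compactionstats   # whether compaction is currently running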

On Sat, Mar 7, 2015 at 3:35 PM, Jan cne...@yahoo.com wrote:

 HI Jason;

 The single node showing the anomaly is a hint that the problem is probably
 local to a node (as you suspected).

- How many nodes do you have on the ring ?
- What is the activity when this occurs  - reads / writes/ compactions
 ?
- Is there anything that is unique about this node that makes it
different from the other nodes ?
 - Is this a periodic occurrence OR a single occurrence - I am trying
 to determine a pattern about when this shows up.
 - What is the load distribution on the ring (ie: is this node carrying
 more load than the others)?


 The system.log should have more info about it.

 hope this helps
 Jan/





   On Friday, March 6, 2015 4:50 AM, Jason Wee peich...@gmail.com wrote:


 Well, StatusLogger.java entries started showing up in the cassandra
 system.log, and MessagingService.java also reported some stages (e.g. read,
 mutation) being dropped.

 It's strange that this only happens on this node; this type of message does not
 show up in the other nodes' log files at the same time...

 Jason

 On Thu, Mar 5, 2015 at 4:26 AM, Jan cne...@yahoo.com wrote:

 HI Jason;

 What's in the log files at the moment jstat shows 100%?
 What is the activity on the cluster and the node at that specific point in
 time (reads/ writes/ joins etc)?

 Jan/


   On Wednesday, March 4, 2015 5:59 AM, Jason Wee peich...@gmail.com
 wrote:


 Hi, our cassandra nodes are running java 7 update 72, and we ran jstat on one of
 the nodes and noticed some strange behaviour, as shown in the output below.
 Any idea why the eden space stays at the same value (e.g. 100% or 18.02%) for
 several seconds at a time? We suspect such stalling causes timeouts to our
 cluster.

 Any idea what happened, what went wrong, and what could cause this?


 $ jstat -gcutil 32276 1s

   0.00   5.78  91.21  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   

Re: cassandra node jvm stall intermittently

2015-03-07 Thread Jason Wee
hey Ali, 1.0.8

On Sat, Mar 7, 2015 at 5:20 PM, Ali Akhtar ali.rac...@gmail.com wrote:

 What version are you running?

 On Sat, Mar 7, 2015 at 2:14 PM, Jason Wee peich...@gmail.com wrote:


 Hi Jan, thanks for taking the time to prepare the questions; my answers are below.


- How many nodes do you have on the ring ?
12

- What is the activity when this occurs - reads / writes / compactions?
This cluster has a lot of writes and reads. During off-peak periods, OpsCenter
shows cluster writes at about 5k/sec and reads at about 1k/sec; during peak
periods, writes can reach about 22k/sec and reads about 10k/sec. This
particular node hangs constantly, regardless of whether it is a peak or
off-peak period or whether compaction is running.

- Is there anything that is unique about this node that makes it
different from the other nodes?
Our nodes are identical in terms of operating system (CentOS 6) and
cassandra configuration settings. Other than that, there are no other
resource-intensive applications running on the cassandra nodes.


- Is this a periodic occurrence OR a single occurrence - I am trying
to determine a pattern about when this shows up.
It *always* happens, and in fact it is happening right now.

- What is the load distribution on the ring (ie: is this node carrying
more load than the others)?
As of this moment,

- Address   DC       Rack  Status  State   Load        Owns    Token
-                                                              155962751505430129087380028406227096910
- node1     us-east  1e    Up      Normal  498.66 GB   8.33%   0
- node2     us-east  1e    Up      Normal  503.36 GB   8.33%   14178431955039102644307275309657008810
- node3     us-east  1e    Up      Normal  492.08 GB   8.33%   28356863910078205288614550619314017619
- node4     us-east  1e    Up      Normal  499.54 GB   8.33%   42535295865117307932921825928971026430
- node5     us-east  1e    Up      Normal  523.76 GB   8.33%   56713727820156407428984779325531226109
- node6     us-east  1e    Up      Normal  515.36 GB   8.33%   70892159775195513221536376548285044050
- node7     us-east  1e    Up      Normal  588.93 GB   8.33%   85070591730234615865843651857942052860
- node8     us-east  1e    Up      Normal  498.51 GB   8.33%   99249023685273718510150927167599061670
- node9     us-east  1e    Up      Normal  531.81 GB   8.33%   113427455640312814857969558651062452221
- node10    us-east  1e    Up      Normal  501.85 GB   8.33%   127605887595351923798765477786913079290
- node11    us-east  1e    Up      Normal  501.13 GB   8.33%   141784319550391026443072753096570088100
- node12    us-east  1e    Up      Normal  508.45 GB   8.33%   155962751505430129087380028406227096910

The affected node is node5. In this ring output it does carry the second-highest
load in the ring, but that is unlikely to be the cause.


 Jason

 On Sat, Mar 7, 2015 at 3:35 PM, Jan cne...@yahoo.com wrote:

 HI Jason;

 The single node showing the anomaly is a hint that the problem is
 probably local to a node (as you suspected).

- How many nodes do you have on the ring ?
- What is the activity when this occurs  - reads / writes/
compactions  ?
- Is there anything that is unique about this node that makes it
different from the other nodes ?
- Is this a periodic occurrence OR a single occurrence - I am trying
to determine a pattern about when this shows up.
- What is the load distribution on the ring (ie: is this node carrying
more load than the others)?


 The system.log should have more info about it.

 hope this helps
 Jan/





   On Friday, March 6, 2015 4:50 AM, Jason Wee peich...@gmail.com
 wrote:


 Well, StatusLogger.java entries started showing up in the cassandra
 system.log, and MessagingService.java also reported some stages (e.g. read,
 mutation) being dropped.

 It's strange that this only happens on this node; this type of message does not
 show up in the other nodes' log files at the same time...

 Jason

 On Thu, Mar 5, 2015 at 4:26 AM, Jan cne...@yahoo.com wrote:

 HI Jason;

 What's in the log files at the moment jstat shows 100%?
 What is the activity on the cluster and the node at that specific point in
 time (reads/ writes/ joins etc)?

 Jan/


   On Wednesday, March 4, 2015 5:59 AM, Jason Wee peich...@gmail.com
 wrote:


 Hi, our cassandra nodes are running java 7 update 72, and we ran jstat on one of
 the nodes and noticed some strange behaviour, as shown in the output below.
 Any idea why the eden space stays at the same value (e.g. 100% or 18.02%) for
 several seconds at a time? We suspect such stalling causes timeouts to our
 cluster.

 Any idea what happened, what went wrong, and what could cause this?


 $ jstat -gcutil 32276 1s

   0.00   5.78  91.21  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 

Re: cassandra node jvm stall intermittently

2015-03-07 Thread Ali Akhtar
What version are you running?

On Sat, Mar 7, 2015 at 2:14 PM, Jason Wee peich...@gmail.com wrote:


 Hi Jan, thanks for taking the time to prepare the questions; my answers are below.


- How many nodes do you have on the ring ?
12

- What is the activity when this occurs - reads / writes / compactions?
This cluster has a lot of writes and reads. During off-peak periods, OpsCenter
shows cluster writes at about 5k/sec and reads at about 1k/sec; during peak
periods, writes can reach about 22k/sec and reads about 10k/sec. This
particular node hangs constantly, regardless of whether it is a peak or
off-peak period or whether compaction is running.

- Is there anything that is unique about this node that makes it
different from the other nodes?
Our nodes are identical in terms of operating system (CentOS 6) and
cassandra configuration settings. Other than that, there are no other
resource-intensive applications running on the cassandra nodes.


- Is this a periodic occurrence OR a single occurrence - I am trying
to determine a pattern about when this shows up.
It *always* happens, and in fact it is happening right now.

- What is the load distribution on the ring (ie: is this node carrying
more load than the others)?
As of this moment,

- Address   DC       Rack  Status  State   Load        Owns    Token
-                                                              155962751505430129087380028406227096910
- node1     us-east  1e    Up      Normal  498.66 GB   8.33%   0
- node2     us-east  1e    Up      Normal  503.36 GB   8.33%   14178431955039102644307275309657008810
- node3     us-east  1e    Up      Normal  492.08 GB   8.33%   28356863910078205288614550619314017619
- node4     us-east  1e    Up      Normal  499.54 GB   8.33%   42535295865117307932921825928971026430
- node5     us-east  1e    Up      Normal  523.76 GB   8.33%   56713727820156407428984779325531226109
- node6     us-east  1e    Up      Normal  515.36 GB   8.33%   70892159775195513221536376548285044050
- node7     us-east  1e    Up      Normal  588.93 GB   8.33%   85070591730234615865843651857942052860
- node8     us-east  1e    Up      Normal  498.51 GB   8.33%   99249023685273718510150927167599061670
- node9     us-east  1e    Up      Normal  531.81 GB   8.33%   113427455640312814857969558651062452221
- node10    us-east  1e    Up      Normal  501.85 GB   8.33%   127605887595351923798765477786913079290
- node11    us-east  1e    Up      Normal  501.13 GB   8.33%   141784319550391026443072753096570088100
- node12    us-east  1e    Up      Normal  508.45 GB   8.33%   155962751505430129087380028406227096910

The affected node is node5. In this ring output it does carry the second-highest
load in the ring, but that is unlikely to be the cause.


 Jason

 On Sat, Mar 7, 2015 at 3:35 PM, Jan cne...@yahoo.com wrote:

 HI Jason;

 The single node showing the anomaly is a hint that the problem is
 probably local to a node (as you suspected).

- How many nodes do you have on the ring ?
- What is the activity when this occurs  - reads / writes/
compactions  ?
- Is there anything that is unique about this node that makes it
different from the other nodes ?
- Is this a periodic occurrence OR a single occurrence - I am trying
to determine a pattern about when this shows up.
- What is the load distribution on the ring (ie: is this node carrying
more load than the others)?


 The system.log should have more info about it.

 hope this helps
 Jan/





   On Friday, March 6, 2015 4:50 AM, Jason Wee peich...@gmail.com wrote:


 Well, StatusLogger.java entries started showing up in the cassandra
 system.log, and MessagingService.java also reported some stages (e.g. read,
 mutation) being dropped.

 It's strange that this only happens on this node; this type of message does not
 show up in the other nodes' log files at the same time...

 Jason

 On Thu, Mar 5, 2015 at 4:26 AM, Jan cne...@yahoo.com wrote:

 HI Jason;

 What's in the log files at the moment jstat shows 100%?
 What is the activity on the cluster and the node at that specific point in
 time (reads/ writes/ joins etc)?

 Jan/


   On Wednesday, March 4, 2015 5:59 AM, Jason Wee peich...@gmail.com
 wrote:


 Hi, our cassandra nodes are running java 7 update 72, and we ran jstat on one of
 the nodes and noticed some strange behaviour, as shown in the output below.
 Any idea why the eden space stays at the same value (e.g. 100% or 18.02%) for
 several seconds at a time? We suspect such stalling causes timeouts to our
 cluster.

 Any idea what happened, what went wrong, and what could cause this?


 $ jstat -gcutil 32276 1s

   0.00   5.78  91.21  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  

Re: cassandra node jvm stall intermittently

2015-03-06 Thread Jason Wee
Well, StatusLogger.java entries started showing up in the cassandra
system.log, and MessagingService.java also reported some stages (e.g. read,
mutation) being dropped.

It's strange that this only happens on this node; this type of message does not
show up in the other nodes' log files at the same time...

Jason
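
The dropped-stage evidence usually lives in system.log around the timestamps of the
stall; a hedged sketch of pulling it out is below (the log path is the default for
package installs and may differ on your nodes):

  $ grep -n 'StatusLogger' /var/log/cassandra/system.log | tail -n 20
  $ grep -n -i 'dropped' /var/log/cassandra/system.log | tail -n 20
  # then line the timestamps up against the seconds where jstat shows eden frozen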

On Thu, Mar 5, 2015 at 4:26 AM, Jan cne...@yahoo.com wrote:

 HI Jason;

 What's in the log files at the moment jstat shows 100%?
 What is the activity on the cluster and the node at that specific point in
 time (reads/ writes/ joins etc)?

 Jan/


   On Wednesday, March 4, 2015 5:59 AM, Jason Wee peich...@gmail.com
 wrote:


 Hi, our cassandra nodes are running java 7 update 72, and we ran jstat on one of
 the nodes and noticed some strange behaviour, as shown in the output below.
 Any idea why the eden space stays at the same value (e.g. 100% or 18.02%) for
 several seconds at a time? We suspect such stalling causes timeouts to our
 cluster.

 Any idea what happened, what went wrong, and what could cause this?


 $ jstat -gcutil 32276 1s

   0.00   5.78  91.21  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
   0.00   4.65  29.66  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  70.88  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  71.58  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  72.15  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  72.33  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  72.73  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  73.20  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  73.71  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  73.84  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  73.91  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  74.18  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
   0.00   5.43  12.64  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
   0.00   5.43  18.02  71.09  60.07   2661   73.534 

Re: cassandra node jvm stall intermittently

2015-03-06 Thread Jan
HI Jason;

The single node showing the anomaly is a hint that the problem is probably
local to a node (as you suspected).

   - How many nodes do you have on the ring?

   - What is the activity when this occurs - reads / writes / compactions?

   - Is there anything that is unique about this node that makes it different
   from the other nodes?

   - Is this a periodic occurrence OR a single occurrence - I am trying to
   determine a pattern about when this shows up.

   - What is the load distribution on the ring (ie: is this node carrying more
   load than the others)?

The system.log should have more info about it.

hope this helps
Jan/


 

 On Friday, March 6, 2015 4:50 AM, Jason Wee peich...@gmail.com wrote:
   

 Well, StatusLogger.java entries started showing up in the cassandra system.log,
and MessagingService.java also reported some stages (e.g. read, mutation) being dropped.

It's strange that this only happens on this node; this type of message does not
show up in the other nodes' log files at the same time...

Jason
On Thu, Mar 5, 2015 at 4:26 AM, Jan cne...@yahoo.com wrote:

HI Jason;

What's in the log files at the moment jstat shows 100%? What is the activity on
the cluster and the node at that specific point in time (reads/ writes/ joins etc)?

Jan/

 On Wednesday, March 4, 2015 5:59 AM, Jason Wee peich...@gmail.com wrote:
   

 Hi, our cassandra nodes are running java 7 update 72, and we ran jstat on one of the
nodes and noticed some strange behaviour, as shown in the output below. Any idea
why the eden space stays at the same value (e.g. 100% or 18.02%) for several
seconds at a time? We suspect such stalling causes timeouts to our cluster.

Any idea what happened, what went wrong, and what could cause this?

$ jstat -gcutil 32276 1s
  0.00   5.78  91.21  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   4.65  29.66  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  70.88  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  71.58  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.15  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.33  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.73  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.20  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.71  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.84  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.91  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.18  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   5.43  12.64  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  

Re: cassandra node jvm stall intermittently

2015-03-04 Thread Jan
HI Jason;

What's in the log files at the moment jstat shows 100%? What is the activity on
the cluster and the node at that specific point in time (reads/ writes/ joins etc)?

Jan/

 On Wednesday, March 4, 2015 5:59 AM, Jason Wee peich...@gmail.com wrote:
   

 Hi, our cassandra nodes are running java 7 update 72, and we ran jstat on one of the
nodes and noticed some strange behaviour, as shown in the output below. Any idea
why the eden space stays at the same value (e.g. 100% or 18.02%) for several
seconds at a time? We suspect such stalling causes timeouts to our cluster.

Any idea what happened, what went wrong, and what could cause this?

$ jstat -gcutil 32276 1s
  0.00   5.78  91.21  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   4.65  29.66  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  70.88  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  71.58  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.15  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.33  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.73  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.20  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.71  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.84  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.91  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.18  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   5.43  12.64  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  69.24  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  78.05  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  78.97  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  79.07  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  79.18  71.09  60.07   2661   73.534     4    0.056   

cassandra node jvm stall intermittently

2015-03-04 Thread Jason Wee
Hi, our cassandra nodes are running java 7 update 72, and we ran jstat on one of
the nodes and noticed some strange behaviour, as shown in the output below.
Any idea why the eden space stays at the same value (e.g. 100% or 18.02%) for
several seconds at a time? We suspect such stalling causes timeouts to our
cluster.

Any idea what happened, what went wrong, and what could cause this?


$ jstat -gcutil 32276 1s

  0.00   5.78  91.21  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   5.78 100.00  70.94  60.07   2657   73.437     4    0.056   73.493
  0.00   4.65  29.66  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  70.88  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  71.58  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.15  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.33  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  72.73  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.20  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.71  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.84  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  73.91  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.18  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   4.65  74.29  71.00  60.07   2659   73.488     4    0.056   73.544
  0.00   5.43  12.64  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  18.02  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  69.24  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  78.05  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  78.97  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  79.07  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  79.18  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  80.09  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  80.36  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  80.51  71.09  60.07   2661   73.534     4    0.056   73.590
  0.00   5.43  80.70  71.09
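
For anyone reading the numbers above: the jstat -gcutil columns are S0, S1, E (eden),
O (old) and P (permanent) utilisation in percent, followed by YGC/YGCT (young GC count
and time), FGC/FGCT (full GC count and time) and GCT (total GC time). In the frozen
stretches, eden sits at a fixed percentage for many seconds while none of the GC
counters advance, which suggests the application threads are not allocating at all
during those seconds (a stall) rather than a normal collection cycle. One way to see
where the time goes is to enable GC and stop-the-world logging through cassandra-env.sh;
a minimal sketch using standard HotSpot flags for Java 7 is below (the gc.log path is an
assumption, adjust to your install):

  # appended to JVM_OPTS in cassandra-env.sh (sketch; paths are an assumption)
  JVM_OPTS="$JVM_OPTS -verbose:gc"
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
  JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"

The PrintGCApplicationStoppedTime entries in the resulting gc.log record how long
application threads were actually stopped, which can be lined up against the frozen
jstat samples.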