[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-11-05 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14991380#comment-14991380
 ] 

Study Hsueh commented on CASSANDRA-10534:
-

This bug also happened in 2.1.10.

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0; 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems like this file is not being fsynced, and that 
> can potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the regression was 
> caused by commit 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344), 
> which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
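> 
> For illustration, a minimal sketch (not the Cassandra implementation) of forcing a 
> FileChannel to disk before closing it, analogous to the removed call; the file name 
> and payload below are illustrative only:
> {noformat}
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import java.nio.channels.FileChannel;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> import java.nio.file.StandardOpenOption;
> 
> public class SyncOnClose {
>     public static void main(String[] args) throws IOException {
>         Path path = Paths.get("CompressionInfo.db.example");  // illustrative name
>         try (FileChannel channel = FileChannel.open(path,
>                 StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
>             channel.write(ByteBuffer.wrap("metadata".getBytes(StandardCharsets.UTF_8)));
>             // Force data (and metadata) to the storage device before close();
>             // without this, a hard reboot can leave a zero-length file even
>             // though close() returned successfully.
>             channel.force(true);
>         }
>     }
> }
> {noformat}
> 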
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - Opening /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368 (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - Exiting forcefully due to file system exception on startup, disk failure policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:131) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) ~[apache-cassandra-2.1.9.jar:2.1.9]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_80]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_80]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) ~[na:1.7.0_80]
> at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:106) ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l /var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
> total 60
> -rw-r--r-- 1 cassandra cassandra     0 Oct 15 09:31 system-sstable_activity-ka-1-CompressionInfo.db
> -rw-r--r-- 1 cassandra cassandra  9740 Oct 15 09:31 system-sstable_activity-ka-1-Data.db
> -rw-r--r-- 1 cassandra cassandra 0 

[jira] [Comment Edited] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-30 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592119#comment-14592119
 ] 

Study Hsueh edited comment on CASSANDRA-9607 at 7/1/15 2:23 AM:


2015-06-15 13:40:41,200 upgrade to 2.1.6
2015-06-17 18:32:40,740 whole cluster went down




was (Author: study):
I had uploaded heap dump when OOM occurred: 
http://54.199.247.66/java_1434380208.hprof

2015-06-15 13:40:41,200 upgrade to 2.1.6
2015-06-17 18:32:40,740 whole cluster went down



 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.x, 2.2.x

 Attachments: GC_state.png, cassandra.yaml, client_blocked_thread.png, 
 cpu_profile.png, dump.tdump, load.png, log.zip, schema.zip, vm_monitor.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-24 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600666#comment-14600666
 ] 

Study Hsueh edited comment on CASSANDRA-9607 at 6/25/15 4:37 AM:
-

My colleague has repeated the query on 2.1.3, and the cluster went down again, 
so the root cause appears to be the query.


was (Author: study):
My colleague have repeated the query in 2.1.3 again, and the cluster went down 
again. So the root cause should be the query.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.x, 2.2.x

 Attachments: GC_state.png, cassandra.yaml, client_blocked_thread.png, 
 cpu_profile.png, dump.tdump, load.png, log.zip, schema.zip, vm_monitor.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-24 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600657#comment-14600657
 ] 

Study Hsueh commented on CASSANDRA-9607:


My colleague said he performed the following queries before the whole cluster 
went down.
1. SELECT * FROM ginger.supply_ad_log - timeout
2. SELECT rmaxSpaceId FROM Supply_Ad_Log WHERE rmaxSpaceId = ? AND salesChannel 
= ? AND hourstamp = ? - the query hung and the cluster did not respond

The client tool was gocql and he did not specify `pagesize` (default: 0).
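For illustration, explicitly setting a page size bounds how many rows the server returns per fetch on 
a full-table scan. The reporter used gocql; the sketch below uses the DataStax Java driver's analogous 
setting (setFetchSize) instead, and the contact point and page size are illustrative only:
{noformat}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class PagedScan {
    public static void main(String[] args) {
        // Contact point is illustrative; keyspace/table are taken from the comment above.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build()) {
            Session session = cluster.connect("ginger");
            SimpleStatement stmt = new SimpleStatement("SELECT * FROM supply_ad_log");
            stmt.setFetchSize(1000);  // ask the server for at most ~1000 rows per page
            ResultSet rs = session.execute(stmt);
            for (Row row : rs) {
                // Rows are consumed page by page; further pages are fetched lazily.
            }
        }
    }
}
{noformat}
If gocql's page size is left at 0, as the comment notes, the server may try to return the entire 
result set in a single response, which would be consistent with the observed timeouts.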

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.x, 2.2.x

 Attachments: GC_state.png, cassandra.yaml, client_blocked_thread.png, 
 cpu_profile.png, dump.tdump, load.png, log.zip, schema.zip, vm_monitor.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-24 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600666#comment-14600666
 ] 

Study Hsueh commented on CASSANDRA-9607:


My colleague has repeated the query on 2.1.3, and the cluster went down again, 
so the root cause appears to be the query.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.x, 2.2.x

 Attachments: GC_state.png, cassandra.yaml, client_blocked_thread.png, 
 cpu_profile.png, dump.tdump, load.png, log.zip, schema.zip, vm_monitor.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593719#comment-14593719
 ] 

Study Hsueh commented on CASSANDRA-9607:


We do not use static columns. Do you need my schema for analysis?

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593753#comment-14593753
 ] 

Study Hsueh commented on CASSANDRA-9607:


Ok, I have uploaded my schema.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip, schema.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Study Hsueh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Study Hsueh updated CASSANDRA-9607:
---
Attachment: schema.zip

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip, schema.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-18 Thread Study Hsueh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Study Hsueh updated CASSANDRA-9607:
---
Environment: 
OS: 
CentOS 6 * 4
Ubuntu 14.04 * 2

JDK: Oracle JDK 7

VM: Azure VM Standard A3 * 6
RAM: 7 GB
Cores: 4

  was:
OS: 
CentOS 6 * 4
Ubuntu 14.04 * 2

JDK: Oracle JDK 7


 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Priority: Critical
 Attachments: load.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-18 Thread Study Hsueh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Study Hsueh updated CASSANDRA-9607:
---
Attachment: cassandra.yaml

cassandra configuration

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-18 Thread Study Hsueh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Study Hsueh updated CASSANDRA-9607:
---
Attachment: log.zip

cassandra log
2015-06-15 13:40:41,200 upgrade to 2.1.6
2015-06-17 18:32:40,740 whole cluster went down

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-18 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592119#comment-14592119
 ] 

Study Hsueh edited comment on CASSANDRA-9607 at 6/18/15 5:41 PM:
-

cassandra configuration
heap dump when OOM occurred: http://54.199.247.66/java_1434380208.hprof


was (Author: study):
cassandra configuration

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-18 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592119#comment-14592119
 ] 

Study Hsueh edited comment on CASSANDRA-9607 at 6/18/15 5:43 PM:
-

I have uploaded a heap dump from when the OOM occurred: 
http://54.199.247.66/java_1434380208.hprof

2015-06-15 13:40:41,200 upgrade to 2.1.6
2015-06-17 18:32:40,740 whole cluster went down




was (Author: study):
cassandra configuration
heap dump when oom: http://54.199.247.66/java_1434380208.hprof

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-18 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592306#comment-14592306
 ] 

Study Hsueh commented on CASSANDRA-9607:


Thanks, but I think we have no plan to try 2.1.6 and 2.1.7 in the near future.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-18 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592117#comment-14592117
 ] 

Study Hsueh edited comment on CASSANDRA-9607 at 6/18/15 5:41 PM:
-

cassandra log
2015-06-15 13:40:41,200 upgrade to 2.1.6
2015-06-17 18:32:40,740 whole cluster went down


was (Author: study):
cassandra log
2015-06-15 13:40:41,200 upgrade to 2.1.6
2015-06-17 18:32:40,740 whole cluster went down

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-18 Thread Study Hsueh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Study Hsueh updated CASSANDRA-9607:
---
Comment: was deleted

(was: cassandra log
2015-06-15 13:40:41,200 upgrade to 2.1.6
2015-06-17 18:32:40,740 whole cluster went down)

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-17 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589589#comment-14589589
 ] 

Study Hsueh commented on CASSANDRA-9607:


This issue caused all of the nodes to go down.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7
Reporter: Study Hsueh
Priority: Critical
 Attachments: load.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-17 Thread Study Hsueh (JIRA)
Study Hsueh created CASSANDRA-9607:
--

 Summary: Get high load after upgrading from 2.1.3 to cassandra 
2.1.6
 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
CentOS 6 * 4
Ubuntu 14.04 * 2

JDK: Oracle JDK 7
Reporter: Study Hsueh
Priority: Critical
 Attachments: load.png

After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
cassandra cluster grows from 0.x~1.x to 3.x~6.x. 

What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-17 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589589#comment-14589589
 ] 

Study Hsueh edited comment on CASSANDRA-9607 at 6/17/15 10:27 AM:
--

This issue caused all of the nodes to go down. I will attach the log later, after I downgrade 
to 2.1.3...


was (Author: study):
This issues cause all of nodes downs.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7
Reporter: Study Hsueh
Priority: Critical
 Attachments: load.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-6542) nodetool removenode hangs

2015-04-20 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502397#comment-14502397
 ] 

Study Hsueh edited comment on CASSANDRA-6542 at 4/20/15 2:03 PM:
-

Also observed in 2.1.3 on CentOS 6.6

The node status logs:

Host: 192.168.1.13

$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.95 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.68 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  192.168.1.13  25.72 GB   256     ?       595ea5ef-cecf-44c7-aa7f-424648791751  rack1
DN  192.168.1.27  ?          256     ?       2ca22f3d-f8d8-4bde-8cdc-de649056cf9c  rack1
UN  192.168.1.26  20.71 GB   256     ?       3c880801-8499-4b16-bce4-2bfbc79bed43  rack1

$ nodetool removenode force 2ca22f3d-f8d8-4bde-8cdc-de649056cf9c  # nodetool removenode hangs

$ nodetool removenode status
RemovalStatus: Removing token (-9132940871846770123). Waiting for replication confirmation from [/192.168.1.29,/192.168.1.28,/192.168.1.26].



Host: 192.168.1.28

$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.96 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.69 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  192.168.1.13  30.43 GB   256     ?       595ea5ef-cecf-44c7-aa7f-424648791751  rack1
UN  192.168.1.26  20.72 GB   256     ?       3c880801-8499-4b16-bce4-2bfbc79bed43  rack1

$ nodetool removenode status
RemovalStatus: No token removals in process.


Host: 192.168.1.29

$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.96 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.69 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  192.168.1.13  30.43 GB   256     ?       595ea5ef-cecf-44c7-aa7f-424648791751  rack1
UN  192.168.1.26  20.72 GB   256     ?       3c880801-8499-4b16-bce4-2bfbc79bed43  rack1

$ nodetool removenode status
RemovalStatus: No token removals in process.



Host: 192.168.1.26

nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.96 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.69 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  192.168.1.13  30.43 GB   256     ?       595ea5ef-cecf-44c7-aa7f-424648791751  rack1
UN  192.168.1.26  20.72 GB   256     ?       3c880801-8499-4b16-bce4-2bfbc79bed43  rack1

$ nodetool removenode status
RemovalStatus: No token removals in process.





was (Author: study):
Also observed in 2.1.13 on CentOS 6.6

The nodes status Log.

Host: 192.168.1.13

$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.95 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.68 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  

[jira] [Commented] (CASSANDRA-6542) nodetool removenode hangs

2015-04-20 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502397#comment-14502397
 ] 

Study Hsueh commented on CASSANDRA-6542:


Also observed in 2.1.13 on CentOS 6.6

The node status logs:

Host: 192.168.1.13

$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.95 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.68 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  192.168.1.13  25.72 GB   256     ?       595ea5ef-cecf-44c7-aa7f-424648791751  rack1
DN  192.168.1.27  ?          256     ?       2ca22f3d-f8d8-4bde-8cdc-de649056cf9c  rack1
UN  192.168.1.26  20.71 GB   256     ?       3c880801-8499-4b16-bce4-2bfbc79bed43  rack1

$ nodetool removenode force 2ca22f3d-f8d8-4bde-8cdc-de649056cf9c  # nodetool removenode hangs

$ nodetool removenode status
RemovalStatus: Removing token (-9132940871846770123). Waiting for replication confirmation from [/192.168.1.29,/192.168.1.28,/192.168.1.26].



Host: 192.168.1.28

$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.96 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.69 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  192.168.1.13  30.43 GB   256     ?       595ea5ef-cecf-44c7-aa7f-424648791751  rack1
UN  192.168.1.26  20.72 GB   256     ?       3c880801-8499-4b16-bce4-2bfbc79bed43  rack1

$ nodetool removenode status
RemovalStatus: No token removals in process.


Host: 192.168.1.29

$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.96 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.69 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  192.168.1.13  30.43 GB   256     ?       595ea5ef-cecf-44c7-aa7f-424648791751  rack1
UN  192.168.1.26  20.72 GB   256     ?       3c880801-8499-4b16-bce4-2bfbc79bed43  rack1

$ nodetool removenode status
RemovalStatus: No token removals in process.



Host: 192.168.1.26

nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns    Host ID                               Rack
UN  192.168.1.29  22.96 GB   256     ?       506910ae-a07b-4f74-8feb-a3f2b141dea5  rack1
UN  192.168.1.28  19.69 GB   256     ?       ed79b6ee-cae0-48f9-a420-338058e1f2c5  rack1
UN  192.168.1.13  30.43 GB   256     ?       595ea5ef-cecf-44c7-aa7f-424648791751  rack1
UN  192.168.1.26  20.72 GB   256     ?       3c880801-8499-4b16-bce4-2bfbc79bed43  rack1

$ nodetool removenode status
RemovalStatus: No token removals in process.




 nodetool removenode hangs
 -

 Key: CASSANDRA-6542
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6542
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12, 1.2.11 DSE
Reporter: Eric Lubow
Assignee: Yuki Morishita

 Running *nodetool removenode $host-id* doesn't actually remove the node from 
 the ring.  I've let it run anywhere from 5 minutes to 3 days, and there are no 
 messages in the log about it hanging or failing; the command just sits there 
 running.  So the regular response has been to run *nodetool removenode