[jira] [Commented] (CASSANDRA-13365) Nodes entering GC loop, does not recover

2018-11-30 Thread Sergey Kirillov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704616#comment-16704616
 ] 

Sergey Kirillov commented on CASSANDRA-13365:
-

[~sickcate] do you have any materialized views in your database?
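
If it helps to check: on Cassandra 3.x every materialized view is listed in the `system_schema.views` table, so a quick cqlsh query shows whether any are defined (column names per the 3.x schema tables):

```sql
-- List every materialized view and its base table (Cassandra 3.x).
SELECT keyspace_name, view_name, base_table_name
FROM system_schema.views;
```

An empty result set means the cluster has no materialized views defined.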

> Nodes entering GC loop, does not recover
> 
>
> Key: CASSANDRA-13365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13365
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 34-node cluster over 4 DCs
> Linux CentOS 7.2 x86
> Mix of 64GB/128GB RAM / node
> Mix of 32/40 hardware threads / node, Xeon ~2.4Ghz
> High read volume, low write volume, occasional sstable bulk loading
>Reporter: Mina Naguib
>Priority: Major
>
> Over the last week we've been observing two related problems affecting our 
> Cassandra cluster.
> Problem 1: one to a few nodes per DC enter a GC loop and do not recover.
> Checking the heap usage stats, there's a sudden jump of 1-3GB. Some nodes 
> recover, but some don't and log this:
> {noformat}
> 2017-03-21T11:23:02.957-0400: 54099.519: [Full GC (Allocation Failure)  
> 13G->11G(14G), 29.4127307 secs]
> 2017-03-21T11:23:45.270-0400: 54141.833: [Full GC (Allocation Failure)  
> 13G->12G(14G), 28.1561881 secs]
> 2017-03-21T11:24:20.307-0400: 54176.869: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.7019501 secs]
> 2017-03-21T11:24:50.528-0400: 54207.090: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1372267 secs]
> 2017-03-21T11:25:19.190-0400: 54235.752: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.0703975 secs]
> 2017-03-21T11:25:46.711-0400: 54263.273: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3187768 secs]
> 2017-03-21T11:26:15.419-0400: 54291.981: [Full GC (Allocation Failure)  
> 13G->13G(14G), 26.9493405 secs]
> 2017-03-21T11:26:43.399-0400: 54319.961: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5222085 secs]
> 2017-03-21T11:27:11.383-0400: 54347.945: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1769581 secs]
> 2017-03-21T11:27:40.174-0400: 54376.737: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4639031 secs]
> 2017-03-21T11:28:08.946-0400: 54405.508: [Full GC (Allocation Failure)  
> 13G->13G(14G), 30.3480523 secs]
> 2017-03-21T11:28:40.117-0400: 54436.680: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8220513 secs]
> 2017-03-21T11:29:08.459-0400: 54465.022: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4691271 secs]
> 2017-03-21T11:29:37.114-0400: 54493.676: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.0275733 secs]
> 2017-03-21T11:30:04.635-0400: 54521.198: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1902627 secs]
> 2017-03-21T11:30:32.114-0400: 54548.676: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8872850 secs]
> 2017-03-21T11:31:01.430-0400: 54577.993: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1609706 secs]
> 2017-03-21T11:31:29.024-0400: 54605.587: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3635138 secs]
> 2017-03-21T11:31:57.303-0400: 54633.865: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4143510 secs]
> 2017-03-21T11:32:25.110-0400: 54661.672: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8595986 secs]
> 2017-03-21T11:32:53.922-0400: 54690.485: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5242543 secs]
> 2017-03-21T11:33:21.867-0400: 54718.429: [Full GC (Allocation Failure)  
> 13G->13G(14G), 30.8930130 secs]
> 2017-03-21T11:33:53.712-0400: 54750.275: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.6523013 secs]
> 2017-03-21T11:34:21.760-0400: 54778.322: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3030198 secs]
> 2017-03-21T11:34:50.073-0400: 54806.635: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1594154 secs]
> 2017-03-21T11:35:17.743-0400: 54834.306: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3766949 secs]
> 2017-03-21T11:35:45.797-0400: 54862.360: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5756770 secs]
> 2017-03-21T11:36:13.816-0400: 54890.378: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5541813 secs]
> 2017-03-21T11:36:41.926-0400: 54918.488: [Full GC (Allocation Failure)  
> 13G->13G(14G), 33.7510103 secs]
> 2017-03-21T11:37:16.132-0400: 54952.695: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4856611 secs]
> 2017-03-21T11:37:44.454-0400: 54981.017: [Full GC (Allocation Failure)  
> 13G->13G(14G), 28.1269335 secs]
> 2017-03-21T11:38:12.774-0400: 55009.337: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.7830448 secs]
> 2017-03-21T11:38:40.840-0400: 55037.402: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3527326 secs]
> 2017-03-21T11:39:08.610-0400: 55065.173: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5828941 secs]
> 2017-03-21T11:39:36.833-0400: 55093.396: [Full GC (Allocation Failure)  
> 13G->13G(14G), 
> {noformat}
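
A quick way to quantify a loop like the one quoted above is to parse the pause times out of gc.log and total them; in this trace the node spends nearly all wall-clock time in Full GC. A minimal sketch, assuming the single-line `-XX:+PrintGCDateStamps` log format shown above:

```python
import re

# Matches entries like:
# 2017-03-21T11:23:02.957-0400: 54099.519: [Full GC (Allocation Failure)  13G->11G(14G), 29.4127307 secs]
PAUSE_RE = re.compile(r"\[Full GC .*?(\d+)G->(\d+)G\((\d+)G\), ([\d.]+) secs\]")

def full_gc_summary(log_text):
    """Return (collection count, total pause seconds, GB reclaimed by the last GC)."""
    count, total_pause, last_reclaimed = 0, 0.0, 0
    for m in PAUSE_RE.finditer(log_text):
        before, after = int(m.group(1)), int(m.group(2))
        count += 1
        total_pause += float(m.group(4))
        last_reclaimed = before - after
    return count, total_pause, last_reclaimed

# Two lines taken from the trace above.
sample = (
    "2017-03-21T11:23:02.957-0400: 54099.519: "
    "[Full GC (Allocation Failure)  13G->11G(14G), 29.4127307 secs]\n"
    "2017-03-21T11:24:20.307-0400: 54176.869: "
    "[Full GC (Allocation Failure)  13G->13G(14G), 27.7019501 secs]\n"
)
count, total_pause, last_reclaimed = full_gc_summary(sample)
print(count, round(total_pause, 1), last_reclaimed)  # 2 57.1 0
```

Back-to-back collections that reclaim 0 GB on a 14G heap, each pausing the node for close to 30 seconds, is exactly the "GC loop, does not recover" pattern reported here.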

[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-12 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435158#comment-16435158
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

[~pauloricardomg] with mutation_repair_rows_per_batch=1000 I got an OOM (heap size 
is 31G); with mutation_repair_rows_per_batch=500 everything was very similar to 
the default mutation_repair_rows_per_batch=100.

It seems the only way to fix this for me is to remove the MVs.

> OutOfMemoryError when bootstrapping with less than 100GB RAM
> 
>
> Key: CASSANDRA-14239
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14239
> Project: Cassandra
>  Issue Type: Bug
> Environment: Details of the bootstrapping Node
>  * ProLiant BL460c G7
>  * 56GB RAM
>  * 2x 146GB 10K HDD (One dedicated for Commitlog, one for Data, Hints and 
> saved_caches)
>  * CentOS 7.4 on SD-Card
>  * /tmp and /var/log on tmpfs
>  * Oracle JDK 1.8.0_151
>  * Cassandra 3.11.1
> Cluster
>  * 10 existing Nodes (Up and Normal)
>Reporter: Jürgen Albersdorfer
>Priority: Major
>  Labels: materializedviews
> Attachments: Objects-by-class.csv, 
> Objects-with-biggest-retained-size.csv, Selection_420.png, Selection_421.png, 
> cassandra-env.sh, cassandra.yaml, dstat.png, gc.log.0.201804111524.zip, 
> gc.log.0.current.zip, gc.log.20180441.zip, jvm.options, jvm_opts.txt, 
> stack-traces.txt
>
>
> Hi, I face an issue when bootstrapping a node having less than 100GB RAM on 
> our 10-node C* 3.11.1 cluster.
> During bootstrap, when I watch cassandra.log, I observe growth in the JVM 
> Heap Old Gen which no longer gets significantly freed up.
> I know that the JVM collects the Old Gen only when really needed. I can see 
> collections, but there is always a remainder which seems to grow forever 
> without ever getting freed.
> After the node has successfully joined the cluster, I can remove the extra RAM 
> I gave it for bootstrapping without any further effect.
> It feels like Cassandra never forgets a single byte streamed over the network 
> during bootstrapping, which would be a memory leak and a major problem, too.
> I was able to produce a HeapDumpOnOutOfMemoryError from a 56GB node (40 GB 
> assigned JVM heap). YourKit Profiler shows huge amounts of memory allocated 
> for org.apache.cassandra.db.Memtable (22 GB), 
> org.apache.cassandra.db.rows.BufferCell (19 GB), and java.nio.HeapByteBuffer 
> (11 GB).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434042#comment-16434042
 ] 

Sergey Kirillov edited comment on CASSANDRA-14239 at 4/11/18 3:11 PM:
--

OK. Now it is stuck at 5134 pending MemtableFlushWriter jobs, and the number 
is not decreasing anymore.

*UPD* While everything was blocked, the node had high CPU usage and was reading 
a lot from disk (which seems related to CASSANDRA-13065).
After a while the number of pending memtable jobs decreased and mutations 
unblocked, but within a minute the node died again with an OOM.


was (Author: rushman):
OK. Now it is stuck at 5134 pending MemtableFlushWriter jobs, and the number 
is not decreasing anymore.







[jira] [Updated] (CASSANDRA-13065) Skip building views during base table streams on range movements

2018-04-11 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-13065:

Attachment: Selection_423.png

> Skip building views during base table streams on range movements
> 
>
> Key: CASSANDRA-13065
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13065
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Roth
>Assignee: Benjamin Roth
>Priority: Critical
> Fix For: 4.0
>
>
> Booting or decommissioning nodes with MVs is unbearably slow, as all streams go 
> through the regular write path. This causes read-before-writes for every 
> mutation, and during bootstrap it causes them to be sent to the batchlog.
> This makes it virtually impossible to boot a new node in an acceptable amount 
> of time.
> Using the regular streaming behaviour for consistent range movements works 
> much better in this case and does not break the MV local consistency contract.
> Already tested on our own cluster.
> The bootstrap case is super easy to handle; the decommission case requires 
> CASSANDRA-13064.






[jira] [Updated] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-14239:

Attachment: dstat.png







[jira] [Updated] (CASSANDRA-13065) Skip building views during base table streams on range movements

2018-04-11 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-13065:

Attachment: (was: Selection_423.png)







[jira] [Updated] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-14239:

Attachment: Selection_421.png







[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434042#comment-16434042
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

OK. Now it is stuck at 5134 pending MemtableFlushWriter jobs, and the number 
is not decreasing anymore.







[jira] [Updated] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-14239:

Attachment: Selection_420.png







[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434030#comment-16434030
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

[~pauloricardomg] I've done a quick-and-dirty backport of CASSANDRA-13299 to 
3.10 (which I'm using right now). So far there is no OOM, but the node is still 
getting stuck in MutationStage. The number of pending MemtableFlushWriter jobs 
is slowly decreasing; I'll wait until it drops to zero, maybe that will unblock 
mutations.
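
For reference, the pending counts discussed here come from `nodetool tpstats`. A small helper to pull a pool's Pending column out of that text output; the sample rows and column layout are assumptions modeled on 3.x tpstats, with the 5134 figure taken from this comment:

```python
def pending_by_pool(tpstats_output):
    """Map thread-pool name -> pending task count from `nodetool tpstats` text.

    Assumes the 3.x layout: Pool Name, Active, Pending, Completed, ...
    """
    pending = {}
    for line in tpstats_output.splitlines():
        parts = line.split()
        # Pool rows look like: MemtableFlushWriter  2  5134  91360  0  0
        if len(parts) >= 3 and parts[1].isdigit() and parts[2].isdigit():
            pending[parts[0]] = int(parts[2])
    return pending

# Hypothetical tpstats excerpt shaped like the node state described above.
sample = """\
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                    32    180212      553193430         0                 0
MemtableFlushWriter               2      5134          91360         0                 0
"""
print(pending_by_pool(sample)["MemtableFlushWriter"])  # 5134
```

Polling this in a loop is a cheap way to see whether the flush backlog is actually draining or stuck, as described in this thread.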







[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433921#comment-16433921
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

[~jalbersdorfer] so I was right, it is related to MV updates. It is really 
helpful to know this.

Removing MVs is not easy in my case, but now at least I know that it is worth 
the effort.







[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433688#comment-16433688
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

[~jalbersdorfer] I'm trying to do it as well.







[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433661#comment-16433661
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

[~jalbersdorfer] do you use materialized views in your DB? I was able to 
localize this behavior to one table that has a few materialized views defined, 
so I suspect this may be related to MV updates.

> OutOfMemoryError when bootstrapping with less than 100GB RAM
> 
>
> Key: CASSANDRA-14239
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14239
> Project: Cassandra
>  Issue Type: Bug
> Environment: Details of the bootstrapping Node
>  * ProLiant BL460c G7
>  * 56GB RAM
>  * 2x 146GB 10K HDD (One dedicated for Commitlog, one for Data, Hints and 
> saved_caches)
>  * CentOS 7.4 on SD-Card
>  * /tmp and /var/log on tmpfs
>  * Oracle JDK 1.8.0_151
>  * Cassandra 3.11.1
> Cluster
>  * 10 existing Nodes (Up and Normal)
>Reporter: Jürgen Albersdorfer
>Priority: Major
> Attachments: Objects-by-class.csv, 
> Objects-with-biggest-retained-size.csv, cassandra-env.sh, cassandra.yaml, 
> gc.log.0.current.zip, gc.log.20180441.zip, jvm.options, jvm_opts.txt, 
> stack-traces.txt
>






[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-11 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433655#comment-16433655
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

[~jalbersdorfer] I was trying to debug it, and it looks like a deadlock in the 
memtable flush path. This leads to memtables which are never released to the 
pool, and eventually you get an OOM.

However, I still don't understand why those flush/mutation threads freeze or 
how to resolve this. It would be nice if someone from the devs could take a 
look.
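One quick way to gather evidence for this is to diff two {{nodetool tpstats}} samples taken a few seconds apart: a stage whose Pending grows while Completed stays flat is effectively stuck. A rough sketch of that check (the parser and the sample output below are illustrative assumptions, not real output from this cluster):

```python
# Sketch: detect "stuck" thread-pool stages by diffing two `nodetool tpstats`
# samples taken a few seconds apart. A stage looks stuck when threads stay
# Active, Pending keeps growing, and Completed does not move. The sample text
# below is illustrative, not real output from this cluster.

def parse_tpstats(text):
    """Return {stage: (active, pending, completed)} from tpstats-style lines."""
    stats = {}
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) >= 4 and parts[1].isdigit():
            stats[parts[0]] = tuple(int(x) for x in parts[1:4])
    return stats

def stuck_stages(sample_t0, sample_t1):
    """Stages whose Pending grew while Completed stayed flat between samples."""
    before, after = parse_tpstats(sample_t0), parse_tpstats(sample_t1)
    return [
        stage
        for stage, (active0, pending0, completed0) in before.items()
        if stage in after
        and after[stage][0] > 0              # threads still busy
        and after[stage][1] > pending0       # backlog growing
        and after[stage][2] == completed0    # no progress made
    ]

SAMPLE_T0 = """
MutationStage 64 1200 981723
ReadStage 3 0 4456221
"""
SAMPLE_T1 = """
MutationStage 64 5800 981723
ReadStage 2 0 4461107
"""

print(stuck_stages(SAMPLE_T0, SAMPLE_T1))  # → ['MutationStage']
```

Real tpstats output has extra columns (Blocked, All time blocked) and header lines; the parser above skips anything that does not look like a numeric stats row.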







[jira] [Comment Edited] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-03-09 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392951#comment-16392951
 ] 

Sergey Kirillov edited comment on CASSANDRA-14239 at 3/9/18 3:08 PM:
-

It fails during bootstrap, and it also fails if I skip bootstrap and run a 
repair instead.

Memtable sizes are set to 
{code:yaml}
memtable_heap_space_in_mb: 1048
memtable_offheap_space_in_mb: 1048
{code}
and 
{code:yaml}
memtable_flush_writers: 16
{code}

When I analyze the heap dump, I see that 95% of the memory is used by Memtable 
instances. There are 24k instances of the Memtable class, and their retained 
heap is 26G, but they have no GC root. 

This means they should be garbage collected, so I don't understand why I'm 
getting an OOM instead.


was (Author: rushman):
It fails during bootstrap, and it also fails if I skip bootstrap and run a 
repair instead.

Memtable sizes are set to 
{code:yaml}
memtable_heap_space_in_mb: 1048
memtable_offheap_space_in_mb: 1048
{code}
and 
{code:yaml}
memtable_flush_writers: 16
{code}

When I analyze the heap dump, I see that 95% of the memory is used by Memtable 
instances. There are 24k instances of the Memtable class, and their retained 
heap is 26G.







[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-03-09 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392951#comment-16392951
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

It fails during bootstrap, and it also fails if I skip bootstrap and run a 
repair instead.

Memtable sizes are set to 
{code:yaml}
memtable_heap_space_in_mb: 1048
memtable_offheap_space_in_mb: 1048
{code}
and 
{code:yaml}
memtable_flush_writers: 16
{code}

When I analyze the heap dump, I see that 95% of the memory is used by Memtable 
instances. There are 24k instances of the Memtable class, and their retained 
heap is 26G.
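A cheap cross-check on the profiler's retained-heap numbers is to compare {{jmap -histo <pid>}} (all objects) against {{jmap -histo:live <pid>}} (reachable objects only): a large gap for a class suggests most of its footprint is unreachable garbage that a full GC should be able to reclaim. A rough sketch of totaling the histogram output (the column layout handling and sample numbers below are assumptions for illustration, not data from this cluster):

```python
# Sketch: compare `jmap -histo` (all objects) with `jmap -histo:live`
# (reachable only) to estimate how much of a class's footprint is
# unreachable garbage. Sample numbers are made up for illustration.

def histo_bytes(text):
    """Return {class_name: shallow_bytes} from jmap -histo style lines."""
    totals = {}
    for line in text.strip().splitlines():
        parts = line.split()
        # rows look like: "   1:   24000   27917287424  org.apache...Memtable"
        if len(parts) == 4 and parts[0].rstrip(":").isdigit():
            totals[parts[3]] = int(parts[2])
    return totals

ALL_OBJECTS = """
   1:       24000   27917287424  org.apache.cassandra.db.Memtable
   2:    91000000   20401920000  org.apache.cassandra.db.rows.BufferCell
"""
LIVE_ONLY = """
   1:          32      37748736  org.apache.cassandra.db.Memtable
   2:      210000      47040000  org.apache.cassandra.db.rows.BufferCell
"""

all_bytes, live_bytes = histo_bytes(ALL_OBJECTS), histo_bytes(LIVE_ONLY)
unreachable = {cls: all_bytes[cls] - live_bytes.get(cls, 0) for cls in all_bytes}
for cls, size in unreachable.items():
    print(f"{cls}: {size / 2**30:.1f} GiB unreachable")
```

Note that {{jmap -histo}} reports shallow sizes per class, unlike YourKit's retained sizes, so the two views only roughly correspond. Also note that {{-histo:live}} forces a full GC before sampling.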







[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-03-01 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382015#comment-16382015
 ] 

Sergey Kirillov commented on CASSANDRA-14239:
-

I'm having the same problem. Limiting memtable sizes to 2 GB does not help.

 

Any ideas?







[jira] [Updated] (CASSANDRA-14253) MutationStage threads deadlock

2018-02-22 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-14253:

Description: 
Hi.

I've had persistent problems with a cluster upgraded to 3.11.2. I was 
recreating one of my MVs when node request latencies suddenly went crazy.

During the investigation I found that half of my nodes had stuck MutationStage 
threads. All 64 threads were Active, the Pending count was continuously 
increasing, while Completed was stuck at one value.

After a restart, nodes worked for a few minutes and then got stuck again. 
Another restart, another few minutes of work, and stuck again.

In the attachments you can find a stack dump (from sjk stcap) and flame graphs 
for the MutationStage threads and for all threads. It seems that all 
MutationStage threads were waiting for some event.

Downgrading to 3.10 solved the problem; after the downgrade all nodes are 
operational and not freezing.

 

  was:
Hi.

I've had persistent problems with a cluster upgraded to 3.11.2. I was 
recreating one of my MVs when node request latencies suddenly went crazy.

During the investigation I found that half of my nodes had stuck MutationStage 
threads. All 64 threads were Active, the Pending count was continuously 
increasing, while Completed was stuck at one value.

After a restart, nodes worked for a few minutes and then got stuck again. 
Another restart, another few minutes of work, and stuck again.

In the attachments you can find flame graphs for the MutationStage threads and 
for all threads. It seems that all MutationStage threads were waiting for some 
event.

Downgrading to 3.10 solved the problem; after the downgrade all nodes are 
operational and not freezing.

 


> MutationStage threads deadlock
> --
>
> Key: CASSANDRA-14253
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14253
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Ubuntu 16.04
>Reporter: Sergey Kirillov
>Priority: Major
> Attachments: dump.std, flame.svg, flame_tn.svg
>
>






[jira] [Updated] (CASSANDRA-14253) MutationStage threads deadlock

2018-02-22 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-14253:

Attachment: dump.std







[jira] [Created] (CASSANDRA-14253) MutationStage threads deadlock

2018-02-22 Thread Sergey Kirillov (JIRA)
Sergey Kirillov created CASSANDRA-14253:
---

 Summary: MutationStage threads deadlock
 Key: CASSANDRA-14253
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14253
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
 Environment: Ubuntu 16.04
Reporter: Sergey Kirillov
 Attachments: flame.svg, flame_tn.svg

Hi.

I've had persistent problems with a cluster upgraded to 3.11.2. I was 
recreating one of my MVs when node request latencies suddenly went crazy.

During the investigation I found that half of my nodes had stuck MutationStage 
threads. All 64 threads were Active, the Pending count was continuously 
increasing, while Completed was stuck at one value.

After a restart, nodes worked for a few minutes and then got stuck again. 
Another restart, another few minutes of work, and stuck again.

In the attachments you can find flame graphs for the MutationStage threads and 
for all threads. It seems that all MutationStage threads were waiting for some 
event.

Downgrading to 3.10 solved the problem; after the downgrade all nodes are 
operational and not freezing.

 






[jira] [Commented] (CASSANDRA-13882) Data corruption

2017-09-18 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169848#comment-16169848
 ] 

Sergey Kirillov commented on CASSANDRA-13882:
-

Maybe it is related to https://issues.apache.org/jira/browse/CASSANDRA-13752, 
but the error message is different.

> Data corruption
> ---
>
> Key: CASSANDRA-13882
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13882
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Ubuntu 16.04, Apache Cassandra 3.11.0
>Reporter: Sergey Kirillov
>
> It seems that one of the tables in our project got corrupted for no obvious 
> reason. It is mostly read-only, and suddenly all reads on a few servers 
> started to return corrupted data, causing errors in the Python driver.
> On the servers we see the following errors:
> {code:java}
> ERROR [ReadRepairStage:140] 2017-09-18 11:57:08,726 CassandraDaemon.java:228 
> - Exception in thread Thread[ReadRepairStage:140,5,main]java.io.IOError: 
> java.io.EOFException: EOF after 28 bytes out of 108
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:227)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:215)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:178)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:270)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:98) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.ReadResponse$DataResponse.digest(ReadResponse.java:203)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:87)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_144]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[na:1.8.0_144]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
> Caused by: java.io.EOFException: EOF after 28 bytes out of 108
>   at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:437) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.rows.Cell$Serializer.deserialize(Cell.java:245) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(UnfilteredSerializer.java:639)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.lambda$deserializeRowBody$1(UnfilteredSerializer.java:604)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.utils.btree.BTree.applyForwards(BTree.java:1242) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at org.apache.cassandra.utils.btree.BTree.apply(BTree.java:1197) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at org.apache.cassandra.db.Columns.apply(Columns.java:377) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:600)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeOne(UnfilteredSerializer.java:475)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:431)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>   at 
> 

[jira] [Created] (CASSANDRA-13882) Data corruption

2017-09-18 Thread Sergey Kirillov (JIRA)
Sergey Kirillov created CASSANDRA-13882:
---

 Summary: Data corruption
 Key: CASSANDRA-13882
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13882
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
 Environment: Ubuntu 16.04, Apache Cassandra 3.11.0
Reporter: Sergey Kirillov


It seems that one of the tables in our project got corrupted for no obvious 
reason. It is mostly read-only, and suddenly all reads on a few servers 
started to return corrupted data, causing errors in the Python driver.

On the servers we see the following errors:


{code:java}
ERROR [ReadRepairStage:140] 2017-09-18 11:57:08,726 CassandraDaemon.java:228 - 
Exception in thread Thread[ReadRepairStage:140,5,main]java.io.IOError: 
java.io.EOFException: EOF after 28 bytes out of 108
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:227)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:215)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:178)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:270)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:98) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.ReadResponse$DataResponse.digest(ReadResponse.java:203) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:87)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_144]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[na:1.8.0_144]
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
Caused by: java.io.EOFException: EOF after 28 bytes out of 108
at 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:437) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.Cell$Serializer.deserialize(Cell.java:245) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(UnfilteredSerializer.java:639)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.lambda$deserializeRowBody$1(UnfilteredSerializer.java:604)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.btree.BTree.applyForwards(BTree.java:1242) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.utils.btree.BTree.apply(BTree.java:1197) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.Columns.apply(Columns.java:377) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:600)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeOne(UnfilteredSerializer.java:475)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:431)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
... 12 common frames omitted
{code}
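When triaging errors like the one above across a cluster, it can help to bucket the read-repair EOFException failures by hour to see whether they cluster in time. A rough sketch of such a helper (hypothetical; it assumes the standard system.log line format shown above, and the sample lines are illustrative):

```python
# Sketch: count read-repair EOFException failures per hour from
# system.log-style lines. The regex assumes the log format shown above;
# the sample lines are illustrative.
import re
from collections import Counter

LINE_RE = re.compile(r"ERROR \[ReadRepairStage:\d+\] (\d{4}-\d{2}-\d{2} \d{2}):")

def failures_per_hour(lines):
    """Return Counter mapping 'YYYY-MM-DD HH' -> EOFException error count."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m and "EOFException" in line:
            counts[m.group(1)] += 1
    return counts

sample = [
    "ERROR [ReadRepairStage:140] 2017-09-18 11:57:08,726 ... "
    "java.io.EOFException: EOF after 28 bytes out of 108",
    "ERROR [ReadRepairStage:141] 2017-09-18 11:59:12,001 ... "
    "java.io.EOFException: EOF after 12 bytes out of 96",
    "INFO  [CompactionExecutor:9] 2017-09-18 12:01:00,000 ... compacted",
]
print(failures_per_hour(sample))  # → Counter({'2017-09-18 11': 2})
```

A spike confined to a few hosts and a narrow time window points at recently written or streamed sstables rather than random media decay.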




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-11371) Error on startup: keyspace not found in the schema definitions keyspace

2016-03-19 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199711#comment-15199711
 ] 

Sergey Kirillov commented on CASSANDRA-11371:
-

It looks like system_schema.tables is damaged, and many table names contain 
random binary data. It is a disaster. Is there any way to rebuild 
system_schema.tables from the existing sstables and the CQL schema?

> Error on startup: keyspace not found in the schema definitions keyspace
> ---
>
> Key: CASSANDRA-11371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu
>Reporter: Sergey Kirillov
>Priority: Critical
>
> My entire cluster is down now, and all nodes are failing to start with the 
> following error:
> {quote}
> ERROR [main] 2016-03-17 15:26:37,755 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.RuntimeException: sempi_kitkat: not found in the schema definitions 
> keyspace.
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:947)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:938)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:901)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:878)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:866)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[apache-cassandra-3.0.4.jar:3.0.4]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
> [apache-cassandra-3.0.4.jar:3.0.4]
> {quote}
> It looks like it is somehow related to CASSANDRA-10964, but I'm using the 
> default memtable_allocation_type now.
> Any advice on how to fix this and restart my cluster would be appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11371) Error on startup: keyspace not found in the schema definitions keyspace

2016-03-19 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200098#comment-15200098
 ] 

Sergey Kirillov commented on CASSANDRA-11371:
-

Thank you, Aleksey. Any advice on how to do this in a multi-node cluster? Do I 
need to do it on every node, or on just one, with the rest picking up the 
changes?

> Error on startup: keyspace not found in the schema definitions keyspace
> ---
>
> Key: CASSANDRA-11371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu
>Reporter: Sergey Kirillov
>Assignee: Aleksey Yeschenko
>Priority: Critical
>
> My entire cluster is down now and all nodes are failing to start with the following 
> error:
> {quote}
> ERROR [main] 2016-03-17 15:26:37,755 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.RuntimeException: sempi_kitkat: not found in the schema definitions 
> keyspace.
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:947)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:938)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:901)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:878)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:866)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[apache-cassandra-3.0.4.jar:3.0.4]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
> [apache-cassandra-3.0.4.jar:3.0.4]
> {quote}
> It looks like it is somehow related to CASSANDRA-10964, but I'm using the default 
> memtable_allocation_type now.
> Any advice on how to fix this and restart my cluster would be appreciated.





[jira] [Created] (CASSANDRA-11371) Error on startup

2016-03-19 Thread Sergey Kirillov (JIRA)
Sergey Kirillov created CASSANDRA-11371:
---

 Summary: Error on startup
 Key: CASSANDRA-11371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu
Reporter: Sergey Kirillov
Priority: Critical


My entire cluster is down now and all nodes fail to start with the following error:

{quote}
ERROR [main] 2016-03-17 15:26:37,755 CassandraDaemon.java:692 - Exception 
encountered during startup
java.lang.RuntimeException: sempi_kitkat: not found in the schema definitions 
keyspace.
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:947) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:938) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:901)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:878)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:866)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
[apache-cassandra-3.0.4.jar:3.0.4]
{quote}


It looks like it is somehow related to CASSANDRA-10964, but I'm using the default 
memtable_allocation_type now.

Any advice would be appreciated.








[jira] [Comment Edited] (CASSANDRA-11371) Error on startup: keyspace not found in the schema definitions keyspace

2016-03-19 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199711#comment-15199711
 ] 

Sergey Kirillov edited comment on CASSANDRA-11371 at 3/17/16 3:27 PM:
--

It looks like system_schema.tables is damaged and many table names contain 
random binary data. It is a disaster. Is there any way to rebuild 
system_schema.tables from existing sstables and CQL schema?


was (Author: rushman):
It looks like system_schema.tables is damaged and many table names contain 
random binary data. It is disaster. Is there any way to rebuild 
system_schema.tables from existing sstables and CQL schema?



[jira] [Updated] (CASSANDRA-11371) Error on startup

2016-03-19 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-11371:

Description: 
My entire cluster is down now and all nodes failing to start with following 
error:

{quote}
ERROR [main] 2016-03-17 15:26:37,755 CassandraDaemon.java:692 - Exception 
encountered during startup
java.lang.RuntimeException: sempi_kitkat: not found in the schema definitions 
keyspace.
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:947) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:938) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:901)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:878)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:866)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
[apache-cassandra-3.0.4.jar:3.0.4]
{quote}


It looks like it is somehow related to CASSANDRA-10964 but I'm using default 
memtable_allocation_type now.

Any advice how to fix this and restart my cluster will be appreciated.




  was:
My entire cluster is down now and all nodes failing to start with following 
error:

{quote}
ERROR [main] 2016-03-17 15:26:37,755 CassandraDaemon.java:692 - Exception 
encountered during startup
java.lang.RuntimeException: sempi_kitkat: not found in the schema definitions 
keyspace.
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:947) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:938) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:901)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:878)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:866)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
[apache-cassandra-3.0.4.jar:3.0.4]
{quote}


It looks like it is somehow related to CASSANDRA-10964 but I'm using default 
memtable_allocation_type now.

Any advice would be appreciated.






[jira] [Comment Edited] (CASSANDRA-11371) Error on startup: keyspace not found in the schema definitions keyspace

2016-03-19 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199711#comment-15199711
 ] 

Sergey Kirillov edited comment on CASSANDRA-11371 at 3/17/16 3:28 PM:
--

It looks like {{system_schema.tables}} is damaged and many table names contain 
random binary data. It is a disaster. Is there any way to rebuild 
system_schema.tables from existing sstables and CQL schema?


was (Author: rushman):
It looks like system_schema.tables is damaged and many table names contain 
random binary data. It is a disaster. Is there any way to rebuild 
system_schema.tables from existing sstables and CQL schema?



[jira] [Updated] (CASSANDRA-11371) Error on startup

2016-03-19 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-11371:

Description: 
My entire cluster is down now and all nodes failing to start with following 
error:

{quote}
ERROR [main] 2016-03-17 15:26:37,755 CassandraDaemon.java:692 - Exception 
encountered during startup
java.lang.RuntimeException: sempi_kitkat: not found in the schema definitions 
keyspace.
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:947) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:938) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:901)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:878)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:866)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
[apache-cassandra-3.0.4.jar:3.0.4]
{quote}


It looks like it is somehow related to CASSANDRA-10964 but I'm using default 
memtable_allocation_type now.

Any advice would be appreciated.




  was:
My entire cluster is down now and all nodes fails to start with following error:

{quote}
ERROR [main] 2016-03-17 15:26:37,755 CassandraDaemon.java:692 - Exception 
encountered during startup
java.lang.RuntimeException: sempi_kitkat: not found in the schema definitions 
keyspace.
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:947) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:938) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:901)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:878)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:866)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) 
[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
[apache-cassandra-3.0.4.jar:3.0.4]
{quote}


It looks like it is somehow related to CASSANDRA-10964 but I'm using default 
memtable_allocation_type now.

Any advice would be appreciated.






[jira] [Updated] (CASSANDRA-11371) Error on startup: keyspace not found in the schema definitions keyspace

2016-03-19 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-11371:

Summary: Error on startup: keyspace not found in the schema definitions 
keyspace  (was: Error on startup)



[jira] [Commented] (CASSANDRA-11275) sstableloader fails with java.lang.IllegalArgumentException: flags is not a column defined in this metadata

2016-02-29 Thread Sergey Kirillov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171922#comment-15171922
 ] 

Sergey Kirillov commented on CASSANDRA-11275:
-

Here is my ugly patch to work around this bug:

{code}
diff --git a/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java b/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java
index 225e453..335d3a7 100644
--- a/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java
+++ b/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java
@@ -169,12 +169,12 @@ public class NativeSSTableLoaderClient extends SSTableLoader.Client
                                                        Types types)
     {
         UUID id = row.getUUID("id");
-        Set<CFMetaData.Flag> flags = CFMetaData.flagsFromStrings(row.getSet("flags", String.class));
+        Set<CFMetaData.Flag> flags = isView ? Collections.emptySet() : CFMetaData.flagsFromStrings(row.getSet("flags", String.class));

         boolean isSuper = flags.contains(CFMetaData.Flag.SUPER);
         boolean isCounter = flags.contains(CFMetaData.Flag.COUNTER);
         boolean isDense = flags.contains(CFMetaData.Flag.DENSE);
-        boolean isCompound = flags.contains(CFMetaData.Flag.COMPOUND);
+        boolean isCompound = isView ? true : flags.contains(CFMetaData.Flag.COMPOUND);

         String columnsQuery = String.format("SELECT * FROM %s.%s WHERE keyspace_name = ? AND table_name = ?",
                                             SchemaKeyspace.NAME,
{code}
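For context, the failure the patch guards against can be sketched without the driver: the column lookup throws an {{IllegalArgumentException}} when asked for a column name that the result-set metadata does not contain, which is exactly what happens when the loader asks {{system_schema.views}} for {{flags}}. The class and method below are illustrative stand-ins, not the real driver code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the failure mode: looking up a column name that is
// absent from the metadata throws, with the same message the loader logs.
public class FlagsColumnDemo {

    // Hypothetical stand-in for the driver's column-index lookup.
    static int indexOf(Set<String> definedColumns, String name) {
        if (!definedColumns.contains(name))
            throw new IllegalArgumentException(name + " is not a column defined in this metadata");
        return 0; // real index computation elided in this sketch
    }

    public static void main(String[] args) {
        // system_schema.views has no "flags" column, unlike system_schema.tables.
        Set<String> viewColumns = new HashSet<>(Arrays.asList("keyspace_name", "view_name", "base_table_id"));
        try {
            indexOf(viewColumns, "flags");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The patch sidesteps this by never reading {{flags}} for views, treating every view as a compound, non-super, non-counter, non-dense table.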

> sstableloader fails with java.lang.IllegalArgumentException: flags is not a 
> column defined in this metadata
> ---
>
> Key: CASSANDRA-11275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11275
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: ubuntu, cassandra 3.0.3
>Reporter: Sergey Kirillov
>
> When used on a cluster with materialized views, sstableloader fails with:
> {noformat}
> flags is not a column defined in this metadata
> java.lang.IllegalArgumentException: flags is not a column defined in this 
> metadata
> at 
> com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
> at 
> com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
> at 
> com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
> at 
> com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
> at 
> org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
> at 
> org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
> at 
> org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)
> {noformat}
> This happens because there is no column `flags` in `system_schema.views`, but 
> NativeSSTableLoaderClient wants to read it.





[jira] [Updated] (CASSANDRA-11275) sstableloader fails with java.lang.IllegalArgumentException: flags is not a column defined in this metadata

2016-02-29 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-11275:

Description: 
When used on a cluster with materialized views, sstableloader fails with:
{noformat}
flags is not a column defined in this metadata
java.lang.IllegalArgumentException: flags is not a column defined in this 
metadata
at 
com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
at 
com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
at 
com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
at 
com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
at 
org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)
{noformat}

This happens because there is no column `flags` in `system_schema.views`, but 
NativeSSTableLoaderClient wants to read it.

  was:
sstableloader fails with:
{noformat}
flags is not a column defined in this metadata
java.lang.IllegalArgumentException: flags is not a column defined in this 
metadata
at 
com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
at 
com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
at 
com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
at 
com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
at 
org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)
{noformat}

This happens because there is no column `flags` in `system_schema.views`, but 
NativeSSTableLoaderClient want's to read it.




[jira] [Updated] (CASSANDRA-11275) sstableloader fails with java.lang.IllegalArgumentException: flags is not a column defined in this metadata

2016-02-29 Thread Sergey Kirillov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kirillov updated CASSANDRA-11275:

Description: 
sstableloader fails with:
{noformat}
flags is not a column defined in this metadata
java.lang.IllegalArgumentException: flags is not a column defined in this 
metadata
at 
com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
at 
com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
at 
com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
at 
com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
at 
org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)
{noformat}

This happens because there is no column `flags` in `system_schema.views`, but 
NativeSSTableLoaderClient wants to read it.

  was:
sstableloader fails with:
{noformat}
flags is not a column defined in this metadata
java.lang.IllegalArgumentException: flags is not a column defined in this 
metadata
at 
com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
at 
com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
at 
com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
at 
com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
at 
org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
at 
org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)
{noformat}

this happens because there is no column `flags` in `system_schema.views`


> sstableloader fails with java.lang.IllegalArgumentException: flags is not a 
> column defined in this metadata
> ---
>
> Key: CASSANDRA-11275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11275
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: ubuntu, cassandra 3.0.3
>Reporter: Sergey Kirillov
>
> sstableloader fails with:
> {noformat}
> flags is not a column defined in this metadata
> java.lang.IllegalArgumentException: flags is not a column defined in this metadata
> at com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
> at com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
> at com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
> at com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
> at org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
> at org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
> at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
> at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)
> {noformat}
> This happens because there is no column `flags` in `system_schema.views`, but 
> NativeSSTableLoaderClient wants to read it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11275) sstableloader fails with java.lang.IllegalArgumentException: flags is not a column defined in this metadata

2016-02-29 Thread Sergey Kirillov (JIRA)

 [ https://issues.apache.org/jira/browse/CASSANDRA-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Kirillov updated CASSANDRA-11275:

Description: 
sstableloader fails with:
{noformat}
flags is not a column defined in this metadata
java.lang.IllegalArgumentException: flags is not a column defined in this metadata
at com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
at com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
at com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
at com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)
{noformat}

This happens because there is no column `flags` in `system_schema.views`.

  was:
sstableloader fails with:
flags is not a column defined in this metadata
java.lang.IllegalArgumentException: flags is not a column defined in this metadata
at com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
at com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
at com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
at com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)


This happens because there is no column `flags` in `system_schema.views`.


> sstableloader fails with java.lang.IllegalArgumentException: flags is not a column defined in this metadata
> ---
>
> Key: CASSANDRA-11275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11275
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: ubuntu, cassandra 3.0.3
>Reporter: Sergey Kirillov
>
> sstableloader fails with:
> {noformat}
> flags is not a column defined in this metadata
> java.lang.IllegalArgumentException: flags is not a column defined in this metadata
> at com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
> at com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
> at com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
> at com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
> at org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
> at org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
> at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
> at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)
> {noformat}
> This happens because there is no column `flags` in `system_schema.views`.





[jira] [Created] (CASSANDRA-11275) sstableloader fails with java.lang.IllegalArgumentException: flags is not a column defined in this metadata

2016-02-29 Thread Sergey Kirillov (JIRA)
Sergey Kirillov created CASSANDRA-11275:
---

 Summary: sstableloader fails with java.lang.IllegalArgumentException: flags is not a column defined in this metadata
 Key: CASSANDRA-11275
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11275
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: ubuntu, cassandra 3.0.3
Reporter: Sergey Kirillov


sstableloader fails with:
flags is not a column defined in this metadata
java.lang.IllegalArgumentException: flags is not a column defined in this metadata
at com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:272)
at com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:278)
at com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:83)
at com.datastax.driver.core.AbstractGettableData.getSet(AbstractGettableData.java:217)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.createTableMetadata(NativeSSTableLoaderClient.java:172)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.fetchViews(NativeSSTableLoaderClient.java:157)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:93)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:103)


This happens because there is no column `flags` in `system_schema.views`.


