[jira] [Commented] (CASSANDRA-12607) The cassandra commit log corrupted by restart even if no write operations in hours

2017-11-01 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16233765#comment-16233765
 ] 

Mattias W commented on CASSANDRA-12607:
---

The stack trace is similar, so yes, it is most likely a duplicate.

I have finished the Cassandra project for now, so I cannot easily test right 
away.


> The cassandra commit log corrupted by restart even if no write operations in 
> hours
> --
>
> Key: CASSANDRA-12607
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12607
> Project: Cassandra
>  Issue Type: Bug
> Environment: windows 10, 16 gb
>Reporter: Mattias W
>Priority: Major
> Fix For: 3.11.x
>
>
> This is the 3rd time my commit log has been corrupted, and the server 
> refuses to start. What worries me is that I get these errors even though no 
> updates were made to the database.
> The last time, the computer was restarted by Windows Update, and I detected 
> the problem immediately. 
> The problem is solved by deleting the commitlog files (acceptable on my 
> development system).
> My config says that the commit log is synced every 10 seconds, so how can a 
> file be corrupt unless a crash occurs within those 10 seconds?
> Is this a Cassandra bug? Or by design, i.e. bad design?
> I am using 3.4 on Windows 10, DataStax installer.
> In the stdout log, the last part is
> {noformat}
> INFO  06:17:39 Replaying C:\Program 
> Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812251.log, C:\Program 
> Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812252.log, C:\Program 
> Files\DataStax-DDC\data\commitlog\CommitLog-6-1471411951134.log, C:\Program 
> Files\DataStax-DDC\data\commitlog\CommitLog-6-1471454506802.log, C:\Program 
> Files\DataStax-DDC\data\commitlog\CommitLog-6-1471532812678.log
> ERROR 06:17:39 Exiting due to error while processing commit log during 
> initialization.
> org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
> Could not read commit log descriptor in file C:\Program 
> Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812252.log
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:611)
>  [apache-cassandra-3.4.0.jar:3.4.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:373)
>  [apache-cassandra-3.4.0.jar:3.4.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:236)
>  [apache-cassandra-3.4.0.jar:3.4.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:192) 
> [apache-cassandra-3.4.0.jar:3.4.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:172) 
> [apache-cassandra-3.4.0.jar:3.4.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:283) 
> [apache-cassandra-3.4.0.jar:3.4.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.4.0.jar:3.4.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:680) 
> [apache-cassandra-3.4.0.jar:3.4.0]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-09-03 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15461087#comment-15461087
 ] 

Mattias W commented on CASSANDRA-11528:
---

A select count(*) should not need more memory than what is required for the 
primary key; I have some fat columns.

Is there a standard Cassandra test database generated by scripts, which I 
could use to reproduce this?
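
For anyone trying to reproduce: the cassandra-stress tool that ships with 
Cassandra can generate a synthetic keyspace from the command line. A minimal 
sketch (the row count is arbitrary; keyspace1.standard1 is the tool's 
auto-created default table):
{noformat}
# write 100,000 rows into the auto-created keyspace1.standard1 table
tools/bin/cassandra-stress write n=100000 -rate threads=16
# read them back to exercise large selects
tools/bin/cassandra-stress read n=100000
{noformat}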

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12607) The cassandra commit log corrupted by restart even if no write operations in hours

2016-09-03 Thread Mattias W (JIRA)
Mattias W created CASSANDRA-12607:
-

 Summary: The cassandra commit log corrupted by restart even if no 
write operations in hours
 Key: CASSANDRA-12607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12607
 Project: Cassandra
  Issue Type: Bug
 Environment: windows 10, 16 gb
Reporter: Mattias W
 Fix For: 3.x


This is the 3rd time my commit log has been corrupted, and the server refuses 
to start. What worries me is that I get these errors even though no updates 
were made to the database.

The last time, the computer was restarted by Windows Update, and I detected 
the problem immediately. 

The problem is solved by deleting the commitlog files (acceptable on my 
development system), as sketched below.
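
Concretely, the reset looks roughly like this on the install above; the 
service name is an assumption about the DataStax-DDC installer (check 
services.msc), and deleting the segments discards any unflushed writes:
{noformat}
net stop "DataStax DDC Server"
del /q "C:\Program Files\DataStax-DDC\data\commitlog\CommitLog-*.log"
net start "DataStax DDC Server"
{noformat}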

My config says that the commit log is synced every 10 seconds, so how can a 
file be corrupt unless a crash occurs within those 10 seconds?
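
For reference, the sync behaviour is controlled by these cassandra.yaml 
settings; the values shown are the stock defaults, which I assume this 
install uses:
{noformat}
commitlog_sync: periodic
# in periodic mode, writes are acked immediately and the commit log is
# fsynced in the background at this interval
commitlog_sync_period_in_ms: 10000
{noformat}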

Is this a Cassandra bug? Or by design, i.e. bad design?

I am using 3.4 on Windows 10, DataStax installer.

In the stdout log, the last part is

{noformat}
INFO  06:17:39 Replaying C:\Program 
Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812251.log, C:\Program 
Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812252.log, C:\Program 
Files\DataStax-DDC\data\commitlog\CommitLog-6-1471411951134.log, C:\Program 
Files\DataStax-DDC\data\commitlog\CommitLog-6-1471454506802.log, C:\Program 
Files\DataStax-DDC\data\commitlog\CommitLog-6-1471532812678.log
ERROR 06:17:39 Exiting due to error while processing commit log during 
initialization.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
Could not read commit log descriptor in file C:\Program 
Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812252.log
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:611)
 [apache-cassandra-3.4.0.jar:3.4.0]
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:373)
 [apache-cassandra-3.4.0.jar:3.4.0]
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:236)
 [apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:192) 
[apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:172) 
[apache-cassandra-3.4.0.jar:3.4.0]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:283) 
[apache-cassandra-3.4.0.jar:3.4.0]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) 
[apache-cassandra-3.4.0.jar:3.4.0]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:680) 
[apache-cassandra-3.4.0.jar:3.4.0]




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-09-03 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15460611#comment-15460611
 ] 

Mattias W edited comment on CASSANDRA-11528 at 9/3/16 7:44 AM:
---

I thought the answer was that {count(*)} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{count(*)}.  Maybe this is more of a FAQ issue, even though I really do not 
like that a careless client can crash the server just by issuing an expensive 
operation.
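
If only a rough count is needed, the per-table key estimate from nodetool 
avoids the scan entirely. A minimal sketch, with made-up keyspace and table 
names:
{noformat}
# look for "Number of keys (estimate)" in the output
nodetool cfstats mykeyspace.usr
{noformat}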


was (Author: mattiasw2):
I thought the answer was that {code:count(*)} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{code:count(*)}.  Maybe this is more of a FAQ issue, even though I really do 
not like that a careless client can crash the server just by issuing an 
expensive operation.

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-09-03 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15460611#comment-15460611
 ] 

Mattias W edited comment on CASSANDRA-11528 at 9/3/16 7:42 AM:
---

I thought the answer was that {code:count(*)} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{code:count(*)}.  Maybe this is more of a FAQ issue, even though I really do 
not like that a careless client can crash the server just by issuing an 
expensive operation.


was (Author: mattiasw2):
I thought the answer was that {{count(*)}} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{{count(*)}}.  Maybe this is more of a FAQ issue, even though I really do not 
like that a careless client can crash the server just by issuing an expensive 
operation.

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-09-03 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15460611#comment-15460611
 ] 

Mattias W edited comment on CASSANDRA-11528 at 9/3/16 7:41 AM:
---

I thought the answer was that {{count(*)}} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{{count(*)}}.  Maybe this is more of a FAQ issue, even though I really do not 
like that a careless client can crash the server just by issuing an expensive 
operation.


was (Author: mattiasw2):
I thought the answer was that `count(*)` is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
count(*).  Maybe this is more of a FAQ issue, even though I really do not like 
that a careless client can crash the server just by issuing an expensive 
operation.

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-09-03 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15460611#comment-15460611
 ] 

Mattias W commented on CASSANDRA-11528:
---

I thought the answer was that `count(*)` is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
count(*).  Maybe this is more of a FAQ issue, even though I really do not like 
that a careless client can crash the server just by issuing an expensive 
operation.

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233551#comment-15233551
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/9/16 1:41 PM:
---

It is an out-of-memory error. Cassandra 3.4 on Ubuntu 14.04 behaves the same, 
and there, the last message in the log is shown below.

So now I know: select statements can use a lot of heap. The Ubuntu machine 
only has 1.5 GB RAM. (The Windows machine above had 8 GB.)
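
For reference, the heap can be pinned explicitly in conf/cassandra-env.sh 
instead of relying on the auto-calculation; the sizes below are illustrative 
assumptions, not a recommendation for a 1.5 GB box:
{noformat}
# conf/cassandra-env.sh -- set both together, or neither
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="800M"
{noformat}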

{noformat}
INFO  [SharedPool-Worker-3] 2016-04-09 15:32:34,915 ApproximateTime.java:44 - 
Scheduling approximate time-check task with a precision of 10 milliseconds
INFO  [Service Thread] 2016-04-09 15:34:50,366 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 443ms.  CMS Old Gen: 547965232 -> 268786192; Par Eden 
Space: 126017056 -> 0; Par Survivor Space: 3420928 -> 0
INFO  [Service Thread] 2016-04-09 15:34:50,379 StatusLogger.java:52 - Pool Name 
   Active   Pending  Completed   Blocked  All Time Blocked
ERROR [SharedPool-Worker-2] 2016-04-09 15:34:50,409 
JVMStabilityInspector.java:139 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:126)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:86) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.Cell$Serializer.serialize(Cell.java:208) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:185)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:110)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:98)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:294)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:292) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1799)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2467)
 ~[apache-cassandra-3.4.jar:3.4]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_77]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.4.jar:3.4]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.4.jar:3.4]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
INFO  [Service Thread] 2016-04-09 15:34:50,412 StatusLogger.java:56 - 

[jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233551#comment-15233551
 ] 

Mattias W commented on CASSANDRA-11528:
---

It is an out-of-memory error. Cassandra 3.4 on Ubuntu 14.04 behaves the same, 
and there, the last message in the log is

{noformat}
INFO  [SharedPool-Worker-3] 2016-04-09 15:32:34,915 ApproximateTime.java:44 - 
Scheduling approximate time-check task with a precision of 10 milliseconds
INFO  [Service Thread] 2016-04-09 15:34:50,366 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 443ms.  CMS Old Gen: 547965232 -> 268786192; Par Eden 
Space: 126017056 -> 0; Par Survivor Space: 3420928 -> 0
INFO  [Service Thread] 2016-04-09 15:34:50,379 StatusLogger.java:52 - Pool Name 
   Active   Pending  Completed   Blocked  All Time Blocked
ERROR [SharedPool-Worker-2] 2016-04-09 15:34:50,409 
JVMStabilityInspector.java:139 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:126)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:86) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.Cell$Serializer.serialize(Cell.java:208) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:185)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:110)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:98)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:294)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:292) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1799)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2467)
 ~[apache-cassandra-3.4.jar:3.4]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_77]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.4.jar:3.4]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.4.jar:3.4]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
INFO  [Service Thread] 2016-04-09 15:34:50,412 StatusLogger.java:56 - 
MutationStage                     0         0            157         0                 0

INFO  [Service Thread] 2016-04-09 15:34:50,414 StatusLogger.java:56 - 
ViewMutationStage 0  

[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233542#comment-15233542
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/9/16 1:24 PM:
---

This last error also occurs with the same database contents on Cassandra 3.4 
on Ubuntu 14.04, i.e. with

{{SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;}}


was (Author: mattiasw2):
This error also occurs with the same database contents on Cassandra 3.4 on 
Ubuntu 14.04. 

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233542#comment-15233542
 ] 

Mattias W commented on CASSANDRA-11528:
---

This error also occurs with the same database contents on Cassandra 3.4 on 
Ubuntu 14.04. 

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mattias W updated CASSANDRA-11528:
--
Description: 
While implementing a dump procedure, which did a "select * from" on one table 
at a time, I instantly killed the server. A simple 
{noformat}select count(*) from {noformat} 
also kills it. For a while, I thought the size of the blobs was the cause.

I also tried using only a unique id as the partition key, since I was afraid a 
single partition had grown too big, but that didn't change anything.

It happens every time, both from Java/Clojure and from DevCenter.

I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is so 
quick that nothing is recorded there.

There is a Java out-of-memory error in the logs, but it isn't from the time of 
the crash.

It only happens for one table; it has only 15000 entries, but blobs and byte[] 
are stored there, sized between 100 kB and 4 MB. The total size of that table 
is about 6.5 GB on disk.

I made a workaround by doing many small selects instead, each fetching only 100 
rows.

Is there a setting I can set to make the system log more eagerly, in order to 
at least get a stack trace or something similar that might help you?

It is prun_srv that dies. Restarting the NT service makes Cassandra run 
again.

  was:
While implementing a dump procedure, which did a "select * from" on one table 
at a time, I instantly killed the server. A simple "select count(*) from" also 
kills it. For a while, I thought the size of the blobs was the cause.

I also tried using only a unique id as the partition key, since I was afraid a 
single partition had grown too big, but that didn't change anything.

It happens every time, both from Java/Clojure and from DevCenter.

I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is so 
quick that nothing is recorded there.

There is a Java out-of-memory error in the logs, but it isn't from the time of 
the crash.

It only happens for one table; it has only 15000 entries, but blobs and byte[] 
are stored there, sized between 100 kB and 4 MB. The total size of that table 
is about 6.5 GB on disk.

I made a workaround by doing many small selects instead, each fetching only 100 
rows.

Is there a setting I can set to make the system log more eagerly, in order to 
at least get a stack trace or something similar that might help you?

It is prun_srv that dies. Restarting the NT service makes Cassandra run 
again.


> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.3
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231649#comment-15231649
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/8/16 5:17 AM:
---

I get strange behaviour also on smaller and much more normal tables. For 
example,
{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;
{noformat}
works fine from within DevCenter.

But the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;
{noformat}

No new entries are made in the log file.
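
One way to avoid the full scan behind these ALLOW FILTERING queries would be 
a secondary index on the filtered column; a sketch (whether it helps in 
practice depends on the cardinality of the disabled column):
{noformat}
CREATE INDEX usr_disabled_idx ON usr (disabled);
-- with the index in place, the equality filter no longer needs ALLOW FILTERING
SELECT COUNT(*) FROM usr WHERE disabled = true;
{noformat}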


was (Author: mattiasw2):
I get strange behaviour also on smaller and much more normal tables. For 
example,
{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;
{noformat}
works fine from within DevCenter.

But the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;
{noformat}


> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.3
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231649#comment-15231649
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/8/16 5:15 AM:
---

I get strange behaviour also on smaller and much more normal tables. For 
example,
{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;
{noformat}
works fine from within DevCenter.

But the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;
{noformat}



was (Author: mattiasw2):
I get strange behaviour also on smaller and much more normal tables. For 
example,

{{SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;}}

works fine from within DevCenter.

But the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{{SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;}}


> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.3
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple "select count(*) 
> from" also kills it. For a while, I thought the size of the blobs was the 
> cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231649#comment-15231649
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/8/16 5:14 AM:
---

I get strange behaviour also on smaller and much more normal tables. For 
example,

{{SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;}}

works fine from within DevCenter.

But the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{{SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;}}



was (Author: mattiasw2):
I get strange behaviour also on smaller and much more normal tables. For 
example,

SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;

works fine from within DevCenter.

But the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;


> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.3
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple "select count(*) 
> from" also kills it. For a while, I thought the size of the blobs was the 
> cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231649#comment-15231649
 ] 

Mattias W commented on CASSANDRA-11528:
---

I get strange behaviour also on smaller and much more normal tables. For 
example,

SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;

works fine from within DevCenter.

But the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;


> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.3
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one 
> table at a time, I instantly killed the server. A simple "select count(*) 
> from" also kills it. For a while, I thought the size of the blobs was the 
> cause.
> I also tried using only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order 
> to at least get a stack trace or something similar that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)
Mattias W created CASSANDRA-11528:
-

 Summary: Server Crash when select returns more than a few hundred 
rows
 Key: CASSANDRA-11528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: windows 7, 8 GB machine
Reporter: Mattias W
 Fix For: 3.3
 Attachments: datastax_ddc_server-stdout.2016-04-07.log

While implementing a dump procedure, which did a "select * from" on one table 
at a time, I instantly killed the server. A simple "select count(*) from" also 
kills it. For a while, I thought the size of the blobs was the cause.

I also tried using only a unique id as the partition key, since I was afraid a 
single partition had grown too big, but that didn't change anything.

It happens every time, both from Java/Clojure and from DevCenter.

I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is so 
quick that nothing is recorded there.

There is a Java out-of-memory error in the logs, but it isn't from the time of 
the crash.

It only happens for one table; it has only 15000 entries, but blobs and byte[] 
are stored there, sized between 100 kB and 4 MB. The total size of that table 
is about 6.5 GB on disk.

I made a workaround by doing many small selects instead, each fetching only 100 
rows, as sketched below.
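
A minimal sketch of that kind of chunked read with the DataStax Java driver 
(3.x API), using its built-in paging instead of hand-rolled LIMIT windows; 
the keyspace, table, and column names here are made up:
{noformat}
import com.datastax.driver.core.*;

public class PagedDump {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            // ask the server for 100 rows per page instead of the whole table
            Statement stmt = new SimpleStatement("SELECT id, payload FROM fat_table");
            stmt.setFetchSize(100);
            // iterating the result set fetches further pages transparently
            for (Row row : session.execute(stmt)) {
                System.out.println(row.getObject("id"));
            }
        }
    }
}
{noformat}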

Is there a setting I can set to make the system log more eagerly, in order to 
at least get a stack trace or something similar that might help you?

It is prun_srv that dies. Restarting the NT service makes Cassandra run 
again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)