[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-09-03 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15460611#comment-15460611
 ] 

Mattias W edited comment on CASSANDRA-11528 at 9/3/16 7:44 AM:
---

I thought the answer was that {{count(*)}} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{{count(*)}}. Maybe this is more of an FAQ issue, even though I really do not 
like that a careless client can crash the server just by issuing an expensive 
operation.
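
For reference, the paging workaround can be sketched with the Java driver 
roughly as follows (a minimal sketch, assuming the DataStax Java driver 3.x; 
the contact point, keyspace, and column names are illustrative):

{code:java}
// A paged client-side count instead of a single SELECT COUNT(*).
// Assumptions: DataStax Java driver 3.x on the classpath; keyspace
// "mykeyspace" and table "usr" with partition key "id" are illustrative.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class PagedCount {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            SimpleStatement stmt = new SimpleStatement("SELECT id FROM usr");
            stmt.setFetchSize(100); // ask the server for 100 rows per page
            ResultSet rs = session.execute(stmt);
            long count = 0;
            for (Row row : rs) { // the driver fetches further pages on demand
                count++;
            }
            System.out.println("row count: " + count);
        }
    }
}
{code}

With paging, the server only materialises one page of rows at a time instead 
of buffering the whole result, which is what seems to exhaust the heap here.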


was (Author: mattiasw2):
I thought the answer was that {code:count(*)} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{code:count(*)}. Maybe this is more of an FAQ issue, even though I really do not 
like that a careless client can crash the server just by issuing an expensive 
operation.

> Server Crash when select returns more than a few hundred rows
> -------------------------------------------------------------
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure that did a "select * from" on one table 
> at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried having only a unique id as the partition key, since I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table; it has only 15000 entries, but blobs and 
> byte[] are stored there, each between 100 kB and 4 MB. The total size for 
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order to 
> at least get a stack trace or similar that might help you?
> It is the prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.
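
One way to get the more eager logging asked for above is to raise the server 
log level at runtime with nodetool (a sketch; it assumes nodetool is on the 
PATH, and the logger name shown is just the usual top-level package):

{noformat}
# raise org.apache.cassandra to DEBUG without restarting the node
nodetool setlogginglevel org.apache.cassandra DEBUG

# reset all loggers to the configured defaults
nodetool setlogginglevel
{noformat}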





[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-09-03 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15460611#comment-15460611
 ] 

Mattias W edited comment on CASSANDRA-11528 at 9/3/16 7:42 AM:
---

I thought the answer was that {code:count(*)} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{code:count(*)}. Maybe this is more of an FAQ issue, even though I really do not 
like that a careless client can crash the server just by issuing an expensive 
operation.


was (Author: mattiasw2):
I thought the answer was that {{count(*)}} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{{count(*)}}. Maybe this is more of an FAQ issue, even though I really do not 
like that a careless client can crash the server just by issuing an expensive 
operation.



[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-09-03 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15460611#comment-15460611
 ] 

Mattias W edited comment on CASSANDRA-11528 at 9/3/16 7:41 AM:
---

I thought the answer was that {{count(*)}} is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
{{count(*)}}. Maybe this is more of an FAQ issue, even though I really do not 
like that a careless client can crash the server just by issuing an expensive 
operation.


was (Author: mattiasw2):
I thought the answer was that `count(*)` is a very expensive operation 
memory-wise, since that would explain the behaviour. I have now stopped using 
count(*). Maybe this is more of an FAQ issue, even though I really do not like 
that a careless client can crash the server just by issuing an expensive 
operation.



[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-28 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15262125#comment-15262125
 ] 

Alex Petrov edited comment on CASSANDRA-11528 at 4/28/16 1:26 PM:
--

Could you take a closer look at your heap dump? Start the node with 
{{-XX:+HeapDumpOnOutOfMemoryError}}, open the dump file after the node has 
crashed (with, for example, 
[jProfiler|https://www.ej-technologies.com/products/jprofiler/overview.html] or 
[VisualVM|https://visualvm.java.net/]), and check what exactly causes the 
out-of-memory error (usually it's same-type / same-root objects in extreme 
quantities; in the majority of cases they sit at the very top).
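
A minimal sketch of where that flag goes (assuming a tarball-style layout; 
the dump path is illustrative):

{noformat}
# conf/cassandra-env.sh -- append to the JVM options
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/heapdump.hprof"
{noformat}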


was (Author: ifesdjeen):
Could you take a closer look at your heapdump (start node with 
{{-XX:+HeapDumpOnOutOfMemoryError}}) and open the file after the node has 
crashed (with, for example 
[jProfiler|https://www.ej-technologies.com/products/jprofiler/overview.html] or 
[VisualVm|https://visualvm.java.net/].



[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233551#comment-15233551
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/9/16 1:41 PM:
---

It is an out-of-memory error. Cassandra 3.4 on Ubuntu 14.04 behaves the same; 
there, the last messages in the log are shown below.

So now I know that select statements can use a lot of heap. The Ubuntu machine 
only has 1.5 GB RAM. (The Windows machine above had 8 GB.)
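
For reference, the heap on such a small machine can be pinned explicitly in 
{{conf/cassandra-env.sh}} instead of relying on the auto-calculated size (a 
sketch; the values are illustrative, not a recommendation):

{noformat}
# conf/cassandra-env.sh -- override the auto-calculated heap size
MAX_HEAP_SIZE="512M"
HEAP_NEWSIZE="128M"
{noformat}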

{noformat}
INFO  [SharedPool-Worker-3] 2016-04-09 15:32:34,915 ApproximateTime.java:44 - 
Scheduling approximate time-check task with a precision of 10 milliseconds
INFO  [Service Thread] 2016-04-09 15:34:50,366 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 443ms.  CMS Old Gen: 547965232 -> 268786192; Par Eden 
Space: 126017056 -> 0; Par Survivor Space: 3420928 -> 0
INFO  [Service Thread] 2016-04-09 15:34:50,379 StatusLogger.java:52 - Pool Name 
   Active   Pending  Completed   Blocked  All Time Blocked
ERROR [SharedPool-Worker-2] 2016-04-09 15:34:50,409 
JVMStabilityInspector.java:139 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:126)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:86) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.Cell$Serializer.serialize(Cell.java:208) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:185)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:110)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:98)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:294)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:292) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1799)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2467)
 ~[apache-cassandra-3.4.jar:3.4]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_77]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.4.jar:3.4]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.4.jar:3.4]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
INFO  [Service Thread] 2016-04-09 15:34:50,412 StatusLogger.java:56 - 
{noformat}

[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233542#comment-15233542
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/9/16 1:24 PM:
---

This last error also occurs with the same database contents on Cassandra 3.4 
on Ubuntu 14.04, i.e.:

{{SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;}}


was (Author: mattiasw2):
This error also occurs with the same database contents on Cassandra 3.4 on 
Ubuntu 14.04.



[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231649#comment-15231649
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/8/16 5:17 AM:
---

I also get strange behaviour on smaller, much more normal tables. For example,
{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;
{noformat}
works fine from within DevCenter,

but the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;
{noformat}

No new entries are made in the log file.
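
For completeness, the many-small-selects workaround mentioned in the issue can 
be sketched in plain CQL with token ranges (table and column names as in the 
examples above; {{<last id>}} is a placeholder for the last partition key 
seen):

{noformat}
-- first page
SELECT id, disabled FROM usr LIMIT 100;

-- next page: resume after the last partition key returned
SELECT id, disabled FROM usr WHERE token(id) > token(<last id>) LIMIT 100;
{noformat}

Unlike {{ALLOW FILTERING}} over the whole table, each of these statements only 
touches a bounded slice of the data.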


was (Author: mattiasw2):
I also get strange behaviour on smaller, much more normal tables. For example,
{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;
{noformat}
works fine from within DevCenter,

but the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;
{noformat}




[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231649#comment-15231649
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/8/16 5:15 AM:
---

I also get strange behaviour on smaller, much more normal tables. For example,
{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;
{noformat}
works fine from within DevCenter,

but the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;
{noformat}



was (Author: mattiasw2):
I also get strange behaviour on smaller, much more normal tables. For example,

{{SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;}}

works fine from within DevCenter,

but the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{{SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;}}




[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231649#comment-15231649
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/8/16 5:14 AM:
---

I also get strange behaviour on smaller, much more normal tables. For example,

{{SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;}}

works fine from within DevCenter,

but the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

{{SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;}}



was (Author: mattiasw2):
I also get strange behaviour on smaller, much more normal tables. For example,

SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;

works fine from within DevCenter,

but the next one, which hits many more rows, temporarily makes the server 
unavailable and reports "Unable to execute CQL script on 'connection1': 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)". This error message is the 
same as above, except that the server doesn't die.

SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)