[
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231649#comment-15231649
]
Mattias W edited comment on CASSANDRA-11528 at 4/8/16 5:17 AM:
---------------------------------------------------------------
I also get strange behaviour on smaller and much more normal tables. For example:
{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = true LIMIT 100 ALLOW FILTERING;
{noformat}
works fine from within DevCenter,
but the next one, which hits many more rows, temporarily makes the server
unavailable and reports "Unable to execute CQL script on 'connection1':
Cassandra failure during read query at consistency ONE (1 responses were
required but only 0 replica responded, 1 failed)". The error message is the
same as above, except that here the server doesn't die.
{noformat}
SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;
{noformat}
No new entries are made in the log file.
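
For reference, a minimal Java sketch of running the failing query through the DataStax Java driver (driver 3.x assumed; the contact point and keyspace name below are placeholders, not my real setup). The "Cassandra failure during read query at consistency ONE" message quoted above corresponds to the driver's ReadFailureException:
{noformat}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.ReadFailureException;

public class ReproduceCount {
    public static void main(String[] args) {
        // "127.0.0.1" and "mykeyspace" are placeholders for a local test setup.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            try {
                ResultSet rs = session.execute(
                        "SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING");
                System.out.println("count = " + rs.one().getLong(0));
            } catch (ReadFailureException e) {
                // The "Cassandra failure during read query at consistency ONE ..."
                // message surfaces here on the client side.
                System.err.println("read failed: " + e.getMessage());
            }
        }
    }
}
{noformat}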
> Server Crash when select returns more than a few hundred rows
> -------------------------------------------------------------
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: windows 7, 8 GB machine
> Reporter: Mattias W
> Fix For: 3.3
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one table
> at a time, I instantly kill the server. A simple
> {noformat}select count(*) from {noformat}
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried having only a unique id as the partition key, because I was afraid
> a single partition had become too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time of
> the crash.
> It only happens for one table. It has only 15000 entries, but blobs and byte[]
> values are stored there, with sizes between 100 kB and 4 MB. The total size for
> that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only
> 100 rows (see the paging sketch right after this description).
> Is there a setting I can set to make the system log more eagerly, in order to
> at least get a stack trace or something similar that might help you?
> It is the prun_srv process that dies. Restarting the NT service makes Cassandra
> run again.
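
For the workaround mentioned above (many small selects of 100 rows each), here is a minimal Java sketch under the same assumptions as before (DataStax Java driver 3.x, placeholder contact point and keyspace). It uses the driver's fetch size instead of hand-written LIMIT queries, which gives the same effect of pulling only 100 rows at a time:
{noformat}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagedDump {
    public static void main(String[] args) {
        // Placeholder contact point and keyspace; "usr" is the table from the report.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            Statement stmt = new SimpleStatement("SELECT * FROM usr");
            stmt.setFetchSize(100); // ask the server for at most 100 rows per page
            long count = 0;
            for (Row row : session.execute(stmt)) { // iterating fetches the next page on demand
                count++; // a dump procedure would write the row out here instead
            }
            System.out.println("rows: " + count);
        }
    }
}
{noformat}
Driver-side paging keeps each individual read small, which matches the observation that many small selects succeed where one big select does not.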