[jira] [Updated] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-05-17 Thread Benjamin Lerer (JIRA)

 [ https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Lerer updated CASSANDRA-11528:
---
Assignee: (was: Benjamin Lerer)

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure that did a "select * from" on one table 
> at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, because I was 
> afraid a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs in C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time 
> of the crash.
> It only happens for one table. It has only 15000 entries, but blobs and 
> byte[] are stored there, with sizes between 100 KB and 4 MB. The total size 
> of that table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, so that I 
> at least get a stack trace or something similar that might help you?
> It is the prun_srv process that dies. Restarting the NT service makes 
> Cassandra run again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-08 Thread Aleksey Yeschenko (JIRA)

 [ https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko updated CASSANDRA-11528:
--
Fix Version/s: (was: 3.3)
   3.x




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-07 Thread Mattias W (JIRA)

 [ https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mattias W updated CASSANDRA-11528:
--
Description: 
While implementing a dump procedure that did a "select * from" on one table 
at a time, I instantly killed the server. A simple 
{noformat}select count(*) from {noformat} 
also kills it. For a while, I thought the size of the blobs was the cause.
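
For completeness, one way such a count can be made survivable is to split it 
into token-range slices so that no single scan covers the whole table. A rough 
sketch with the DataStax Java driver, assuming the default Murmur3 partitioner 
(tokens span the signed 64-bit range); {{ks.blobs}} and the partition key 
column {{id}} are placeholder names, and the slice count is arbitrary:
{noformat}
import com.datastax.driver.core.*;
import java.math.BigInteger;

// Count in token slices instead of one full-table scan. Sketch only:
// "ks.blobs" and partition key column "id" are placeholder names.
public class SliceCount {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            PreparedStatement ps = session.prepare(
                "SELECT count(*) FROM ks.blobs WHERE token(id) > ? AND token(id) <= ?");
            int slices = 256;  // arbitrary granularity
            BigInteger min = BigInteger.valueOf(Long.MIN_VALUE);
            BigInteger width = BigInteger.valueOf(Long.MAX_VALUE)
                    .subtract(min).divide(BigInteger.valueOf(slices));
            long total = 0;
            for (int i = 0; i < slices; i++) {
                long lo = min.add(width.multiply(BigInteger.valueOf(i))).longValue();
                long hi = (i == slices - 1) ? Long.MAX_VALUE
                        : min.add(width.multiply(BigInteger.valueOf(i + 1))).longValue();
                // Each slice is a small scan; a row sitting exactly on
                // Long.MIN_VALUE is ignored by this sketch.
                total += session.execute(ps.bind(lo, hi)).one().getLong(0);
            }
            System.out.println("rows: " + total);
        }
    }
}
{noformat}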

I also tried using only a unique id as the partition key, because I was afraid a 
single partition had grown too big, but that didn't change anything.

It happens every time, both from Java/Clojure and from DevCenter.

I looked at the logs in C:\Program Files\DataStax-DDC\logs, but the crash is so 
quick that nothing is recorded there.

There is a Java out-of-memory error in the logs, but it isn't from the time of the 
crash.

It only happens for one table. It has only 15000 entries, but blobs and byte[] are 
stored there, with sizes between 100 KB and 4 MB. The total size of that table is 
about 6.5 GB on disk.

I made a workaround by doing many small selects instead, each fetching only 100 
rows.
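
For reference, the driver can also do that paging by itself via the protocol's 
fetch size, which keeps the server to one page at a time, the same effect as the 
hand-rolled 100-row selects. A minimal sketch with the DataStax Java driver; 
{{ks.blobs}} and its columns are again placeholders:
{noformat}
import com.datastax.driver.core.*;

// Stream the table 100 rows per page; the ResultSet fetches the next page
// transparently while iterating ("ks.blobs" and "id" are placeholder names).
public class PagedDump {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            Statement stmt = new SimpleStatement("SELECT id, data FROM ks.blobs");
            stmt.setFetchSize(100);                 // same page size as the workaround
            for (Row row : session.execute(stmt)) { // pages load lazily here
                System.out.println(row.getObject("id"));  // stand-in for the dump step
            }
        }
    }
}
{noformat}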

Is there a setting I can set to make the system log more eagerly, so that I at 
least get a stack trace or something similar that might help you?
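
In case it helps: assuming it really is the JVM dying (for example on an 
out-of-memory error), HotSpot can be told to leave evidence behind. These are 
standard JVM flags and could go wherever the service picks up its JVM options; 
the paths below are only examples:
{noformat}
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=C:\crashdumps\cassandra.hprof
-XX:ErrorFile=C:\crashdumps\hs_err_%p.log
{noformat}
The heap dump flags cover OOM deaths; -XX:ErrorFile covers hard JVM crashes.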

It is the prun_srv process that dies. Restarting the NT service makes Cassandra run 
again.

  was:
While implementing a dump procedure that did a "select * from" on one table 
at a time, I instantly killed the server. A simple "select count(*) from" also 
kills it. For a while, I thought the size of the blobs was the cause.

I also tried using only a unique id as the partition key, because I was afraid a 
single partition had grown too big, but that didn't change anything.

It happens every time, both from Java/Clojure and from DevCenter.

I looked at the logs in C:\Program Files\DataStax-DDC\logs, but the crash is so 
quick that nothing is recorded there.

There is a Java out-of-memory error in the logs, but it isn't from the time of the 
crash.

It only happens for one table. It has only 15000 entries, but blobs and byte[] are 
stored there, with sizes between 100 KB and 4 MB. The total size of that table is 
about 6.5 GB on disk.

I made a workaround by doing many small selects instead, each fetching only 100 
rows.

Is there a setting I can set to make the system log more eagerly, so that I at 
least get a stack trace or something similar that might help you?

It is the prun_srv process that dies. Restarting the NT service makes Cassandra run 
again.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)