Hi Peter, are you using the hsha RPC server type on this node? If so, it
looks like rpc_max_threads threads are allocated up front on startup in 2.0.11,
which wasn't the case before. This can exhaust your heap if the value of
rpc_max_threads is too large (e.g. if you use the default, which is very large).
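For reference, the settings involved live in cassandra.yaml. A fragment like the following (the value shown is illustrative, not a recommendation) would exhibit the behaviour described, since hsha on 2.0.11 pre-allocates the whole worker pool:

```yaml
# cassandra.yaml (fragment) -- illustrative values only
rpc_server_type: hsha   # half-sync/half-async Thrift server
# With hsha on 2.0.11 this many worker threads are allocated at
# startup; leaving it at the (very large) default can exhaust the heap.
rpc_max_threads: 128
```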
That definitely appears to be the issue. Thanks for pointing that out!
https://issues.apache.org/jira/browse/CASSANDRA-8116
It looks like 2.0.12 will check for the default and throw an exception
(thanks, Mike Adamson) and also includes a bit more text in the config
file, but I'm thinking that
Thank you, Bryan and Mark. I have redesigned my schema in such a way that I only
have 50 CFs, and I've given 2 GB to the heap space; now it's working fine.
From: Mark Reddy [mailto:mark.l.re...@gmail.com]
Sent: Tuesday, 28 October 2014 18:31
To: user@cassandra.apache.org
Subject: Re:
Have a cassandra cluster that has been running fine under 2.1.0. Rebuilt
the cluster using the same settings on 2.1.1 and get this error, even
with only one node present:
Non-system keyspaces don't have the same replication settings,
effective ownership information is meaningless
Some ideas:
1) Turn on DEBUG logging on the joining node to see in detail what is going on
with the stream of 1500 files
2) Check the stream ID to see whether it's a new stream or an old one still
pending
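For step 1, on 2.1 the logging config is conf/logback.xml; a minimal sketch of scoping DEBUG to the streaming package (rather than turning the whole server to DEBUG) might look like:

```xml
<!-- conf/logback.xml (fragment): enable DEBUG only for streaming -->
<logger name="org.apache.cassandra.streaming" level="DEBUG"/>
```

On 2.1 you may also be able to change the level at runtime with `nodetool setlogginglevel org.apache.cassandra.streaming DEBUG`, without a restart.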
On Wed, Oct 29, 2014 at 2:21 AM, Maxime maxim...@gmail.com wrote:
Doan, thanks for the tip, I
On 10/29/2014 02:05 PM, James Derieg wrote:
Ah, thanks Michael! Good to know that's not an error.
On 10/29/2014 1:25 PM, Michael Shuler wrote:
I have a 5-node cassandra cluster and I commissioned 1 new node to the
cluster. When I added the node, it received streams from 3 nodes, out of which 2
completed successfully and one stream failed. How can I resume the
stream that failed?
On Wed, Oct 29, 2014 at 12:49 PM, venkat sam samvenkat...@outlook.com
wrote:
Hi,
Looking at the docs, the default value for concurrent_reads is 32, which
seems a bit small to me (compared to, say, an HTTP server), because if my node
receives slightly more traffic, any more than 32 concurrent read queries will
have to wait(?)
The recommended rule is 16 × the number of drives. Would that be
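The 16 × drives rule of thumb mentioned above can be sketched as follows; the function name and the idea of flooring at the shipped default of 32 are my own framing, not from the thread:

```python
def recommended_concurrent_reads(num_data_drives, floor=32):
    """Rule of thumb from the thread: 16 read threads per data drive.

    The floor at 32 (Cassandra's shipped default) is an assumption here,
    so a single-drive node isn't tuned below the out-of-the-box value.
    """
    return max(floor, 16 * num_data_drives)

print(recommended_concurrent_reads(2))  # 2 drives -> 32 (the default)
print(recommended_concurrent_reads(4))  # 4 drives -> 64
```

As the reply below notes, the default is fine for most systems; raising it mainly pays off on SSDs or other high-IOPS storage.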
There's a bit to it; sometimes it can use tweaking, though. It's a good
default for most systems, so I wouldn't increase it right off the bat. When
using SSDs or something with a lot of horsepower it could be higher, though
(e.g. i2.xlarge+ on EC2). If you monitor the number of active threads in
read
What could be the reasons for the stream error, other than SSTable corruption?
Aravind
----- Robert Coli rc...@eventbrite.com wrote: -----
To: user@cassandra.apache.org
From: Robert Coli rc...@eventbrite.com
Date: 10/30/2014 02:21AM
Subject: Re: Commissioning failure