2017-05-11 22:29 GMT+01:00 Daniel Steuernol <dan...@sendwithus.com>:
Thank you, it's an out-of-memory crash according to dmesg. I have the heap size set to 15G in jvm.options for Cassandra, and there is 30G of memory on the machine.
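For reference, confirming that the kernel's OOM killer took the process is a one-line grep against dmesg. A sketch — the sample lines below are invented to mimic typical kernel OOM-killer output so the command can be demonstrated end-to-end; on a live node you would grep dmesg directly:

```shell
# On a live node: dmesg -T | grep -iE 'out of memory|killed process'
# Sample (invented) kernel lines so the grep can be shown working:
cat > /tmp/dmesg-sample.txt <<'EOF'
[Thu May 11 22:10:03 2017] Out of memory: Kill process 4321 (java) score 905 or sacrifice child
[Thu May 11 22:10:03 2017] Killed process 4321 (java) total-vm:31457280kB, anon-rss:15728640kB
EOF
# -i: case-insensitive, -c: count matching lines
grep -icE 'out of memory|killed process' /tmp/dmesg-sample.txt
```

If the grep matches, the JVM was killed by the kernel rather than crashing on its own, which points at total memory pressure on the box (heap plus off-heap plus everything else) rather than a Cassandra bug.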
Was it a drain or shutdown, or is something pre-empting the Cassandra process?

On Thu, May 11, 2017 at 1:30 PM, Daniel Steuernol <dan...@sendwithus.com> wrote:
I have a 6 node cassandra cluster running, and frequently a node will go down with no obvious error in the logs. This is starting to happen quite often, almost daily now. Any suggestions on how to track down what is causing the node to stop?
Have a look at dmesg. That has already happened to me with i-type instances on AWS.
On 11-05-2017 22:17, Daniel Steuernol wrote:
I had 2 nodes go down today; here are the ERRORs from the system log on both:
Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> wrote:
Can you grep for ERROR in system.log?
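Spelled out, that suggestion is just the following. A sketch — /var/log/cassandra/system.log is the default path on package installs (adjust for your setup), and the sample lines are invented so the command can be demonstrated:

```shell
# On a live node: grep -n 'ERROR' /var/log/cassandra/system.log | tail -50
# Sample (invented) system.log lines so the grep can be shown working:
cat > /tmp/system.log <<'EOF'
INFO  [main] 2017-05-11 21:50:01,123 StorageService.java - Starting up
ERROR [ReadStage-2] 2017-05-11 21:51:07,456 JVMStabilityInspector.java - OutOfMemoryError
INFO  [CompactionExecutor] 2017-05-11 21:52:00,789 CompactionTask.java - Compacted 4 sstables
EOF
grep -c 'ERROR' /tmp/system.log
```

Checking WARN lines as well can be worthwhile, since memory-pressure symptoms (long GC pauses, large partitions) often log at WARN before anything reaches ERROR.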
On 11-05-2017 21:52, Daniel Steuernol wrote:
There is nothing in the system log about it being drained or shut down.
How much data do you store per node, and what kind of servers do you use (core count, RAM, disk, ...)?

Cheers,
Tommaso

On Mon, May 29, 2017 at 6:22 PM, Daniel Steuernol <dan...@sendwithus.com> wrote:
I am running a 6 node cluster, and I have noticed that the reported load on each node rises throughout the week and grows well past the actual disk space used and available on each node. Eventually, latency for operations suffers and the nodes have to be restarted. A couple of questions on this:
On May 30, 2017 at 10:36 PM, Daniel Steuernol <dan...@sendwithus.com> wrote:
I don't believe incremental repair is enabled; I have never enabled it on the cluster, and unless it's the default, it is off. I also don't see a setting for it in cassandra.yaml.
Maybe you have incremental backup enabled and snapshots are occupying the space; run the nodetool clearsnapshot command.

On Tue, May 30, 2017 at 11:12 AM, Daniel Steuernol <dan...@sendwithus.com> wrote:
It's 3-4 TB per node, and by load rising I mean load as reported by nodetool status.
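If snapshots are the culprit, their footprint can be measured before clearing them. A sketch assuming the default data layout /var/lib/cassandra/data/&lt;keyspace&gt;/&lt;table&gt;/snapshots/ — the mock directory built here exists only so the pipeline can be demonstrated:

```shell
# Build a mock data directory; on a real node, point find at
# /var/lib/cassandra/data instead of /tmp/cassdata.
mkdir -p /tmp/cassdata/ks1/events-abc123/snapshots/1496160000
dd if=/dev/zero of=/tmp/cassdata/ks1/events-abc123/snapshots/1496160000/mc-1-big-Data.db \
   bs=1024 count=16 2>/dev/null
# Disk used by every snapshots directory, in KB:
find /tmp/cassdata -type d -name snapshots -exec du -sk {} +
# nodetool listsnapshots   # per-snapshot sizes (Cassandra 3.x)
# nodetool clearsnapshot   # removes all snapshots on the node
```

Snapshots are hard links to live SSTables, so they cost nothing at creation but pin disk space as compaction replaces the original files — which matches the "load grows past actual usage" symptom.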
On May 30 2017, at 10:25 am, daemeon reiydelle <daeme...@g> wrote:
On Tue, May 30, 2017 at 1:36 PM, Daniel Steuernol <dan...@sendwithus.com> wrote:
I revisit Tobey's tuning guide frequently, if nothing else for the tools it mentions and its notes on the Java GC. I want to say a heap size of 15G sounds a little high, but I am starting to talk a bit out of my depth when it comes to Java tuning; see DataStax's official Cassandra 2.1 JVM tuning doc.
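For what it's worth, on Cassandra 3.x the heap is set in conf/jvm.options, and a commonly cited starting point with the default CMS collector is a heap of 8G or less. A sketch — these values are illustrative, not a recommendation for this particular cluster:

```
# conf/jvm.options (Cassandra 3.x) -- illustrative values only
-Xms8G    # min heap; set equal to -Xmx to avoid resize pauses
-Xmx8G    # max heap; CMS pause times tend to degrade above ~8G
```

With 30G on the machine, a smaller heap also leaves more room for off-heap structures and the OS page cache, and less for the OOM killer to object to.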
On Thu, Jun 1, 2017 at 7:18 AM, Daniel Steuernol <dan...@sendwithus.com> wrote:
I am just restarting Cassandra. I don't think I'm having any disk space issues, but we're seeing increased latency on operations, and a restart fixes it.
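Since a restart clears it up, long GC pauses would be the first thing to rule out; Cassandra's GCInspector logs pause durations to system.log. A sketch — the sample lines are invented and the 1000 ms threshold is arbitrary:

```shell
# On a live node: grep 'GCInspector' /var/log/cassandra/system.log
# Sample (invented) GCInspector lines so the filter can be demonstrated:
cat > /tmp/gc-sample.log <<'EOF'
INFO  [Service Thread] 2017-06-01 07:01:02,000 GCInspector.java:284 - ParNew GC in 212ms.  CMS Old Gen: 1073741824 -> 805306368
INFO  [Service Thread] 2017-06-01 07:05:09,000 GCInspector.java:284 - ConcurrentMarkSweep GC in 4123ms.  CMS Old Gen: 12884901888 -> 2147483648
EOF
# Extract pause durations (ms) and keep only those over 1 second:
grep 'GCInspector' /tmp/gc-sample.log \
  | sed -E 's/.*GC in ([0-9]+)ms.*/\1/' \
  | awk '$1 > 1000'
```

If multi-second ConcurrentMarkSweep pauses show up and grow over the week, that would explain latency that climbs until a restart resets the heap.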
3-4 TB per node or in total?

On Tue, May 30, 2017 at 6:48 PM, Daniel Steuernol <dan...@sendwithus.com> wrote:
I should also mention that I am running Cassandra 3.10 on the cluster.
Whether it is replacing a down node or inserting a new node, having a large amount of data on each node will mean that it takes longer for a node to join the cluster when it is streaming that data.

Kind regards,
Anthony

On 30 May 2017 at 02:43, Daniel Steuernol <dan...@sendwithus.com> wrote:
I should also mention that I am running cassandra 3.10 on the cluster
On May 29 2017, at 9:43 am, Daniel Steuernol <dan...@sendwithus.com> wrote:
The cluster is running with RF=3; right now each node is storing about 3-4 TB.