and the O/S buffer cache, because writes to disk pass through the buffer cache first.
From: Aaron Ploetz
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, June 2, 2020 at 9:38 AM
To: "user@cassandra.apache.org"
Subject: Re: Cassandra crashes when using offheap_objects for memtables
primary key ((partition_key, clustering_key))
Also, this primary key definition does not define a partitioning key and a
clustering key. It defines a *composite* partition key.
If you want it to define both a partition key and a clustering key, get rid
of one set of parens:
primary key (partition_key, clustering_key)
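Spelled out, the two forms behave differently (column names are the placeholders from the example above):

```sql
-- One set of parens around both columns: a COMPOSITE PARTITION key.
-- Both values are required to locate a partition; no ordering within it.
PRIMARY KEY ((partition_key, clustering_key))

-- Partition key plus clustering key: rows inside each partition are
-- stored sorted by clustering_key, so range queries on it work.
PRIMARY KEY (partition_key, clustering_key)
```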
I would try running it with memtable_offheap_space_in_mb at the default for
sure, but definitely lower than 8GB. With 32GB of RAM, you're already
allocating half of that for your heap, and then halving the remainder for
off-heap memtables. What's left may not be enough for the OS, etc. Giving
I just changed these properties to increase flushed file size (decrease number
of compactions):
memtable_allocation_type from heap_buffers to offheap_objects
memtable_offheap_space_in_mb: from default (2048) to 8192
Using default values for all other memtable/compaction/commitlog configurations.
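In cassandra.yaml terms, the change described above is (values copied from the message, not a recommendation):

```yaml
# cassandra.yaml
memtable_allocation_type: offheap_objects   # was: heap_buffers
memtable_offheap_space_in_mb: 8192          # was: 2048 (the default)
```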
>> info map,
>> creationtimestamp bigint,
>> lastupdatedtimestamp bigint,
>> PRIMARY KEY ( (id) )
>> );
>>
>> CREATE INDEX ON message ( hash );
>> -
>> Cassandra crashes when I load data using sstableloader.
Cassandra crashes when I load data using sstableloader. The load itself
completes correctly, but Cassandra seems to crash when it tries to build the
index on a table with a lot of data.
I have two questions:
1. Is there a better way to clone a keyspace?
2. How can I optimize
User <user@cassandra.apache.org>
Subject: Re: Cassandra crashes
sounds like Cassandra is being killed by the oom killer. can you check dmesg to
see if this is the case? sounds a bit absurd with 256g of memory but could be a
config problem.
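A minimal way to do that check; the sample line below mimics a typical kernel OOM-kill entry (illustrative, not taken from this thread):

```shell
# What an OOM-killer entry looks like, and the grep that finds it.
# In practice run:  dmesg -T | grep -iE 'out of memory|killed process'
sample='Out of memory: Kill process 12345 (java) score 987 or sacrifice child'
printf '%s\n' "$sample" | grep -iE 'out of memory|kill(ed)? process'
```

On systemd machines the same evidence also lands in the journal (`journalctl -k`).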
So the reason for the large number of prepared statements is the nature of
the application.
One of the periodic jobs does lookups with a partial key (a key prefix, not
filtered queries) for thousands of rows.
Hence the large number of prepared statements.
Almost all of the queries once
On 08/22/2017 05:39 PM, Thakrar, Jayesh wrote:
Surbhi and Fay,
I agree we have plenty of RAM to spare.
Hi
At the very beginning of system.log there is:
INFO [CompactionExecutor:487] 2017-08-21 23:21:01,684 NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot allocate
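For what it's worth, that 512MiB figure is the default size of the off-heap buffer pool Cassandra 3.x uses for reads, governed by file_cache_size_in_mb in cassandra.yaml; the message means that pool is exhausted, not that the node itself is out of memory. A sketch of raising it (the value is illustrative):

```yaml
# cassandra.yaml — buffer pool behind the "Maximum memory usage reached" message
file_cache_size_in_mb: 1024   # default is 512
```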
"Thakrar, Jayesh" <jthak...@conversantmedia.com>
Cc: "user@cassandra.apache.org" <user@cassandra.apache.org>, Surbhi Gupta
<surbhi.gupt...@gmail.com>
Subject: Re: Cassandra crashes
What kind of compaction? LCS?
On Aug 22, 2017 8:39 AM, "Thakrar, Jayesh" <jthak...@conversantmedia.com> wrote:
Check the GC logs for the word "stopped"
(e.g. grep stopped cassandra-gc.log.*)
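A sketch of that grep; the sample line mimics the stop-the-world summary the JVM writes to its GC log (illustrative, not from this thread):

```shell
# Long "application threads were stopped" pauses point at GC trouble.
# In practice:  grep stopped /var/log/cassandra/gc.log.*  (or cassandra-gc.log.*)
gc_sample='Total time for which application threads were stopped: 4.0210 seconds'
printf '%s\n' "$gc_sample" | grep -c stopped   # prints: 1
```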
Thanks for the quick replies!
Jayesh
*From: *Surbhi Gupta <surbhi.gupt...@gmail.com>
*Date: *Tuesday, August 22, 2017 at 10:19 AM
*To: *"Thakrar, Jayesh" <jthak...@conversantmedia.com>, "user@cassandra.apache.org" <user@cassandra.apache.org>
16GB heap is too small for G1GC. Try at least 32GB of heap size.
On Tue, Aug 22, 2017 at 7:58 AM Fay Hou [Storage Service] <
fay...@coupang.com> wrote:
What errors do you see?
16GB of 256GB: the heap is too small. I would give the heap at least 160GB.
On Aug 22, 2017 7:42 AM, "Thakrar, Jayesh"
wrote:
Hi All,
We are somewhat new users to Cassandra 3.10 on Linux and wanted to ping the
user group for their experiences.
You typically don't want to set the eden space when you're using G1
--
Jeff Jirsa
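A sketch of what that advice looks like in Cassandra 3.x's conf/jvm.options (values are illustrative starting points, not this thread's settings):

```
# conf/jvm.options
-Xms16G            # fixed heap: set min and max equal
-Xmx16G
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
# Note: no -Xmn / eden setting here on purpose. G1 sizes its own
# young generation, which is the point being made above.
```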
Hi All,
We are somewhat new users to Cassandra 3.10 on Linux and wanted to ping the
user group for their experiences.
Our usage profile is batch jobs that load millions of rows into Cassandra every
hour.
And there are similar periodic batch jobs that read millions of rows and do some
processing,
It could be the Linux kernel killing Cassandra because of memory usage. When
this happens, nothing is logged by Cassandra. Check the system
logs (/var/log/messages) for a message saying "Out of Memory... kill
process...".
On Mon, Jun 8, 2015 at 1:37 PM, Paulo Motta pauloricard...@gmail.com
wrote:
Try checking your system logs (generally /var/log/syslog) to see if the
cassandra process was killed by the OS oom-killer.
2015-06-06 15:39 GMT-03:00 Brian Sam-Bodden bsbod...@integrallis.com:
Berk,
1 GB is not enough to run C*, the minimum memory we use on Digital
Ocean is 4GB.
Cheers,
Hi all,
I've installed Cassandra on a test server hosted on Digital Ocean. The server
has 1GB RAM, and is running a single docker container alongside C*. Somehow,
every night, the Cassandra instance crashes. The annoying part is that I cannot
see anything wrong with the log files, so I can't
Berk,
1 GB is not enough to run C*, the minimum memory we use on Digital Ocean
is 4GB.
Cheers,
Brian
http://integrallis.com
Check your file limits -
http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html?pagename=docs&version=1.2&file=#cassandra/troubleshooting/trblshootInsufficientResources_r.html
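A quick way to see the limits in effect (the pgrep pattern below is an assumption; match it to how your Cassandra process is named):

```shell
# Limits of the current shell; the one that matters most here is max open files.
ulimit -n
# Limits the running Cassandra JVM actually inherited (hypothetical pattern):
#   cat /proc/"$(pgrep -f CassandraDaemon)"/limits | grep -i 'open files'
```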
On Friday, September 6, 2013, Jan Algermissen wrote:
On 06.09.2013, at 13:12, Alex Major
Hi John,
On 10.09.2013, at 01:06, John Sanda john.sa...@gmail.com wrote:
Check your file limits -
http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html?pagename=docs&version=1.2&file=#cassandra/troubleshooting/trblshootInsufficientResources_r.html
Did that already - without
Have you changed the appropriate config settings so that Cassandra will run
with only 2GB RAM? You shouldn't find the nodes go down.
Check out this blog post:
http://www.opensourceconnections.com/2013/08/31/building-the-perfect-cassandra-test-environment/
It outlines the configuration settings
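The gist of tuning for a box that small is capping the heap explicitly instead of letting the auto-sizing guess; in conf/cassandra-env.sh that looks like (values illustrative, not necessarily the blog post's):

```
# conf/cassandra-env.sh — on a 2GB node, don't let auto-sizing decide
MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"
```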
Hi,
I have set up C* in a very limited environment: 3 VMs at DigitalOcean with 2GB
RAM and 40GB SSDs, so my expectations about overall performance are low.
The keyspace uses a replication factor of 2.
I am loading 1.5 million rows (each 60 columns of a mix of numbers and small texts,
300,000 wide rows
Hi,
I've moved my cassandra to another machine, started it up again, but got
this error
INFO 22:06:28,931 Replaying /var/lib/cassandra/commitlog/CommitLog-1279609619367.log, /var/lib/cassandra/commitlog/CommitLog-1279805020866.log, /var/lib/cassandra/commitlog/CommitLog-1279840051243.log
I've moved my cassandra to another machine, started it up again, but got
this error
Which version of Cassandra exactly? (So that one can look at matching
source code)
Also, were you running the exact same version of Cassandra on both
servers (i.e., both the source and the destination)?
Was
Hi,
I'm sorry for the lack of information.
I'm using 0.6.3.
The move was moving the data dir and the commitlog dir.
But I have now removed them and let the system bootstrap from the ring.
I know I'm lacking information here, but I thought it needed to be mentioned
here that this could happen.
Pieter