Apart from all that, you can try to reduce the compression chunk size from
the default 64 KB to 16 KB or even down to 4 KB. This can help a lot if your
read I/O on disk is very high and the page cache is not efficient.
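For reference, the chunk size is a per-table compression option; a sketch with made-up keyspace/table names (this is the Cassandra 3.x syntax; 2.x uses 'sstable_compression' and 'chunk_length_kb' instead):

```sql
-- chunk_length_in_kb is the knob discussed above (default 64).
-- Smaller chunks reduce read amplification on random reads, at the
-- cost of a somewhat worse compression ratio.
ALTER TABLE my_ks.my_table
  WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 16};
```

The change only applies to newly written SSTables, so existing data keeps the old chunk size until it is compacted or rewritten.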
On 21.07.2017 23:03, "Petrus Gomes" wrote:
> Thanks a lot to
Haven't checked the code, but I'm pretty sure it's because it will always use
the known state stored in the system tables. The seeds in the yaml are
mostly for initial setup, used to discover the rest of the nodes in the
ring.
Once that's done, there is little reason to refer to them again, unless
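For context, the seed list being discussed lives in cassandra.yaml under the seed provider (the addresses here are placeholders):

```yaml
# cassandra.yaml -- seeds are only contacted to bootstrap gossip;
# after that, cluster membership comes from gossip and the system tables.
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"
```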
Hi Asad,
The 5000 ms is not configurable
(https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/net/MessagingService.java#L423).
Thanks a lot for sharing the result.
Good luck.
;-)
Take care.
Petris Silva
On Fri, Jul 21, 2017 at 12:19 PM, Felipe Esteves <
felipe.este...@b2wdigital.com> wrote:
> Hi, Petrus,
>
> Seems we've solved the problem, but it wasn't related to repairing the
> cluster or disk latency.
> I've increased
Hi, Petrus,
Seems we've solved the problem, but it wasn't related to repairing the
cluster or disk latency.
I've increased the memory available for Cassandra from 16GB to 24GB and the
performance was much improved!
The main symptom we've observed in OpsCenter was a significant decrease
in total
SimpleStrategy doesn’t take DC or rack into account at all. It simply places
replicas on subsequent tokens. You could end up with 3 copies in 1 DC and zero
in another.
/**
 * This class returns the nodes responsible for a given
 * key but does not respect rack awareness. Basically
 * returns the RF nodes that lie right next to each other
 * on the ring.
 */
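To illustrate the point above, here is a toy model (not Cassandra's actual code): SimpleStrategy just walks the token ring clockwise from the key's token and takes the next RF distinct nodes, so if several adjacent tokens belong to one DC, all replicas land there:

```python
# Toy model of SimpleStrategy placement -- NOT Cassandra's actual code.
# It walks the token ring clockwise from the key's token and takes the
# next rf distinct nodes, ignoring datacenters entirely.

def simple_strategy_replicas(ring, key_token, rf):
    """ring: list of (token, node) sorted by token; returns rf nodes clockwise."""
    start = 0  # wrap to the first node if key_token is past the last token
    for i, (token, _) in enumerate(ring):
        if token >= key_token:
            start = i
            break
    replicas = []
    i = start
    while len(replicas) < rf and len(replicas) < len(ring):
        node = ring[i % len(ring)][1]
        if node not in replicas:
            replicas.append(node)
        i += 1
    return replicas

# Two DCs, but three dc1 nodes happen to own adjacent tokens:
ring = [(0, "dc1-a"), (10, "dc1-b"), (20, "dc1-c"), (30, "dc2-a"), (40, "dc2-b")]
print(simple_strategy_replicas(ring, 0, 3))  # ['dc1-a', 'dc1-b', 'dc1-c']
```

With RF=3 and that token layout, all three replicas of the key end up in dc1 and none in dc2, which is exactly the failure mode described above.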
> If using the SimpleStrategy replication class, it appears that
> replication_factor is the only option, which applies to the entire
> cluster, so only one node in both datacenters would have the data.
This runs counter to my understanding, or else I'm not reading your
statement correctly. When
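For the multi-DC case under discussion, NetworkTopologyStrategy is the usual fix, since it sets a replication factor per datacenter rather than one cluster-wide number; a sketch with made-up keyspace and DC names:

```sql
-- With SimpleStrategy, 'replication_factor' is the only option and
-- applies to the whole cluster; NetworkTopologyStrategy instead pins
-- a replica count to each named datacenter.
CREATE KEYSPACE my_ks
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'dc1': 3, 'dc2': 3};
```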
Hi Asad,
You can increase it by 2 at a time. For example, if it's currently 2, try
increasing it to 4 and retest.
We flush 5-6 tables at a time and use 3 memtable_flush_writers. It works
great!! There were dropped mutations when it was set to one. The idea is to
make sure that writes are not
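For reference, the setting under discussion lives in cassandra.yaml:

```yaml
# cassandra.yaml -- number of threads flushing memtables to disk.
# Per the advice above: raise it in steps of 2 and retest, rather
# than jumping straight to a large value.
memtable_flush_writers: 3
```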
On 2017-07-21 06:41 (-0700), Jan Algermissen wrote:
>
> IOW, suppose I
>
> - have a cluster spanning geographic regions
> - restrict the CAS queries to key spaces that are only replicated in a
> single region and I use LOCAL_SERIAL CL
>
> would 100 CAS queries
I have a question.
When I change the list of seeds of my cluster and activate Cassandra's
TRACE logging with
nodetool setlogginglevel org.apache.cassandra.gms.Gossiper TRACE
I can see that the Gossip Digest does not change at all and keeps the
previous seed list. I was surprised to
Hi Akhil,
Thank you for your reply. Previously, I did ‘tune’ various timeouts – basically
increased them a bit – but none of the parameters listed in the link matches
that “were dropped in last 5000 ms” message.
I was wondering where that [5000 ms] number is coming from when, like I
mentioned
Thanks for your reply, Subroto – I’ll try your suggestions to see if they help.
I’ll revert with results.
From: Subroto Barua [mailto:sbarua...@yahoo.com.INVALID]
Sent: Thursday, July 20, 2017 12:22 PM
To: user@cassandra.apache.org
Subject: Re: MUTATION messages were dropped in last 5000 ms for
Thank you for your reply. I’ll increase memtable_flush_writers and report back
if it helps.
Is there any formula we can use to arrive at the correct number of
memtable_flush_writers? Or would the exercise wind up being “trial and error”,
taking much time to arrive at some number that may not be
Hi,
I just read [1] which describes a lease implementation using CAS
queries. It applies a TTL to the lease which needs to be refreshed
periodically by the lease holder.
I have been using such a pattern myself for a couple of years, so no surprise
there.
However, the article uses CAS queries not
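The lease pattern described can be sketched in CQL (the table and values here are illustrative, not taken from the article):

```sql
-- Acquire the lease atomically; the row disappears on its own if the
-- holder stops refreshing it before the TTL expires.
INSERT INTO leases (name, owner)
  VALUES ('my-lease', 'node-a')
  IF NOT EXISTS
  USING TTL 30;

-- Periodic refresh by the current holder, again as a CAS query so a
-- node that has lost the lease cannot silently re-extend it.
UPDATE leases USING TTL 30
  SET owner = 'node-a'
  WHERE name = 'my-lease'
  IF owner = 'node-a';
```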
On Thursday 20 of July 2017 23:27:55 Jerry wrote:
> In general it seems that repairs should be done prior to every upgrade (in
> fact they should be done at least weekly), but with a minor upgrade like
> this is it safe to upgrade without first repairing?
Depends on the meaning of "is it safe" ...
In general it seems that repairs should be done prior to every upgrade (in
fact they should be done at least weekly), but with a minor upgrade like
this is it safe to upgrade without first repairing?
Specifically, I'm looking to upgrade from Cassandra 2.0.11 to 2.0.17.