Do we have to set up Reaper on one of the nodes where the Cassandra cluster
is running? We are using a separate node that has connectivity to the
Cassandra cluster.
We have tried the certificate settings in
/usr/local/bin/cassandra-reaper
We have put the following in /usr/local/bin/cassandra
Looks like you're connecting to a service listening on SSL, but you don't
have the CA it uses in your truststore.
On Thu, May 24, 2018 at 1:58 PM, Surbhi Gupta wrote:
Getting the error below:

Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
	at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397)
	at
Another question: we use cqlsh port 9142 in one of the datacenters and port
9042 in the other datacenter.
How should we configure this?
On 24 May 2018 at 10:22, Surbhi Gupta wrote:
What is the impact of PARALLEL (all replicas at the same time)?
Will it make repair faster?
Should we expect more CPU, load, and memory usage with PARALLEL compared to
the other settings?
On 21 May 2018 at 22:55, Alexander Dejanovski wrote:
You won't be able to have fewer segments than vnodes, so just use 256
segments per node, use parallel as the repair parallelism, and set intensity
to 1.
You apparently have more than 3 TB per node, and that kind of density is
always challenging when it comes to running "fast" repairs.
Cheers,
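For context on the parallelism options, they correspond to nodetool repair's modes (flag names as in Cassandra 2.1, which this cluster runs; the keyspace name is a placeholder):

```shell
nodetool repair my_ks          # sequential (2.1 default): snapshot-based, one replica at a time; lowest load, slowest
nodetool repair -par my_ks     # parallel: all replicas repair a range simultaneously; faster, but more CPU/IO on every replica at once
nodetool repair -dcpar my_ks   # datacenter-aware: sequential within each DC, datacenters proceed in parallel
```

So yes, PARALLEL generally finishes sooner, at the cost of load hitting all replicas of a range at the same time instead of being spread out.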
We are on DSE 4.8.15, which is Cassandra 2.1.
What is the best configuration to use for Reaper for 144 nodes with 256
vnodes? It shows around 532 TB of data when we start OpsCenter repairs.
We need to finish repair soon.
On Mon, May 21, 2018 at 10:53 AM Alexander Dejanovski wrote:
Hi Surbhi,
Reaper might indeed be your best chance to reduce the overhead of vnodes
there.
The latest betas include a new feature that groups vnodes sharing the
same replicas into the same segment. This allows having fewer segments
than vnodes, and it is available with Cassandra 2.2 and onwards.
Thanks Abdul
On Mon, May 21, 2018 at 6:28 AM Abdul Patel wrote:
We have a parameter in the Reaper yaml file called
repairManagerSchedulingIntervalSeconds; the default is 10 seconds. I tested
with 8, 6, and 5 seconds and found 5 seconds optimal for my environment. You
can go lower, but it will have cascading effects on CPU and memory
consumption.
So test well.
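The setting described above lives in Reaper's cassandra-reaper.yaml; a minimal excerpt (the 5-second value is what worked in Abdul's environment, not a general recommendation):

```yaml
# cassandra-reaper.yaml (excerpt)
# Default is 10; lower values schedule repair segments more aggressively
# at the cost of higher CPU and memory consumption on the Reaper node.
repairManagerSchedulingIntervalSeconds: 5
```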
Thanks a lot for your inputs.
Abdul, how did you tune Reaper?
On Sun, May 20, 2018 at 10:10 AM Jonathan Haddad wrote:
FWIW the largest deployment I know about is a single reaper instance
managing 50 clusters and over 2000 nodes.
There might be bigger, but I either don’t know about it or can’t remember.
On Sun, May 20, 2018 at 10:04 AM Abdul Patel wrote:
Hi,
I recently tested Reaper and it actually helped us a lot. Even with our
small footprint of 18 nodes, Reaper takes close to 6 hours, but it really
depends on the number of nodes. For example, if you have 4 nodes then it
runs on 4 * 256 = 1024 segments, so for your environment it will be
256 * 144, close to 36k segments.
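The segment arithmetic above checks out (total segments = vnodes per node × node count, before any grouping of vnodes that share replicas):

```shell
# Total repair segments = vnodes per node * node count
echo $((256 * 4))     # small 4-node example -> 1024
echo $((256 * 144))   # this 144-node cluster -> 36864, i.e. ~36k
```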
Hi,
We have a cluster with 144 nodes (3 datacenters) with 256 vnodes.
When we tried to start repairs from OpsCenter, it showed 1.9 million
ranges to repair.
And even after setting compaction throughput and stream throughput to 0,
OpsCenter is not able to help us finish repair in a 9-day timeframe.