For write threads, check "nodetool tpstats"
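As a rough illustration, the Pending and Blocked columns of `nodetool tpstats` output can be scanned with a short script. The sample output below is invented for illustration; in practice you would feed in the real command output:

```python
# Sketch: flag thread pools with pending or blocked work in `nodetool tpstats`
# output. The sample text below is illustrative, not from a real node.
sample = """\
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                     4       127       98765432         0                 0
ReadStage                         0         0       12345678         0                 0
CompactionExecutor                2         9          45678         0                 0
"""

def busy_pools(tpstats_text, threshold=0):
    """Return (pool, pending, blocked) for pools exceeding the threshold."""
    hot = []
    for line in tpstats_text.splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) < 6:
            continue
        pool, pending, blocked = parts[0], int(parts[2]), int(parts[4])
        if pending > threshold or blocked > threshold:
            hot.append((pool, pending, blocked))
    return hot

print(busy_pools(sample))  # [('MutationStage', 127, 0), ('CompactionExecutor', 9, 0)]
```

A consistently high Pending count on MutationStage is one sign that writes are arriving faster than the write threads can drain them.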
Are you loading the data serially, i.e. one query at a time? If so
(and if you have no clear resource bottlenecks), you're probably going to
want to add some concurrency to the process. Break the data up into
smaller chunks and have several workers insert them in parallel.
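A minimal sketch of that chunked, concurrent approach, using only the standard library; `write_chunk` is a hypothetical stand-in for the real insert logic (in an actual loader it would execute a prepared INSERT via the Cassandra driver for each row in the chunk):

```python
# Sketch: split rows into chunks and load them with a pool of worker threads.
# `write_chunk` is a hypothetical callable supplied by the caller; here it
# only needs to return how many rows it wrote.
from concurrent.futures import ThreadPoolExecutor

def chunked(rows, size):
    """Yield successive chunks of `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def load_concurrently(rows, write_chunk, chunk_size=500, workers=8):
    """Write all chunks in parallel; return the total number of rows written."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(write_chunk, chunked(rows, chunk_size)))

# Example with a no-op writer standing in for the real insert:
written = load_concurrently(list(range(50_000)), lambda chunk: len(chunk))
print(written)  # 50000
```

Chunk size and worker count are tuning knobs: too few workers leaves the cluster idle, too many just queues up work on the coordinator.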
Assuming this isn't an existing cluster, the easiest method is probably to
use logical "racks" to explicitly control which hosts have a full replica
of the data. With RF3 and 3 "racks", each "rack" has one complete replica.
If you're not using logical racks, I think the replicas are spread across
the whole ring.
Goal: back up a cluster with the minimum amount of data. Restore to be done
with sstableloader.
Let's start with a basic case:
- six node cluster
- one datacenter
- RF3
- data is perfectly replicated/repaired
- Manual tokens (no vnodes)
- simplest strategy
In this case, it is (theoretically) possible to take a complete backup from
a subset of the nodes, since a few well-chosen nodes together hold a full
replica of the data.
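For the basic case above, this can be checked with a short script. It assumes the SimpleStrategy placement rule (each token range is replicated to its owner plus the next RF-1 nodes clockwise on the ring); node numbering is hypothetical:

```python
# Sketch: with SimpleStrategy, token range i is replicated to nodes
# i, i+1, i+2 (mod N) for RF=3. Find the node pairs that together
# hold every range on a 6-node ring.
from itertools import combinations

N, RF = 6, 3

def replicas(rng):
    """Nodes holding a replica of token range `rng`."""
    return {(rng + k) % N for k in range(RF)}

# Ranges held by each node:
held = {node: {r for r in range(N) if node in replicas(r)} for node in range(N)}

# Pairs of nodes whose SSTables together cover all six ranges:
full_sets = [pair for pair in combinations(range(N), 2)
             if set().union(*(held[n] for n in pair)) == set(range(N))]
print(full_sets)  # [(0, 3), (1, 4), (2, 5)]
```

So in this idealized setup, backing up two "opposite" nodes (e.g. nodes 0 and 3) would capture a complete copy of the data, assuming perfect repair beforehand.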
A user in a dev environment with a 4-node cluster, loading 50k records with
inserts of 70k characters each (JSON stored as text).
This will happen daily, at intervals not yet defined, on a single table.
It's within one data center.
On Wednesday, August 15, 2018, Durity, Sean R wrote:
Might also help to know:
- Size of cluster
- How much data is being loaded (# of inserts / actual data size)
- Single table or multiple tables?
- Is this a one-time or occasional load, or something more frequent?
- Is the data located in the same physical data center as the cluster? (any
  network latency?)
I didn't see any such bottlenecks. They are testing writing a JSON file as
text in Cassandra, which is slow; the rest of the performance looks good.
Regarding write threads, where can I check how many are configured and
whether there is a bottleneck?
On Wednesday, August 15, 2018, Elliott Sims wrote:
Step one is always to measure your bottlenecks. Are you spending a lot of
time compacting? Garbage collecting? Are you saturating CPU? Or just a
few cores? Or I/O? Are repairs using all your I/O? Are you just running
out of write threads?
On Wed, Aug 15, 2018 at 5:48 AM, Abdul Patel wrote:
That's what the retry handler does (see Horia's response). You can also use
speculative retry to send requests to multiple coordinators a little
earlier, to reduce the impact of slow requests (i.e., a GC pause).
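A configuration sketch of speculative execution, assuming the DataStax Python driver (`ExecutionProfile` and `ConstantSpeculativeExecutionPolicy` come from that driver; the delay and attempt counts are illustrative):

```python
# Sketch (assumes the DataStax Python driver): enable speculative execution
# so a slow coordinator (e.g. one in a GC pause) doesn't stall the request.
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import ConstantSpeculativeExecutionPolicy

profile = ExecutionProfile(
    # After 100 ms with no response, send the same request to another
    # coordinator; try at most 2 extra coordinators.
    speculative_execution_policy=ConstantSpeculativeExecutionPolicy(
        delay=0.1, max_attempts=2),
)
cluster = Cluster(execution_profiles={EXEC_PROFILE_DEFAULT: profile})
# session = cluster.connect()  # requires a reachable cluster
```

Note that the driver only speculatively retries statements marked idempotent, since the same write may otherwise be applied more than once.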
Hello,
I believe that this is what you are looking for -
https://docs.datastax.com/en/developer/java-driver/3.5/manual/retries/
In particular,
Hello Cassandra community!
Unfortunately, I cannot find the corresponding info in the load-balancing
manuals, so the question is: is it possible to set up the Java/Python
Cassandra driver to redirect unsuccessful read requests away from a
coordinator node that has become unresponsive during the session?
The application team is trying to load data with leveled compaction and
it's taking 1 hour to load. What are the best options to load data faster?
On Tuesday, August 14, 2018, @Nandan@ wrote:
> Bro, please explain your question as much as possible.
> This is not a single line Q session where we will
Yep. It might require a full node replace depending on what data is lost
from the system tables. In some cases you might be able to recover from
partially lost system info, but it's not a sure thing.
On Wed., 15 Aug. 2018, 17:55 Christian Lorenz, <christian.lor...@webtrekk.com> wrote:
I've upgraded to 3.0.17 and the issue is still there. Is there a Jira
ticket for that bug, or should I create one?
On Wed, Jul 25, 2018 at 2:57 PM Vitali Dyachuk wrote:
> I'm using 3.0.15. I see that there is some fix for sstable metadata in
> 3.0.16
Thank you for the answers. We are using the current version, 3.11.3, so
this one includes CASSANDRA-6696.
So if I get this right, losing the system tables will require a full node
rebuild; otherwise, repair will get the node consistent again.
Regards,
Christian
From: kurt greaves