You should take a look at the sstableloader Cassandra utility:
https://docs.datastax.com/en/cassandra/3.0/cassandra/tools/toolsBulkloader.html
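For reference, a minimal sketch of driving sstableloader from a script. The paths and node address here are placeholders, not from this thread; sstableloader expects a directory laid out as <keyspace>/<table> and `-d` takes any live node to connect to:

```python
import subprocess

def bulk_load_cmd(sstable_dir: str, host: str) -> list[str]:
    """Build the sstableloader invocation for a directory of SSTables.

    sstable_dir must end in <keyspace>/<table> so sstableloader can
    infer the target table; host is any reachable node in the cluster.
    """
    return ["sstableloader", "-d", host, sstable_dir]

# Hypothetical path and host for illustration:
cmd = bulk_load_cmd("/var/lib/cassandra/data/mykeyspace/mytable", "10.0.0.1")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # run this on a machine with Cassandra tools installed
```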
On Fri, Apr 26, 2019 at 1:33 AM Ivan Junckes Filho wrote:
> Hi guys,
>
> I am trying to do a backup and restore script in a simple way. Is there a way
> I can
Hi
Just to add to that, this is the way C* handles deletes. Cassandra creates
delete markers called tombstones on delete requests. They are retained (even
after compaction) for a period of time configured using gc_grace_seconds
(default 10 days) to ensure that if a node was down when delete
Could it be related to hinted handoffs being stored on Node1 and then
replayed to Node2 when it comes back, causing extra load while new
mutations from cassandra-stress are being applied at the same time?
Alok Dwivedi
Senior Consultant
https://www.instaclustr.com/
> On 26 Apr
In the absence of anyone else having any bright ideas, it still sounds to
me like the kind of scenario that can occur in a heavily overloaded
cluster. I would try again with a lower load.
What size machines are you using for stress client and the nodes? Are they
all on separate machines?
Cheers
Hi guys,
I am trying to do a backup and restore script in a simple way. Is there a way I
can do one command to back up and one to restore a backup?
Or the only way is to create snapshots of all the tables and then restore
one by one?
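For the "one command" part: running `nodetool snapshot` with no keyspace argument snapshots all keyspaces in a single command; restore is still per-table (copy the snapshot SSTables back into place, or stream them with sstableloader). A hedged sketch of wrapping this in a script, assuming nodetool is on PATH and the tag name is your choice:

```python
import subprocess

def backup_cmd(tag: str) -> list[str]:
    # nodetool snapshot with no keyspace argument snapshots every keyspace
    return ["nodetool", "snapshot", "-t", tag]

def clear_cmd(tag: str) -> list[str]:
    # remove an old snapshot by tag once it has been copied off-node
    return ["nodetool", "clearsnapshot", "-t", tag]

print(" ".join(backup_cmd("nightly-2019-04-26")))
# subprocess.run(backup_cmd("nightly-2019-04-26"), check=True)  # on a real node
```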
Hello,
My 2 cents? Do not use floats for money, much less for billing. They have
never been a good fit in any database because of how such numbers are
represented by the computer. That is still the case with Cassandra, and it
even seems that internally we have a debate about how we are doing things there:
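To illustrate the representation problem in plain Python (the same binary floating-point issue applies to Cassandra's float and double types; CQL's decimal type, like Python's, keeps exact base-10 values):

```python
from decimal import Decimal

# Binary floats cannot represent 0.10 exactly, so cents drift:
total_float = 0.10 + 0.20
print(total_float == 0.30)           # False: the sum is 0.30000000000000004

# Decimals keep exact base-10 values, which is what billing needs:
total_dec = Decimal("0.10") + Decimal("0.20")
print(total_dec == Decimal("0.30"))  # True
```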
Hello,
Sorry again.
We found yet another weird thing in this.
If we stop nodes with systemctl or just kill (SIGTERM), it causes the problem,
but if we kill -9 (SIGKILL), it doesn't cause the problem.
Thanks,
Hiro
On Wed, Apr 24, 2019 at 11:31 PM Hiroyuki Yamada wrote:
> Sorry, I didn't write the version