The table has 24 SSTables with size-tiered compaction. When I run nodetool
tablehistograms, I see the 99th percentile of queries showing 24 as the
number of SSTables. But the read latency is very low. My understanding of
tablehistograms' SSTables column is that it shows how many SSTables
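As a sketch of why the two can diverge: the percentile is read off a histogram of SSTables-consulted-per-read, so a small tail of reads that touch all 24 SSTables can pin the p99 at 24 while the median (and the latency) stays low. A toy illustration, not Cassandra's histogram code; all names are hypothetical:

```python
# Toy sketch: reading a percentile off a histogram of
# "SSTables consulted per read" (bucket value -> count of reads).
def percentile_from_histogram(buckets, pct):
    """buckets: dict mapping value -> count; pct in (0, 100]."""
    total = sum(buckets.values())
    threshold = total * pct / 100.0
    running = 0
    for value in sorted(buckets):
        running += buckets[value]
        if running >= threshold:
            return value
    return max(buckets)

# Most reads touch one SSTable, but a 2% tail touches all 24,
# so the p99 lands on 24 while the median stays at 1.
hist = {1: 9000, 2: 600, 12: 200, 24: 200}
print(percentile_from_histogram(hist, 50))  # -> 1
print(percentile_from_histogram(hist, 99))  # -> 24
```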
Thanks Erick.
On Sun, Oct 25, 2020 at 6:45 AM Erick Ramirez wrote:
> Not quite. Cassandra does a validation compaction for the merkle tree
> calculation. And it streams SSTables instead of individual mutations from
> one node to another to synchronise data between replicas. Cheers!
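As a rough illustration of the mechanism described above (not Cassandra's implementation): each replica builds a merkle tree over its token ranges, and comparing two trees pinpoints the ranges whose data diverged and must be streamed. All names below are hypothetical:

```python
import hashlib

# Illustrative sketch only: hash each token range's data into leaves,
# hash pairs of children into parents, then compare two replicas'
# trees to find the ranges that need repair streaming.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(range_data):
    """range_data: list of bytes, one blob per token range (power of 2).
    Returns the levels of the tree, leaves first, root last."""
    levels = [[h(x) for x in range_data]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def diff_ranges(tree_a, tree_b):
    """Leaf indices (token ranges) whose hashes differ between replicas."""
    return [i for i, (x, y) in enumerate(zip(tree_a[0], tree_b[0])) if x != y]

replica1 = [b"r1", b"r2", b"r3", b"r4"]
replica2 = [b"r1", b"XX", b"r3", b"r4"]   # range 1 diverged
t1, t2 = build_tree(replica1), build_tree(replica2)
if t1[-1] != t2[-1]:                # root mismatch: something to repair
    print(diff_ranges(t1, t2))      # -> [1]
```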
Hello, when repairs are run in Cassandra, do the reads and writes done for
repair count in the read/write metrics? Repair has to read the table to
build the merkle tree; similarly, when it does the repair it has to write to
the table, so logically I feel it should.
If so, is there any way to identify
h of json string of each row.
>
> Perform average.
>
> Cheers.
>
> *From: *Ayub M
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Wednesday, December 11, 2019 at 11:17 PM
> *To: *"user@cassandra.apache.org"
> *S
How to find the average row size of a table in Cassandra? I am not looking
for partition size (which can be found from nodetool tablehistograms), since
a partition can have many rows. I am looking for row size.
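The approach suggested in the reply above, dump the SSTable to JSON with sstabledump and average the serialized length of each row, can be sketched like this. The JSON shape below is a simplified assumption of sstabledump output, and JSON length is only a rough proxy for on-disk row size (compression and encoding differ):

```python
import json

# Sketch of the "sstabledump + average the per-row JSON length" idea.
# `sample` mimics (in simplified form) what sstabledump emits: a list
# of partitions, each with a list of rows.
sample = """
[
  {"partition": {"key": ["a"]},
   "rows": [
     {"type": "row", "cells": [{"name": "c1", "value": 1}]},
     {"type": "row", "cells": [{"name": "c1", "value": 22222}]}
   ]},
  {"partition": {"key": ["b"]},
   "rows": [
     {"type": "row", "cells": [{"name": "c1", "value": 3}]}
   ]}
]
"""

def average_row_size(dump_json: str) -> float:
    """Average serialized JSON length of each row across all partitions."""
    partitions = json.loads(dump_json)
    sizes = [len(json.dumps(row))
             for p in partitions
             for row in p.get("rows", [])
             if row.get("type") == "row"]
    return sum(sizes) / len(sizes) if sizes else 0.0

print(average_row_size(sample))
```

In practice you would pipe real `sstabledump` output into this instead of the inline sample.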
Hello, we are using a DSE Search workload with Search and Cassandra running
on the same nodes/JVM.
1. When repairs are run, do they initiate rebuilds of the Solr indexes? Do
they rebuild only when any data is actually repaired?
2. How about compactions, do they trigger any search index rebuilds? I
guess not, since
Dimo, how do you generate the SSTables? Do you mean load data locally on a
Cassandra node and use sstableloader?
On Fri, Aug 2, 2019, 5:48 PM Dimo Velev wrote:
> Hi,
>
> Batches will actually slow down the process because they mean a different
> thing in C* - as you read they are just grouping
latency: 0 ms
On Thu, Jul 25, 2019 at 1:49 PM Ayub M wrote:
> Thanks Jeff, does internal mean local node operations - in this case the
> mutation response from the local node - and cross node the time it took to
> get a response back from other nodes, depending on the consistency level
> sounds like either really bad disks or really bad JVM GC
> pauses.
>
>
> On Thu, Jul 25, 2019 at 8:45 AM Ayub M wrote:
>
>> Hello, how do I read dropped mutation error messages - what's internal
>> and cross node? For mutations it fails on cross-node and read_repair/re
Hello, how do I read dropped mutation error messages - what's internal and
cross node? For mutations it fails on cross-node, and for read_repair/read
it fails on internal. What does it mean?
INFO [ScheduledTasks:1] 2019-07-21 11:44:46,150
MessagingService.java:1281 - MUTATION messages were dropped in
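For reference, the 3.11-era log line usually reports both counts on one line; "internal" are messages dropped by the node's own stages, "cross node" are messages dropped after arriving from a peer. A hedged sketch of pulling the counts out (the exact wording is assumed from 3.11-era logs and may differ between versions):

```python
import re

# Hedged sketch: extract internal vs cross-node dropped counts from a
# MessagingService log line. The sample wording is an assumption based
# on Cassandra 3.11-era logs.
LINE = ("MUTATION messages were dropped in last 5000 ms: "
        "2 internal and 7 cross node. "
        "Mean internal dropped latency: 1096 ms and "
        "Mean cross-node dropped latency: 2111 ms")

PATTERN = re.compile(
    r"(?P<verb>\w+) messages were dropped in last \d+ ms: "
    r"(?P<internal>\d+) internal and (?P<cross>\d+) cross node"
)

m = PATTERN.search(LINE)
if m:
    # internal  = dropped locally on this node (e.g. overloaded stage)
    # cross node = dropped after crossing the network from another node
    print(m.group("verb"), m.group("internal"), m.group("cross"))
```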
Hello, I have a cluster with 3 nodes, say cluster1, on AWS EC2 instances.
The cluster is up and running, and I took a snapshot of the keyspace
volumes. Now I want to restore a few tables/keyspaces from the snapshot
volumes, so I created another cluster, say cluster2, and attached the
snapshot volumes on to
large STCS compaction can cause pretty
> meaningful allocations for these. Also, if you have an unusually low
> compression chunk size or a very low bloom filter FP ratio, those will be
> larger.
> > --
> > Jeff Jirsa
> > > On Jan 26
... | 2019-02-21 21:41:04.63 | 10.216.87.180 | 611 | 127.0.0.1
Request complete | 2019-02-21
When it reports 1 tombstone cell, does it mean 1 record? Otherwise it read
more than one tombstone cell.
On Wed, Feb 20, 2019 at 1:30 AM Kenneth Brotman wrote:
> There is another good article
In the logs I see tombstone warning threshold.
Read 411 live rows and 1644 tombstone cells for query SELECT * FROM ks.tbl
WHERE key = XYZ LIMIT 5000 (see tombstone_warn_threshold)
This is Cassandra 3.11.3, I see there are 2 sstables for this table and the
partition XYZ exists in only one file.
Thanks Alain/Chris.
Firstly I am not seeing any difference when using gc_grace_seconds with
sstablemetadata.
CREATE TABLE ks.nmtest (
    reservation_id text,
    order_id text,
    c1 int,
    order_details map,
    PRIMARY KEY (reservation_id, order_id)
) WITH CLUSTERING ORDER BY (order_id
Cassandra node went down due to OOM, and checking the /var/log/message I
see below.
```
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: java invoked oom-killer:
gfp_mask=0x280da, order=0, oom_score_adj=0
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: java cpuset=/ mems_allowed=0
Jan 23 20:07:17
```
I have created a table with a collection, inserted a record, and took an
sstabledump of it, and I see there is a range tombstone for it in the
sstable. Does this tombstone ever get removed? Also, when I run
sstablemetadata on the only sstable, it shows "Estimated droppable
tombstones" as 0.5. Similarly
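A simplified sketch of what that "Estimated droppable tombstones" number measures: the fraction of cells that are tombstones whose deletion time is older than gc_grace_seconds. This is the idea, not sstablemetadata's exact formula; all names below are hypothetical:

```python
import time

# Simplified sketch of the "estimated droppable tombstones" ratio:
# tombstone cells past gc_grace_seconds, divided by all cells.
def droppable_ratio(cells, gc_grace_seconds, now=None):
    """cells: list of (is_tombstone, deletion_time_epoch_seconds)."""
    now = now if now is not None else time.time()
    total = len(cells)
    droppable = sum(
        1 for is_tombstone, deleted_at in cells
        if is_tombstone and deleted_at + gc_grace_seconds < now
    )
    return droppable / total if total else 0.0

now = 1_000_000
cells = [
    (True,  now - 900_000),   # old tombstone: past gc_grace, droppable
    (False, 0),               # live cell
    (True,  now - 10),        # fresh tombstone: not yet droppable
    (False, 0),               # live cell
]
print(droppable_ratio(cells, gc_grace_seconds=864_000, now=now))  # -> 0.25
```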
Hello, I have a table with 3M records, and say all 3M records have an empty
string in the column on which an MView is built.
create table t
( c1 int, c2 text, primary key(c1));
create materialized view mv as select c2,c1 from t
WHERE c2 IS NOT NULL AND c1 IS NOT NULL
PRIMARY KEY (c2, c1);
There are recs
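Worth noting: in CQL an empty text value ('') is not NULL, so the view's IS NOT NULL filter does not exclude those rows; every row with an empty c2 materializes under the single view partition key c2 = '', which is a classic hot-partition setup. A scaled-down Python sketch of the resulting skew (3,000 rows standing in for the 3M; the model is illustrative, not CQL semantics code):

```python
from collections import Counter

# Sketch of the skew: an empty string is NOT null, so the view's
# "c2 IS NOT NULL" filter keeps rows where c2 == "". The MView then
# places every such row under one partition key (c2).
rows = [(i, "") for i in range(3_000)]   # (c1, c2); stands in for 3M rows

def mview_partition_counts(rows):
    # keep rows where both columns are "not null" (modeled as not None)
    kept = [(c1, c2) for c1, c2 in rows if c1 is not None and c2 is not None]
    return Counter(c2 for _, c2 in kept)  # view's partition key is c2

counts = mview_partition_counts(rows)
print(len(counts), counts[""])  # -> 1 3000  (one partition holds every row)
```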
There are 2 DCs, each with 3 nodes; the RF used for writes is 2, and reads
use EACH_QUORUM. A lightweight transaction is used to ensure consistency of
updates across DCs. Now what is happening is that for certain records,
hundreds (maybe thousands) of LWT updates are hitting the cluster around the
same
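A lightweight transaction is, at heart, a compare-and-set run through Paxos. A minimal Python sketch of just the CAS semantics (not the Paxos protocol itself), to illustrate why many concurrent LWT updates to the same record mostly come back `[applied] = false` and retry, multiplying load on that partition:

```python
# Illustrative compare-and-set: the semantics behind an LWT
# (UPDATE ... IF <expected>). Cassandra runs this through Paxos; the
# point here is only that concurrent updates to one key mostly fail
# and must re-read and retry.
def cas(store, key, expected, new):
    """Apply the update only if the current value matches `expected`."""
    if store.get(key) == expected:
        store[key] = new
        return True          # [applied] = true
    return False             # [applied] = false, caller must retry

store = {"reservation": "free"}
print(cas(store, "reservation", "free", "booked"))   # first writer wins
print(cas(store, "reservation", "free", "booked"))   # second writer loses
```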