Hi, do you use any performance monitoring tool like Prometheus?
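If not, even periodically polling nodetool would show whether the latency is spiky or constant. A rough sketch (the table name is taken from this thread):

    while sleep 60; do
        nodetool tablestats tims.MESSAGE_HISTORY_STATE | grep 'Local write latency'
    done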
On Monday, July 22, 2019, 1:16:58 PM GMT+3, CPC <[email protected]> wrote:
Hi everybody,
The state column contains "R" or "D" values, just a single character. As Rajsekhar
said, the only difference is that the table can contain a high cell count. In
the meantime we made a major compaction (sketched below), and data per node was 5-6 GB.
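We triggered it per table with nodetool, roughly like this (a sketch; keyspace and table name assumed from this thread):

    nodetool compact tims MESSAGE_HISTORY_STATE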
On Mon, Jul 22, 2019, 10:56 AM Rajsekhar Mallick <[email protected]>
wrote:
Hello Team,
The difference in write latencies between the two tables is significant, but the
higher latency of 11.353 ms is still acceptable. Writes overall are not an issue;
however, the elevated write latency for this particular table does point towards
the data being written to it. A few things I noticed: the cell count column in the
nodetool tablehistograms output for message_history_state is scattered. The
partition size histograms for both tables are consistent, but the cell count
histogram for the impacted table isn't uniform. Maybe we can start thinking along these lines.
I would also wait for some expert advice here.
Thanks
On Mon, 22 Jul 2019, 12:31 PM Ben Slater <[email protected]>
wrote:
Is the size of the data in your “state” column variable? The higher write
latencies at the 95th percentile and above could line up with large volumes of data
for particular rows in that column (the one column not present in both tables).
Cheers
Ben
---
Ben Slater
Chief Product Officer
On Mon, 22 Jul 2019 at 16:46, CPC <[email protected]> wrote:
Hi guys,
Any idea? I thought it might be a bug, but I could not find anything related on
Jira.
On Fri, Jul 19, 2019, 12:45 PM CPC <[email protected]> wrote:
Hi Rajsekhar,
Here are the details:
1) [cassadm@bipcas00 ~]$ nodetool tablestats tims.MESSAGE_HISTORY
Total number of tables: 259
----------------
Keyspace : tims
Read Count: 208256144
Read Latency: 7.655146714749506 ms
Write Count: 2218205275
Write Latency: 1.7826005103175133 ms
Pending Flushes: 0
Table: MESSAGE_HISTORY
SSTable count: 41
Space used (live): 976964101899
Space used (total): 976964101899
Space used by snapshots (total): 3070598526780
Off heap memory used (total): 185828820
SSTable Compression Ratio: 0.8219217809913125
Number of partitions (estimate): 8175715
Memtable cell count: 73124
Memtable data size: 26543733
Memtable off heap memory used: 27829672
Memtable switch count: 1607
Local read count: 7871917
Local read latency: 1.187 ms
Local write count: 172220954
Local write latency: 0.021 ms
Pending flushes: 0
Percent repaired: 0.0
Bloom filter false positives: 130
Bloom filter false ratio: 0.00000
Bloom filter space used: 10898488
Bloom filter off heap memory used: 10898160
Index summary off heap memory used: 2480140
Compression metadata off heap memory used: 144620848
Compacted partition minimum bytes: 36
Compacted partition maximum bytes: 557074610
Compacted partition mean bytes: 155311
Average live cells per slice (last five minutes): 25.56639344262295
Maximum live cells per slice (last five minutes): 5722
Average tombstones per slice (last five minutes): 1.8681948424068768
Maximum tombstones per slice (last five minutes): 770
Dropped Mutations: 97812
----------------
[cassadm@bipcas00 ~]$ nodetool tablestats tims.MESSAGE_HISTORY_STATE
Total number of tables: 259
----------------
Keyspace : tims
Read Count: 208257486
Read Latency: 7.655137315414438 ms
Write Count: 2218218966
Write Latency: 1.7825896304427324 ms
Pending Flushes: 0
Table: MESSAGE_HISTORY_STATE
SSTable count: 5
Space used (live): 6403033568
Space used (total): 6403033568
Space used by snapshots (total): 19086872706
Off heap memory used (total): 6727565
SSTable Compression Ratio: 0.271857664111622
Number of partitions (estimate): 1396462
Memtable cell count: 77450
Memtable data size: 620776
Memtable off heap memory used: 1338914
Memtable switch count: 1616
Local read count: 988278
Local read latency: 0.518 ms
Local write count: 109292691
Local write latency: 11.353 ms
Pending flushes: 0
Percent repaired: 0.0
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 1876208
Bloom filter off heap memory used: 1876168
Index summary off heap memory used: 410747
Compression metadata off heap memory used: 3101736
Compacted partition minimum bytes: 36
Compacted partition maximum bytes: 129557750
Compacted partition mean bytes: 17937
Average live cells per slice (last five minutes): 4.692893401015229
Maximum live cells per slice (last five minutes): 258
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 1344158
2) [cassadm@bipcas00 conf]$ nodetool tablehistograms tims MESSAGE_HISTORY
tims/MESSAGE_HISTORY histograms
Percentile      SSTables     Write Latency     Read Latency    Partition Size        Cell Count
                                  (micros)         (micros)           (bytes)
50%                 3.00             20.50           454.83             14237                17
75%                17.00             24.60          2346.80             88148               103
95%                17.00             35.43         14530.76            454826               924
98%                17.00             42.51         20924.30           1131752              2299
99%                17.00             42.51         30130.99           1955666              4768
Min                 0.00              3.97            73.46                36                 0
Max                20.00            263.21         74975.55         386857368            943127
[cassadm@bipcas00 conf]$ nodetool tablehistograms tims MESSAGE_HISTORY_STATE
tims/MESSAGE_HISTORY_STATE histograms
Percentile      SSTables     Write Latency     Read Latency    Partition Size        Cell Count
                                  (micros)         (micros)           (bytes)
50%                 5.00             20.50           315.85               924                 1
75%                 6.00             35.43           379.02              5722                 7
95%                10.00           4055.27           785.94             61214               310
98%                10.00          74975.55          3379.39            182785               924
99%                10.00         107964.79         10090.81            315852              1916
Min                 0.00              3.31            42.51                36                 0
Max                10.00         322381.14         25109.16         129557750           1629722
3) RF=3
4) CL=QUORUM
5) Single-insert prepared statements; no LOGGED/UNLOGGED batches or LWT (rough shape sketched below)
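For reference, the insert is roughly the following prepared statement (a sketch only, reconstructed from the schema further down this thread; the column order is assumed):

    INSERT INTO tims."MESSAGE_HISTORY_STATE"
        (username, date_partition, message_id, jid, state, sent_time)
    VALUES (?, ?, ?, ?, ?, ?);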
On Thu, 18 Jul 2019 at 20:51, Rajsekhar Mallick <[email protected]> wrote:
Hello,
Kindly post the below details:
1. nodetool cfstats for both tables.
2. nodetool cfhistograms for both tables (exact commands sketched after this list).
3. Replication factor of the tables.
4. Consistency level with which write requests are sent.
5. The type of write queries hitting the table, if handy (lightweight
transactions, batch writes, or prepared statements).
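For items 1 and 2, roughly these commands (a sketch; on Cassandra 3.11, cfstats/cfhistograms are the older aliases of tablestats/tablehistograms, and the table names are taken from this thread):

    nodetool tablestats tims.MESSAGE_HISTORY tims.MESSAGE_HISTORY_STATE
    nodetool tablehistograms tims MESSAGE_HISTORY
    nodetool tablehistograms tims MESSAGE_HISTORY_STATE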
Thanks
On 2019/07/18 15:48:09, CPC <[email protected]> wrote:
> Hi all,
>
> Our Cassandra cluster consists of two DCs, and each DC has 10 nodes. We
> are using DSE 5.1.12 (Cassandra 3.11). We have a high local write latency on
> a single table. All other tables in our keyspace have normal latencies,
> around 0.02 ms, even tables with more write TPS and more data. Below
> you can find the two table descriptions and their latencies.
> message_history_state has a high local write latency. This is not node
> specific: every node has this high local write latency for
> message_history_state. Have you ever seen such behavior, or any clue why
> this could happen?
>
> CREATE TABLE tims."MESSAGE_HISTORY" (
>     username text,
>     date_partition text,
>     jid text,
>     sent_time timestamp,
>     message_id text,
>     stanza text,
>     PRIMARY KEY ((username, date_partition), jid, sent_time, message_id)
> ) WITH CLUSTERING ORDER BY (jid ASC, sent_time DESC, message_id ASC)
>     AND bloom_filter_fp_chance = 0.01
>     AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>     AND comment = ''
>     AND compaction = {'bucket_high': '1.5', 'bucket_low': '0.5', 'class':
>         'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
>         'enabled': 'true', 'max_threshold': '32', 'min_sstable_size': '50',
>         'min_threshold': '4', 'tombstone_compaction_interval': '86400',
>         'tombstone_threshold': '0.2', 'unchecked_tombstone_compaction': 'false'}
>     AND compression = {'chunk_length_in_kb': '64', 'class':
>         'org.apache.cassandra.io.compress.LZ4Compressor'}
>     AND crc_check_chance = 1.0
>     AND dclocal_read_repair_chance = 0.0
>     AND default_time_to_live = 0
>     AND gc_grace_seconds = 86400
>     AND max_index_interval = 2048
>     AND memtable_flush_period_in_ms = 0
>     AND min_index_interval = 128
>     AND read_repair_chance = 0.0
>     AND speculative_retry = '99PERCENTILE';
> CREATE TABLE tims."MESSAGE_HISTORY_STATE" (
>     username text,
>     date_partition text,
>     message_id text,
>     jid text,
>     state text,
>     sent_time timestamp,
>     PRIMARY KEY ((username, date_partition), message_id, jid, state)
> ) WITH CLUSTERING ORDER BY (message_id ASC, jid ASC, state ASC)
>     AND bloom_filter_fp_chance = 0.01
>     AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>     AND comment = ''
>     AND compaction = {'class':
>         'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
>         'max_threshold': '32', 'min_threshold': '4'}
>     AND compression = {'chunk_length_in_kb': '64', 'class':
>         'org.apache.cassandra.io.compress.LZ4Compressor'}
>     AND crc_check_chance = 1.0
>     AND dclocal_read_repair_chance = 0.1
>     AND default_time_to_live = 0
>     AND gc_grace_seconds = 864000
>     AND max_index_interval = 2048
>     AND memtable_flush_period_in_ms = 0
>     AND min_index_interval = 128
>     AND read_repair_chance = 0.0
>     AND speculative_retry = '99PERCENTILE';
>
> message_history        Local write latency: 0.021 ms
> message_history_state  Local write latency: 11.353 ms
>
> Thanks in advance.
>