Re: can i...

2019-03-07 Thread Nick Hatfield
Amazing thank you!

I have an overlapping-timestamp data issue across multiple SSTables. This is a 
production cluster of 27 nodes using a KairosDB front-end on Cassandra 3.11. Our 
kairosdb data_points table looks like this:

CREATE TABLE kairosdb.data_points (
key blob,
column1 blob,
value blob,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'com.jeffjirsa.cassandra.db.compaction.TimeWindowCompactionStrategy', 
'compaction_window_size': '1', 'compaction_window_unit': 'DAYS', 
'max_threshold': '32', 'min_threshold': '4', 'timestamp_resolution': 
'MILLISECONDS', 'tombstone_compaction_interval': '432000', 
'tombstone_threshold': '0.2', 'unchecked_tombstone_compaction': 'true'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 7884009
AND gc_grace_seconds = 432000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99PERCENTILE';


I run repairs once every 3 days on this 27-node cluster. I ran a manual 
compaction on individual SSTables that had very old timestamps. It seems that 
all that did was merge the old stale data into fewer SSTables, rather than 
deleting the tombstoned data completely. Here's a quick look at a few entries 
from the sstable metadata:


Max: 01/01/2019 Min: 12/31/2018 Estimated droppable tombstones: 0.6407875588103883   6.8G Mar 4 22:40 mc-230833-big-Data.db
Max: 01/02/2019 Min: 01/01/2019 Estimated droppable tombstones: 0.6717275763312848   8.7G Mar 5 12:38 mc-231449-big-Data.db
Max: 01/03/2019 Min: 01/02/2019 Estimated droppable tombstones: 0.7037095860184683   14G  Mar 5 14:01 mc-231502-big-Data.db
Max: 01/04/2019 Min: 01/03/2019 Estimated droppable tombstones: 0.7081188342023961   17G  Mar 5 02:03 mc-230946-big-Data.db
Max: 01/05/2019 Min: 01/04/2019 Estimated droppable tombstones: 0.7148984115492688   21G  Mar 5 05:06 mc-231068-big-Data.db
Max: 01/06/2019 Min: 01/05/2019 Estimated droppable tombstones: 0.7190351559091802   21G  Mar 5 10:40 mc-231315-big-Data.db
Max: 01/07/2019 Min: 01/06/2019 Estimated droppable tombstones: 0.7174924163467192   22G  Mar 4 20:05 mc-230680-big-Data.db
Max: 01/08/2019 Min: 01/07/2019 Estimated droppable tombstones: 0.7209061510375004   22G  Mar 5 09:02 mc-231243-big-Data.db
Max: 01/09/2019 Min: 01/08/2019 Estimated droppable tombstones: 0.7152589769956947   19G  Mar 4 20:02 mc-230685-big-Data.db
Max: 01/10/2019 Min: 01/09/2019 Estimated droppable tombstones: 0.6791591497664088   8.0G Mar 5 04:43 mc-231091-big-Data.db
Max: 01/11/2019 Min: 01/10/2019 Estimated droppable tombstones: 0.6903846423101958   12G  Mar 5 15:58 mc-231600-big-Data.db
Max: 01/12/2019 Min: 01/11/2019 Estimated droppable tombstones: 0.7960901678466651   14G  Mar 5 05:48 mc-231118-big-Data.db
Max: 01/13/2019 Min: 01/12/2019 Estimated droppable tombstones: 0.7925577980175544   13G  Mar 5 18:45 mc-231725-big-Data.db
Max: 01/14/2019 Min: 01/13/2019 Estimated droppable tombstones: 0.7977456322563183   14G  Mar 5 15:38 mc-231577-big-Data.db
Max: 01/15/2019 Min: 01/14/2019 Estimated droppable tombstones: 0.7914742161174189   14G  Mar 5 17:39 mc-231674-big-Data.db
Max: 01/16/2019 Min: 01/15/2019 Estimated droppable tombstones: 0.7844429396813951   13G  Mar 5 11:07 mc-231363-big-Data.db
Max: 01/17/2019 Min: 01/16/2019 Estimated droppable tombstones: 0.5902276606951279   6.6G Mar 5 02:13 mc-230984-big-Data.db
Max: 01/18/2019 Min: 01/17/2019 Estimated droppable tombstones: 0.6547576878709388   7.4G Mar 5 00:57 mc-230924-big-Data.db
Max: 01/19/2019 Min: 01/18/2019 Estimated droppable tombstones: 0.6892596936899507   8.0G Mar 4 19:59 mc-230714-big-Data.db
Max: 01/20/2019 Min: 01/19/2019 Estimated droppable tombstones: 0.7076946624407203   8.2G Mar 5 19:14 mc-231761-big-Data.db
Max: 01/21/2019 Min: 01/20/2019 Estimated droppable tombstones: 0.7112744492848729   8.8G Mar 5 11:53 mc-231418-big-Data.db
Max: 01/22/2019 Min: 01/21/2019 Estimated droppable tombstones: 0.7098184906956251   9.0G Mar 5 12:59 mc-231466-big-Data.db
Max: 01/23/2019 Min: 01/22/2019 Estimated droppable tombstones: 0.7280378416128286   9.4G Mar 4 20:04 mc-230712-big-Data.db
Max: 01/24/2019 Min: 01/23/2019 Estimated droppable tombstones: 0.7375989189486377   9.9G Mar 5 08:03 mc-231233-big-Data.db
Max: 01/25/2019 Min: 01/24/2019 Estimated droppable tombstones: 0.7550665105646288   11G  Mar 5 05:54 mc-231130-big-Data.db
Max: 01/26/2019 Min: 01/25/2019 Estimated droppable tombstones: 0.755651643899716 
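
[Not part of the original message — a sketch for context.] Each SSTable above covers roughly one TWCS day window, but each file's Max timestamp coincides with the next file's Min, so adjacent windows overlap; TWCS will not drop a fully expired SSTable whose timestamp range overlaps a newer one. A minimal, hypothetical Python sketch of that overlap check, using dates hand-extracted from the listing above (in practice one would parse `sstablemetadata` output):

```python
from datetime import date

# (sstable_name, min_date, max_date) taken from the listing above
# (illustrative subset, not a parser for sstablemetadata output)
tables = [
    ("mc-230833-big-Data.db", date(2018, 12, 31), date(2019, 1, 1)),
    ("mc-231449-big-Data.db", date(2019, 1, 1), date(2019, 1, 2)),
    ("mc-231502-big-Data.db", date(2019, 1, 2), date(2019, 1, 3)),
]

def overlapping_pairs(tables):
    """Return pairs of SSTables whose [min, max] timestamp ranges intersect.
    Overlapping ranges are what prevent TWCS from unlinking a fully
    expired SSTable, since rows may shadow data in the other file."""
    pairs = []
    for i in range(len(tables)):
        for j in range(i + 1, len(tables)):
            a, b = tables[i], tables[j]
            if a[1] <= b[2] and b[1] <= a[2]:  # standard interval intersection
                pairs.append((a[0], b[0]))
    return pairs

print(overlapping_pairs(tables))
```

Here every adjacent pair overlaps on its shared boundary day, which matches the pattern in the listing (each Max equals the next file's Min).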

Re: can i...

2019-03-07 Thread Surbhi Gupta
Send the details

On Thu, Mar 7, 2019 at 8:45 AM Nick Hatfield 
wrote:

> Use this email to get some insight on how to fix database issues in our
> cluster?
>


can i...

2019-03-07 Thread Nick Hatfield
Use this email to get some insight on how to fix database issues in our cluster?


Commit Log sync problems

2019-03-07 Thread Meg Mara
Hello all,

I recently upgraded from C* 3.0.10 to 3.0.16 and have been receiving these 
warnings about Commit-Log durations being longer than the configured interval. 
I don't understand what the problem is, why is the system complaining about 
such small sync durations? Please advise.

Here are some warnings:


3:WARN  [PERIODIC-COMMIT-LOG-SYNCER] 2019-03-07 13:32:23,785 
NoSpamLogger.java:94 - Out of 2750 commit log syncs over the past 274s with 
average duration of 0.66ms, 4 have exceeded the configured commit interval by 
an average of 22.50ms



2:WARN  [PERIODIC-COMMIT-LOG-SYNCER] 2019-03-07 13:27:01,214 
NoSpamLogger.java:94 - Out of 1 commit log syncs over the past 0s with average 
duration of 113.00ms, 1 have exceeded the configured commit interval by an 
average of 13.00ms


Node's cassandra.yaml setting related to Commit-Log:

commit_failure_policy: stop
commitlog_directory: /cassandra/log
commitlog_segment_size_in_mb: 32
commitlog_sync: periodic
commitlog_sync_period_in_ms: 1
commitlog_total_space_in_mb: 4096
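
[Not part of the original message — a sketch for context.] The warning's arithmetic can be modeled roughly as follows: a sync "exceeds the configured commit interval" when its duration is longer than commitlog_sync_period_in_ms (set to 1 ms in the yaml above; the default is 10000). This is a simplified model, not Cassandra's actual syncer code, with illustrative durations chosen to reproduce the first warning's numbers:

```python
# Rough model of the periodic commit-log sync warning (not Cassandra's
# actual code). With commitlog_sync_period_in_ms: 1, any sync taking
# longer than 1 ms counts against the configured interval.
sync_period_ms = 1  # value from the cassandra.yaml above; default is 10000

# Illustrative sync durations: most are sub-millisecond, a few are slow.
sync_durations_ms = [0.4, 0.7, 23.5, 0.5, 24.0, 23.0, 23.5]

exceeded = [d for d in sync_durations_ms if d > sync_period_ms]
avg_excess_ms = sum(d - sync_period_ms for d in exceeded) / len(exceeded)

# Mirrors the log line: "4 have exceeded the configured commit
# interval by an average of 22.50ms"
print(len(exceeded), round(avg_excess_ms, 2))  # → 4 22.5
```

With a 1 ms period, even a healthy disk will trip this warning constantly; with the 10000 ms default, a 23 ms sync would never be flagged.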

Thank you,
Meg Mara




Re: Maximum memory usage reached

2019-03-07 Thread Kyrylo Lebediev
Got it.
Thank you for helping me, Jon, Jeff!

> Is there a reason why you’re picking Cassandra for this dataset?
The decision wasn’t made by me; I guess C* was chosen because some huge growth 
was planned.

Regards,
Kyrill

From: Jeff Jirsa 
Reply-To: "user@cassandra.apache.org" 
Date: Wednesday, March 6, 2019 at 22:19
To: "user@cassandra.apache.org" 
Subject: Re: Maximum memory usage reached

Also, that particular logger is for the internal chunk / page cache. If it 
can’t allocate from within that pool, it’ll just use a normal bytebuffer.

It’s not really a problem, but if you see performance suffer, upgrade to the 
latest 3.11.4; there was a bit of a perf improvement in the case where that 
cache fills up.
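
[Not part of the original reply — a sketch for context.] The 512 MiB cap in the log message comes from file_cache_size_in_mb, which, when unset, defaults to the smaller of 1/4 of the heap and 512 MB (as documented in cassandra.yaml). A minimal sketch of that sizing rule, with a hypothetical helper name:

```python
# Sketch of the default chunk / page cache sizing: when
# file_cache_size_in_mb is unset, it defaults to min(512 MB, heap / 4).
def default_file_cache_size_mb(max_heap_mb):
    return min(512, max_heap_mb // 4)

print(default_file_cache_size_mb(2048))  # 2 GB heap → 512 (matches the log's 512.000MiB)
print(default_file_cache_size_mb(1024))  # 1 GB heap → 256
```

So with the 2 GB heap described below, the cache lands exactly at the 512 MiB cap, and "Maximum memory usage reached" just means that pool is full, not that the node is out of memory.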

--
Jeff Jirsa


On Mar 6, 2019, at 11:40 AM, Jonathan Haddad wrote:
That’s not an error. To the left of the log message is the severity, level INFO.

Generally, I don’t recommend running Cassandra on only 2 GB of RAM, or for small 
datasets that can easily fit in memory. Is there a reason why you’re picking 
Cassandra for this dataset?

On Thu, Mar 7, 2019 at 8:04 AM Kyrylo Lebediev wrote:
Hi All,

We have a tiny 3-node cluster.
C* version 3.9 (I know 3.11 is better/more stable, but we can’t upgrade immediately)
HEAP_SIZE is 2G
JVM options are default
All settings in cassandra.yaml are default (file_cache_size_in_mb not set)

Data per node – just ~1 GByte

We’re getting the following messages:

DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,545 
CompactionTask.java:150 - Compacting (ed4a4d90-4028-11e9-adc0-230e0d6622df) 
[/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23248-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23247-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23246-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23245-big-Data.db:level=0,
 ]
DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,582 
CompactionTask.java:230 - Compacted (ed4a4d90-4028-11e9-adc0-230e0d6622df) 4 
sstables to 
[/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23249-big,]
 to level=0.  6.264KiB to 1.485KiB (~23% of original) in 36ms.  Read Throughput 
= 170.754KiB/s, Write Throughput = 40.492KiB/s, Row Throughput = ~106/s.  194 
total partitions merged to 44.  Partition merge counts were {1:18, 4:44, }
INFO  [IndexSummaryManager:1] 2019-03-06 11:00:22,007 
IndexSummaryRedistribution.java:75 - Redistributing index summaries
INFO  [pool-1-thread-1] 2019-03-06 11:11:24,903 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:26:24,926 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:41:25,010 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:56:25,018 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB

What’s interesting is that the “Maximum memory usage reached” message appears 
every 15 minutes.
A reboot temporarily solves the issue, but it appears again after some time.

I checked – there are no huge partitions (max partition size is ~2 MBytes).

How can such a small amount of data cause this issue?
How can we debug this further?


Regards,
Kyrill


--
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade