[jira] [Updated] (CASSANDRA-14415) Performance regression in queries for distinct keys

2018-06-05 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-14415:
-
Issue Type: Bug  (was: Improvement)

> Performance regression in queries for distinct keys
> ---
>
> Key: CASSANDRA-14415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14415
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Samuel Klock
>Assignee: Samuel Klock
>Priority: Major
>  Labels: performance
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Running Cassandra 3.0.16, we observed a major performance regression 
> affecting {{SELECT DISTINCT keys}}-style queries against certain tables.  
> Based on some investigation (guided by some helpful feedback from Benjamin on 
> the dev list), we tracked the regression down to two problems.
>  * One is that Cassandra was reading more data from disk than was necessary 
> to satisfy the query.  This was fixed under CASSANDRA-10657 in a later 3.x 
> release.
>  * If the fix for CASSANDRA-10657 is incorporated, the other is this code 
> snippet in {{RebufferingInputStream}}:
> {code:java}
>     @Override
>     public int skipBytes(int n) throws IOException
>     {
>         if (n < 0)
>             return 0;
>         int requested = n;
>         int position = buffer.position(), limit = buffer.limit(), remaining;
>         while ((remaining = limit - position) < n)
>         {
>             n -= remaining;
>             buffer.position(limit);
>             reBuffer();
>             position = buffer.position();
>             limit = buffer.limit();
>             if (position == limit)
>                 return requested - n;
>         }
>         buffer.position(position + n);
>         return requested;
>     }
> {code}
> The gist of it is that to skip bytes, the stream needs to read those bytes 
> into memory then throw them away.  In our tests, we were spending a lot of 
> time in this method, so it looked like the chief drag on performance.
> We noticed that the subclass of {{RebufferingInputStream}} in use for our 
> queries, {{RandomAccessReader}} (over compressed sstables), implements a 
> {{seek()}} method.  Overriding {{skipBytes()}} in it to use {{seek()}} 
> instead was sufficient to fix the performance regression.
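> For illustration only, such an override can be as small as the following sketch 
> (not the actual change; it assumes {{RandomAccessReader}}-style {{seek()}}, 
> {{getFilePointer()}} and {{length()}} accessors):
> {code:java}
>     @Override
>     public int skipBytes(int n) throws IOException
>     {
>         if (n <= 0)
>             return 0;
>         // Reposition the reader instead of reading and discarding the skipped bytes.
>         long current = getFilePointer();
>         long target = Math.min(current + n, length());
>         seek(target);
>         return (int) (target - current);
>     }
> {code}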
> The performance difference is significant for tables with large values.  It's 
> straightforward to evaluate with very simple key-value tables, e.g.:
> {{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}}
> We did some basic experimentation with the following variations (all in a 
> single-node 3.11.2 cluster with off-the-shelf settings running on a dev 
> workstation):
>  * small values (1 KB, 100,000 entries), somewhat larger values (25 KB, 
> 10,000 entries), and much larger values (1 MB, 10,000 entries);
>  * compressible data (a single byte repeated) and uncompressible data (output 
> from {{openssl rand $bytes}}); and
>  * with and without sstable compression.  (With compression, we use 
> Cassandra's defaults.)
> The difference is most conspicuous for tables with large, uncompressible data 
> and sstable decompression (which happens to describe the use case that 
> triggered our investigation).  It is smaller but still readily apparent for 
> tables with effective compression.  For uncompressible data without 
> compression enabled, there is no appreciable difference.
> Here's what the performance looks like without our patch for the 1-MB entries 
> (times in seconds, five consecutive runs for each data set, all exhausting 
> the results from a {{SELECT DISTINCT key FROM ...}} query with a page size of 
> 24):
> {noformat}
> working on compressible
> 5.21180510521
> 5.10270500183
> 5.22311806679
> 4.6732840538
> 4.84219098091
> working on uncompressible_uncompressed
> 55.0423607826
> 0.769015073776
> 0.850513935089
> 0.713396072388
> 0.62596988678
> working on uncompressible
> 413.292617083
> 231.345913887
> 449.524993896
> 425.135111094
> 243.469946861
> {noformat}
> and with the fix:
> {noformat}
> working on compressible
> 2.86733293533
> 1.24895811081
> 1.108907938
> 1.12742400169
> 1.04647302628
> working on uncompressible_uncompressed
> 56.4146180153
> 0.895509958267
> 0.922824144363
> 0.772884130478
> 0.731923818588
> working on uncompressible
> 64.4587619305
> 1.81325793266
> 1.52577018738
> 1.41769099236
> 1.60442209244
> {noformat}
> The long initial runs for the uncompressible data presumably come from 
> repeatedly hitting the disk.  In contrast to the runs without the fix, the 
> initial runs seem to be effective at warming the page cache (as lots of data 
> is skipped, so the data that's read can fit in memory), so subsequent runs 
> are faster.
> For smaller data sets, {{RandomAccessReader.seek()}} and 
> 

[jira] [Commented] (CASSANDRA-13935) Indexes creation should have IF EXISTS on its String representation

2018-06-05 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502772#comment-16502772
 ] 

Kurt Greaves commented on CASSANDRA-13935:
--

Looks like this breaks 
{{indexWithfailedInitializationIsNotQueryableAfterPartialRebuild}} on trunk 
which didn't exist on 3.11. Need to check if it's important. Error was:
{code}
[junit] junit.framework.AssertionFailedError
[junit] at 
org.apache.cassandra.index.SecondaryIndexManagerTest.indexWithfailedInitializationIsNotQueryableAfterPartialRebuild(SecondaryIndexManagerTest.java:475)
{code}

> Indexes creation should have IF EXISTS on its String representation
> ---
>
> Key: CASSANDRA-13935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13935
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Secondary Indexes
> Environment: Ubuntu 16.04.2 LTS
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Javier Canillas
>Assignee: Javier Canillas
>Priority: Trivial
> Fix For: 3.11.x
>
> Attachments: 13935-3.0.txt, 13935-3.11.txt, 13935-trunk.txt
>
>
> I came across something that bothers me a lot. I'm using snapshots to backup 
> data from my Cassandra cluster in case something really bad happens (like 
> dropping a table or a keyspace).
> Exercising the recovery actions from those backups, I discover that the 
> schema put on the file "schema.cql" as a result of the snapshot has the 
> "CREATE IF NOT EXISTS" for the table, but not for the indexes.
> When restoring from snapshots, and relying on the execution of these schemas 
> to build up the table structure, everything seems fine for tables without 
> secondary indexes, but for the ones that make use of them, the execution of 
> these statements fail miserably.
> Here I paste a generated schema.cql content for a table with indexes:
> CREATE TABLE IF NOT EXISTS keyspace1.table1 (
>   id text PRIMARY KEY,
>   content text,
>   last_update_date date,
>   last_update_date_time timestamp)
>   WITH ID = f1045fc0-2f59-11e7-95ec-295c3c064920
>   AND bloom_filter_fp_chance = 0.01
>   AND dclocal_read_repair_chance = 0.1
>   AND crc_check_chance = 1.0
>   AND default_time_to_live = 864
>   AND gc_grace_seconds = 864000
>   AND min_index_interval = 128
>   AND max_index_interval = 2048
>   AND memtable_flush_period_in_ms = 0
>   AND read_repair_chance = 0.0
>   AND speculative_retry = '99PERCENTILE'
>   AND caching = { 'keys': 'NONE', 'rows_per_partition': 'NONE' }
>   AND compaction = { 'max_threshold': '32', 'min_threshold': '4', 
> 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' }
>   AND compression = { 'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor' }
>   AND cdc = false
>   AND extensions = {  };
> CREATE INDEX table1_last_update_date_idx ON keyspace1.table1 
> (last_update_date);
> I think the last part should be:
> CREATE INDEX IF NOT EXISTS table1_last_update_date_idx ON keyspace1.table1 
> (last_update_date);



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14496) TWCS erroneously disabling tombstone compactions

2018-06-05 Thread Robert Tarrall (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502769#comment-16502769
 ] 

Robert Tarrall edited comment on CASSANDRA-14496 at 6/6/18 3:01 AM:


BTW rereading my original ticket here, I see I totally failed to mention I had 
enabled {{unchecked_tombstone_compaction}} ... which was really the key to why 
I opened the ticket.

To hopefully clarify: if {{unchecked_tombstone_compaction}} is set to true, 
tombstone compactions should be enabled, even in TWCS.  We should not have to 
also set interval or threshold in order to enable.

Also, it would probably be better to have maximally high default values for 
interval and threshold to more correctly indicate default behavior.  The 
default isn't really 86400 and 0.2; it's "never".  That's the right choice for 
most TWCS users, but isn't clear from the docs.
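
To make the suggestion concrete, the constructor check could additionally consult 
the unchecked option, along these lines (a sketch against the snippet quoted 
below, assuming the existing {{AbstractCompactionStrategy}} option-name constants; 
not a reviewed patch):
{code:java}
this.options = new TimeWindowCompactionStrategyOptions(options);
// Treat an explicit unchecked_tombstone_compaction=true as opting in to tombstone compactions.
boolean uncheckedRequested = Boolean.parseBoolean(
    options.getOrDefault(AbstractCompactionStrategy.UNCHECKED_TOMBSTONE_COMPACTION_OPTION, "false"));
if (!uncheckedRequested
    && !options.containsKey(AbstractCompactionStrategy.TOMBSTONE_COMPACTION_INTERVAL_OPTION)
    && !options.containsKey(AbstractCompactionStrategy.TOMBSTONE_THRESHOLD_OPTION))
{
    disableTombstoneCompactions = true;
    logger.debug("Disabling tombstone compactions for TWCS");
}
else
    logger.debug("Enabling tombstone compactions for TWCS");
{code}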


was (Author: tarrall):
BTW rereading my original ticket here, I see I totally failed to mention 
{{unchecked_tombstone_compaction}} which was really the key to why I opened the 
ticket.

To hopefully clarify: if {{unchecked_tombstone_compaction}} is set to true, 
tombstone compactions should be enabled, even in TWCS.  We should not have to 
also set interval or threshold in order to enable.

Also, it would probably be better to have maximally high default values for 
interval and threshold to more correctly indicate default behavior.  The 
default isn't really 86400 and 0.2; it's "never".  That's the right choice for 
most TWCS users, but isn't clear from the docs.

> TWCS erroneously disabling tombstone compactions
> 
>
> Key: CASSANDRA-14496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14496
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Robert Tarrall
>Priority: Minor
>
> This code:
> {code:java}
> this.options = new TimeWindowCompactionStrategyOptions(options);
> if (!options.containsKey(AbstractCompactionStrategy.TOMBSTONE_COMPACTION_INTERVAL_OPTION)
>     && !options.containsKey(AbstractCompactionStrategy.TOMBSTONE_THRESHOLD_OPTION))
> {
>     disableTombstoneCompactions = true;
>     logger.debug("Disabling tombstone compactions for TWCS");
> }
> else
>     logger.debug("Enabling tombstone compactions for TWCS");
> }
> {code}
> ... in TimeWindowCompactionStrategy.java disables tombstone compactions in 
> TWCS if you have not *explicitly* set either tombstone_compaction_interval or 
> tombstone_threshold.  Adding 'tombstone_compaction_interval': '86400' to the 
> compaction stanza in a table definition has the (to me unexpected) side 
> effect of enabling tombstone compactions. 
> This is surprising and does not appear to be mentioned in the docs.
> I would suggest that tombstone compactions should be run unless these options 
> are both set to 0.
> If the concern is that (as with DTCS in CASSANDRA-9234) we don't want to 
> waste time on tombstone compactions when we expect the tables to eventually 
> be expired away, perhaps we should also check unchecked_tombstone_compaction 
> and still enable tombstone compactions if that's set to true.
> May also make sense to set defaults for interval & threshold to 0 & disable 
> if they're nonzero so that setting non-default values, rather than setting 
> ANY value, is what determines whether tombstone compactions are enabled?






[jira] [Commented] (CASSANDRA-14496) TWCS erroneously disabling tombstone compactions

2018-06-05 Thread Robert Tarrall (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502769#comment-16502769
 ] 

Robert Tarrall commented on CASSANDRA-14496:


BTW rereading my original ticket here, I see I totally failed to mention 
{{unchecked_tombstone_compaction}} which was really the key to why I opened the 
ticket.

To hopefully clarify: if {{unchecked_tombstone_compaction}} is set to true, 
tombstone compactions should be enabled, even in TWCS.  We should not have to 
also set interval or threshold in order to enable.

Also, it would probably be better to have maximally high default values for 
interval and threshold to more correctly indicate default behavior.  The 
default isn't really 86400 and 0.2; it's "never".  That's the right choice for 
most TWCS users, but isn't clear from the docs.

> TWCS erroneously disabling tombstone compactions
> 
>
> Key: CASSANDRA-14496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14496
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Robert Tarrall
>Priority: Minor
>
> This code:
> {code:java}
> this.options = new TimeWindowCompactionStrategyOptions(options);
> if (!options.containsKey(AbstractCompactionStrategy.TOMBSTONE_COMPACTION_INTERVAL_OPTION)
>     && !options.containsKey(AbstractCompactionStrategy.TOMBSTONE_THRESHOLD_OPTION))
> {
>     disableTombstoneCompactions = true;
>     logger.debug("Disabling tombstone compactions for TWCS");
> }
> else
>     logger.debug("Enabling tombstone compactions for TWCS");
> }
> {code}
> ... in TimeWindowCompactionStrategy.java disables tombstone compactions in 
> TWCS if you have not *explicitly* set either tombstone_compaction_interval or 
> tombstone_threshold.  Adding 'tombstone_compaction_interval': '86400' to the 
> compaction stanza in a table definition has the (to me unexpected) side 
> effect of enabling tombstone compactions. 
> This is surprising and does not appear to be mentioned in the docs.
> I would suggest that tombstone compactions should be run unless these options 
> are both set to 0.
> If the concern is that (as with DTCS in CASSANDRA-9234) we don't want to 
> waste time on tombstone compactions when we expect the tables to eventually 
> be expired away, perhaps we should also check unchecked_tombstone_compaction 
> and still enable tombstone compactions if that's set to true.
> May also make sense to set defaults for interval & threshold to 0 & disable 
> if they're nonzero so that setting non-default values, rather than setting 
> ANY value, is what determines whether tombstone compactions are enabled?






[jira] [Updated] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-05 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-13929:
-
Fix Version/s: (was: 3.11.x)
   3.11.3

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.3
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> cassandra_heapcpu_memleak_patching_test_30d.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
>     if (recycleHandle != null)
>     {
>         this.cleanup();
>         builderRecycler.recycle(this, recycleHandle);
>         recycleHandle = null; // ADDED
>     }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.






[jira] [Updated] (CASSANDRA-14126) don't work udf javascripts

2018-06-05 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-14126:
-
Status: In Progress  (was: Patch Available)

> don't work udf javascripts
> --
>
> Key: CASSANDRA-14126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14126
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Denis Pershin
>Assignee: Alex Lourie
>Priority: Minor
>  Labels: security
> Fix For: 3.11.x
>
> Attachments: cassandra-01.yaml, cassandra-02.yaml, cassandra-03.yaml
>
>
> * config:
> {code}
> enable_user_defined_functions: true
> enable_scripted_user_defined_functions: true
> {code}
> * create keyspace:
> {code}
> CREATE KEYSPACE testkeyspace WITH REPLICATION = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 1 };
> {code}
> * in testkeyspace create function:
> {code}
> CREATE OR REPLACE FUNCTION first_int(input set<int>) RETURNS NULL ON NULL 
> INPUT RETURNS int LANGUAGE javascript AS '(function(){var result = 2;return 
> result;})();';
> {code}
> * create table and insert:
> {code}
> create table A (id int primary key, val set<int>);
> insert into A  (id, val) values (1, {3,5,7,1});
> {code}
> * select:
> {code}
> select first_int(val) from A where id = 1;
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1044, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.10.zip/cassandra-driver-3.10/cassandra/cluster.py",
>  line 3826, in result
> raise self._final_exception
> FunctionFailure: Error from server: code=1400 [User Defined Function failure] 
> message="execution of 'testkeyspace.first_int[set<int>]' failed: 
> java.security.AccessControlException: access denied: 
> ("java.lang.RuntimePermission" "accessClassInPackage.java.io")"
> {code}
> raw log:
> {code}
> root@001b19bd3cc6:/# cqlsh
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE testkeyspace WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> cqlsh> USE testkeyspace ;
> cqlsh:testkeyspace> CREATE OR REPLACE FUNCTION first_int(input set<int>) 
> RETURNS NULL ON NULL INPUT RETURNS int LANGUAGE javascript AS 
> '(function(){var result = 2;return result;})();';
> cqlsh:testkeyspace> create table A (id int primary key, val set<int>);
> cqlsh:testkeyspace> insert into A  (id, val) values (1, {3,5,7,1});
> cqlsh:testkeyspace> select first_int(val) from A where id = 1;
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1044, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.10.zip/cassandra-driver-3.10/cassandra/cluster.py",
>  line 3826, in result
> raise self._final_exception
> FunctionFailure: Error from server: code=1400 [User Defined Function failure] 
> message="execution of 'testkeyspace.first_int[set<int>]' failed: 
> java.security.AccessControlException: access denied: 
> ("java.lang.RuntimePermission" "accessClassInPackage.java.io")"
> {code}






[jira] [Commented] (CASSANDRA-14500) Debian package to include systemd file and conf

2018-06-05 Thread Lerh Chuan Low (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502729#comment-16502729
 ] 

Lerh Chuan Low commented on CASSANDRA-14500:


Here is a branch that currently has what I am trying to do: 
[https://github.com/juiceblender/cassandra/tree/debian-systemd]. I tested it on 
my instance - it seems to work and my Cassandra starts up fine. Though I'll 
admit I'm not good at Debian packaging and had just learned it before trying to 
implement this change.


{code:java}
admin@ip-10-0-6-20:~/cassandra$ systemctl status cassandra.service
● cassandra.service - Apache Cassandra
Loaded: loaded (/lib/systemd/system/cassandra.service; enabled; vendor preset: 
enabled)
Drop-In: /lib/systemd/system/cassandra.service.d
└─cassandra.conf
Active: active (running) since Wed 2018-06-06 00:40:18 UTC; 12s ago
Main PID: 7173 (java)
CGroup: /system.slice/cassandra.service
└─7173 java -Xloggc:/var/log/cassandra/gc.log -ea -da:net.openhft... 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 
-XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
-XX:+AlwaysPreTouch -XX:-UseBiasedLocking -XX:+Us

Jun 06 00:40:26 ip-10-0-6-20 cassandra[7173]: INFO [main] 2018-06-06 
00:40:26,715 MigrationManager.java:238 - Create new table: system_auth.roles
Jun 06 00:40:26 ip-10-0-6-20 cassandra[7173]: INFO [main] 2018-06-06 
00:40:26,831 MigrationManager.java:238 - Create new table: 
system_auth.role_members
Jun 06 00:40:26 ip-10-0-6-20 cassandra[7173]: INFO [main] 2018-06-06 
00:40:26,932 MigrationManager.java:238 - Create new table: 
system_auth.role_permissions
Jun 06 00:40:27 ip-10-0-6-20 cassandra[7173]: INFO [main] 2018-06-06 
00:40:27,039 MigrationManager.java:238 - Create new table: 
system_auth.resource_role_permissons_index
Jun 06 00:40:27 ip-10-0-6-20 cassandra[7173]: INFO [ScheduledTasks:1] 
2018-06-06 00:40:27,084 TokenMetadata.java:489 - Updating topology for all 
endpoints that have changed
Jun 06 00:40:27 ip-10-0-6-20 cassandra[7173]: INFO [main] 2018-06-06 
00:40:27,146 MigrationManager.java:238 - Create new table: 
system_auth.network_permissions
Jun 06 00:40:27 ip-10-0-6-20 cassandra[7173]: INFO [MigrationStage:1] 
2018-06-06 00:40:27,241 Keyspace.java:369 - Creating replication strategy 
system_auth params KeyspaceParams{durable_writes=true, 
replication=ReplicationParams{class=org.apache.cass
Jun 06 00:40:27 ip-10-0-6-20 cassandra[7173]: INFO [MigrationStage:1] 
2018-06-06 00:40:27,247 ColumnFamilyStore.java:402 - Initializing 
system_auth.network_permissions
Jun 06 00:40:27 ip-10-0-6-20 cassandra[7173]: INFO [MigrationStage:1] 
2018-06-06 00:40:27,250 ViewManager.java:131 - Not submitting build tasks for 
views in keyspace system_auth as storage service is not initialized
Jun 06 00:40:27 ip-10-0-6-20 cassandra[7173]: INFO [main] 2018-06-06 
00:40:27,261 Gossiper.java:1802 - Waiting for gossip to settle...{code}

The actual commit is here: 
[https://github.com/juiceblender/cassandra/commit/54a16b83ec553dd4000907409046129abf50616b
|https://github.com/juiceblender/cassandra/commit/05c112e77e2888d3c19ec36cc40c7c6872fd42ce]

I've not removed the original init scripts because I wasn't sure if it was a 
good idea - maybe some of the users are running really old distributions? If 
people think this is a good idea, I'll cleanup some of the other things as 
well. 

> Debian package to include systemd file and conf
> ---
>
> Key: CASSANDRA-14500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14500
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Lerh Chuan Low
>Assignee: Lerh Chuan Low
>Priority: Minor
>
> I've been testing Cassandra on trunk on Debian stretch, and have been 
> creating my own systemd service files for Cassandra. My Cassandra clusters 
> would sometimes die due to too many open files. 
> As it turns out after some digging, this is because systemd ignores 
> */etc/security/limits.conf.* It relies on drop-in configuration files placed 
> in the unit's .d directory. There's more information here: 
> [https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html]. 
> So, for example, for */etc/systemd/system/cassandra.service*, the ulimits are 
> read from */etc/systemd/system/cassandra.service.d/cassandra.conf*. 
> Crosschecking with the limits of my Cassandra process, it looks like the 
> */etc/security/limits.conf* really were not respected. If I make the change 
> above, then it works as expected. */etc/security/limits.conf* is shipped in 
> Cassandra's debian package. 
> Given that there are far more distributions using Systemd (Ubuntu is now as 
> well), I was wondering if it's worth the effort to change Cassandra's debian 
> packaging to use systemd (or at least, include systemd service). I'm not 
> totally familiar 

[jira] [Updated] (CASSANDRA-14358) OutboundTcpConnection can hang for many minutes when nodes restart

2018-06-05 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-14358:
-
Fix Version/s: 3.11.x
   3.0.x
   2.2.x
   2.1.x

> OutboundTcpConnection can hang for many minutes when nodes restart
> --
>
> Key: CASSANDRA-14358
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14358
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.19 (also reproduced on 3.0.15), running 
> with {{internode_encryption: all}} and the EC2 multi region snitch on Linux 
> 4.13 within the same AWS region. Smallest cluster I've seen the problem on is 
> 12 nodes, reproduces more reliably on 40+ and 300 node clusters consistently 
> reproduce on at least one node in the cluster.
> So all the connections are SSL and we're connecting on the internal ip 
> addresses (not the public endpoint ones).
> Potentially relevant sysctls:
> {noformat}
> /proc/sys/net/ipv4/tcp_syn_retries = 2
> /proc/sys/net/ipv4/tcp_synack_retries = 5
> /proc/sys/net/ipv4/tcp_keepalive_time = 7200
> /proc/sys/net/ipv4/tcp_keepalive_probes = 9
> /proc/sys/net/ipv4/tcp_keepalive_intvl = 75
> /proc/sys/net/ipv4/tcp_retries2 = 15
> {noformat}
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Major
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.11.x
>
> Attachments: 10 Minute Partition.pdf
>
>
> edit summary: This primarily impacts networks with stateful firewalls such as 
> AWS. I'm working on a proper patch for trunk but unfortunately it relies on 
> the Netty refactor in 4.0 so it will be hard to backport to previous 
> versions. A workaround for earlier versions is to set the 
> {{net.ipv4.tcp_retries2}} sysctl to ~5. This can be done with the following:
> {code:java}
> $ cat /etc/sysctl.d/20-cassandra-tuning.conf
> net.ipv4.tcp_retries2=5
> $ # Reload all sysctls
> $ sysctl --system{code}
> Original Bug Report:
> I've been trying to debug nodes not being able to see each other during 
> longer (~5 minute+) Cassandra restarts in 3.0.x and 2.1.x which can 
> contribute to {{UnavailableExceptions}} during rolling restarts of 3.0.x and 
> 2.1.x clusters for us. I think I finally have a lead. It appears that prior 
> to trunk (with the awesome Netty refactor) we do not set socket connect 
> timeouts on SSL connections (in 2.1.x, 3.0.x, or 3.11.x) nor do we set 
> {{SO_TIMEOUT}} as far as I can tell on outbound connections either. I believe 
> that this means that we could potentially block forever on {{connect}} or 
> {{recv}} syscalls, and we could block forever on the SSL Handshake as well. I 
> think that the OS will protect us somewhat (and that may be what's causing 
> the eventual timeout) but I think that given the right network conditions our 
> {{OutboundTCPConnection}} threads can just be stuck never making any progress 
> until the OS intervenes.
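> For comparison, plain {{java.net.Socket}} does let both operations be bounded; 
> a minimal illustration of the two timeouts in question (not the 
> {{OutboundTcpConnection}} code itself):
> {code:java}
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.net.Socket;
> 
> public class BoundedConnectSketch
> {
>     public static Socket open(String host, int port) throws IOException
>     {
>         Socket socket = new Socket();
>         // Fail the connect() after 2s instead of blocking until the OS gives up.
>         socket.connect(new InetSocketAddress(host, port), 2000);
>         // SO_TIMEOUT: a read() blocked longer than 10s throws SocketTimeoutException.
>         socket.setSoTimeout(10000);
>         return socket;
>     }
> }
> {code}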
> I have attached some logs of such a network partition during a rolling 
> restart where an old node in the cluster has a completely foobarred 
> {{OutboundTcpConnection}} for ~10 minutes before finally getting a 
> {{java.net.SocketException: Connection timed out (Write failed)}} and 
> immediately successfully reconnecting. I conclude that the old node is the 
> problem because the new node (the one that restarted) is sending ECHOs to the 
> old node, and the old node is sending ECHOs and REQUEST_RESPONSES to the new 
> node's ECHOs, but the new node is never getting the ECHO's. This appears, to 
> me, to indicate that the old node's {{OutboundTcpConnection}} thread is just 
> stuck and can't make any forward progress. By the time we could notice this 
> and slap TRACE logging on, the only thing we see is ~10 minutes later a 
> {{SocketException}} inside {{writeConnected}}'s flush and an immediate 
> recovery. It is interesting to me that the exception happens in 
> {{writeConnected}} and it's a _connection timeout_ (and since we see {{Write 
> failure}} I believe that this can't be a connection reset), because my 
> understanding is that we should have a fully handshaked SSL connection at 
> that point in the code.
> Current theory:
>  # "New" node restarts,  "Old" node calls 
> [newSocket|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L433]
>  # Old node starts [creating a 
> new|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java#L141]
>  SSL socket 
>  # SSLSocket calls 
> 

[jira] [Created] (CASSANDRA-14500) Debian package to include systemd file and conf

2018-06-05 Thread Lerh Chuan Low (JIRA)
Lerh Chuan Low created CASSANDRA-14500:
--

 Summary: Debian package to include systemd file and conf
 Key: CASSANDRA-14500
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14500
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Reporter: Lerh Chuan Low
Assignee: Lerh Chuan Low


I've been testing Cassandra on trunk on Debian stretch, and have been creating 
my own systemd service files for Cassandra. My Cassandra clusters would 
sometimes die due to too many open files. 

As it turns out after some digging, this is because systemd ignores 
*/etc/security/limits.conf.* It relies on drop-in configuration files placed in 
the unit's .d directory. There's more information here: 
[https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html]. 

So, for example, for */etc/systemd/system/cassandra.service*, the ulimits are 
read from */etc/systemd/system/cassandra.service.d/cassandra.conf*. 

Crosschecking with the limits of my Cassandra process, it looks like the 
*/etc/security/limits.conf* really were not respected. If I make the change 
above, then it works as expected. */etc/security/limits.conf* is shipped in 
Cassandra's debian package. 
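
For reference, a minimal drop-in of that shape could look like the following 
(values here are illustrative and would need to mirror whatever the packaged 
*limits.conf* sets today):

{code}
# /etc/systemd/system/cassandra.service.d/cassandra.conf
[Service]
LimitNOFILE=100000
LimitMEMLOCK=infinity
LimitNPROC=32768
LimitAS=infinity
{code}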

Given that there are far more distributions using Systemd (Ubuntu is now as 
well), I was wondering if it's worth the effort to change Cassandra's debian 
packaging to use systemd (or at least, include systemd service). I'm not 
totally familiar with whether it's common or normal to include a service file 
in packaging so happy to be corrected/cancelled depending on what people think. 






[jira] [Created] (CASSANDRA-14499) node-level disk quota

2018-06-05 Thread Jordan West (JIRA)
Jordan West created CASSANDRA-14499:
---

 Summary: node-level disk quota
 Key: CASSANDRA-14499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14499
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jordan West
Assignee: Jordan West


Operators should be able to specify, via YAML, the amount of usable disk space 
on a node as a percentage of the total available or as an absolute value. If 
both are specified, the absolute value should take precedence. This allows 
operators to reserve space available to the database for background tasks -- 
primarily compaction. When a node reaches its quota, gossip should be disabled 
to prevent it taking further writes (which would increase the amount of data 
stored), being involved in reads (which are likely to be more inconsistent over 
time), or participating in repair (which may increase the amount of space used 
on the machine). The node re-enables gossip when the amount of data it stores 
is below the quota.   
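
As a strawman, the YAML could look something like this (option names are purely 
illustrative, not a settled interface):

{code}
# Hypothetical settings for a node-level disk quota
disk_quota_percentage: 80    # percentage of total disk space usable for data
disk_quota_in_mb: 1500000    # absolute cap; takes precedence if both are set
{code}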

The proposed option differs from {{min_free_space_per_drive_in_mb}}, which 
reserves some amount of space on each drive that is not usable by the database. 
 






[jira] [Commented] (CASSANDRA-14496) TWCS erroneously disabling tombstone compactions

2018-06-05 Thread Robert Tarrall (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502639#comment-16502639
 ] 

Robert Tarrall commented on CASSANDRA-14496:


I definitely agree with respect to "don't enable by default."

However, I believe if I have set {{unchecked_tombstone_compaction}} to true, 
I'm asking for these compactions.  I may however be missing something – is 
there another purpose for that option?  I.e. might someone else be setting that 
to true who would be surprised to find it enables tombstone compactions?  I 
can't find any documentation which explains that 
{{'unchecked_tombstone_compaction': 'true'}} has no effect unless you also 
explicitly set other options, and I see discussions in blog posts that suggest 
people think that setting is how you enable tombstone compactions in TWCS, and 
I had to rummage around in source code for a while to work out why I wasn't 
getting those compactions.

Coming at this from the other direction may help.  If I have the following 
compaction defined:
{code:java}
'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 
'compaction_window_size': '1', 'compaction_window_unit': 'HOURS', 
'unchecked_tombstone_compaction': 'true'{code}
... I don't get tombstone compactions.  When I change that to:
{code:java}
'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 
'compaction_window_size': '1', 'compaction_window_unit': 'HOURS', 
'tombstone_compaction_interval': '86400', 'unchecked_tombstone_compaction': 
'true'
{code}
... I have just switched from "no tombstone compactions" to "tombstone 
compactions".  This seems like a surprising side effect – one would not expect 
that explicitly setting an option to its default value would change behavior 
like this.

If there's another purpose for unchecked_tombstone_compaction, I'd recommend 
the defaults for TWCS make it clear that tombstone compactions intentionally 
act differently from STCS; instead of interval & threshold defaults of 86400 & 
0.2, they should be infinitely high, and documented as such, so that you must 
specify non-default values in order to get tombstone compactions.

> TWCS erroneously disabling tombstone compactions
> 
>
> Key: CASSANDRA-14496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14496
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Robert Tarrall
>Priority: Minor
>
> This code:
> {code:java}
> this.options = new TimeWindowCompactionStrategyOptions(options);
> if (!options.containsKey(AbstractCompactionStrategy.TOMBSTONE_COMPACTION_INTERVAL_OPTION)
>     && !options.containsKey(AbstractCompactionStrategy.TOMBSTONE_THRESHOLD_OPTION))
> {
>     disableTombstoneCompactions = true;
>     logger.debug("Disabling tombstone compactions for TWCS");
> }
> else
>     logger.debug("Enabling tombstone compactions for TWCS");
> }
> {code}
> ... in TimeWindowCompactionStrategy.java disables tombstone compactions in 
> TWCS if you have not *explicitly* set either tombstone_compaction_interval or 
> tombstone_threshold.  Adding 'tombstone_compaction_interval': '86400' to the 
> compaction stanza in a table definition has the (to me unexpected) side 
> effect of enabling tombstone compactions. 
> This is surprising and does not appear to be mentioned in the docs.
> I would suggest that tombstone compactions should be run unless these options 
> are both set to 0.
> If the concern is that (as with DTCS in CASSANDRA-9234) we don't want to 
> waste time on tombstone compactions when we expect the tables to eventually 
> be expired away, perhaps we should also check unchecked_tombstone_compaction 
> and still enable tombstone compactions if that's set to true.
> May also make sense to set defaults for interval & threshold to 0 & disable 
> if they're nonzero so that setting non-default values, rather than setting 
> ANY value, is what determines whether tombstone compactions are enabled?






[jira] [Commented] (CASSANDRA-14466) Enable Direct I/O

2018-06-05 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502605#comment-16502605
 ] 

Ariel Weisberg commented on CASSANDRA-14466:


This doesn't compile with Java 8. Maybe you can use reflection to get at the 3 
things you need?

> Enable Direct I/O 
> --
>
> Key: CASSANDRA-14466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14466
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Mulugeta Mammo
>Priority: Major
> Attachments: direct_io.patch
>
>
> Hi,
> JDK 10 introduced a new API for Direct IO that enables applications to bypass 
> the file system cache and potentially improve performance. Details of this 
> feature can be found at [https://bugs.openjdk.java.net/browse/JDK-8164900].
> This patch uses the JDK 10 API to enable Direct IO for the Cassandra read 
> path. By default, we have disabled this feature; but it can be enabled using 
> a  new configuration parameter, enable_direct_io_for_read_path. We have 
> conducted a Cassandra read-only stress test and measured a throughput gain of 
> up to 60% on flash drives.
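> For reference, the JDK 10 API in question can be exercised on its own along 
> these lines (a standalone sketch requiring JDK 10+, not the attached patch; 
> direct I/O needs block-aligned positions and buffer addresses):
> {code:java}
> import com.sun.nio.file.ExtendedOpenOption;
> import java.nio.ByteBuffer;
> import java.nio.channels.FileChannel;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> import java.nio.file.StandardOpenOption;
> 
> public class DirectReadSketch
> {
>     public static void main(String[] args) throws Exception
>     {
>         Path path = Paths.get(args[0]);
>         int blockSize = (int) Files.getFileStore(path).getBlockSize();
>         try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ, ExtendedOpenOption.DIRECT))
>         {
>             // An aligned direct buffer sized to one block, read from a block-aligned offset.
>             ByteBuffer buffer = ByteBuffer.allocateDirect(blockSize * 2).alignedSlice(blockSize);
>             buffer.limit(blockSize);
>             int read = channel.read(buffer, 0);
>             System.out.println("Read " + read + " bytes with O_DIRECT");
>         }
>     }
> }
> {code}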
> The patch requires JDK 10 Cassandra Support - 
> https://issues.apache.org/jira/browse/CASSANDRA-9608 
> Please review the patch and let us know your feedback.
> Thanks,
> [^direct_io.patch]
>  






[jira] [Commented] (CASSANDRA-13698) Reinstate or get rid of unit tests with multiple compaction strategies

2018-06-05 Thread Lerh Chuan Low (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502602#comment-16502602
 ] 

Lerh Chuan Low commented on CASSANDRA-13698:


Gentle prod...

> Reinstate or get rid of unit tests with multiple compaction strategies
> --
>
> Key: CASSANDRA-13698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13698
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Paulo Motta
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
> Attachments: 13698-3.0.txt, 13698-3.11.txt, 13698-trunk.txt
>
>
> At some point there were (anti-)compaction tests with multiple compaction 
> strategy classes, but now it's only tested with {{STCS}}:
> * 
> [AnticompactionTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java#L247]
> * 
> [CompactionsTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java#L85]
> We should either reinstate these tests or decide they are not important and 
> remove the unused parameter.






[jira] [Commented] (CASSANDRA-14466) Enable Direct I/O

2018-06-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502601#comment-16502601
 ] 

ASF GitHub Bot commented on CASSANDRA-14466:


GitHub user aweisberg opened a pull request:

https://github.com/apache/cassandra/pull/232

Enable Direct I/O

Patch by Mulugeta Mammo; Reviewed by Ariel Weisberg for CASSANDRA-14466

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aweisberg/cassandra cassandra-14466-trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cassandra/pull/232.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #232


commit d221e254c1d248aafd96ab2c7f3f15751c81d384
Author: =Mulugeta Mammo 
Date:   2018-06-05T22:36:08Z

Enable Direct I/O

Patch by Mulugeta Mammo; Reviewed by Ariel Weisberg for CASSANDRA-14466




> Enable Direct I/O 
> --
>
> Key: CASSANDRA-14466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14466
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Mulugeta Mammo
>Priority: Major
> Attachments: direct_io.patch
>
>
> Hi,
> JDK 10 introduced a new API for Direct IO that enables applications to bypass 
> the file system cache and potentially improve performance. Details of this 
> feature can be found at [https://bugs.openjdk.java.net/browse/JDK-8164900].
> This patch uses the JDK 10 API to enable Direct IO for the Cassandra read 
> path. By default, we have disabled this feature; but it can be enabled using 
> a  new configuration parameter, enable_direct_io_for_read_path. We have 
> conducted a Cassandra read-only stress test and measured a throughput gain of 
> up to 60% on flash drives.
> The patch requires JDK 10 Cassandra Support - 
> https://issues.apache.org/jira/browse/CASSANDRA-9608 
> Please review the patch and let us know your feedback.
> Thanks,
> [^direct_io.patch]
>  






[jira] [Commented] (CASSANDRA-14466) Enable Direct I/O

2018-06-05 Thread Mulugeta Mammo (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502584#comment-16502584
 ] 

Mulugeta Mammo commented on CASSANDRA-14466:


The results we posted are based on a read_ahead_kb value of 8, a chunk size of 
64KB and a uniform distribution invocation:

{{cassandra-stress user 
profile=$CASSANDRA_TOOLS/cqlstress-insanity-example.yaml  ops\(simple1=1\) 
no-warmup cl=ONE duration=300s -mode native cql3 -pop 
dist=uniform\(1..12\) -node server_ip -rate threads=288}}

And no, we don't see any significant difference if we set read_ahead_kb to 0. 
For a buffered run with read_ahead_kb set to 0 vs. 8, we observed only about a 
5% increase in throughput with 0.

Also, for all of our runs, the Cassandra caches (row cache, key cache, etc.) 
were disabled. For a cacheable data, we believe a better solution is to have 
the caches enabled and tuned instead of relying on the page cache. Generally, 
we believe relying on the page cache is not a good strategy as the application 
has no control over the caching. The problem also gets worse if other 
applications, e.g. a Spark analytics workload, are running on the same node.  

You may download and test it: git clone -b direct_io 
[https://github.com/mulugetam/cassandra.git] (requires JDK 10)

> Enable Direct I/O 
> --
>
> Key: CASSANDRA-14466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14466
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Mulugeta Mammo
>Priority: Major
> Attachments: direct_io.patch
>
>
> Hi,
> JDK 10 introduced a new API for Direct IO that enables applications to bypass 
> the file system cache and potentially improve performance. Details of this 
> feature can be found at [https://bugs.openjdk.java.net/browse/JDK-8164900].
> This patch uses the JDK 10 API to enable Direct IO for the Cassandra read 
> path. By default, we have disabled this feature; but it can be enabled using 
> a  new configuration parameter, enable_direct_io_for_read_path. We have 
> conducted a Cassandra read-only stress test and measured a throughput gain of 
> up to 60% on flash drives.
> The patch requires JDK 10 Cassandra Support - 
> https://issues.apache.org/jira/browse/CASSANDRA-9608 
> Please review the patch and let us know your feedback.
> Thanks,
> [^direct_io.patch]
>  






[jira] [Comment Edited] (CASSANDRA-14459) DynamicEndpointSnitch should never prefer latent nodes

2018-06-05 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502565#comment-16502565
 ] 

Jason Brown edited comment on CASSANDRA-14459 at 6/5/18 10:05 PM:
--

bq.  I figured since getSnapshot is called every 100ms in updateScore ...

Huh, you are correct. I didn't realize we did that (grab a snapshot when we 
calculate scores). OK, well, then ... I guess once every ten minutes isn't as 
egregious as I thought. Ignore my earlier comment, then.



was (Author: jasobrown):
bq.  I figured since getSnapshot is called every 100ms in updateScore ...

Huh, you are correct. I didn't realize we did that. OK, well, then ... I guess 
once every ten minutes isn't as egregious as I thought. Ignore my earlier 
comment, then.


> DynamicEndpointSnitch should never prefer latent nodes
> --
>
> Key: CASSANDRA-14459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14459
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>
> The DynamicEndpointSnitch has two unfortunate behaviors that allow it to 
> provide latent hosts as replicas:
>  # Loses all latency information when Cassandra restarts
>  # Clears latency information entirely every ten minutes (by default), 
> allowing global queries to be routed to _other datacenters_ (and local 
> queries cross racks/azs)
> This means that the first few queries after restart/reset could be quite slow 
> compared to average latencies. I propose we solve this by resetting to the 
> minimum observed latency instead of completely clearing the samples and 
> extending the {{isLatencyForSnitch}} idea to a three state variable instead 
> of two, in particular {{YES}}, {{NO}}, {{MAYBE}}. This extension allows 
> {{EchoMessages}} and {{PingMessages}} to send {{MAYBE}} indicating that the 
> DS should use those measurements if it only has one or fewer samples for a 
> host. This fixes both problems because on process restart we send out 
> {{PingMessages}} / {{EchoMessages}} as part of startup, and we would reset to 
> effectively the RTT of the hosts (also at that point normal gossip 
> {{EchoMessages}} have an opportunity to add an additional latency 
> measurement).
> This strategy also nicely deals with the "a host got slow but now it's fine" 
> problem that the DS resets were (afaik) designed to stop because the 
> {{EchoMessage}} ping latency will count only after the reset for that host. 
> Ping latency is a more reasonable lower bound on host latency (as opposed to 
> status quo of zero).
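> To sketch the shape of the proposal (class, method and field names here are 
> hypothetical, not the actual DynamicEndpointSnitch code):
> {code:java}
> import java.net.InetAddress;
> import java.util.ArrayList;
> import java.util.List;
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> 
> public class LatencyHintSketch
> {
>     enum LatencyForSnitch { YES, NO, MAYBE }
> 
>     private final Map<InetAddress, List<Long>> samples = new ConcurrentHashMap<>();
> 
>     void receiveTiming(InetAddress host, long latencyMillis, LatencyForSnitch hint)
>     {
>         if (hint == LatencyForSnitch.NO)
>             return;
>         List<Long> hostSamples = samples.computeIfAbsent(host, h -> new ArrayList<>());
>         // MAYBE measurements (e.g. Echo/Ping round trips) only seed hosts with little or no data.
>         if (hint == LatencyForSnitch.MAYBE && hostSamples.size() > 1)
>             return;
>         hostSamples.add(latencyMillis);
>     }
> }
> {code}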






[jira] [Commented] (CASSANDRA-14459) DynamicEndpointSnitch should never prefer latent nodes

2018-06-05 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502565#comment-16502565
 ] 

Jason Brown commented on CASSANDRA-14459:
-

bq.  I figured since getSnapshot is called every 100ms in updateScore ...

Huh, you are correct. I didn't realize we did that. OK, well, then ... I guess 
once every ten minutes isn't as egregious as I thought. Ignore my earlier 
comment, then.


> DynamicEndpointSnitch should never prefer latent nodes
> --
>
> Key: CASSANDRA-14459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14459
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>
> The DynamicEndpointSnitch has two unfortunate behaviors that allow it to 
> provide latent hosts as replicas:
>  # Loses all latency information when Cassandra restarts
>  # Clears latency information entirely every ten minutes (by default), 
> allowing global queries to be routed to _other datacenters_ (and local 
> queries cross racks/azs)
> This means that the first few queries after restart/reset could be quite slow 
> compared to average latencies. I propose we solve this by resetting to the 
> minimum observed latency instead of completely clearing the samples and 
> extending the {{isLatencyForSnitch}} idea to a three state variable instead 
> of two, in particular {{YES}}, {{NO}}, {{MAYBE}}. This extension allows 
> {{EchoMessages}} and {{PingMessages}} to send {{MAYBE}} indicating that the 
> DS should use those measurements if it only has one or fewer samples for a 
> host. This fixes both problems because on process restart we send out 
> {{PingMessages}} / {{EchoMessages}} as part of startup, and we would reset to 
> effectively the RTT of the hosts (also at that point normal gossip 
> {{EchoMessages}} have an opportunity to add an additional latency 
> measurement).
> This strategy also nicely deals with the "a host got slow but now it's fine" 
> problem that the DS resets were (afaik) designed to stop because the 
> {{EchoMessage}} ping latency will count only after the reset for that host. 
> Ping latency is a more reasonable lower bound on host latency (as opposed to 
> status quo of zero).






[jira] [Commented] (CASSANDRA-14459) DynamicEndpointSnitch should never prefer latent nodes

2018-06-05 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502560#comment-16502560
 ] 

Joseph Lynch commented on CASSANDRA-14459:
--

[~sumanth.pasupuleti]
Regarding lowest or latest latency, if I do additional bookkeeping either is 
viable, but without additional bookkeeping it becomes slightly complex to do 
the latest value. Either strategy would allow temporarily slow nodes 
to receive traffic again, but I worry that resetting a host to a recent large 
value would temporarily remove it from replica consideration entirely until the 
next reset/EchoMessage. I'll fix the enum documentation.

[~jasobrown] Thanks for taking a look! I figured since {{getSnapshot}} is 
called every 100ms in {{updateScore}}, calling it once more every 10 minutes 
wouldn't be a huge deal? Separate bookkeeping is certainly possible if you 
prefer, I just figured it was better to minimize the complexity of the change. 
While testing this I think that I need to hook in somewhere to the 3-way 
gossiping messaging since {{EchoMessages}} only get sent if the node is alive 
in gossip but not in the local failure detector. I'll work on a test asserting 
that a round of gossip adds latency numbers, and then fix it :-)



> DynamicEndpointSnitch should never prefer latent nodes
> --
>
> Key: CASSANDRA-14459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14459
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>
> The DynamicEndpointSnitch has two unfortunate behaviors that allow it to 
> provide latent hosts as replicas:
>  # Loses all latency information when Cassandra restarts
>  # Clears latency information entirely every ten minutes (by default), 
> allowing global queries to be routed to _other datacenters_ (and local 
> queries cross racks/azs)
> This means that the first few queries after restart/reset could be quite slow 
> compared to average latencies. I propose we solve this by resetting to the 
> minimum observed latency instead of completely clearing the samples and 
> extending the {{isLatencyForSnitch}} idea to a three state variable instead 
> of two, in particular {{YES}}, {{NO}}, {{MAYBE}}. This extension allows 
> {{EchoMessages}} and {{PingMessages}} to send {{MAYBE}} indicating that the 
> DS should use those measurements if it only has one or fewer samples for a 
> host. This fixes both problems because on process restart we send out 
> {{PingMessages}} / {{EchoMessages}} as part of startup, and we would reset to 
> effectively the RTT of the hosts (also at that point normal gossip 
> {{EchoMessages}} have an opportunity to add an additional latency 
> measurement).
> This strategy also nicely deals with the "a host got slow but now it's fine" 
> problem that the DS resets were (afaik) designed to stop because the 
> {{EchoMessage}} ping latency will count only after the reset for that host. 
> Ping latency is a more reasonable lower bound on host latency (as opposed to 
> status quo of zero).






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-05 Thread Cyril Scetbon (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502546#comment-16502546
 ] 

Cyril Scetbon commented on CASSANDRA-13929:
---

Hey guys, any news on that issue ?

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> cassandra_heapcpu_memleak_patching_test_30d.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; most likely the leak just became 
> more visible after CASSANDRA-13754 was fixed
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14497) Add Role login cache

2018-06-05 Thread Jay Zhuang (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502534#comment-16502534
 ] 

Jay Zhuang commented on CASSANDRA-14497:


Thanks [~beobal].

I also have a [draft 
patch|https://github.com/cooldoger/cassandra/commit/03f72307e1d41c98e6f3015a0ff8fe22157cb21a]
 for it. As the author of the auth feature, you may have a better idea about 
that. I can help to test and review your change.

> Add Role login cache
> 
>
> Key: CASSANDRA-14497
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14497
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth
>Reporter: Jay Zhuang
>Priority: Major
>
> The 
> [{{ClientState.login()}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/ClientState.java#L313]
>  function is used for all auth messages: 
> [{{AuthResponse.java:82}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/messages/AuthResponse.java#L82].
>  But the 
> [{{role.canLogin}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L521]
>  information is not cached. So it hits the database every time: 
> [{{CassandraRoleManager.java:407}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L407].
>  For a cluster with lots of new connections, it's causing a performance issue. 
> The mitigation for us is to increase the {{system_auth}} replication factor 
> to match the number of nodes, so 
> [{{local_one}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L488]
>  would be very cheap. The P99 dropped immediately, but I don't think it is 
> a good solution.
> I would propose adding {{Role.canLogin}} to the RolesCache to improve the 
> auth performance.
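
For illustration only, a rough sketch of what caching the {{canLogin}} flag could 
look like, using a plain Guava cache. The class and the 
{{fetchCanLoginFromRolesTable}} loader are hypothetical; this is not the draft 
patch linked above.
{code:java}
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class CanLoginCache
{
    private final LoadingCache<String, Boolean> cache;

    public CanLoginCache(Function<String, Boolean> fetchCanLoginFromRolesTable)
    {
        // Cache the canLogin flag per role so that every new connection does not
        // have to hit the system_auth tables again.
        this.cache = CacheBuilder.newBuilder()
                                 .expireAfterWrite(2, TimeUnit.SECONDS) // would follow roles_validity_in_ms
                                 .build(new CacheLoader<String, Boolean>()
                                 {
                                     public Boolean load(String role)
                                     {
                                         return fetchCanLoginFromRolesTable.apply(role);
                                     }
                                 });
    }

    public boolean isLoginAllowed(String role)
    {
        return cache.getUnchecked(role);
    }
}
{code}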



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-14497) Add Role login cache

2018-06-05 Thread Jay Zhuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang reassigned CASSANDRA-14497:
--

Assignee: Sam Tunnicliffe

> Add Role login cache
> 
>
> Key: CASSANDRA-14497
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14497
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth
>Reporter: Jay Zhuang
>Assignee: Sam Tunnicliffe
>Priority: Major
>
> The 
> [{{ClientState.login()}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/ClientState.java#L313]
>  function is used for all auth messages: 
> [{{AuthResponse.java:82}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/messages/AuthResponse.java#L82].
>  But the 
> [{{role.canLogin}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L521]
>  information is not cached. So it hits the database every time: 
> [{{CassandraRoleManager.java:407}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L407].
>  For a cluster with lots of new connections, it's causing a performance issue. 
> The mitigation for us is to increase the {{system_auth}} replication factor 
> to match the number of nodes, so 
> [{{local_one}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L488]
>  would be very cheap. The P99 dropped immediately, but I don't think it is 
> a good solution.
> I would propose adding {{Role.canLogin}} to the RolesCache to improve the 
> auth performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14459) DynamicEndpointSnitch should never prefer latent nodes

2018-06-05 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502521#comment-16502521
 ] 

Jason Brown commented on CASSANDRA-14459:
-

fwiw, 
[circleci|https://circleci.com/gh/jasobrown/workflows/cassandra/tree/14459] 
with [~jolynch]'s patch was all GREEN.

> DynamicEndpointSnitch should never prefer latent nodes
> --
>
> Key: CASSANDRA-14459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14459
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>
> The DynamicEndpointSnitch has two unfortunate behaviors that allow it to 
> provide latent hosts as replicas:
>  # Loses all latency information when Cassandra restarts
>  # Clears latency information entirely every ten minutes (by default), 
> allowing global queries to be routed to _other datacenters_ (and local 
> queries cross racks/azs)
> This means that the first few queries after restart/reset could be quite slow 
> compared to average latencies. I propose we solve this by resetting to the 
> minimum observed latency instead of completely clearing the samples and 
> extending the {{isLatencyForSnitch}} idea to a three state variable instead 
> of two, in particular {{YES}}, {{NO}}, {{MAYBE}}. This extension allows 
> {{EchoMessages}} and {{PingMessages}} to send {{MAYBE}} indicating that the 
> DS should use those measurements if it only has one or fewer samples for a 
> host. This fixes both problems because on process restart we send out 
> {{PingMessages}} / {{EchoMessages}} as part of startup, and we would reset to 
> effectively the RTT of the hosts (also at that point normal gossip 
> {{EchoMessages}} have an opportunity to add an additional latency 
> measurement).
> This strategy also nicely deals with the "a host got slow but now it's fine" 
> problem that the DS resets were (afaik) designed to stop because the 
> {{EchoMessage}} ping latency will count only after the reset for that host. 
> Ping latency is a more reasonable lower bound on host latency (as opposed to 
> status quo of zero).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14451) Infinity ms Commit Log Sync

2018-06-05 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14451:

   Resolution: Fixed
Fix Version/s: (was: 4.0.x)
   (was: 3.11.x)
   (was: 3.0.x)
   3.11.3
   3.0.17
   4.0
   Status: Resolved  (was: Patch Available)

Made all the last recs from [~jrwest], and committed as sha 
{{214a3abfcc25460af50805b543a5833697a1b341}}. Thanks, all!

> Infinity ms Commit Log Sync
> ---
>
> Key: CASSANDRA-14451
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14451
> Project: Cassandra
>  Issue Type: Bug
> Environment: 3.11.2 - 2 DC
>Reporter: Harry Hough
>Assignee: Jason Brown
>Priority: Minor
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> It's giving commit log sync warnings where there were apparently zero syncs 
> and therefore gives "Infinityms" as the average duration
> {code:java}
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:11:14,294 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 74.40ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:16:57,844 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 198.69ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:24:46,325 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 264.11ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:29:46,393 
> NoSpamLogger.java:94 - Out of 32 commit log syncs over the past 268.84s with, 
> average duration of 17.56ms, 1 have exceeded the configured commit interval 
> by an average of 173.66ms{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[1/6] cassandra git commit: Fix regression of lagging commitlog flush log message

2018-06-05 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 38096da25 -> 214a3abfc
  refs/heads/cassandra-3.11 b92d90dc1 -> 77a12053b
  refs/heads/trunk 5d8767765 -> 843a5fdf2


Fix regression of lagging commitlog flush log message

patch by jasobrown, reviewed by Jordan West for CASSANDRA-14451


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/214a3abf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/214a3abf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/214a3abf

Branch: refs/heads/cassandra-3.0
Commit: 214a3abfcc25460af50805b543a5833697a1b341
Parents: 38096da
Author: Jason Brown 
Authored: Fri Jun 1 05:45:23 2018 -0700
Committer: Jason Brown 
Committed: Tue Jun 5 13:47:37 2018 -0700

--
 CHANGES.txt |  1 +
 .../db/commitlog/AbstractCommitLogService.java  | 85 +---
 .../commitlog/AbstractCommitLogServiceTest.java | 49 ++-
 3 files changed, 104 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/214a3abf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 16fe6d1..dfdfbfd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
  * Add Missing dependencies in pom-all (CASSANDRA-14422)
  * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
  * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/214a3abf/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
--
diff --git 
a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java 
b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
index 1cee55d..0845bd5 100644
--- a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
+++ b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
@@ -29,6 +29,8 @@ import java.util.concurrent.Semaphore;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicLong;
 
+import com.google.common.annotations.VisibleForTesting;
+
 public abstract class AbstractCommitLogService
 {
 /**
@@ -165,13 +167,15 @@ public abstract class AbstractCommitLogService
 
 // sync and signal
 long pollStarted = clock.currentTimeMillis();
-if (lastSyncedAt + syncIntervalMillis <= pollStarted || 
shutdown || syncRequested)
+boolean flushToDisk = lastSyncedAt + syncIntervalMillis <= 
pollStarted || shutdown || syncRequested;
+if (flushToDisk)
 {
 // in this branch, we want to flush the commit log to disk
 syncRequested = false;
 commitLog.sync(shutdown, true);
 lastSyncedAt = pollStarted;
 syncComplete.signalAll();
+syncCount++;
 }
 else
 {
@@ -179,41 +183,15 @@ public abstract class AbstractCommitLogService
 commitLog.sync(false, false);
 }
 
-// sleep any time we have left before the next one is due
 long now = clock.currentTimeMillis();
-long sleep = pollStarted + markerIntervalMillis - now;
-if (sleep < 0)
-{
-// if we have lagged noticeably, update our lag counter
-if (firstLagAt == 0)
-{
-firstLagAt = now;
-totalSyncDuration = syncExceededIntervalBy = syncCount 
= lagCount = 0;
-}
-syncExceededIntervalBy -= sleep;
-lagCount++;
-}
-syncCount++;
-totalSyncDuration += now - pollStarted;
-
-if (firstLagAt > 0)
-{
-//Only reset the lag tracking if it actually logged this 
time
-boolean logged = NoSpamLogger.log(
-logger,
-NoSpamLogger.Level.WARN,
-5,
-TimeUnit.MINUTES,
-"Out of {} commit log syncs over the past {}s with average 
duration of {}ms, {} have exceeded the configured commit interval by an average 
of {}ms",
-syncCount, (now - firstLagAt) / 1000, 
String.format("%.2f", (double) totalSyncDuration / syncCount), lagCount, 
String.format("%.2f", (double) 

[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-06-05 Thread jasobrown
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77a12053
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77a12053
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77a12053

Branch: refs/heads/trunk
Commit: 77a12053b69ceebd529556d5159f9325703283eb
Parents: b92d90d 214a3ab
Author: Jason Brown 
Authored: Tue Jun 5 13:48:56 2018 -0700
Committer: Jason Brown 
Committed: Tue Jun 5 13:50:36 2018 -0700

--
 CHANGES.txt |  1 +
 .../db/commitlog/AbstractCommitLogService.java  | 88 +---
 .../commitlog/AbstractCommitLogServiceTest.java | 49 ++-
 3 files changed, 105 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/77a12053/CHANGES.txt
--
diff --cc CHANGES.txt
index 2d4ef25,dfdfbfd..2e77d2e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,21 -1,5 +1,22 @@@
 -3.0.17
 +3.11.3
 + * Reduce nodetool GC thread count (CASSANDRA-14475)
 + * Fix New SASI view creation during Index Redistribution (CASSANDRA-14055)
 + * Remove string formatting lines from BufferPool hot path (CASSANDRA-14416)
 + * Update metrics to 3.1.5 (CASSANDRA-12924)
 + * Detect OpenJDK jvm type and architecture (CASSANDRA-12793)
 + * Don't use guava collections in the non-system keyspace jmx attributes 
(CASSANDRA-12271)
 + * Allow existing nodes to use all peers in shadow round (CASSANDRA-13851)
 + * Fix cqlsh to read connection.ssl cqlshrc option again (CASSANDRA-14299)
 + * Downgrade log level to trace for CommitLogSegmentManager (CASSANDRA-14370)
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Serialize empty buffer as empty string for json output format 
(CASSANDRA-14245)
 + * Allow logging implementation to be interchanged for embedded testing 
(CASSANDRA-13396)
 + * SASI tokenizer for simple delimiter based entries (CASSANDRA-14247)
 + * Fix Loss of digits when doing CAST from varint/bigint to decimal 
(CASSANDRA-14170)
 + * RateBasedBackPressure unnecessarily invokes a lock on the Guava 
RateLimiter (CASSANDRA-14163)
 + * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 +Merged from 3.0:
+  * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
   * Add Missing dependencies in pom-all (CASSANDRA-14422)
   * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
   * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/77a12053/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
--
diff --cc 
src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
index 7c5d300,0845bd5..b7ab705
--- a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
+++ b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
@@@ -17,15 -17,6 +17,16 @@@
   */
  package org.apache.cassandra.db.commitlog;
  
 +import java.util.concurrent.TimeUnit;
 +import java.util.concurrent.atomic.AtomicLong;
 +import java.util.concurrent.locks.LockSupport;
 +
++import com.google.common.annotations.VisibleForTesting;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.codahale.metrics.Timer.Context;
 +
  import org.apache.cassandra.concurrent.NamedThreadFactory;
  import org.apache.cassandra.config.Config;
  import org.apache.cassandra.db.commitlog.CommitLogSegment.Allocation;
@@@ -162,14 -160,15 +163,15 @@@ public abstract class AbstractCommitLog
  
  boolean sync()
  {
 +// always run once after shutdown signalled
 +boolean shutdownRequested = shutdown;
 +
  try
  {
 -// always run once after shutdown signalled
 -boolean run = !shutdown;
 -
  // sync and signal
 -long pollStarted = clock.currentTimeMillis();
 -boolean flushToDisk = lastSyncedAt + syncIntervalMillis <= 
pollStarted || shutdown || syncRequested;
 +long pollStarted = clock.nanoTime();
- if (lastSyncedAt + syncIntervalNanos <= pollStarted || 
shutdownRequested || syncRequested)
++boolean flushToDisk = lastSyncedAt + syncIntervalNanos <= 
pollStarted || shutdownRequested || syncRequested;
+ if (flushToDisk)
  {
  // in this branch, we want to flush the commit log to disk
  syncRequested = false;
@@@ -181,47 -180,30 +183,19 @@@
  else
  {
  // in 

[3/6] cassandra git commit: Fix regression of lagging commitlog flush log message

2018-06-05 Thread jasobrown
Fix regression of lagging commitlog flush log message

patch by jasobrown, reviewed by Jordan West for CASSANDRA-14451


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/214a3abf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/214a3abf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/214a3abf

Branch: refs/heads/trunk
Commit: 214a3abfcc25460af50805b543a5833697a1b341
Parents: 38096da
Author: Jason Brown 
Authored: Fri Jun 1 05:45:23 2018 -0700
Committer: Jason Brown 
Committed: Tue Jun 5 13:47:37 2018 -0700

--
 CHANGES.txt |  1 +
 .../db/commitlog/AbstractCommitLogService.java  | 85 +---
 .../commitlog/AbstractCommitLogServiceTest.java | 49 ++-
 3 files changed, 104 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/214a3abf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 16fe6d1..dfdfbfd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
  * Add Missing dependencies in pom-all (CASSANDRA-14422)
  * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
  * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/214a3abf/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
--
diff --git 
a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java 
b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
index 1cee55d..0845bd5 100644
--- a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
+++ b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
@@ -29,6 +29,8 @@ import java.util.concurrent.Semaphore;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicLong;
 
+import com.google.common.annotations.VisibleForTesting;
+
 public abstract class AbstractCommitLogService
 {
 /**
@@ -165,13 +167,15 @@ public abstract class AbstractCommitLogService
 
 // sync and signal
 long pollStarted = clock.currentTimeMillis();
-if (lastSyncedAt + syncIntervalMillis <= pollStarted || 
shutdown || syncRequested)
+boolean flushToDisk = lastSyncedAt + syncIntervalMillis <= 
pollStarted || shutdown || syncRequested;
+if (flushToDisk)
 {
 // in this branch, we want to flush the commit log to disk
 syncRequested = false;
 commitLog.sync(shutdown, true);
 lastSyncedAt = pollStarted;
 syncComplete.signalAll();
+syncCount++;
 }
 else
 {
@@ -179,41 +183,15 @@ public abstract class AbstractCommitLogService
 commitLog.sync(false, false);
 }
 
-// sleep any time we have left before the next one is due
 long now = clock.currentTimeMillis();
-long sleep = pollStarted + markerIntervalMillis - now;
-if (sleep < 0)
-{
-// if we have lagged noticeably, update our lag counter
-if (firstLagAt == 0)
-{
-firstLagAt = now;
-totalSyncDuration = syncExceededIntervalBy = syncCount 
= lagCount = 0;
-}
-syncExceededIntervalBy -= sleep;
-lagCount++;
-}
-syncCount++;
-totalSyncDuration += now - pollStarted;
-
-if (firstLagAt > 0)
-{
-//Only reset the lag tracking if it actually logged this 
time
-boolean logged = NoSpamLogger.log(
-logger,
-NoSpamLogger.Level.WARN,
-5,
-TimeUnit.MINUTES,
-"Out of {} commit log syncs over the past {}s with average 
duration of {}ms, {} have exceeded the configured commit interval by an average 
of {}ms",
-syncCount, (now - firstLagAt) / 1000, 
String.format("%.2f", (double) totalSyncDuration / syncCount), lagCount, 
String.format("%.2f", (double) syncExceededIntervalBy / lagCount));
-if (logged)
-firstLagAt = 0;
-}
+if (flushToDisk)
+

[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-06-05 Thread jasobrown
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/843a5fdf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/843a5fdf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/843a5fdf

Branch: refs/heads/trunk
Commit: 843a5fdf2ff8f2cb61a4e1d6632fd443bc2136fb
Parents: 5d87677 77a1205
Author: Jason Brown 
Authored: Tue Jun 5 13:50:58 2018 -0700
Committer: Jason Brown 
Committed: Tue Jun 5 13:51:50 2018 -0700

--
 CHANGES.txt |  1 +
 .../db/commitlog/AbstractCommitLogService.java  | 88 +---
 .../commitlog/AbstractCommitLogServiceTest.java | 49 ++-
 3 files changed, 105 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/843a5fdf/CHANGES.txt
--
diff --cc CHANGES.txt
index eb064be,2e77d2e..9857704
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -264,8 -16,10 +264,9 @@@
   * RateBasedBackPressure unnecessarily invokes a lock on the Guava 
RateLimiter (CASSANDRA-14163)
   * Fix wildcard GROUP BY queries (CASSANDRA-14209)
  Merged from 3.0:
+  * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
   * Add Missing dependencies in pom-all (CASSANDRA-14422)
   * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
 - * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)
   * Cassandra not starting when using enhanced startup scripts in windows 
(CASSANDRA-14418)
   * Fix progress stats and units in compactionstats (CASSANDRA-12244)
   * Better handle missing partition columns in system_schema.columns 
(CASSANDRA-14379)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/6] cassandra git commit: Fix regression of lagging commitlog flush log message

2018-06-05 Thread jasobrown
Fix regression of lagging commitlog flush log message

patch by jasobrown, reviewed by Jordan West for CASSANDRA-14451


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/214a3abf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/214a3abf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/214a3abf

Branch: refs/heads/cassandra-3.11
Commit: 214a3abfcc25460af50805b543a5833697a1b341
Parents: 38096da
Author: Jason Brown 
Authored: Fri Jun 1 05:45:23 2018 -0700
Committer: Jason Brown 
Committed: Tue Jun 5 13:47:37 2018 -0700

--
 CHANGES.txt |  1 +
 .../db/commitlog/AbstractCommitLogService.java  | 85 +---
 .../commitlog/AbstractCommitLogServiceTest.java | 49 ++-
 3 files changed, 104 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/214a3abf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 16fe6d1..dfdfbfd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
  * Add Missing dependencies in pom-all (CASSANDRA-14422)
  * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
  * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/214a3abf/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
--
diff --git 
a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java 
b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
index 1cee55d..0845bd5 100644
--- a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
+++ b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
@@ -29,6 +29,8 @@ import java.util.concurrent.Semaphore;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicLong;
 
+import com.google.common.annotations.VisibleForTesting;
+
 public abstract class AbstractCommitLogService
 {
 /**
@@ -165,13 +167,15 @@ public abstract class AbstractCommitLogService
 
 // sync and signal
 long pollStarted = clock.currentTimeMillis();
-if (lastSyncedAt + syncIntervalMillis <= pollStarted || 
shutdown || syncRequested)
+boolean flushToDisk = lastSyncedAt + syncIntervalMillis <= 
pollStarted || shutdown || syncRequested;
+if (flushToDisk)
 {
 // in this branch, we want to flush the commit log to disk
 syncRequested = false;
 commitLog.sync(shutdown, true);
 lastSyncedAt = pollStarted;
 syncComplete.signalAll();
+syncCount++;
 }
 else
 {
@@ -179,41 +183,15 @@ public abstract class AbstractCommitLogService
 commitLog.sync(false, false);
 }
 
-// sleep any time we have left before the next one is due
 long now = clock.currentTimeMillis();
-long sleep = pollStarted + markerIntervalMillis - now;
-if (sleep < 0)
-{
-// if we have lagged noticeably, update our lag counter
-if (firstLagAt == 0)
-{
-firstLagAt = now;
-totalSyncDuration = syncExceededIntervalBy = syncCount 
= lagCount = 0;
-}
-syncExceededIntervalBy -= sleep;
-lagCount++;
-}
-syncCount++;
-totalSyncDuration += now - pollStarted;
-
-if (firstLagAt > 0)
-{
-//Only reset the lag tracking if it actually logged this 
time
-boolean logged = NoSpamLogger.log(
-logger,
-NoSpamLogger.Level.WARN,
-5,
-TimeUnit.MINUTES,
-"Out of {} commit log syncs over the past {}s with average 
duration of {}ms, {} have exceeded the configured commit interval by an average 
of {}ms",
-syncCount, (now - firstLagAt) / 1000, 
String.format("%.2f", (double) totalSyncDuration / syncCount), lagCount, 
String.format("%.2f", (double) syncExceededIntervalBy / lagCount));
-if (logged)
-firstLagAt = 0;
-}
+if (flushToDisk)
+

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-06-05 Thread jasobrown
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77a12053
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77a12053
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77a12053

Branch: refs/heads/cassandra-3.11
Commit: 77a12053b69ceebd529556d5159f9325703283eb
Parents: b92d90d 214a3ab
Author: Jason Brown 
Authored: Tue Jun 5 13:48:56 2018 -0700
Committer: Jason Brown 
Committed: Tue Jun 5 13:50:36 2018 -0700

--
 CHANGES.txt |  1 +
 .../db/commitlog/AbstractCommitLogService.java  | 88 +---
 .../commitlog/AbstractCommitLogServiceTest.java | 49 ++-
 3 files changed, 105 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/77a12053/CHANGES.txt
--
diff --cc CHANGES.txt
index 2d4ef25,dfdfbfd..2e77d2e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,21 -1,5 +1,22 @@@
 -3.0.17
 +3.11.3
 + * Reduce nodetool GC thread count (CASSANDRA-14475)
 + * Fix New SASI view creation during Index Redistribution (CASSANDRA-14055)
 + * Remove string formatting lines from BufferPool hot path (CASSANDRA-14416)
 + * Update metrics to 3.1.5 (CASSANDRA-12924)
 + * Detect OpenJDK jvm type and architecture (CASSANDRA-12793)
 + * Don't use guava collections in the non-system keyspace jmx attributes 
(CASSANDRA-12271)
 + * Allow existing nodes to use all peers in shadow round (CASSANDRA-13851)
 + * Fix cqlsh to read connection.ssl cqlshrc option again (CASSANDRA-14299)
 + * Downgrade log level to trace for CommitLogSegmentManager (CASSANDRA-14370)
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Serialize empty buffer as empty string for json output format 
(CASSANDRA-14245)
 + * Allow logging implementation to be interchanged for embedded testing 
(CASSANDRA-13396)
 + * SASI tokenizer for simple delimiter based entries (CASSANDRA-14247)
 + * Fix Loss of digits when doing CAST from varint/bigint to decimal 
(CASSANDRA-14170)
 + * RateBasedBackPressure unnecessarily invokes a lock on the Guava 
RateLimiter (CASSANDRA-14163)
 + * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 +Merged from 3.0:
+  * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
   * Add Missing dependencies in pom-all (CASSANDRA-14422)
   * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
   * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/77a12053/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
--
diff --cc 
src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
index 7c5d300,0845bd5..b7ab705
--- a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
+++ b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
@@@ -17,15 -17,6 +17,16 @@@
   */
  package org.apache.cassandra.db.commitlog;
  
 +import java.util.concurrent.TimeUnit;
 +import java.util.concurrent.atomic.AtomicLong;
 +import java.util.concurrent.locks.LockSupport;
 +
++import com.google.common.annotations.VisibleForTesting;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.codahale.metrics.Timer.Context;
 +
  import org.apache.cassandra.concurrent.NamedThreadFactory;
  import org.apache.cassandra.config.Config;
  import org.apache.cassandra.db.commitlog.CommitLogSegment.Allocation;
@@@ -162,14 -160,15 +163,15 @@@ public abstract class AbstractCommitLog
  
  boolean sync()
  {
 +// always run once after shutdown signalled
 +boolean shutdownRequested = shutdown;
 +
  try
  {
 -// always run once after shutdown signalled
 -boolean run = !shutdown;
 -
  // sync and signal
 -long pollStarted = clock.currentTimeMillis();
 -boolean flushToDisk = lastSyncedAt + syncIntervalMillis <= 
pollStarted || shutdown || syncRequested;
 +long pollStarted = clock.nanoTime();
- if (lastSyncedAt + syncIntervalNanos <= pollStarted || 
shutdownRequested || syncRequested)
++boolean flushToDisk = lastSyncedAt + syncIntervalNanos <= 
pollStarted || shutdownRequested || syncRequested;
+ if (flushToDisk)
  {
  // in this branch, we want to flush the commit log to disk
  syncRequested = false;
@@@ -181,47 -180,30 +183,19 @@@
  else
  {
  

[jira] [Commented] (CASSANDRA-14467) Add option to sanity check tombstones on reads/compaction

2018-06-05 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502436#comment-16502436
 ] 

Marcus Eriksson commented on CASSANDRA-14467:
-

Not that I know of; people will need to rebase to get their existing 
trunk branches working.

> Add option to sanity check tombstones on reads/compaction
> -
>
> Key: CASSANDRA-14467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14467
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 4.x
>
>
> We should add an option to do a quick sanity check of tombstones on reads + 
> compaction. It should either log the error or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14467) Add option to sanity check tombstones on reads/compaction

2018-06-05 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502425#comment-16502425
 ] 

Ariel Weisberg commented on CASSANDRA-14467:


The thing about the dtest changes is that it's going to produce a config file 
that won't work with revisions prior to the introduction of the config option, 
right?

Do we have a solution for that?

> Add option to sanity check tombstones on reads/compaction
> -
>
> Key: CASSANDRA-14467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14467
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 4.x
>
>
> We should add an option to do a quick sanity check of tombstones on reads + 
> compaction. It should either log the error or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14467) Add option to sanity check tombstones on reads/compaction

2018-06-05 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502412#comment-16502412
 ] 

Marcus Eriksson commented on CASSANDRA-14467:
-

[~aweisberg] did you review the dtest changes? If not, could you just quickly 
check the PR above?

the cassandra changes were committed as 
{{5d8767765090cd968c39008f76b0cd795d6e5032}}, thanks!

> Add option to sanity check tombstones on reads/compaction
> -
>
> Key: CASSANDRA-14467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14467
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 4.x
>
>
> We should add an option to do a quick sanity check of tombstones on reads + 
> compaction. It should either log the error or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14467) Add option to sanity check tombstones on reads/compaction

2018-06-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502411#comment-16502411
 ] 

ASF GitHub Bot commented on CASSANDRA-14467:


GitHub user krummas opened a pull request:

https://github.com/apache/cassandra-dtest/pull/30

CASSANDRA-14467: always enable tombstone validation exceptions during tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/krummas/cassandra-dtest marcuse/14467

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cassandra-dtest/pull/30.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #30


commit 3dc02646948120e9847347f951d1539c16cef2a9
Author: Marcus Eriksson 
Date:   2018-05-31T06:41:11Z

always enable tombstone validation exceptions during tests




> Add option to sanity check tombstones on reads/compaction
> -
>
> Key: CASSANDRA-14467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14467
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 4.x
>
>
> We should add an option to do a quick sanity check of tombstones on reads + 
> compaction. It should either log the error or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Add option to sanity check tombstones on reads/compaction

2018-06-05 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4413fdbd3 -> 5d8767765


Add option to sanity check tombstones on reads/compaction

Patch by marcuse; reviewed by Ariel Weisberg for CASSANDRA-14467


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5d876776
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5d876776
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5d876776

Branch: refs/heads/trunk
Commit: 5d8767765090cd968c39008f76b0cd795d6e5032
Parents: 4413fdb
Author: Marcus Eriksson 
Authored: Tue May 22 13:43:22 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Jun 5 12:47:20 2018 -0700

--
 CHANGES.txt |   1 +
 conf/cassandra.yaml |   3 +
 .../org/apache/cassandra/config/Config.java |   9 +-
 .../cassandra/config/DatabaseDescriptor.java|  10 +
 .../org/apache/cassandra/db/DeletionTime.java   |   9 +
 .../cassandra/db/UnfilteredValidation.java  | 113 ++
 .../columniterator/AbstractSSTableIterator.java |   2 +
 .../db/columniterator/SSTableIterator.java  |   2 +
 .../columniterator/SSTableReversedIterator.java |   1 +
 .../apache/cassandra/db/rows/AbstractCell.java  |   7 +
 .../apache/cassandra/db/rows/AbstractRow.java   |  12 +
 .../apache/cassandra/db/rows/ColumnData.java|   7 +
 .../cassandra/db/rows/ComplexColumnData.java|  10 +
 .../db/rows/RangeTombstoneBoundMarker.java  |   5 +
 .../db/rows/RangeTombstoneBoundaryMarker.java   |   5 +
 .../apache/cassandra/db/rows/Unfiltered.java|   6 +
 .../io/sstable/SSTableIdentityIterator.java |   6 +-
 .../io/sstable/format/SSTableReader.java|   6 +
 .../cassandra/service/StorageService.java   |  12 +
 .../cassandra/service/StorageServiceMBean.java  |   2 +
 test/conf/cassandra.yaml|   1 +
 .../config/DatabaseDescriptorRefTest.java   |   1 +
 .../cql3/validation/operations/TTLTest.java |  19 ++
 .../db/compaction/CompactionsCQLTest.java   | 223 +++
 .../sstable/SSTableCorruptionDetectionTest.java |   5 +
 25 files changed, 475 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d876776/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 351ae37..eb064be 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add option to sanity check tombstones on reads/compactions (CASSANDRA-14467)
  * Add a virtual table to expose all running sstable tasks (CASSANDRA-14457)
  * Let nodetool import take a list of directories (CASSANDRA-14442)
  * Avoid unneeded memory allocations / cpu for disabled log levels 
(CASSANDRA-14488)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d876776/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 49c6f03..7ff056d 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -1198,3 +1198,6 @@ audit_logging_options:
 # included_users:
 # excluded_users:
 
+# validate tombstones on reads and compaction
+# can be either "disabled", "warn" or "exception"
+# corrupted_tombstone_strategy: disabled

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d876776/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index d945368..d9250bb 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -383,7 +383,7 @@ public class Config
 
 public volatile AuditLogOptions audit_logging_options = new 
AuditLogOptions();
 
-
+public CorruptedTombstoneStrategy corrupted_tombstone_strategy = 
CorruptedTombstoneStrategy.disabled;
 /**
  * @deprecated migrate to {@link DatabaseDescriptor#isClientInitialized()}
  */
@@ -468,6 +468,13 @@ public class Config
 reject
 }
 
+public enum CorruptedTombstoneStrategy
+{
+disabled,
+warn,
+exception
+}
+
 private static final List SENSITIVE_KEYS = new ArrayList() 
{{
 add("client_encryption_options");
 add("server_encryption_options");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d876776/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 592b96e..91ee63a 100644
--- 

[jira] [Commented] (CASSANDRA-14467) Add option to sanity check tombstones on reads/compaction

2018-06-05 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502109#comment-16502109
 ] 

Ariel Weisberg commented on CASSANDRA-14467:


+1 with the latest changes.

> Add option to sanity check tombstones on reads/compaction
> -
>
> Key: CASSANDRA-14467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14467
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 4.x
>
>
> We should add an option to do a quick sanity check of tombstones on reads + 
> compaction. It should either log the error or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14496) TWCS erroneously disabling tombstone compactions

2018-06-05 Thread Jon Haddad (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502045#comment-16502045
 ] 

Jon Haddad commented on CASSANDRA-14496:


I'm not a fan of enabling them by default. There's no value for the frequency 
that makes any sense at all given that a TTL could be either 5 minutes or 5 
years.

If someone supplies the {{tombstone_compaction_interval}} option, I suppose it 
wouldn't be unreasonable to enable them.
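
For illustration, the behaviour being discussed could look roughly like the 
sketch below. The option keys are written as string literals here (the quoted 
description below accesses the first two via constants on 
AbstractCompactionStrategy), and this is a sketch of the idea, not a proposed 
patch.
{code:java}
import java.util.Map;

public final class TombstoneOptionCheck
{
    // Treat tombstone compactions as "user-tuned" (and leave them enabled) if any
    // of the relevant options was supplied explicitly, not just interval/threshold.
    static boolean shouldDisableTombstoneCompactions(Map<String, String> options)
    {
        boolean userTuned = options.containsKey("tombstone_compaction_interval")
                         || options.containsKey("tombstone_threshold")
                         || options.containsKey("unchecked_tombstone_compaction");
        return !userTuned;
    }
}
{code}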

> TWCS erroneously disabling tombstone compactions
> 
>
> Key: CASSANDRA-14496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14496
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Robert Tarrall
>Priority: Minor
>
> This code:
> {code:java}
> this.options = new TimeWindowCompactionStrategyOptions(options);
> if 
> (!options.containsKey(AbstractCompactionStrategy.TOMBSTONE_COMPACTION_INTERVAL_OPTION)
>  && 
> !options.containsKey(AbstractCompactionStrategy.TOMBSTONE_THRESHOLD_OPTION))
> {
> disableTombstoneCompactions = true;
> logger.debug("Disabling tombstone compactions for TWCS");
> }
> else
> logger.debug("Enabling tombstone compactions for TWCS");
> }
> {code}
> ... in TimeWindowCompactionStrategy.java disables tombstone compactions in 
> TWCS if you have not *explicitly* set either tombstone_compaction_interval or 
> tombstone_threshold.  Adding 'tombstone_compaction_interval': '86400' to the 
> compaction stanza in a table definition has the (to me unexpected) side 
> effect of enabling tombstone compactions. 
> This is surprising and does not appear to be mentioned in the docs.
> I would suggest that tombstone compactions should be run unless these options 
> are both set to 0.
> If the concern is that (as with DTCS in CASSANDRA-9234) we don't want to 
> waste time on tombstone compactions when we expect the tables to eventually 
> be expired away, perhaps we should also check unchecked_tombstone_compaction 
> and still enable tombstone compactions if that's set to true.
> May also make sense to set defaults for interval & threshold to 0 & disable 
> if they're nonzero so that setting non-default values, rather than setting 
> ANY value, is what determines whether tombstone compactions are enabled?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14465) Consider logging prepared statements bound values in Audit Log

2018-06-05 Thread JIRA


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502041#comment-16502041
 ] 

Per Otterström commented on CASSANDRA-14465:


A third option would be to make this a configuration option.

That would make it easy for users to opt in or out, and there would be no need 
to create custom IAuditLogger implementations.

Security is a valid concern. Another may be performance.

> Consider logging prepared statements bound values in Audit Log
> --
>
> Key: CASSANDRA-14465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14465
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vinay Chella
>Priority: Minor
>
> The goal of this ticket is to determine the best way to implement audit 
> logging of actual bound values from prepared-statement execution. The current 
> default implementation does not log bound values.
> Here are the options I see:
>  1. Log bound values of prepared statements 
>  2. Let a custom implementation of IAuditLogger decide what to do
> *Context*:
>  Option #1: Works for teams that expect bound values to be logged in the audit 
> log without any security or compliance concerns
>  Option #2: Allows teams to make the best choice for themselves
> Note that the effort of securing C* clusters with certs, authentication, and 
> audit logging can be in vain when log rotation and log aggregation systems 
> are not equally secure, since logging bound values allows someone to 
> replay database events and expose sensitive data.
> [~spo...@gmail.com] [~jasobrown]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14498) Audit log does not include statements on some system keyspaces

2018-06-05 Thread JIRA
Per Otterström created CASSANDRA-14498:
--

 Summary: Audit log does not include statements on some system 
keyspaces
 Key: CASSANDRA-14498
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14498
 Project: Cassandra
  Issue Type: Bug
  Components: Auth
Reporter: Per Otterström
 Fix For: 4.0


The audit log does not include statements on the "system" and "system_schema" 
keyspaces.

It may be a common use case to whitelist queries on these keyspaces, but 
Cassandra should not make assumptions. Users who don't want these statements in 
their audit log are still able to whitelist them with configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches

2018-06-05 Thread Tania S Engel (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501951#comment-16501951
 ] 

Tania S Engel commented on CASSANDRA-10876:
---

Cassandra data models are query-driven, so tables often share the same 
partition key, with different frequently queried data points making up the 
clustering keys. In that case, since the data is the same, it's also quite 
common to want to insert it atomically in a batch. In this example, which I also 
posted on Stack Overflow,

[https://stackoverflow.com/questions/50652243/can-a-cassandra-partition-key-span-multiple-tables-in-one-keyspace]

would the coordinator farm these inserts out to different nodes given an RF < 
number of nodes? Or would the partition key, albeit in different tables, hash to 
the same value? I ask because of all the recommendations not to use 
multi-partition batches. And in our design we are still seeing these 
batch_size_warn_threshold warnings in 3.11.1. 
 

USE logskeyspace;

CREATE TABLE Log_User (LogDay timestamp, UserId int, EventId int, PRIMARY KEY 
(LogDay, UserId));

CREATE TABLE Log_Event (LogDay timestamp, EventId int, UserId int, PRIMARY KEY 
(LogDay, EventId));

BEGIN BATCH

INSERT INTO Log_User (LogDay, UserId, EventId) 
VALUES ('2018-03-21 00:00Z', 10, 23);

INSERT INTO Log_Event (LogDay, EventId, UserId) 
VALUES ('2018-03-21 00:00Z', 23, 10);

APPLY BATCH;

> Alter behavior of batch WARN and fail on single partition batches
> -
>
> Key: CASSANDRA-10876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10876
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Patrick McFadin
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
> Attachments: 10876.txt
>
>
> In an attempt to give operator insight into potentially harmful batch usage, 
> Jiras were created to log WARN or fail on certain batch sizes. This ignores 
> the single partition batch, which doesn't create the same issues as a 
> multi-partition batch. 
> The proposal is to ignore size on single partition batch statements. 
> Reference:
> [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487]
> [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14451) Infinity ms Commit Log Sync

2018-06-05 Thread Jordan West (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501923#comment-16501923
 ] 

Jordan West commented on CASSANDRA-14451:
-

{quote} I left it where it was previously located, but can move it to the more 
logical spot.
{quote}
I don't find it very useful where it is now. Would vote to move it or remove it 
(the code is pretty clear).
{quote}I wanted to keep the logic as close to the original as possible, since 
3.0 is far along in its age. I suppose it doesn't matter that much, though, 
and can change if you think it's worthwhile. wdyt?
{quote}
From the review perspective it was just a second implementation to check for 
correctness, and it seems like either implementation could be used. Would vote 
for them to be the same, but fine as is if you prefer.

 

Otherwise, +1

> Infinity ms Commit Log Sync
> ---
>
> Key: CASSANDRA-14451
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14451
> Project: Cassandra
>  Issue Type: Bug
> Environment: 3.11.2 - 2 DC
>Reporter: Harry Hough
>Assignee: Jason Brown
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> It's giving commit log sync warnings where there were apparently zero syncs 
> and therefore gives "Infinityms" as the average duration
> {code:java}
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:11:14,294 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 74.40ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:16:57,844 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 198.69ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:24:46,325 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 264.11ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:29:46,393 
> NoSpamLogger.java:94 - Out of 32 commit log syncs over the past 268.84s with, 
> average duration of 17.56ms, 1 have exceeded the configured commit interval 
> by an average of 173.66ms{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Add a virtual table to expose all running sstable tasks [Forced Update!]

2018-06-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2a2ee0063 -> 4413fdbd3 (forced update)


Add a virtual table to expose all running sstable tasks

patch by Chris Lohfink; reviewed by Aleksey Yeschenko for
CASSANDRA-14457


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4413fdbd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4413fdbd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4413fdbd

Branch: refs/heads/trunk
Commit: 4413fdbd3e9350c5f5dac5ef4dc517fd9b5064ad
Parents: 0f79427
Author: Chris Lohfink 
Authored: Sat May 19 01:27:28 2018 -0500
Committer: Aleksey Yeshchenko 
Committed: Tue Jun 5 15:36:59 2018 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/db/compaction/CompactionInfo.java | 51 +--
 .../db/compaction/CompactionManager.java| 13 +++-
 .../cassandra/db/virtual/SSTableTasksTable.java | 69 
 .../db/virtual/SystemViewsKeyspace.java |  2 +-
 .../tools/nodetool/CompactionStats.java | 14 ++--
 6 files changed, 121 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4413fdbd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 86842d0..351ae37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add a virtual table to expose all running sstable tasks (CASSANDRA-14457)
  * Let nodetool import take a list of directories (CASSANDRA-14442)
  * Avoid unneeded memory allocations / cpu for disabled log levels 
(CASSANDRA-14488)
  * Implement virtual keyspace interface (CASSANDRA-7622)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4413fdbd/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index ccdfeb4..99df259 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.db.compaction;
 import java.io.Serializable;
 import java.util.HashMap;
 import java.util.Map;
+import java.util.Optional;
 import java.util.UUID;
 
 import org.apache.cassandra.schema.TableMetadata;
@@ -28,6 +29,16 @@ import org.apache.cassandra.schema.TableMetadata;
 public final class CompactionInfo implements Serializable
 {
 private static final long serialVersionUID = 3695381572726744816L;
+
+public static final String ID = "id";
+public static final String KEYSPACE = "keyspace";
+public static final String COLUMNFAMILY = "columnfamily";
+public static final String COMPLETED = "completed";
+public static final String TOTAL = "total";
+public static final String TASK_TYPE = "taskType";
+public static final String UNIT = "unit";
+public static final String COMPACTION_ID = "compactionId";
+
 private final TableMetadata metadata;
 private final OperationType tasktype;
 private final long completed;
@@ -84,19 +95,14 @@ public final class CompactionInfo implements Serializable
 return new CompactionInfo(metadata, tasktype, complete, total, unit, 
compactionId);
 }
 
-public UUID getId()
-{
-return metadata != null ? metadata.id.asUUID() : null;
-}
-
-public String getKeyspace()
+public Optional getKeyspace()
 {
-return metadata != null ? metadata.keyspace : null;
+return Optional.ofNullable(metadata != null ? metadata.keyspace : 
null);
 }
 
-public String getColumnFamily()
+public Optional getTable()
 {
-return metadata != null ? metadata.name : null;
+return Optional.ofNullable(metadata != null ? metadata.name : null);
 }
 
 public TableMetadata getTableMetadata()
@@ -119,19 +125,24 @@ public final class CompactionInfo implements Serializable
 return tasktype;
 }
 
-public UUID compactionId()
+public UUID getTaskId()
 {
 return compactionId;
 }
 
+public Unit getUnit()
+{
+return unit;
+}
+
 public String toString()
 {
 StringBuilder buff = new StringBuilder();
 buff.append(getTaskType());
 if (metadata != null)
 {
-buff.append('@').append(getId()).append('(');
-buff.append(getKeyspace()).append(", 
").append(getColumnFamily()).append(", ");
+buff.append('@').append(metadata.id).append('(');
+buff.append(metadata.keyspace).append(", 
").append(metadata.name).append(", ");
 }
 else
 {
@@ -144,14 +155,14 @@ 

[jira] [Comment Edited] (CASSANDRA-14457) Add a virtual table with current compactions

2018-06-05 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501794#comment-16501794
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-14457 at 6/5/18 2:24 PM:
---

Committed to trunk as 
[2a2ee006302a086ff054eac52161209a3118bb7c|https://github.com/apache/cassandra/commit/2a2ee006302a086ff054eac52161209a3118bb7c],
 thanks.

Made some minor tweaks on commit:
- Use the correct {{Optional.ofNullable()}} rather than {{Optional.of()}} in 
{{getKeyspace()}} and {{getTable()}}
- Fix {{asMap()}} to properly handle empty {{Optional}}s so it doesn't throw
- Made nodetool {{CompactionStats}} use {{CompactionInfo}} constants that we 
have now to access the map
- Hid dealing with {{Holder}} in {{CompactionMetrics}}

On an unrelated note: why, why on Earth is {{CompactionMetrics}} the place that 
is the authoritative source of all running compactions? And all that stored in a 
static synchronised identity set? Has nobody been bothered by this since 2012?
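
To illustrate the {{Optional}} distinction above, here is a minimal, self-contained 
sketch of plain JDK behaviour (not part of the committed patch):

{code:java}
import java.util.Optional;

public class OptionalNullExample
{
    public static void main(String[] args)
    {
        String keyspace = null; // e.g. a CompactionInfo with no table metadata

        // Optional.ofNullable(null) yields an empty Optional...
        Optional<String> empty = Optional.ofNullable(keyspace);
        System.out.println(empty.isPresent()); // false

        // ...whereas Optional.of(null) throws NullPointerException, which is also
        // why asMap() has to handle the empty case explicitly.
        Optional<String> boom = Optional.of(keyspace); // throws NullPointerException
    }
}
{code}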


was (Author: iamaleksey):
Committed to trunk as 
[899f7c41935d92f15bc17a33f36443030616c8eb|https://github.com/apache/cassandra/commit/899f7c41935d92f15bc17a33f36443030616c8eb],
 thanks.

Made some minor tweaks on commit:
- Use the correct {{Optional.ofNullable()}} rather than {{Optional.of()}} in 
{{getKeyspace()}} and {{getTable()}}
- Fix {{asMap()}} to properly handle empty {{Optional}}s so it doesn't throw
- Made nodetool {{CompactionStats}} use {{CompactionInfo}} constants that we 
have now to access the map
- Hid dealing with {{Holder}} in {{CompactionMetrics}}

On an unrelated note: why, why on Earth is {{CompactionMetrics}} the place that 
is the authoritative source of all running compactions? And all that stored in a 
static synchronised identity set? Has nobody been bothered by this since 2012?

> Add a virtual table with current compactions
> 
>
> Key: CASSANDRA-14457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Add a virtual table to expose all running sstable tasks [Forced Update!]

2018-06-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 899f7c419 -> 2a2ee0063 (forced update)


Add a virtual table to expose all running sstable tasks

patch by Chris Lohfink; reviewed by Aleksey Yeschenko for
CASSANDRA-14457


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2a2ee006
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2a2ee006
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2a2ee006

Branch: refs/heads/trunk
Commit: 2a2ee006302a086ff054eac52161209a3118bb7c
Parents: 0f79427
Author: Chris Lohfink 
Authored: Sat May 19 01:27:28 2018 -0500
Committer: Aleksey Yeshchenko 
Committed: Tue Jun 5 15:22:34 2018 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/db/compaction/CompactionInfo.java | 51 +--
 .../db/compaction/CompactionManager.java| 13 +++-
 .../cassandra/db/virtual/SSTableTasksTable.java | 69 
 .../db/virtual/SystemViewsKeyspace.java |  2 +-
 .../tools/nodetool/CompactionStats.java | 14 ++--
 6 files changed, 121 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a2ee006/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 86842d0..351ae37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add a virtual table to expose all running sstable tasks (CASSANDRA-14457)
  * Let nodetool import take a list of directories (CASSANDRA-14442)
  * Avoid unneeded memory allocations / cpu for disabled log levels 
(CASSANDRA-14488)
  * Implement virtual keyspace interface (CASSANDRA-7622)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a2ee006/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index ccdfeb4..99df259 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.db.compaction;
 import java.io.Serializable;
 import java.util.HashMap;
 import java.util.Map;
+import java.util.Optional;
 import java.util.UUID;
 
 import org.apache.cassandra.schema.TableMetadata;
@@ -28,6 +29,16 @@ import org.apache.cassandra.schema.TableMetadata;
 public final class CompactionInfo implements Serializable
 {
 private static final long serialVersionUID = 3695381572726744816L;
+
+public static final String ID = "id";
+public static final String KEYSPACE = "keyspace";
+public static final String COLUMNFAMILY = "columnfamily";
+public static final String COMPLETED = "completed";
+public static final String TOTAL = "total";
+public static final String TASK_TYPE = "taskType";
+public static final String UNIT = "unit";
+public static final String COMPACTION_ID = "compactionId";
+
 private final TableMetadata metadata;
 private final OperationType tasktype;
 private final long completed;
@@ -84,19 +95,14 @@ public final class CompactionInfo implements Serializable
 return new CompactionInfo(metadata, tasktype, complete, total, unit, 
compactionId);
 }
 
-public UUID getId()
-{
-return metadata != null ? metadata.id.asUUID() : null;
-}
-
-public String getKeyspace()
+public Optional getKeyspace()
 {
-return metadata != null ? metadata.keyspace : null;
+return Optional.ofNullable(metadata != null ? metadata.keyspace : 
null);
 }
 
-public String getColumnFamily()
+public Optional getTable()
 {
-return metadata != null ? metadata.name : null;
+return Optional.ofNullable(metadata != null ? metadata.name : null);
 }
 
 public TableMetadata getTableMetadata()
@@ -119,19 +125,24 @@ public final class CompactionInfo implements Serializable
 return tasktype;
 }
 
-public UUID compactionId()
+public UUID getTaskId()
 {
 return compactionId;
 }
 
+public Unit getUnit()
+{
+return unit;
+}
+
 public String toString()
 {
 StringBuilder buff = new StringBuilder();
 buff.append(getTaskType());
 if (metadata != null)
 {
-buff.append('@').append(getId()).append('(');
-buff.append(getKeyspace()).append(", 
").append(getColumnFamily()).append(", ");
+buff.append('@').append(metadata.id).append('(');
+buff.append(metadata.keyspace).append(", 
").append(metadata.name).append(", ");
 }
 else
 {
@@ -144,14 +155,14 @@ 

[jira] [Comment Edited] (CASSANDRA-14457) Add a virtual table with current compactions

2018-06-05 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501794#comment-16501794
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-14457 at 6/5/18 1:41 PM:
---

Committed to trunk as 
[899f7c41935d92f15bc17a33f36443030616c8eb|https://github.com/apache/cassandra/commit/899f7c41935d92f15bc17a33f36443030616c8eb],
 thanks.

Made some minor tweaks on commit:
- Use the correct {{Optional.ofNullable()}} rather than {{Optional.of()}} in 
{{getKeyspace()}} and {{getTable()}}
- Fix {{asMap()}} to properly handle empty {{Optional}}s so it doesn't throw
- Made nodetool {{CompactionStats}} use {{CompactionInfo}} constants that we 
have now to access the map
- Hid dealing with {{Holder}} in {{CompactionMetrics}}

On an unrelated note: why, why on Earth is {{CompactionMetrics}} the place that 
is the authoritative source of all running compactions? And all that stored in a 
static synchronised identity set? Has nobody been bothered by this since 2012?


was (Author: iamaleksey):
Committed to trunk as 
[899f7c41935d92f15bc17a33f36443030616c8eb|https://github.com/apache/cassandra/commit/899f7c41935d92f15bc17a33f36443030616c8eb],
 thanks.

Made some minor tweaks on commit:
- Use the correct {{Optional.ofNullable()}} rather than {{Optional.of()}} in 
{{getKeyspace()}} and {{getTable()}}
- Fix {{asMap()}} to properly handle empty {{Optional}}s so it doesn't throw
- Made nodetool {{CompactionStats}} use {{CompactionInfo}} constants that we 
have now to access the map
- Hid dealing with {{Holder}} in {{CompactionMetrics}}

On an unrelated note: why, why on Earth is {{CompactionMetrics}} the place that 
is the authoritative source of all running compactions? And all that stored in a 
static synchronised identity set? Has nobody been bothered by this since 2012?

> Add a virtual table with current compactions
> 
>
> Key: CASSANDRA-14457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14457) Add a virtual table with current compactions

2018-06-05 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14457:
--
   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Patch Available)

> Add a virtual table with current compactions
> 
>
> Key: CASSANDRA-14457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14457) Add a virtual table with current compactions

2018-06-05 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501794#comment-16501794
 ] 

Aleksey Yeschenko commented on CASSANDRA-14457:
---

Committed to trunk as 
[899f7c41935d92f15bc17a33f36443030616c8eb|https://github.com/apache/cassandra/commit/899f7c41935d92f15bc17a33f36443030616c8eb],
 thanks.

Made some minor tweaks on commit:
- Use the correct {{Optional.ofNullable()}} rather than {{Optional.of()}} in 
{{getKeyspace()}} and {{getTable()}}
- Fix {{asMap()}} to properly handle empty {{Optional}}s so it doesn't throw
- Made nodetool {{CompactionStats}} use {{CompactionInfo}} constants that we 
have now to access the map
- Hid dealing with {{Holder}} in {{CompactionMetrics}}

On an unrelated note: why, why on Earth is {{CompactionMetrics}} the place that 
is the authoritative source of all running compactions? And all that stored in a 
static synchronised identity set? Has nobody been bothered by this since 2012?

> Add a virtual table with current compactions
> 
>
> Key: CASSANDRA-14457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14459) DynamicEndpointSnitch should never prefer latent nodes

2018-06-05 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501791#comment-16501791
 ] 

Jason Brown commented on CASSANDRA-14459:
-

Took a quick look, and on the whole I like where this goes. The biggest problem 
I have is that {{DynamicEndpointSnitch#reset(boolean)}} gets a snapshot of the 
existing {{reservoir}}. Unfortunately, generating that snapshot is *very* 
heavyweight (copies a bunch of data, creates a bunch of garbage), just to get 
one value and then throw it all away. You should consider a different mechanism 
for getting the min value. Perhaps, instead of {{DynamicEndpointSnitch#samples}} 
being defined as a {{ConcurrentHashMap}}, the value class could be something like:

{code}
static class Holder
{
    private final ExponentiallyDecayingReservoir res = new ExponentiallyDecayingReservoir();
    // Start at MAX_VALUE so the first recorded sample becomes the minimum.
    private volatile long minValue = Long.MAX_VALUE;

    void update(long val)
    {
        res.update(val);

        // It's probably ok if there's a race on minValue; better a
        // small/irrelevant race than any real coordination.
        if (val < minValue)
            minValue = val;
    }

    long min()
    {
        return minValue;
    }
}
{code}

Suit to taste, if you find this useful.
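
As a hedged sketch of how a value class like that might be consumed (field, type 
and method names here are assumptions for illustration, not the actual 
{{DynamicEndpointSnitch}} code):

{code:java}
// Assumed names for illustration only.
private final ConcurrentHashMap<InetAddressAndPort, Holder> samples = new ConcurrentHashMap<>();

void receiveTiming(InetAddressAndPort host, long latencyMillis)
{
    samples.computeIfAbsent(host, h -> new Holder()).update(latencyMillis);
}

long minLatencyFor(InetAddressAndPort host)
{
    Holder holder = samples.get(host);
    // No reservoir snapshot needed: reset(boolean) could read the running minimum directly.
    return holder == null ? 0L : holder.min();
}
{code}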

> DynamicEndpointSnitch should never prefer latent nodes
> --
>
> Key: CASSANDRA-14459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14459
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>
> The DynamicEndpointSnitch has two unfortunate behaviors that allow it to 
> provide latent hosts as replicas:
>  # Loses all latency information when Cassandra restarts
>  # Clears latency information entirely every ten minutes (by default), 
> allowing global queries to be routed to _other datacenters_ (and local 
> queries cross racks/azs)
> This means that the first few queries after restart/reset could be quite slow 
> compared to average latencies. I propose we solve this by resetting to the 
> minimum observed latency instead of completely clearing the samples and 
> extending the {{isLatencyForSnitch}} idea to a three state variable instead 
> of two, in particular {{YES}}, {{NO}}, {{MAYBE}}. This extension allows 
> {{EchoMessages}} and {{PingMessages}} to send {{MAYBE}} indicating that the 
> DS should use those measurements if it only has one or fewer samples for a 
> host. This fixes both problems because on process restart we send out 
> {{PingMessages}} / {{EchoMessages}} as part of startup, and we would reset to 
> effectively the RTT of the hosts (also at that point normal gossip 
> {{EchoMessages}} have an opportunity to add an additional latency 
> measurement).
> This strategy also nicely deals with the "a host got slow but now it's fine" 
> problem that the DS resets were (afaik) designed to stop because the 
> {{EchoMessage}} ping latency will count only after the reset for that host. 
> Ping latency is a more reasonable lower bound on host latency (as opposed to 
> status quo of zero).
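
A hedged sketch of how the three-state idea described above might look (the enum 
and helper names such as {{samplesRecordedFor()}} and {{recordSample()}} are 
assumptions for illustration, not the proposed patch):

{code:java}
enum LatencyForSnitch { YES, NO, MAYBE }

void receiveTiming(InetAddressAndPort host, long latencyMillis, LatencyForSnitch use)
{
    if (use == LatencyForSnitch.NO)
        return;

    // MAYBE measurements (EchoMessage/PingMessage round trips) only count when the
    // snitch has one or fewer samples for this host, e.g. right after restart/reset,
    // so they seed the reservoir without drowning out real traffic measurements.
    if (use == LatencyForSnitch.YES || samplesRecordedFor(host) <= 1)
        recordSample(host, latencyMillis);
}
{code}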



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Add a virtual table to expose all running sstable tasks

2018-06-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0f7942775 -> 899f7c419


Add a virtual table to expose all running sstable tasks

patch by Chris Lohfink; reviewed by Aleksey Yeschenko for
CASSANDRA-14457


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/899f7c41
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/899f7c41
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/899f7c41

Branch: refs/heads/trunk
Commit: 899f7c41935d92f15bc17a33f36443030616c8eb
Parents: 0f79427
Author: Chris Lohfink 
Authored: Sat May 19 01:27:28 2018 -0500
Committer: Aleksey Yeshchenko 
Committed: Tue Jun 5 14:22:36 2018 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/db/compaction/CompactionInfo.java | 51 +--
 .../db/compaction/CompactionManager.java|  2 +-
 .../cassandra/db/virtual/SSTableTasksTable.java | 69 
 .../db/virtual/SystemViewsKeyspace.java |  2 +-
 .../cassandra/metrics/CompactionMetrics.java|  6 ++
 .../tools/nodetool/CompactionStats.java | 14 ++--
 7 files changed, 116 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/899f7c41/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 86842d0..351ae37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add a virtual table to expose all running sstable tasks (CASSANDRA-14457)
  * Let nodetool import take a list of directories (CASSANDRA-14442)
  * Avoid unneeded memory allocations / cpu for disabled log levels 
(CASSANDRA-14488)
  * Implement virtual keyspace interface (CASSANDRA-7622)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/899f7c41/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index ccdfeb4..99df259 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.db.compaction;
 import java.io.Serializable;
 import java.util.HashMap;
 import java.util.Map;
+import java.util.Optional;
 import java.util.UUID;
 
 import org.apache.cassandra.schema.TableMetadata;
@@ -28,6 +29,16 @@ import org.apache.cassandra.schema.TableMetadata;
 public final class CompactionInfo implements Serializable
 {
 private static final long serialVersionUID = 3695381572726744816L;
+
+public static final String ID = "id";
+public static final String KEYSPACE = "keyspace";
+public static final String COLUMNFAMILY = "columnfamily";
+public static final String COMPLETED = "completed";
+public static final String TOTAL = "total";
+public static final String TASK_TYPE = "taskType";
+public static final String UNIT = "unit";
+public static final String COMPACTION_ID = "compactionId";
+
 private final TableMetadata metadata;
 private final OperationType tasktype;
 private final long completed;
@@ -84,19 +95,14 @@ public final class CompactionInfo implements Serializable
 return new CompactionInfo(metadata, tasktype, complete, total, unit, 
compactionId);
 }
 
-public UUID getId()
-{
-return metadata != null ? metadata.id.asUUID() : null;
-}
-
-public String getKeyspace()
+public Optional getKeyspace()
 {
-return metadata != null ? metadata.keyspace : null;
+return Optional.ofNullable(metadata != null ? metadata.keyspace : 
null);
 }
 
-public String getColumnFamily()
+public Optional getTable()
 {
-return metadata != null ? metadata.name : null;
+return Optional.ofNullable(metadata != null ? metadata.name : null);
 }
 
 public TableMetadata getTableMetadata()
@@ -119,19 +125,24 @@ public final class CompactionInfo implements Serializable
 return tasktype;
 }
 
-public UUID compactionId()
+public UUID getTaskId()
 {
 return compactionId;
 }
 
+public Unit getUnit()
+{
+return unit;
+}
+
 public String toString()
 {
 StringBuilder buff = new StringBuilder();
 buff.append(getTaskType());
 if (metadata != null)
 {
-buff.append('@').append(getId()).append('(');
-buff.append(getKeyspace()).append(", 
").append(getColumnFamily()).append(", ");
+buff.append('@').append(metadata.id).append('(');
+buff.append(metadata.keyspace).append(", 
").append(metadata.name).append(", ");
 }
 

[jira] [Commented] (CASSANDRA-14451) Infinity ms Commit Log Sync

2018-06-05 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501696#comment-16501696
 ] 

Jason Brown commented on CASSANDRA-14451:
-

bq. Are the ArchiveCommitLog dtest failures expected on the 3.0 branch? 

Yes, 3.0 dtests consistently have about 14 failures, including 
{{ArchiveCommitLog}}. So, unfortunately, this is expected.

bq. The “sleep any time we have left” comment would be more appropriate above 
the assignment of wakeUpAt.

I left it where it was previously located, but can move it to the more logical 
spot.

bq. change in behavior of updating totalSyncDuration is intentional

lol, it wasn't intentional, but it now does the correct thing! You are right 
that in CASSANDRA-14108 I was adding time to mark the headers (without 
flushing) to {{totalSyncDuration}}, which is incorrect.

bq. Is there a reason you opted for the “excessTimeToFlush” approach in 3.0 
but the “maxFlushTimestamp” approach on 3.11 and trunk?

I wanted to keep the logic as close to the original as possible, since 3.0 is 
far along in its age. I suppose it doesn't matter that much, though, and I can 
change it if you think it's worthwhile. wdyt?

> Infinity ms Commit Log Sync
> ---
>
> Key: CASSANDRA-14451
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14451
> Project: Cassandra
>  Issue Type: Bug
> Environment: 3.11.2 - 2 DC
>Reporter: Harry Hough
>Assignee: Jason Brown
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Its giving commit log sync warnings where there were apparently zero syncs 
> and therefore gives "Infinityms" as the average duration
> {code:java}
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:11:14,294 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 74.40ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:16:57,844 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 198.69ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:24:46,325 
> NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with 
> average duration of Infinityms, 1 have exceeded the configured commit 
> interval by an average of 264.11ms 
> WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-16 21:29:46,393 
> NoSpamLogger.java:94 - Out of 32 commit log syncs over the past 268.84s with, 
> average duration of 17.56ms, 1 have exceeded the configured commit interval 
> by an average of 173.66ms{code}
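
The "Infinityms" figure itself looks like floating-point division by a zero sync 
count; a minimal sketch of that arithmetic (an assumption about how the average 
is derived, not the actual {{NoSpamLogger}} call site):

{code:java}
// Assumed arithmetic, for illustration only: a positive double divided by a zero
// sync count is Infinity in Java, so the warning prints "Infinityms" rather than
// failing or skipping the message.
double syncCount = 0.0;            // no syncs completed in the reporting window
double totalSyncMillis = 74.40;    // yet one sync exceeded the commit interval
System.out.println((totalSyncMillis / syncCount) + "ms");   // prints "Infinityms"
{code}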



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14497) Add Role login cache

2018-06-05 Thread Sam Tunnicliffe (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501537#comment-16501537
 ] 

Sam Tunnicliffe commented on CASSANDRA-14497:
-

I have a patch for this which is 90% done, but I haven't had time to finish it 
off. I'll try to put aside a few hours later this week and get it cleaned up & 
submitted.

> Add Role login cache
> 
>
> Key: CASSANDRA-14497
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14497
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth
>Reporter: Jay Zhuang
>Priority: Major
>
> The 
> [{{ClientState.login()}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/ClientState.java#L313]
>  function is used for all auth message: 
> [{{AuthResponse.java:82}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/messages/AuthResponse.java#L82].
>  But the 
> [{{role.canLogin}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L521]
>  information is not cached. So it hits the database every time: 
> [{{CassandraRoleManager.java:407}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L407].
>  For a cluster with lots of new connections, it's causing a performance issue. 
> The mitigation for us is to increase the {{system_auth}} replication factor 
> to match the number of nodes, so 
> [{{local_one}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L488]
>  would be very cheap. The P99 dropped immediately, but I don't think it is 
> a good solution.
> I would propose to add {{Role.canLogin}} to the RolesCache to improve the 
> auth performance.
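
A hedged sketch of that proposal (the cache construction and names such as 
{{loginCache}} and {{fetchCanLogin()}} are assumptions for illustration, not an 
existing Cassandra API or an actual patch):

{code:java}
// Hypothetical illustration only: canLogin is served from a small cache so that
// ClientState.login() does not query system_auth for every new connection.
private final LoadingCache<RoleResource, Boolean> loginCache =
        Caffeine.newBuilder()
                .expireAfterWrite(10, TimeUnit.SECONDS)
                .build(role -> fetchCanLogin(role));   // misses/refreshes only

public boolean canLogin(RoleResource role)
{
    return loginCache.get(role);
}

private boolean fetchCanLogin(RoleResource role)
{
    // Placeholder for the existing system_auth read (CassandraRoleManager), which
    // would now run only when the cache misses or refreshes.
    return true;
}
{code}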



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14344) Support filtering using IN restrictions

2018-06-05 Thread Benjamin Lerer (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501450#comment-16501450
 ] 

Benjamin Lerer commented on CASSANDRA-14344:


Thanks for the patch.

The operator logic that you added for the {{IN}} operator is similar to the one 
used for {{CONTAINS}}, but I wonder if it could be done in a better way. That 
approach forces the deserialization of all the list elements and of the value 
for each check.
It seems to me that comparing the elements with the value using the type 
comparator would be more efficient, as it would not need to read all the bytes 
and would generate less garbage.
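
For instance, a minimal sketch of that suggestion (assuming a list of serialized 
{{IN}} values and the column's {{AbstractType}}; not the attached patch):

{code:java}
static boolean satisfiesIn(AbstractType<?> type, List<ByteBuffer> inValues, ByteBuffer columnValue)
{
    // Compare serialized bytes with the type's comparator instead of
    // deserializing every element and the column value on each check.
    for (ByteBuffer candidate : inValues)
        if (type.compare(candidate, columnValue) == 0)
            return true;
    return false;
}
{code}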

It would also be nice to have some extra unit tests for filtering on composite 
partition keys and/or on static columns.

> Support filtering using IN restrictions
> ---
>
> Key: CASSANDRA-14344
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14344
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Dikang Gu
>Assignee: Venkata Harikrishna Nukala
>Priority: Major
> Attachments: 14344-trunk.txt
>
>
> Support IN filter query like this:
>  
> CREATE TABLE ks1.t1 (
>     key int,
>     col1 int,
>     col2 int,
>     value int,
>     PRIMARY KEY (key, col1, col2)
> ) WITH CLUSTERING ORDER BY (col1 ASC, col2 ASC)
>  
> cqlsh:ks1> select * from t1 where key = 1 and col2 in (1) allow filtering;
>  
>  key | col1 | col2 | value
> -+--+--+---
>    1 |    1 |    1 |     1
>    1 |    2 |    1 |     3
>  
> (2 rows)
> cqlsh:ks1> select * from t1 where key = 1 and col2 in (1, 2) allow filtering;
> *{color:#ff}InvalidRequest: Error from server: code=2200 [Invalid query] 
> message="IN restrictions are not supported on indexed columns"{color}*
> cqlsh:ks1>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org