[jira] [Updated] (CASSANDRA-15150) Update docs to point to Slack rather than IRC

2019-06-12 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15150:
-
Status: Patch Available  (was: Open)

Updated the references to IRC in the patches, bugs, contactus, and release_process 
docs, including the release_process step for updating the room topic.

> Update docs to point to Slack rather than IRC
> -
>
> Key: CASSANDRA-15150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Low
> Fix For: 4.0
>
> Attachments: CASSANDRA-15150.txt
>
>
> Update the contact us/community page to point to ASF Slack rather than IRC.  
> We can remove cassandra-builds.






[jira] [Updated] (CASSANDRA-15150) Update docs to point to Slack rather than IRC

2019-06-12 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15150:
-
Attachment: (was: CASSANDRA-15150.txt)

> Update docs to point to Slack rather than IRC
> -
>
> Key: CASSANDRA-15150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Low
> Fix For: 4.0
>
>
> Update the contact us/community page to point to ASF Slack rather than IRC.  
> We can remove cassandra-builds.






[jira] [Updated] (CASSANDRA-15150) Update docs to point to Slack rather than IRC

2019-06-12 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15150:
-
Summary: Update docs to point to Slack rather than IRC  (was: Update the 
contact us/community page to point to Slack rather than IRC)

> Update docs to point to Slack rather than IRC
> -
>
> Key: CASSANDRA-15150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Low
> Fix For: 4.0
>
> Attachments: CASSANDRA-15150.txt
>
>
> Update the contact us/community page to point to ASF Slack rather than IRC.  
> We can remove cassandra-builds.






[jira] [Updated] (CASSANDRA-15150) Update the contact us/community page to point to Slack rather than IRC

2019-06-10 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15150:
-
Status: Open  (was: Resolved)

Found a few more instances - will clean it up and submit an updated patch.

> Update the contact us/community page to point to Slack rather than IRC
> --
>
> Key: CASSANDRA-15150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Low
> Fix For: 4.0
>
> Attachments: CASSANDRA-15150.txt
>
>
> Update the contact us/community page to point to ASF Slack rather than IRC.  
> We can remove cassandra-builds.






[jira] [Commented] (CASSANDRA-15103) Add Failure Detection Documentation

2019-06-04 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16856158#comment-16856158
 ] 

Jeremy Hanna commented on CASSANDRA-15103:
--

Perhaps we could put a reference to the phi accrual paper as a footnote or 
something like that - 
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.80.7427&rep=rep1&type=pdf ?
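For context, a minimal sketch of the phi calculation from that paper, assuming 
exponentially distributed heartbeat inter-arrival times (a simplification the 
paper itself discusses); this is illustrative only, not Cassandra's actual 
FailureDetector code:

{code:java}
// Illustrative phi accrual sketch. With an exponential model,
// P(a later heartbeat arrives after t ms of silence) = exp(-t / mean),
// so phi(t) = -log10(P_later(t)) = t / (mean * ln(10)).
final class PhiAccrualSketch
{
    private double meanIntervalMillis = 1000.0; // running mean of heartbeat gaps
    private long lastHeartbeatMillis = System.currentTimeMillis();

    void heartbeat(long nowMillis)
    {
        long gap = nowMillis - lastHeartbeatMillis;
        lastHeartbeatMillis = nowMillis;
        meanIntervalMillis = 0.9 * meanIntervalMillis + 0.1 * gap; // simple EWMA
    }

    double phi(long nowMillis)
    {
        long silence = nowMillis - lastHeartbeatMillis;
        return silence / (meanIntervalMillis * Math.log(10.0));
    }

    boolean convict(long nowMillis, double threshold)
    {
        return phi(nowMillis) > threshold; // suspicion grows the longer the silence
    }
}
{code}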

> Add Failure Detection Documentation
> ---
>
> Key: CASSANDRA-15103
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15103
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Blog
>Reporter: Elijah Augustin
>Assignee: Elijah Augustin
>Priority: Normal
>  Labels: documentation, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Topics/Concepts To Cover
>  * Why Failure Detection is needed and used
>  * State tracking with Gossip
>  * Typical Reasons for Node Failure






[jira] [Updated] (CASSANDRA-15122) cqlshrc ssl userkey and ssl usercert cannot be encrypted

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15122:
-
Component/s: Feature/Encryption

> cqlshrc ssl userkey and ssl usercert cannot be encrypted
> 
>
> Key: CASSANDRA-15122
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15122
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Encryption
>Reporter: seaside
>Priority: Normal
>
> cqlshrc ssl userkey and ssl usercert are provided when require_client_auth = 
> true, but I did not find a way to encrypt the certificate file.






[jira] [Updated] (CASSANDRA-15127) Add compression performance metrics

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15127:
-
Component/s: Feature/Compression

> Add compression performance metrics
> ---
>
> Key: CASSANDRA-15127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15127
> Project: Cassandra
>  Issue Type: Task
>  Components: Feature/Compression, Observability/Metrics
>Reporter: Michael
>Priority: Normal
>
> When doing a bulk load into my cluster I noticed almost 100% CPU usage.
> As I am using DeflateCompressor, I assume that the data 
> compression/decompression contributes a lot to the overall CPU load. 
> Unfortunately Cassandra doesn't seem to have any metrics showing how much 
> CPU time has been required for that.
> So I guess it would be useful to introduce cumulative times for compression 
> and decompression, broken down by each supported compression algorithm.
> Then, by comparing how much each specific value increases per minute with the 
> number of processed requests and their cumulative times, it would be easy to 
> conclude how costly the compression is.
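> A hedged sketch of what such a cumulative metric could look like, using the 
> Codahale metrics library Cassandra already depends on; the metric names and 
> the Compressor interface here are purely illustrative, not an existing API:
> {code:java}
> import com.codahale.metrics.MetricRegistry;
> import com.codahale.metrics.Timer;
> 
> // Illustrative only: accumulate per-algorithm compression time so its growth
> // can later be compared against request counts.
> final class CompressionTimingSketch
> {
>     interface Compressor { byte[] compress(byte[] input); }
> 
>     private final MetricRegistry registry = new MetricRegistry();
> 
>     byte[] timedCompress(String algorithm, byte[] input, Compressor compressor)
>     {
>         // One cumulative timer per algorithm, e.g. "Compression.Deflate.CompressTime"
>         Timer timer = registry.timer("Compression." + algorithm + ".CompressTime");
>         try (Timer.Context ignored = timer.time())
>         {
>             return compressor.compress(input);
>         }
>     }
> }
> {code}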






[jira] [Updated] (CASSANDRA-15146) Transitional TLS server configuration options are overly complex

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15146:
-
Component/s: Feature/Encryption

> Transitional TLS server configuration options are overly complex
> 
>
> Key: CASSANDRA-15146
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15146
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Encryption, Local/Config
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Normal
>
> It appears as part of the port from transitional client TLS to transitional 
> server TLS in CASSANDRA-10404 (the ability to switch a cluster to using 
> {{internode_encryption}} without listening on two ports and without downtime) 
> we carried the {{enabled}} setting over from the client implementation. I 
> believe that the {{enabled}} option is redundant to {{internode_encryption}} 
> and {{optional}} and it should therefore be removed prior to the 4.0 release 
> where we will have to start respecting that interface. 
> Current trunk yaml:
> {noformat}
> server_encryption_options:
>     # set to true for allowing secure incoming connections
>     enabled: false
>     # If enabled and optional are both set to true, encrypted and unencrypted
>     # connections are handled on the storage_port
>     optional: false
>     # if enabled, will open up an encrypted listening socket on ssl_storage_port.
>     # Should be used during upgrade to 4.0; otherwise, set to false.
>     enable_legacy_ssl_storage_port: false
>     # on outbound connections, determine which type of peers to securely
>     # connect to. 'enabled' must be set to true.
>     internode_encryption: none
>     keystore: conf/.keystore
>     keystore_password: cassandra
>     truststore: conf/.truststore
>     truststore_password: cassandra
> {noformat}
> I propose we eliminate {{enabled}} and just use {{optional}} and 
> {{internode_encryption}} to determine the listener setup. I also propose we 
> change the default of {{optional}} to true. We could also re-name 
> {{optional}} since it's a new option but I think it's good to stay consistent 
> with the client and use {{optional}}.
> ||optional||internode_encryption||description||
> |true|none|(default) No encryption is used but if a server reaches out with it we'll use it|
> |false|dc|Encryption is required for inter-dc communication, but not intra-dc|
> |false|all|Encryption is required for all communication|
> |false|none|We only listen for unencrypted connections|
> |true|dc|Encryption is used for inter-dc communication but is not required|
> |true|all|Encryption is used for all communication but is not required|
> From these states it is clear when we should be accepting TLS connections 
> (every state except {{optional=false}} with {{internode_encryption=none}}) as 
> well as when we must enforce it.
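> As a reading aid, a hypothetical sketch of that decision logic (not actual 
> Cassandra code; the per-DC scoping of the dc mode is elided):
> {code:java}
> // Illustrative mapping from the table above to listener behaviour, with
> // 'enabled' removed: only 'optional' and 'internode_encryption' matter.
> enum InternodeEncryption { none, dc, all }
> 
> final class ListenerPolicySketch
> {
>     // Accept incoming TLS in every state except optional=false + none.
>     static boolean acceptTls(boolean optional, InternodeEncryption mode)
>     {
>         return optional || mode != InternodeEncryption.none;
>     }
> 
>     // Enforce TLS only when encryption is configured and not optional.
>     static boolean requireTls(boolean optional, InternodeEncryption mode)
>     {
>         return !optional && mode != InternodeEncryption.none;
>     }
> }
> {code}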
> To transition without downtime from an un-encrypted cluster to an encrypted 
> cluster the user would do the following:
> 1. After adding valid truststores, change {{internode_encryption}} to the 
> desired level of encryption (recommended {{all}}) and restart Cassandra
>  2. Change {{optional=false}} and restart Cassandra to enforce #1
> If {{optional}} defaulted to {{false}} as it does right now we'd need a third 
> restart to first change {{optional}} to {{true}}, which given my 
> understanding of the OptionalSslHandler isn't really relevant.






[jira] [Updated] (CASSANDRA-15150) Update the contact us/community page to point to Slack rather than IRC

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15150:
-
Test and Documentation Plan: Perhaps just double check that the link works 
and there aren't any typos.
 Status: Patch Available  (was: Open)

> Update the contact us/community page to point to Slack rather than IRC
> --
>
> Key: CASSANDRA-15150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Low
> Attachments: CASSANDRA-15150.txt
>
>
> Update the contact us/community page to point to ASF Slack rather than IRC.  
> We can remove cassandra-builds.






[jira] [Updated] (CASSANDRA-15150) Update the contact us/community page to point to Slack rather than IRC

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15150:
-
Attachment: CASSANDRA-15150.txt

> Update the contact us/community page to point to Slack rather than IRC
> --
>
> Key: CASSANDRA-15150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Low
> Attachments: CASSANDRA-15150.txt
>
>
> Update the contact us/community page to point to ASF Slack rather than IRC.  
> We can remove cassandra-builds.






[jira] [Updated] (CASSANDRA-15150) Update the contact us/community page to point to Slack rather than IRC

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15150:
-
 Complexity: Low Hanging Fruit
Change Category: Semantic
 Status: Open  (was: Triage Needed)

> Update the contact us/community page to point to Slack rather than IRC
> --
>
> Key: CASSANDRA-15150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Low
> Attachments: CASSANDRA-15150.txt
>
>
> Update the contact us/community page to point to ASF Slack rather than IRC.  
> We can remove cassandra-builds.






[jira] [Updated] (CASSANDRA-15150) Update the contact us/community page to point to Slack rather than IRC

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15150:
-
Priority: Low  (was: Normal)

> Update the contact us/community page to point to Slack rather than IRC
> --
>
> Key: CASSANDRA-15150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Low
>
> Update the contact us/community page to point to ASF Slack rather than IRC.  
> We can remove cassandra-builds.






[jira] [Created] (CASSANDRA-15150) Update the contact us/community page to point to Slack rather than IRC

2019-06-04 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-15150:


 Summary: Update the contact us/community page to point to Slack 
rather than IRC
 Key: CASSANDRA-15150
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15150
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation/Website
Reporter: Jeremy Hanna
Assignee: Jeremy Hanna


Update the contact us/community page to point to ASF Slack rather than IRC.  We 
can remove cassandra-builds.






[jira] [Updated] (CASSANDRA-15071) Materialized View Inconsistent With Base Table Update After Migrating To New DC

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15071:
-
Description: 
We've recently completed a successful migration between two data centers in our 
Cassandra cluster.

After adding the new DC nodes onto the existing cluster, and setting the 
keyspaces to replicate to both DCs and rebuilding the new DC nodes from the old 
one, we've cut over the applications using those keyspaces to start using the 
new DC exclusively by connecting to its end-points and performing `LOCAL_` 
consistency level requests there (DCAwareRoundRobinPolicy on LOCAL DC).

We noticed that once the apps started to read data from the materialized views 
in the new DC, that an inconsistency emerged, which wasn't there in the 
original DC from which we've migrated.
I.e - source/old DC had the materialized view reflecting the column update on 
the base table, while target/new DC didn't (the column value in the MV remained 
the same as it was in the base table, prior to the update).

We only found out about it when it was logged in a support ticket and, for now, 
mitigated it by simply recreating the materialized view.

Looking for a root cause for such behavior, is this expected, is this somewhat 
of a requirement which wasn't fulfilled yet for the MV mechanism, such as the 
ones mentioned in CASSANDRA-13826?

Thanks,
Avi K

  was:
We've recently completed a successful migration between two data centers in our 
Cassandra cluster.

After adding the new DC nodes onto the existing cluster, and setting the 
keyspaces to replicate to both DCs and rebuilding the new DC nodes from the old 
one, we've cut over the applications using those keyspaces to start using the 
new DC exclusively by connecting to its end-points and performing `LOCAL_` 
consistency level requests there (DCAwareRoundRobinPolicy on LOCAL DC).

We noticed that once the apps started to read data from the materialized views 
in the new DC, that an inconsistency emerged, which wasn't there in the 
original DC from which we've migrated.
I.e - source/old DC had the materialized view reflecting the column update on 
the base table, while target/new DC didn't (the column value in the MV remained 
the same as it was in the base table, prior to the update).

We only found out about it when it was logged in a support ticket and, for now, 
mitigated it by simply recreating the materialized view.

Looking for a root cause for such behavior, is this expected, is this somewhat 
of a requirement which wasn't fulfilled yet for the MV mechanism, such as the 
ones mentioned in https://issues.apache.org/jira/browse/CASSANDRA-13826?

Thanks,
Avi K


> Materialized View Inconsistent With Base Table Update After Migrating To New 
> DC
> ---
>
> Key: CASSANDRA-15071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Bootstrap and Decommission, 
> Feature/Materialized Views
>Reporter: Avraham Kalvo
>Priority: High
>  Labels: cassandra, materializedviews, rebuilding
>
> We've recently completed a successful migration between two data centers in 
> our Cassandra cluster.
> After adding the new DC nodes onto the existing cluster, and setting the 
> keyspaces to replicate to both DCs and rebuilding the new DC nodes from the 
> old one, we've cut over the applications using those keyspaces to start using 
> the new DC exclusively by connecting to its end-points and performing 
> `LOCAL_` consistency level requests there (DCAwareRoundRobinPolicy on LOCAL 
> DC).
> We noticed that once the apps started to read data from the materialized 
> views in the new DC, that an inconsistency emerged, which wasn't there in the 
> original DC from which we've migrated.
> I.e - source/old DC had the materialized view reflecting the column update on 
> the base table, while target/new DC didn't (the column value in the MV 
> remained the same as it was in the base table, prior to the update).
> We only found out about it when it was logged in a support ticket and, for now, 
> mitigated it by simply recreating the materialized view.
> Looking for a root cause for such behavior, is this expected, is this 
> somewhat of a requirement which wasn't fulfilled yet for the MV mechanism, 
> such as the ones mentioned in CASSANDRA-13826?
> Thanks,
> Avi K






[jira] [Updated] (CASSANDRA-15095) Split Gossiper into separate interfaces

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15095:
-
Component/s: Cluster/Gossip

> Split Gossiper into separate interfaces
> ---
>
> Key: CASSANDRA-15095
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15095
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Gossip
>Reporter: Blake Eggleston
>Priority: Normal
>
> As [~aweisberg] suggests 
> [here|https://issues.apache.org/jira/browse/CASSANDRA-15059?focusedCommentId=16802986&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16802986],
>  a better way to encourage Gossiper thread safety would be to split it into 
> interfaces depending on which methods are safe to be called from where. At 
> minimum, one for outside the gossiper stage, one for inside.
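> A tiny hypothetical sketch of that split; the method names are illustrative, 
> not the actual Gossiper API:
> {code:java}
> // Illustrative only: one interface per calling context, so the compiler
> // documents which Gossiper methods are safe to call from where.
> interface GossiperOutsideStage
> {
>     boolean isAlive(String endpoint); // safe from any thread
> }
> 
> interface GossiperOnStage
> {
>     void applyStateLocally(String endpoint); // only from the gossip stage
> }
> {code}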






[jira] [Updated] (CASSANDRA-15071) Materialized View Inconsistent With Base Table Update After Migrating To New DC

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15071:
-
Component/s: Feature/Materialized Views

> Materialized View Inconsistent With Base Table Update After Migrating To New 
> DC
> ---
>
> Key: CASSANDRA-15071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Bootstrap and Decommission, 
> Feature/Materialized Views
>Reporter: Avraham Kalvo
>Priority: High
>  Labels: cassandra, materializedviews, rebuilding
>
> We've recently completed a successful migration between two data centers in 
> our Cassandra cluster.
> After adding the new DC nodes onto the existing cluster, and setting the 
> keyspaces to replicate to both DCs and rebuilding the new DC nodes from the 
> old one, we've cut over the applications using those keyspaces to start using 
> the new DC exclusively by connecting to its end-points and performing 
> `LOCAL_` consistency level requests there (DCAwareRoundRobinPolicy on LOCAL 
> DC).
> We noticed that once the apps started to read data from the materialized 
> views in the new DC, that an inconsistency emerged, which wasn't there in the 
> original DC from which we've migrated.
> I.e - source/old DC had the materialized view reflecting the column update on 
> the base table, while target/new DC didn't (the column value in the MV 
> remained the same as it was in the base table, prior to the update).
> We only found out about it when it was logged in a support ticket and, for now, 
> mitigated it by simply recreating the materialized view.
> Looking for a root cause for such behavior, is this expected, is this 
> somewhat of a requirement which wasn't fulfilled yet for the MV mechanism, 
> such as the ones mentioned in 
> https://issues.apache.org/jira/browse/CASSANDRA-13826?
> Thanks,
> Avi K






[jira] [Updated] (CASSANDRA-15082) SASI SPARSE mode 5 limit

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15082:
-
Component/s: Feature/SASI

> SASI SPARSE mode 5 limit
> 
>
> Key: CASSANDRA-15082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15082
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/SASI
>Reporter: Edward Capriolo
>Priority: Normal
>
> I do not know what the "improvement" should be here, but I ran into this:
> [https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndexBuilder.java#L585]
> Term '55.3' belongs to more than 5 keys in sparse mode, which is not allowed.
> The only reference I can find to the limit is here:
>  [http://www.doanduyhai.com/blog/?p=2058]
> Why is it 5? Could it be a variable? Could it be an option when creating the 
> table? Why or why not?
> This seems awkward. A user can insert more than 5 rows into a table, and it 
> "works". I.e. you can write and you can query that table getting more than 5 
> results, but the index will not flush to disk. It throws an IOException.
> Maybe I am misunderstanding, but this seems impossible to support: if users 
> insert the same value into more than 5 rows, the entire index will not flush to disk?






[jira] [Updated] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15149:
-
Test and Documentation Plan: Just double check the doc change.  (was: 
Simple patch to change the text/link.)

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Updated] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15149:
-
Test and Documentation Plan: Simple patch to change the text/link.
 Status: Patch Available  (was: Open)

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Updated] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15149:
-
 Complexity: Low Hanging Fruit
Change Category: Semantic
 Status: Open  (was: Triage Needed)

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Updated] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15149:
-
Status: Triage Needed  (was: Awaiting Feedback)

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Updated] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15149:
-
Status: Awaiting Feedback  (was: Triage Needed)

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Updated] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15149:
-
Attachment: (was: CASSANDRA-15149.txt)

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Updated] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15149:
-
Attachment: CASSANDRA-15149.txt

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Assigned] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna reassigned CASSANDRA-15149:


Assignee: Jeremy Hanna

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Updated] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15149:
-
Attachment: CASSANDRA-15149.txt

> Change the text and link for low hanging fruit tickets to include the LHF 
> ticket complexity
> ---
>
> Key: CASSANDRA-15149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Normal
> Attachments: CASSANDRA-15149.txt
>
>
> Right now in the docs for how to contribute, it points to the lhf (low 
> hanging fruit) label which is how things were marked before the Jira flow 
> evolution.  We should note the complexity field in there and include both the 
> label and the complexity field in the associated link.






[jira] [Created] (CASSANDRA-15149) Change the text and link for low hanging fruit tickets to include the LHF ticket complexity

2019-06-04 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-15149:


 Summary: Change the text and link for low hanging fruit tickets to 
include the LHF ticket complexity
 Key: CASSANDRA-15149
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15149
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation/Website
Reporter: Jeremy Hanna


Right now in the docs for how to contribute, it points to the lhf (low hanging 
fruit) label which is how things were marked before the Jira flow evolution.  
We should note the complexity field in there and include both the label and the 
complexity field in the associated link.






[jira] [Updated] (CASSANDRA-15046) Add a "history" command to cqlsh. Perhaps "show history"?

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15046:
-
Complexity: Low Hanging Fruit

> Add a "history" command to cqlsh.  Perhaps "show history"?
> --
>
> Key: CASSANDRA-15046
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15046
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Interpreter
>Reporter: Wes Peters
>Priority: Low
>  Labels: lhf
>
> I was trying to capture some create keyspace and create table commands from 
> a running cqlsh, and found there was no equivalent to the '\s' history 
> command in Postgres' psql shell.  It's a great tool for figuring out what you 
> were doing yesterday.






[jira] [Updated] (CASSANDRA-15048) Upgrade Netty to support TLS 1.3

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15048:
-
Component/s: Feature/Encryption

> Upgrade Netty to support TLS 1.3
> 
>
> Key: CASSANDRA-15048
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15048
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Encryption
>Reporter: Michaël Figuière
>Priority: Low
>
> TLS 1.3 support has been [added to Netty 
> 4.1.31|https://netty.io/news/2018/10/30/4-1-31-Final.html]. As the Cassandra 
> trunk is already relying on Netty 4.1.28, it would take a patch version 
> upgrade and an {{SslContext}} configuration change for Cassandra 4.0 to 
> support TLS 1.3.
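> A minimal sketch of that configuration change, assuming Netty >= 4.1.31; the 
> PEM-file key handling is a simplification, not how Cassandra loads its 
> keystores:
> {code:java}
> import java.io.File;
> import io.netty.handler.ssl.SslContext;
> import io.netty.handler.ssl.SslContextBuilder;
> 
> // Illustrative only: opt in to TLS 1.3 while keeping TLS 1.2 for older peers.
> final class Tls13ContextSketch
> {
>     static SslContext serverContext(File certChain, File privateKey) throws Exception
>     {
>         return SslContextBuilder.forServer(certChain, privateKey)
>                                 .protocols("TLSv1.3", "TLSv1.2")
>                                 .build();
>     }
> }
> {code}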






[jira] [Updated] (CASSANDRA-15046) Add a "history" command to cqlsh. Perhaps "show history"?

2019-06-04 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15046:
-
Labels: lhf  (was: )

> Add a "history" command to cqlsh.  Perhaps "show history"?
> --
>
> Key: CASSANDRA-15046
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15046
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Interpreter
>Reporter: Wes Peters
>Priority: Low
>  Labels: lhf
>
> I was trying to capture some create keyspace and create table commands from 
> a running cqlsh, and found there was no equivalent to the '\s' history 
> command in Postgres' psql shell.  It's a great tool for figuring out what you 
> were doing yesterday.






[jira] [Commented] (CASSANDRA-15008) NamedThreadLocalFactory unnecessarily wraps runnable into thread local deallocator

2019-05-31 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16853239#comment-16853239
 ] 

Jeremy Hanna commented on CASSANDRA-15008:
--

Should the fix version be set to 4.0 for this ticket?

> NamedThreadLocalFactory unnecessarily wraps runnable into thread local 
> deallocator 
> ---
>
> Key: CASSANDRA-15008
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15008
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Low
>
> FastThreadLocalThread already wraps the runnable by calling 
> [FastThreadLocalRunnable.wrap in its constructor in the Netty 
> code|https://github.com/netty/netty/blob/netty-4.1.18.Final/common/src/main/java/io/netty/util/concurrent/FastThreadLocalThread.java#L60].
>  The second call is redundant and incurs unnecessary additional wrapping.
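> A small sketch of the point, assuming Netty's FastThreadLocalThread (whose 
> constructor already performs the wrap); illustrative, not the actual factory 
> code:
> {code:java}
> import io.netty.util.concurrent.FastThreadLocalThread;
> 
> // Illustrative only: the constructor below already calls
> // FastThreadLocalRunnable.wrap(...), so a thread factory should not wrap again.
> final class FactorySketch
> {
>     static Thread newThread(String name, Runnable runnable)
>     {
>         return new FastThreadLocalThread(runnable, name); // wrapped exactly once
>     }
> }
> {code}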






[jira] [Updated] (CASSANDRA-15019) Repaired data tracking isn't working for range queries

2019-05-31 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15019:
-
Resolution: Fixed
Status: Resolved  (was: Open)

Re-resolving as fixed.

> Repaired data tracking isn't working for range queries
> --
>
> Key: CASSANDRA-15019
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15019
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Consistency/Repair, Test/dtest
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 4.0
>
> Attachments: RepairDigestTrackingTest.java
>
>
> CASSANDRA-14145 introduced optional tracking of the repaired dataset used to 
> construct a read response. If enabled, each replica computes a digest for the 
> repaired portion of the data, which the coordinator compares in order to 
> detect divergence between replicas. This isn't working correctly for range 
> reads, as the ReadCommand instance that the DataResolver is initialized with 
> does not have the tracking flag set. This has gone undetected until now 
> because the dtest that should verify it also has a bug: when the relevant 
> range query is issued, the test expectations are set incorrectly.
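> Conceptually, the tracking behaves like the following sketch (not the actual 
> DataResolver code): each replica returns a digest over its repaired data, and 
> the coordinator flags divergence when any two digests differ.
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.List;
> 
> // Illustrative only: coordinator-side comparison of repaired-data digests.
> final class RepairedDigestSketch
> {
>     static boolean repairedDataMatches(List<ByteBuffer> digests)
>     {
>         for (ByteBuffer digest : digests)
>             if (!digest.equals(digests.get(0)))
>                 return false; // replicas diverge on their repaired data
>         return true;
>     }
> }
> {code}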






[jira] [Updated] (CASSANDRA-15019) Repaired data tracking isn't working for range queries

2019-05-31 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15019:
-
Status: Open  (was: Resolved)

> Repaired data tracking isn't working for range queries
> --
>
> Key: CASSANDRA-15019
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15019
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Consistency/Repair, Test/dtest
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 4.0
>
> Attachments: RepairDigestTrackingTest.java
>
>
> CASSANDRA-14145 introduced optional tracking of the repaired dataset used to 
> construct a read response. If enabled, each replica computes a digest for the 
> repaired portion of the data, which the coordinator compares in order to 
> detect divergence between replicas. This isn't working correctly for range 
> reads, as the ReadCommand instance that the DataResolver is initialized with 
> does not have the tracking flag set. This has gone undetected until now 
> because the dtest that should verify it also has a bug: when the relevant 
> range query is issued, the test expectations are set incorrectly.






[jira] [Updated] (CASSANDRA-15000) Invalidate counter caches on `nodetool import`

2019-01-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15000:
-
Description: {{nodetool import}} currently invalidates row caches on import 
[here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SSTableImporter.java#L291],
 but doesn't invalidate the corresponding counter caches, if they exist. It 
should invalidate both.  (was: `nodetool import` currently invalidates row 
caches on import 
[here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SSTableImporter.java#L291],
 but doesn't invalidate the corresponding counter caches, if they exist. It 
should invalidate both.)

> Invalidate counter caches on `nodetool import`
> --
>
> Key: CASSANDRA-15000
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15000
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Doug Rohrer
>Priority: Major
>
> {{nodetool import}} currently invalidates row caches on import 
> [here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SSTableImporter.java#L291],
>  but doesn't invalidate the corresponding counter caches, if they exist. It 
> should invalidate both.
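> A minimal sketch of the shape of the fix, with illustrative names that do not 
> match the actual CacheService API:
> {code:java}
> // Illustrative only: on import, drop both caches for the affected table,
> // not just the row cache.
> final class ImportCacheInvalidationSketch
> {
>     interface Cache { void invalidate(String keyspace, String table); }
> 
>     private final Cache rowCache;
>     private final Cache counterCache;
> 
>     ImportCacheInvalidationSketch(Cache rowCache, Cache counterCache)
>     {
>         this.rowCache = rowCache;
>         this.counterCache = counterCache;
>     }
> 
>     void onImport(String keyspace, String table)
>     {
>         rowCache.invalidate(keyspace, table);     // what the import path already does
>         counterCache.invalidate(keyspace, table); // the missing invalidation
>     }
> }
> {code}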






[jira] [Updated] (CASSANDRA-15001) counter not accurate

2019-01-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-15001:
-
Description: The issue has been described in CASSANDRA-2495 and said to be 
improved in the later post. However I still encounter the issue in version 
2.2.5. When a write to the counter column fails, a retry operation may result 
in over-count.  (was: The issue has been described in 
[2495|https://issues.apache.org/jira/browse/CASSANDRA-2495] and said to be 
improved in the later post. However I still encounter the issue in version 
2.2.5. When a write to the counter column fails, a retry operation may result 
in over-count.)

> counter not accurate
> 
>
> Key: CASSANDRA-15001
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15001
> Project: Cassandra
>  Issue Type: Bug
>Reporter: lin
>Priority: Major
>
> The issue has been described in CASSANDRA-2495 and said to be improved in the 
> later post. However I still encounter the issue in version 2.2.5. When a 
> write to the counter column fails, a retry operation may result in over-count.
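> For intuition, a sketch of why a blind retry over-counts: counter increments 
> are not idempotent, so a retry of a write that actually succeeded but whose 
> ack was lost applies twice (illustrative only, not Cassandra code).
> {code:java}
> // Illustrative only: the non-idempotence behind counter over-count on retry.
> final class CounterRetrySketch
> {
>     static long retryAfterTimeout(long counter, long delta)
>     {
>         counter += delta; // original increment applied; the ack was lost
>         counter += delta; // client times out and blindly retries
>         return counter;   // over-counted by delta
>     }
> }
> {code}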






[jira] [Updated] (CASSANDRA-14988) Building javadoc with Java11 fails

2019-01-18 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14988:
-
Labels: Java11  (was: )

> Building javadoc with Java11 fails
> --
>
> Key: CASSANDRA-14988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14988
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Javadoc
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>Priority: Blocker
>  Labels: Java11
> Fix For: 4.0
>
>
> When building trunk with Java11 building javadoc fails with this error:
> {noformat}
> [javadoc] 
> /repos/tmp/cassandra/src/java/org/apache/cassandra/hints/HintsBufferPool.java:28:
>  error: package sun.nio.ch is not visible
> [javadoc] import sun.nio.ch.DirectBuffer;
> [javadoc] ^
> [javadoc] (package sun.nio.ch is declared in module java.base, which does not 
> export it to the unnamed module)
> [javadoc] 1 error{noformat}
> This import is unused and was probably added by mistake; removing it fixes 
> the problem.






[jira] [Commented] (CASSANDRA-14953) Failed to reclaim the memory and too many MemtableReclaimMemory pending task

2019-01-08 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16737416#comment-16737416
 ] 

Jeremy Hanna commented on CASSANDRA-14953:
--

This appears to be a use-case/configuration-specific problem and not a bug in 
the software itself.  I would engage with those on the Cassandra user list or 
Stack Overflow to troubleshoot further.  See 
http://cassandra.apache.org/community/ for links to both.  Jira is primarily 
meant for development and bugs rather than operational issues.

> Failed to reclaim the memory and too many MemtableReclaimMemory pending task
> 
>
> Key: CASSANDRA-14953
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14953
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Memtable
> Environment: version : cassandra 2.1.15
> jdk: 8
> os:suse
>Reporter: HUANG DUICAN
>Priority: Major
> Attachments: 1.PNG, 2.PNG, cassandra_20190105.zip
>
>
> We found that Cassandra has a lot of write accumulation in the production 
> environment, and our business has experienced a lot of write failures.
>  Through the system.log, it was found that MemtableReclaimMemory was pending 
> at the beginning, and then a large number of MutationStage stacks appeared at 
> a certain moment.
>  Finally, the heap memory is full, the GC time reaches tens of seconds, and the 
> node status is DN through nodetool, but the Cassandra process is still 
> running. We killed and restarted the node, and the above situation 
> disappeared.
>  
> Also the number of Active MemtableReclaimMemory threads seems to stay at 1.
> (you can see the 1.PNG)
> a large number of MutationStage stacks appeared at a certain moment.
> (you can see the 2.PNG)
>  
> long GC time:
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Old Generation GC in 87121ms. G1 Old Gen: 51175946656 -> 50082999760;
>  - MutationStage 128 11931622 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Young Generation GC in 969ms. G1 Eden Space: 
> 1090519040 -> 0; G1 Old Gen: 50082999760 -> 51156741584;
>  - MutationStage 128 11953653 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Old Generation GC in 84785ms. G1 Old Gen: 
> 51173518800 -> 50180911432;
>  - MutationStage 128 11967484 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Young Generation GC in 611ms. G1 Eden Space: 989855744 -> 0; G1 Old 
> Gen: 50180911432 -> 51153989960;
>  - MutationStage 128 11975849 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Old Generation GC in 85845ms. G1 Old Gen: 
> 51170767176 -> 50238295416;
>  - MutationStage 128 11978192 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Young Generation GC in 602ms. G1 Eden Space: 939524096 -> 0; G1 Old 
> Gen: 50238295416 -> 51161042296;
>  - MutationStage 128 11994295 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Old Generation GC in 85307ms. G1 Old Gen: 
> 51177819512 -> 50288829624; Metaspace: 36544536 -> 36525696
>  - MutationStage 128 12001932 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
> 66 - MutationStage 128 12004395 1983820772 0 0
> 66 - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
> 66 - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Young Generation GC in 610ms. G1 Eden Space: 889192448 -> 0; G1 Old 
> Gen: 50288829624 -> 51178022072;
>  - MutationStage 128 12023677 1983820772 0 0
> Why is this happening? 






[jira] [Comment Edited] (CASSANDRA-14953) Failed to reclaim the memory and too many MemtableReclaimMemory pending task

2019-01-08 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16737416#comment-16737416
 ] 

Jeremy Hanna edited comment on CASSANDRA-14953 at 1/8/19 6:55 PM:
--

This appears to be a use-case/configuration-specific problem and not a bug in 
the software itself.  I would engage with those on the Cassandra user list or 
Stack Overflow to troubleshoot further.  See 
http://cassandra.apache.org/community/ for links to both.  Jira is primarily 
meant for development and bugs rather than operational questions.


was (Author: jeromatron):
This appears to be a use-case/configuration-specific problem and not a bug in 
the software itself.  I would engage with those on the Cassandra user list or 
Stack Overflow to troubleshoot further.  See 
http://cassandra.apache.org/community/ for links to both.  Jira is primarily 
meant for development and bugs rather than operational issues.

> Failed to reclaim the memory and too many MemtableReclaimMemory pending task
> 
>
> Key: CASSANDRA-14953
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14953
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Memtable
> Environment: version : cassandra 2.1.15
> jdk: 8
> os:suse
>Reporter: HUANG DUICAN
>Priority: Major
> Attachments: 1.PNG, 2.PNG, cassandra_20190105.zip
>
>
> We found that Cassandra has a lot of write accumulation in the production 
> environment, and our business has experienced a lot of write failures.
>  Through the system.log, it was found that MemtableReclaimMemory was pending 
> at the beginning, and then a large number of MutationStage stacks appeared at 
> a certain moment.
>  Finally, the heap memory is full, the GC time reaches tens of seconds, and the 
> node status is DN through nodetool, but the Cassandra process is still 
> running. We killed and restarted the node, and the above situation 
> disappeared.
>  
> Also the number of Active MemtableReclaimMemory threads seems to stay at 1.
> (you can see the 1.PNG)
> a large number of MutationStage stacks appeared at a certain moment.
> (you can see the 2.PNG)
>  
> long GC time:
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Old Generation GC in 87121ms. G1 Old Gen: 51175946656 -> 50082999760;
>  - MutationStage 128 11931622 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Young Generation GC in 969ms. G1 Eden Space: 
> 1090519040 -> 0; G1 Old Gen: 50082999760 -> 51156741584;
>  - MutationStage 128 11953653 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Old Generation GC in 84785ms. G1 Old Gen: 
> 51173518800 -> 50180911432;
>  - MutationStage 128 11967484 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Young Generation GC in 611ms. G1 Eden Space: 989855744 -> 0; G1 Old 
> Gen: 50180911432 -> 51153989960;
>  - MutationStage 128 11975849 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Old Generation GC in 85845ms. G1 Old Gen: 
> 51170767176 -> 50238295416;
>  - MutationStage 128 11978192 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Young Generation GC in 602ms. G1 Eden Space: 939524096 -> 0; G1 Old 
> Gen: 50238295416 -> 51161042296;
>  - MutationStage 128 11994295 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Old Generation GC in 85307ms. G1 Old Gen: 
> 51177819512 -> 50288829624; Metaspace: 36544536 -> 36525696
>  - MutationStage 128 12001932 1983820772 0 0
>  - CounterMutationStage 0 0 0 0 0
> 66 - MutationStage 128 12004395 1983820772 0 0
> 66 - CounterMutationStage 0 0 0 0 0
>  - MemtableReclaimMemory 1 156 24565 0 0
> 66 - MemtableReclaimMemory 1 156 24565 0 0
>  - G1 Young Generation GC in 610ms. G1 Eden Space: 889192448 -> 0; G1 Old 
> Gen: 50288829624 -> 51178022072;
>  - MutationStage 128 12023677 1983820772 0 0
> Why is this happening? 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14957) Rolling Restart Of Nodes Cause Dataloss Due To Schema Collision

2019-01-08 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737379#comment-16737379
 ] 

Jeremy Hanna commented on CASSANDRA-14957:
--

The schema has to agree across the cluster.  If a node is being restarted, it 
has to catch up with the schema before being able to process writes to the new 
table.  Until then, it will probably have messages in the logs saying that it 
can't identify a table with a certain id.

How did you determine that there was data loss beyond temporary 
inconsistency between nodes?  If the writes succeeded on other nodes at the 
consistency level you specified, then there wasn't data loss.  You just had a 
temporary inconsistency on the node being restarted.  So the normal 
anti-entropy operations like read repair and full repair should get it back 
into a consistent state.
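
As a concrete illustration (not from this ticket): schema agreement and the 
anti-entropy steps above can both be driven from nodetool, e.g.

{noformat}
# every node should appear under a single entry in "Schema versions"
nodetool describecluster

# once the restarted node has caught up, repair the affected keyspace
nodetool repair -full <keyspace>
{noformat}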

> Rolling Restart Of Nodes Cause Dataloss Due To Schema Collision
> ---
>
> Key: CASSANDRA-14957
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14957
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema
>Reporter: Avraham Kalvo
>Priority: Major
>
> We were issuing a rolling restart on a mission-critical five node C* cluster.
> The first node which was restarted got the following messages in its 
> system.log:
> ```
> January 2nd 2019, 12:06:37.310 - INFO 12:06:35 Initializing 
> tasks_scheduler_external.tasks
> ```
> ```
> WARN 12:06:39 UnknownColumnFamilyException reading from socket; closing
> org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find table for 
> cfId bd7200a0-1567-11e8-8974-855d74ee356f. If a table was just created, this 
> is likely due to the schema not being fully propagated. Please wait for 
> schema agreement on table creation.
> at 
> org.apache.cassandra.config.CFMetaData$Serializer.deserialize(CFMetaData.java:1336)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:660)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:635)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:349)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:286)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> ```
> The latter was then repeated several times across the cluster.
> It was then found out that the table in question 
> `tasks_scheduler_external.tasks` was created with a new schema version after 
> the entire cluster was restarted consecutively and schema agreement settled; 
> the new version started taking requests, leaving the previous version of the 
> schema unavailable for any request and thus causing data loss for our online 
> system.
> Data loss was recovered by manually copying SSTables from the previous 
> version directory of the schema to the new one followed by `nodetool refresh` 
> to the relevant table.
> The above has repeated itself for several tables across various keyspaces.
> One other thing to mention is that a repair was in place for the first node 
> to be restarted, which was obviously stopped as the daemon was shut down, but 
> this doesn't seem related to the above at first glance.
> Seems somewhat related to:
> https://issues.apache.org/jira/browse/CASSANDRA-13559



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14902) Update the default for compaction_throughput_mb_per_sec

2018-11-27 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14902:
-
Component/s: Configuration

> Update the default for compaction_throughput_mb_per_sec
> ---
>
> Key: CASSANDRA-14902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14902
> Project: Cassandra
>  Issue Type: Task
>  Components: Compaction, Configuration
>Reporter: Jeremy Hanna
>Priority: Minor
>
> compaction_throughput_mb_per_sec has been at 16 since probably 0.6 or 0.7 
> back when a lot of people had to deploy on spinning disks.  It seems like it 
> would make sense to update the default to something more reasonable - 
> assuming a reasonably decent SSD and competing IO.  One idea that could be 
> bikeshedded to death would be to just default it to 64 - simply to save 
> people from having to change it every time they download a new version, as 
> well as to avoid problems with new users thinking that the defaults are sane.
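
For reference (illustrative, not part of the ticket text): the default lives 
in cassandra.yaml and can also be adjusted on a running node, so trying 64 is 
cheap:

{noformat}
# cassandra.yaml
compaction_throughput_mb_per_sec: 64

# runtime override on a live node, no restart needed (0 disables throttling)
nodetool setcompactionthroughput 64
{noformat}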



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14880) drop table and materialized view frequently get error over time

2018-11-20 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693867#comment-16693867
 ] 

Jeremy Hanna commented on CASSANDRA-14880:
--

Also jhon, can you populate the "Reproduced In" version field of the ticket?  
There have been several fixes over time, so that would help us see whether it 
has already been addressed directly or indirectly.

> drop table and materialized view frequently get error over time
> ---
>
> Key: CASSANDRA-14880
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14880
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: jhon
>Assignee: jhon
>Priority: Major
>
>     When I create a table and a materialized view and then drop them, doing 
> this frequently eventually fails with the error: "no response received from 
> cassandra within timeout period". For example:
>  for (i = 0; i < 100; i++) {
>           create table;
>           create materialized view;
>           drop materialized view;
>           drop table;
>  }
> How can I solve it? 
>  
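
One likely culprit is issuing the next DDL statement before the previous one 
has propagated.  A minimal sketch using the DataStax Java driver 3.x (the 
keyspace and table names are made up for illustration) that waits for 
cluster-wide schema agreement between statements:

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DdlWithAgreement
{
    public static void main(String[] args) throws InterruptedException
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            for (int i = 0; i < 100; i++)
            {
                execute(cluster, session, "CREATE TABLE ks.t (k int PRIMARY KEY, v int)");
                execute(cluster, session, "CREATE MATERIALIZED VIEW ks.mv AS "
                        + "SELECT v, k FROM ks.t WHERE v IS NOT NULL AND k IS NOT NULL "
                        + "PRIMARY KEY (v, k)");
                execute(cluster, session, "DROP MATERIALIZED VIEW ks.mv");
                execute(cluster, session, "DROP TABLE ks.t");
            }
        }
    }

    // Run one DDL statement, then block until every node reports the same schema version.
    private static void execute(Cluster cluster, Session session, String ddl) throws InterruptedException
    {
        session.execute(ddl);
        while (!cluster.getMetadata().checkSchemaAgreement())
            Thread.sleep(200);
    }
}
{code}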



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14902) Update the default for compaction_throughput_mb_per_sec

2018-11-19 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14902:
-
Summary: Update the default for compaction_throughput_mb_per_sec  (was: 
Update the default for compaction_throughput_in_mb)

> Update the default for compaction_throughput_mb_per_sec
> ---
>
> Key: CASSANDRA-14902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14902
> Project: Cassandra
>  Issue Type: Task
>  Components: Compaction
>Reporter: Jeremy Hanna
>Priority: Minor
>
> compaction_throughput_in_mb has been at 16 since probably 0.6 or 0.7 back 
> when a lot of people had to deploy on spinning disks.  It seems like it would 
> make sense to update the default to something more reasonable - assuming a 
> reasonably decent SSD and competing IO.  One idea that could be bikeshedded 
to death would be to just default it to 64 - simply to save people from 
having to change it every time they download a new version, as well as to 
avoid problems with new users thinking that the defaults are sane.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14902) Update the default for compaction_throughput_mb_per_sec

2018-11-19 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14902:
-
Description: compaction_throughput_mb_per_sec has been at 16 since probably 
0.6 or 0.7 back when a lot of people had to deploy on spinning disks.  It seems 
like it would make sense to update the default to something more reasonable - 
assuming a reasonably decent SSD and competing IO.  One idea that could be 
bikeshedded to death could be to just default it to 64 - simply to avoid people 
from having to always change that any time they download a new version as well 
as avoid problems with new users thinking that the defaults are sane.  (was: 
compaction_throughput_in_mb has been at 16 since probably 0.6 or 0.7 back when 
a lot of people had to deploy on spinning disks.  It seems like it would make 
sense to update the default to something more reasonable - assuming a 
reasonably decent SSD and competing IO.  One idea that could be bikeshedded to 
death could be to just default it to 64 - simply to avoid people from having to 
always change that any time they download a new version as well as avoid 
problems with new users thinking that the defaults are sane.)

> Update the default for compaction_throughput_mb_per_sec
> ---
>
> Key: CASSANDRA-14902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14902
> Project: Cassandra
>  Issue Type: Task
>  Components: Compaction
>Reporter: Jeremy Hanna
>Priority: Minor
>
> compaction_throughput_mb_per_sec has been at 16 since probably 0.6 or 0.7 
> back when a lot of people had to deploy on spinning disks.  It seems like it 
> would make sense to update the default to something more reasonable - 
> assuming a reasonably decent SSD and competing IO.  One idea that could be 
> bikeshedded to death would be to just default it to 64 - simply to save 
> people from having to change it every time they download a new version, as 
> well as to avoid problems with new users thinking that the defaults are sane.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14902) Update the default for compaction_throughput_in_mb

2018-11-19 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-14902:


 Summary: Update the default for compaction_throughput_in_mb
 Key: CASSANDRA-14902
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14902
 Project: Cassandra
  Issue Type: Task
  Components: Compaction
Reporter: Jeremy Hanna


compaction_throughput_in_mb has been at 16 since probably 0.6 or 0.7 back when 
a lot of people had to deploy on spinning disks.  It seems like it would make 
sense to update the default to something more reasonable - assuming a 
reasonably decent SSD and competing IO.  One idea that could be bikeshedded to 
death would be to just default it to 64 - simply to save people from having to 
change it every time they download a new version, as well as to avoid 
problems with new users thinking that the defaults are sane.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14856) Thoroughly test how the GossipingPropertyFileSnitch (GPFS) behaves when fields go missing in gossip

2018-10-29 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-14856:


 Summary: Thoroughly test how the GossipingPropertyFileSnitch 
(GPFS) behaves when fields go missing in gossip
 Key: CASSANDRA-14856
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14856
 Project: Cassandra
  Issue Type: Test
  Components: Configuration
Reporter: Jeremy Hanna


From the [dev list 
discussion|https://lists.apache.org/thread.html/998f5f674ba244c4003893364ee068651da40902559ee1e50bb1c602@%3Cdev.cassandra.apache.org%3E]
 about deprecating the PropertyFileSnitch (PFS).  It appears that there are 
still times that fields go missing from gossip.  Theoretically, and anecdotally 
in practice, that shouldn't cause problems for GPFS.  However, it would be nice 
to do more thorough testing of the effects missing gossip fields have on 
things that rely on it (e.g. GPFS).  That would allow us to more confidently 
deprecate/remove PFS, as we would have more confidence in how solid GPFS is.
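
For context (a sketch of the relevant configuration, not from the ticket): 
GPFS reads only the local node's DC/rack from cassandra-rackdc.properties and 
learns every other node's via gossip, which is exactly why missing gossip 
fields are the interesting failure mode here:

{noformat}
# conf/cassandra-rackdc.properties - describes this node only
dc=DC1
rack=RAC1
{noformat}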



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14847) improvement of nodetool status -r

2018-10-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14847:
-
Description: 
Hello,

When using "nodetool status -r", I found a problem that the response time 
becomes longer depending on the number of vnodes.
 In my testing environment, when the num_token is 256 and the number of nodes 
is 6, the response takes about 60 seconds.

It turned out that the findMaxAddressLength method in status.java is causing 
the delay.
 Despite only obtaining the maximum length of the address by the number of 
vnodes, `tokenrange * vnode` times also loop processing, there is redundancy.

To prevent duplicate host names from being referenced every time, I modified to 
check with hash.
 In my environment, the response time has been reduced from 60 seconds to 2 
seconds.

I attached the patch, so please check it.
 Thank you
{code:java}
[before]
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN *** 559.32 KB 256 48.7% 0555746a-60c2-4717-b042-94ba951ef679 ***
UN *** 721.48 KB 256 51.4% 1af4acb6-e0a0-4bcb-8bba-76ae2e225cd5 ***
UN *** 699.98 KB 256 48.3% 5215c728-9b80-4e3c-b46b-c5b8e5eb753f ***
UN *** 691.65 KB 256 48.1% 57da4edf-4acb-474d-b26c-27f048c37bd6 ***
UN *** 705.66 KB 256 52.8% 07520eab-47d2-4f5d-aeeb-f6e599c9b084 ***
UN *** 610.87 KB 256 50.7% 6b39acaf-6ed6-42e4-a357-0d258bdf87b7 ***

time : 66s

[after]
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN *** 559.32 KB 256 48.7% 0555746a-60c2-4717-b042-94ba951ef679 ***
UN *** 721.48 KB 256 51.4% 1af4acb6-e0a0-4bcb-8bba-76ae2e225cd5 ***
UN *** 699.98 KB 256 48.3% 5215c728-9b80-4e3c-b46b-c5b8e5eb753f ***
UN *** 691.65 KB 256 48.1% 57da4edf-4acb-474d-b26c-27f048c37bd6 ***
UN *** 705.66 KB 256 52.8% 07520eab-47d2-4f5d-aeeb-f6e599c9b084 ***
UN *** 610.87 KB 256 50.7% 6b39acaf-6ed6-42e4-a357-0d258bdf87b7 ***

time : 2s
{code}

  was:
Hello,

When using "nodetool -r", I found a problem that the response time becomes 
longer depending on the number of vnodes.
 In my testing environment, when the num_token is 256 and the number of nodes 
is 6, the response takes about 60 seconds.

It turned out that the findMaxAddressLength method in status.java is causing 
the delay.
 Despite only obtaining the maximum length of the address by the number of 
vnodes, `tokenrange * vnode` times also loop processing, there is redundancy.

To prevent duplicate host names from being referenced every time, I modified to 
check with hash.
 In my environment, the response time has been reduced from 60 seconds to 2 
seconds.

I attached the patch, so please check it.
 Thank you
{code:java}
[before]
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN *** 559.32 KB 256 48.7% 0555746a-60c2-4717-b042-94ba951ef679 ***
UN *** 721.48 KB 256 51.4% 1af4acb6-e0a0-4bcb-8bba-76ae2e225cd5 ***
UN *** 699.98 KB 256 48.3% 5215c728-9b80-4e3c-b46b-c5b8e5eb753f ***
UN *** 691.65 KB 256 48.1% 57da4edf-4acb-474d-b26c-27f048c37bd6 ***
UN *** 705.66 KB 256 52.8% 07520eab-47d2-4f5d-aeeb-f6e599c9b084 ***
UN *** 610.87 KB 256 50.7% 6b39acaf-6ed6-42e4-a357-0d258bdf87b7 ***

time : 66s

[after]
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN *** 559.32 KB 256 48.7% 0555746a-60c2-4717-b042-94ba951ef679 ***
UN *** 721.48 KB 256 51.4% 1af4acb6-e0a0-4bcb-8bba-76ae2e225cd5 ***
UN *** 699.98 KB 256 48.3% 5215c728-9b80-4e3c-b46b-c5b8e5eb753f ***
UN *** 691.65 KB 256 48.1% 57da4edf-4acb-474d-b26c-27f048c37bd6 ***
UN *** 705.66 KB 256 52.8% 07520eab-47d2-4f5d-aeeb-f6e599c9b084 ***
UN *** 610.87 KB 256 50.7% 6b39acaf-6ed6-42e4-a357-0d258bdf87b7 ***

time : 2s
{code}
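
A minimal sketch of the deduplication idea described above (illustrative 
names, not the attached patch): remember hosts already measured in a HashSet 
so each address length is computed once per host rather than once per token 
range:

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MaxAddressLength
{
    // With vnodes, 'endpoints' contains one entry per token range, so the
    // same host appears num_tokens times; measuring it once is enough.
    static int findMaxAddressLength(List<String> endpoints)
    {
        Set<String> seen = new HashSet<>();
        int max = 0;
        for (String endpoint : endpoints)
            if (seen.add(endpoint))   // add() returns false for duplicates
                max = Math.max(max, endpoint.length());
        return max;
    }
}
{code}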


> improvement of nodetool status -r
> -
>
> Key: CASSANDRA-14847
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14847
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Fumiya Yamashita
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: 3.11.1.patch
>
>
> Hello,
> When using "nodetool status -r", I found a problem that the response time 
> becomes longer depending on the number of vnodes.
>  In my testing environment, when the num_token is 256 and the number of nodes 
> is 6, the response takes about 60 seconds.
> It turned out that the findMaxAddressLength method in status.java is causing 
> the delay.
>  Despite only obtaining the 

[jira] [Updated] (CASSANDRA-14848) When upgrading 3.11.3->4.0 using SSL 4.0 nodes does not connect to old non seed nodes

2018-10-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14848:
-
Labels: security  (was: )

> When upgrading 3.11.3->4.0 using SSL 4.0 nodes does not connect to old non 
> seed nodes
> -
>
> Key: CASSANDRA-14848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14848
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tommy Stendahl
>Priority: Major
>  Labels: security
>
> When upgrading from 3.11.3 to 4.0 with server encryption enabled, the new 4.0 
> node only connects to the 3.11.3 seed node; no connections are established 
> to non-seed nodes on the old version.
> I have four nodes: *.242 is upgraded to 4.0, *.243 and *.244 are 3.11.3 
> non-seeds, and *.246 is the 3.11.3 seed. After starting the 4.0 node I get this 
> nodetool status on the different nodes:
> {noformat}
> *.242
> -- Address Load Tokens Owns (effective) Host ID Rack
> UN 10.216.193.242 1017.77 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> DN 10.216.193.243 743.32 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 
> RAC1
> DN 10.216.193.244 711.54 KiB 256 75,2% c155e262-b898-4e86-9e1d-d4d0f97e88f6 
> RAC1
> UN 10.216.193.246 659.81 KiB 256 74,9% 502dd00f-fc02-4024-b65f-b98ba3808291 
> RAC1
> *.243 and *.244
> -- Address Load Tokens Owns (effective) Host ID Rack
> DN 10.216.193.242 657.4 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> UN 10.216.193.243 471 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 RAC1
> UN 10.216.193.244 471.71 KiB 256 75,2% c155e262-b898-4e86-9e1d-d4d0f97e88f6 
> RAC1
> UN 10.216.193.246 388.54 KiB 256 74,9% 502dd00f-fc02-4024-b65f-b98ba3808291 
> RAC1
> *.246
> -- Address Load Tokens Owns (effective) Host ID Rack
> UN 10.216.193.242 657.4 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> UN 10.216.193.243 471 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 RAC1
> UN 10.216.193.244 471.71 KiB 256 75,2% c155e262-b898-4e86-9e1d-d4d0f97e88f6 
> RAC1
> UN 10.216.193.246 388.54 KiB 256 74,9% 502dd00f-fc02-4024-b65f-b98ba3808291 
> RAC1
> {noformat}
>  
> I have built 4.0 with wire tracing activated and in my config the 
> storage_port=12700 and ssl_storage_port=12701. In the log I can see that the 
> 4.0 node starts to connect to the 3.11.3 seed node on the storage_port but 
> quickly switches to the ssl_storage_port; when connecting to the non-seed 
> nodes it never switches to the ssl_storage_port.
> {noformat}
> >grep 193.246 system.log | grep Outbound
> 2018-10-25T10:57:36.799+0200 [MessagingService-NettyOutbound-Thread-4-1] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x2f0e5e55] CONNECT: 
> /10.216.193.246:12700
> 2018-10-25T10:57:36.902+0200 [MessagingService-NettyOutbound-Thread-4-2] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x9e81f62c] CONNECT: 
> /10.216.193.246:12701
> 2018-10-25T10:57:36.905+0200 [MessagingService-NettyOutbound-Thread-4-2] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x9e81f62c, 
> L:/10.216.193.242:37252 - R:10.216.193.246/10.216.193.246:12701] ACTIVE
> 2018-10-25T10:57:36.906+0200 [MessagingService-NettyOutbound-Thread-4-2] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x9e81f62c, 
> L:/10.216.193.242:37252 - R:10.216.193.246/10.216.193.246:12701] WRITE: 8B
> >grep 193.243 system.log | grep Outbound
> 2018-10-25T10:57:38.438+0200 [MessagingService-NettyOutbound-Thread-4-3] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0xd8f1d6c4] CONNECT: 
> /10.216.193.243:12700
> 2018-10-25T10:57:38.540+0200 [MessagingService-NettyOutbound-Thread-4-4] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0xfde6cc9f] CONNECT: 
> /10.216.193.243:12700
> 2018-10-25T10:57:38.694+0200 [MessagingService-NettyOutbound-Thread-4-5] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x7e87fc4e] CONNECT: 
> /10.216.193.243:12700
> 2018-10-25T10:57:38.741+0200 [MessagingService-NettyOutbound-Thread-4-7] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x39395296] CONNECT: 
> /10.216.193.243:12700{noformat}
>  
> When I had debug logging activated and started the 4.0 node, I could see that 
> it switches port for *.246 but not for *.243 and *.244.
> {noformat}
> >grep DEBUG system.log| grep OutboundMessagingConnection | grep 
> >maybeUpdateConnectionId
> 2018-10-25T13:12:56.095+0200 [ScheduledFastTasks:1] DEBUG 
> o.a.c.n.a.OutboundMessagingConnection:314 maybeUpdateConnectionId changing 
> connectionId to 10.216.193.246:12701 (GOSSIP), with a different port for 
> secure communication, because peer version is 11
> 2018-10-25T13:12:58.100+0200 [ReadStage-1] DEBUG 
> o.a.c.n.a.OutboundMessagingConnection:314 maybeUpdateConnectionId changing 
> connectionId to 10.216.193.246:12701 

[jira] [Updated] (CASSANDRA-14850) Make it possible to connect with user/pass + port in fqltool replay

2018-10-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14850:
-
Labels: fqltool security  (was: )

> Make it possible to connect with user/pass + port in fqltool replay
> ---
>
> Key: CASSANDRA-14850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14850
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: fqltool, security
> Fix For: 4.x
>
>
> We also need to close the executor service



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14833) change client keystore from jks to pkcs12 doesn't work

2018-10-19 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14833:
-
Labels: security  (was: )

> change client keystore from jks to pkcs12 doesn't work 
> ---
>
> Key: CASSANDRA-14833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Cassandra version: 2.2.12 Java: 1.8.0_181 SLES11
>Reporter: Michael Maier
>Priority: Minor
>  Labels: security
>
> Changing the store_type from JKS to PKCS12 doesn't work for 
> client_encryption_options. For server_encryption_options it is not a problem.
> I use:
> {{client_encryption_options:}}
> {{    enabled: true}}
> {{    optional: false}}
> {{    keystore: keystore.p12}}
> {{    keystore_password: keystorepass}}
> {{    truststore: truststore.p12}}
> {{    truststore_password: keystorepass}}
> {{    store_type: PKCS12}}
> but get this error:
> {{ERROR 06:34:36 Exception encountered during startup}}
> {{java.lang.RuntimeException: Unable to create thrift socket to 
> /192.168.1.2:9160}}
> {{ at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$Factory.buildTServer(CustomTThreadPoolServer.java:270)
>  ~[apache-cassandra-2.2.12.jar:2.2.12]}}
> {{ at 
> org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:46)
>  ~[apache-cassandra-2.2.12.jar:2.2.12]}}
> {{ at 
> org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.(ThriftServer.java:131)
>  ~[apache-cassandra-2.2.12.jar:2.2.12]}}
> {{ at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:58) 
> ~[apache-cassandra-2.2.12.jar:2.2.12]}}
> {{ at 
> org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:453) 
> [apache-cassandra-2.2.12.jar:2.2.12]}}
> {{ at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:548)
>  [apache-cassandra-2.2.12.jar:2.2.12]}}
> {{ at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:642) 
> [apache-cassandra-2.2.12.jar:2.2.12]}}
> {{Caused by: org.apache.thrift.transport.TTransportException: Error creating 
> the transport}}
> {{ at 
> org.apache.thrift.transport.TSSLTransportFactory.createSSLContext(TSSLTransportFactory.java:210)
>  ~[libthrift-0.9.2.jar:0.9.2]}}
> {{ at 
> org.apache.thrift.transport.TSSLTransportFactory.getServerSocket(TSSLTransportFactory.java:104)
>  ~[libthrift-0.9.2.jar:0.9.2]}}
> {{ at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$Factory.buildTServer(CustomTThreadPoolServer.java:256)
>  ~[apache-cassandra-2.2.12.jar:2.2.12]}}
> {{ ... 6 common frames omitted}}
> {{Caused by: java.io.IOException: Invalid keystore format}}
> {{ at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:658) 
> ~[na:1.8.0_181]}}
> {{ at 
> sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
>  ~[na:1.8.0_181]}}
> {{ at 
> sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:215)
>  ~[na:1.8.0_181]}}
> {{ at 
> sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
>  ~[na:1.8.0_181]}}
> {{ at java.security.KeyStore.load(KeyStore.java:1445) ~[na:1.8.0_181]}}
> {{ at 
> org.apache.thrift.transport.TSSLTransportFactory.createSSLContext(TSSLTransportFactory.java:195)
>  ~[libthrift-0.9.2.jar:0.9.2]}}
> {{ ... 8 common frames omitted}}
>  
> Looks like the store_type option is not set properly for client encryption.
> If I don't use the store_type: PKCS12 option, the error occurs earlier at 
> startup: 
> {{INFO 06:43:46 Enabling encrypted CQL connections between client and server}}
> {{Exception (java.lang.RuntimeException) encountered during startup: Failed 
> to setup secure pipeline}}
> {{java.lang.RuntimeException: Failed to setup secure pipeline}}
> so from my point of view it looks like the option is set, but not everywhere 
> it should be.
> I also use PKCS12 stores for server encryption. It works fine there.
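
For what it's worth, "Invalid keystore format" is the generic symptom of 
loading a keystore with the wrong declared type.  A standalone sketch (file 
name and password taken from the configuration above) showing the type/format 
pairing that has to line up:

{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;

public class LoadPkcs12
{
    public static void main(String[] args) throws Exception
    {
        // The declared type must match the file's actual format; pushing this
        // PKCS12 file through a JKS loader is what the stack trace above shows.
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (InputStream in = Files.newInputStream(Paths.get("keystore.p12")))
        {
            ks.load(in, "keystorepass".toCharArray());
        }
        System.out.println(ks.size() + " entries loaded");
    }
}
{code}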



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14739) calculatePendingRanges when multiple concurrent range movements is unsafe

2018-09-12 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14739:
-
Description: 
If two nodes are bootstrapped at the same time that own adjacent portions of 
the ring (i.e. always for vnodes), they will not receive the correct data for 
pending writes (and perhaps not for streaming - TBC)

By default, we don't "permit" multiple nodes to bootstrap at once, but:
  
 # The logic we use to prevent this itself isn’t strongly consistent (or 
atomically applied).  If two nodes start bootstrapping close together in time, 
or simply get divergent gossip state, they can both believe there is no other 
node bootstrapping and proceed.
 #  The bug doesn’t require two nodes to _actually_ bootstrap at the same time, 
there only needs to be divergent gossip state on a coordinator, so that the 
coordinator _believes_ there are multiple bootstrapping, even though one of 
them may have completed, and they never overlapped in reality.
 # We can bootstrap and remove nodes concurrently, I think?  I’m pretty sure 
this can also be unsafe, but needs some more thought.

  was:
If two nodes are bootstrapped at the same time that own adjacent portions of 
the ring (i.e. always for vnodes), they will not receive the correct data for 
pending writes (and perhaps not for streaming - TBC)

By default, we don't "permit" multiple nodes to bootstrap at once, but:
 
# The logic we use to prevent this itself isn’t strongly consistent (or 
atomically applied).  If two nodes start bootstrapping close together in time, 
or simply get divergent gossip state, they can both believe there is no other 
node bootstrapping and proceed.
# The bug doesn’t require two nodes to _actually_ bootstrap at the same time, 
there only needs to be divergent gossip state on a coordinator, so that the 
coordinator _believes_ there are multiple bootstrapping, even though one of 
them may have completed, and they never overlapped in reality.
# We can bootstrap and remove nodes concurrently, I think?  I’m pretty sure 
this can also be unsafe, but needs some more thought.


> calculatePendingRanges when multiple concurrent range movements is unsafe
> -
>
> Key: CASSANDRA-14739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14739
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Priority: Major
>  Labels: correctness
>
> If two nodes are bootstrapped at the same time that own adjacent portions of 
> the ring (i.e. always for vnodes), they will not receive the correct data for 
> pending writes (and perhaps not for streaming - TBC)
> By default, we don't "permit" multiple nodes to bootstrap at once, but:
>   
>  # The logic we use to prevent this itself isn’t strongly consistent (or 
> atomically applied).  If two nodes start bootstrapping close together in 
> time, or simply get divergent gossip state, they can both believe there is no 
> other node bootstrapping and proceed.
>  #  The bug doesn’t require two nodes to _actually_ bootstrap at the same 
> time, there only needs to be divergent gossip state on a coordinator, so that 
> the coordinator _believes_ there are multiple bootstrapping, even though one 
> of them may have completed, and they never overlapped in reality.
>  # We can bootstrap and remove nodes concurrently, I think?  I’m pretty sure 
> this can also be unsafe, but needs some more thought.
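
For context (illustrative, not from the ticket): the "permit" referred to 
above is the consistent range movement check, a JVM flag that a bootstrapping 
node enforces by default:

{noformat}
# default is true; bootstrap aborts if another range movement is seen in gossip
-Dcassandra.consistent.rangemovement=true
{noformat}

As the description notes, the check itself relies on gossip state, so it is 
best-effort rather than strongly consistent.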



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14736) checkHintOverload will fully reject a write or paxosCommit when only one of the recipients has a hint backlog

2018-09-12 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14736:
-
Summary: checkHintOverload will fully reject a write or paxosCommit when 
only one of the recipients has a hint backlog  (was: checkHintOverloaded will 
fully reject a write or paxosCommit when only one of the recipients has a hint 
backlog)

> checkHintOverload will fully reject a write or paxosCommit when only one of 
> the recipients has a hint backlog
> -
>
> Key: CASSANDRA-14736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14736
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Priority: Major
>
> This seems like a fairly bad availability failure if the failure detector 
> does not kick in either successfully or quickly enough.  Probably we should 
> simply not write a hint for this node?  A single node can struggle for 
> reasons besides overload, so surely we should throw an exception only when it 
> appears to be widespread?
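
A sketch of the alternative policy suggested above (types and threshold are 
hypothetical, not Cassandra's actual code): skip hinting an individually 
backlogged node, and only reject the write when overload looks widespread:

{code:java}
import java.util.List;

public class HintOverloadSketch
{
    record Replica(String address, long pendingHintBytes) {}

    static final long MAX_HINT_BACKLOG = 128L << 20;   // hypothetical per-node cap

    // Reject the write only if a majority of recipients are over their hint
    // backlog; a single struggling node simply doesn't get a hint.
    static boolean shouldRejectWrite(List<Replica> recipients)
    {
        long overloaded = recipients.stream()
                                    .filter(r -> r.pendingHintBytes() > MAX_HINT_BACKLOG)
                                    .count();
        return overloaded > recipients.size() / 2;
    }
}
{code}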



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14739) calculatePendingRanges when multiple concurrent range movements is unsafe

2018-09-12 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14739:
-
Description: 
If two nodes are bootstrapped at the same time that own adjacent portions of 
the ring (i.e. always for vnodes), they will not receive the correct data for 
pending writes (and perhaps not for streaming - TBC)

By default, we don't "permit" multiple nodes to bootstrap at once, but:
  
 # The logic we use to prevent this itself isn’t strongly consistent (or 
atomically applied).  If two nodes start bootstrapping close together in time, 
or simply get divergent gossip state, they can both believe there is no other 
node bootstrapping and proceed.
 # The bug doesn’t require two nodes to _actually_ bootstrap at the same time, 
there only needs to be divergent gossip state on a coordinator, so that the 
coordinator _believes_ there are multiple bootstrapping, even though one of 
them may have completed, and they never overlapped in reality.
 # We can bootstrap and remove nodes concurrently, I think?  I’m pretty sure 
this can also be unsafe, but needs some more thought.

  was:
If two nodes are bootstrapped at the same time that own adjacent portions of 
the ring (i.e. always for vnodes), they will not receive the correct data for 
pending writes (and perhaps not for streaming - TBC)

By default, we don't "permit" multiple nodes to bootstrap at once, but:
  
 # The logic we use to prevent this itself isn’t strongly consistent (or 
atomically applied).  If two nodes start bootstrapping close together in time, 
or simply get divergent gossip state, they can both believe there is no other 
node bootstrapping and proceed.
 #  The bug doesn’t require two nodes to _actually_ bootstrap at the same time, 
there only needs to be divergent gossip state on a coordinator, so that the 
coordinator _believes_ there are multiple bootstrapping, even though one of 
them may have completed, and they never overlapped in reality.
 # We can bootstrap and remove nodes concurrently, I think?  I’m pretty sure 
this can also be unsafe, but needs some more thought.


> calculatePendingRanges when multiple concurrent range movements is unsafe
> -
>
> Key: CASSANDRA-14739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14739
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Priority: Major
>  Labels: correctness
>
> If two nodes are bootstrapped at the same time that own adjacent portions of 
> the ring (i.e. always for vnodes), they will not receive the correct data for 
> pending writes (and perhaps not for streaming - TBC)
> By default, we don't "permit" multiple nodes to bootstrap at once, but:
>   
>  # The logic we use to prevent this itself isn’t strongly consistent (or 
> atomically applied).  If two nodes start bootstrapping close together in 
> time, or simply get divergent gossip state, they can both believe there is no 
> other node bootstrapping and proceed.
>  # The bug doesn’t require two nodes to _actually_ bootstrap at the same 
> time, there only needs to be divergent gossip state on a coordinator, so that 
> the coordinator _believes_ there are multiple bootstrapping, even though one 
> of them may have completed, and they never overlapped in reality.
>  # We can bootstrap and remove nodes concurrently, I think?  I’m pretty sure 
> this can also be unsafe, but needs some more thought.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14701) Cleanup (and other) compaction type(s) not counted in compaction remaining time

2018-09-07 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14701:
-
Component/s: Observability

> Cleanup (and other) compaction type(s) not counted in compaction remaining 
> time
> ---
>
> Key: CASSANDRA-14701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14701
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Thomas Steinmaurer
>Priority: Critical
>
> Opened a ticket, as discussed in user list.
> Looks like the compaction remaining time only includes compactions of type 
> COMPACTION; other compaction types like cleanup etc. aren't part of the 
> estimation calculation.
> E.g. from one of our environments:
> {noformat}
> nodetool compactionstats -H
> pending tasks: 1
>compaction type   keyspace   table   completed   total     unit   progress
>        Cleanup        XXX     YYY   908.16 GB   1.13 TB   bytes     78.63%
> Active compaction remaining time :   0h00m00s
> {noformat}
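
As a hedged sketch of the reported behaviour (simplified stand-in types, not 
Cassandra's actual classes): if the estimator only sums outstanding bytes for 
tasks of type COMPACTION, a running Cleanup contributes nothing and the 
remaining time shows 0h00m00s:

{code:java}
import java.util.List;

public class RemainingTimeSketch
{
    enum OperationType { COMPACTION, CLEANUP, SCRUB }

    record Task(OperationType type, long completedBytes, long totalBytes) {}

    // Only COMPACTION tasks count toward the remaining-bytes numerator.
    static long remainingSeconds(List<Task> active, long throughputBytesPerSec)
    {
        long remaining = 0;
        for (Task t : active)
            if (t.type() == OperationType.COMPACTION)
                remaining += t.totalBytes() - t.completedBytes();
        return throughputBytesPerSec > 0 ? remaining / throughputBytesPerSec : 0;
    }

    public static void main(String[] args)
    {
        // Roughly the Cleanup shown above, with ~250 GB still to go.
        List<Task> active = List.of(new Task(OperationType.CLEANUP, 908L << 30, 1157L << 30));
        System.out.println(remainingSeconds(active, 16L << 20) + "s remaining"); // prints 0s
    }
}
{code}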



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14690) Add dtests for fqltool replay/compare

2018-09-05 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14690:
-
Labels: fqltool  (was: )

> Add dtests for fqltool replay/compare
> -
>
> Key: CASSANDRA-14690
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14690
> Project: Cassandra
>  Issue Type: Test
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: fqltool
>
> We should add some basic round-trip dtests for {{fqltool replay}} and 
> {{compare}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14684) Queries with a clause involving tokens and Long.MIN_VALUE include rows incorrectly

2018-08-31 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598764#comment-16598764
 ] 

Jeremy Hanna edited comment on CASSANDRA-14684 at 8/31/18 2:25 PM:
---

It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1 which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner which is an md5 based 
hash, it's 0 to 2^127-1 but I doubt you're using that.

However reading through the code, even though the [minimum value set for 
murmur3 is 
Long.MIN_VALUE|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L43],
 we exclude that for [reasons described in the getToken() 
method|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L208-L213].

{code}
/**
 * Generate the token of a key.
 * Note that we need to ensure all generated token are strictly bigger than 
MINIMUM.
 * In particular we don't want MINIMUM to correspond to any key because the 
range (MINIMUM, X] doesn't
 * include MINIMUM but we use such range to select all data whose token is 
smaller than X.
 */
{code}

So good point [~benedict], we should probably document that beyond the comment 
in the code.
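
To illustrate the left-open convention the quoted comment describes (using the 
reporter's table; the values are just the long bounds): a full-ring scan is 
expressed as (lower, upper] slices, so MINIMUM only ever appears as an 
exclusive lower bound and never needs to be a real token:

{code}
-- first slice: exclusive at Long.MIN_VALUE
SELECT user_id FROM events WHERE token(user_id) > -9223372036854775808 AND token(user_id) <= 0;
-- second slice: the rest of the ring
SELECT user_id FROM events WHERE token(user_id) > 0;
{code}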


was (Author: jeromatron):
It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1 which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner which is an md5 based 
hash, it's 0 to 2^127-1 but I doubt you're using that.

However reading through the code, even though the [minimum value set for 
murmur3 is 
Long.MIN_VALUE|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L43],
 we exclude that for [reasons described in the getToken() 
method|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L208-L213].

{code}
/**
 * Generate the token of a key.
 * Note that we need to ensure all generated token are strictly bigger than 
MINIMUM.
 * In particular we don't want MINIMUM to correspond to any key because the 
range (MINIMUM, X] doesn't
 * include MINIMUM but we use such range to select all data whose token is 
smaller than X.
 */
{code}

> Queries with a clause involving tokens and Long.MIN_VALUE include rows 
> incorrectly
> --
>
> Key: CASSANDRA-14684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: George Boyle
>Priority: Major
>
> [cqlsh 5.0.1 | Cassandra 2.2.9 | CQL spec 3.3.1 | Native protocol v4]
> When running a CQL query where we filter on a token compared to 
> -9223372036854775808 (the minimum value for a long), the filter appears to 
> have no effect.
> For example:
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) = -9223372036854775808 
> LIMIT 3;
> system.token(user_id)
>  ---
>   -9223371814601747988
>   -9223371814601747988
>   -9223371814601747988
> {code}
> It doesn't matter whether `=`, `<`, `<=`, `>` or `>=` are used in the 
> comparison, the results appear the same.
> In contrast, if using `Long.MIN_VALUE + 1`, it returns no results as 
> expected: 
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) <= 
> -9223372036854775807 LIMIT 3;
> system.token(user_id)
> ---
> (0 rows)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14684) Queries with a clause involving tokens and Long.MIN_VALUE include rows incorrectly

2018-08-31 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598764#comment-16598764
 ] 

Jeremy Hanna edited comment on CASSANDRA-14684 at 8/31/18 2:24 PM:
---

It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1 which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner which is an md5 based 
hash, it's 0 to 2^127-1 but I doubt you're using that.

However reading through the code, even though the [minimum value set for 
murmur3 is 
Long.MIN_VALUE|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L43],
 we exclude that for [reasons described in the getToken() 
method|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L208-L213].

{code}
/**
 * Generate the token of a key.
 * Note that we need to ensure all generated token are strictly bigger than 
MINIMUM.
 * In particular we don't want MINIMUM to correspond to any key because the 
range (MINIMUM, X] doesn't
 * include MINIMUM but we use such range to select all data whose token is 
smaller than X.
 */
{code}


was (Author: jeromatron):
It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1 which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner which is an md5 based 
hash, it's 0 to 2^127-1 but I doubt you're using that.

However reading through the code, even though the [minimum value set for 
murmur3 is 
Long.MIN_VALUE|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L43],
 we exclude that for reasons described in the getToken() method.

{code}
/**
 * Generate the token of a key.
 * Note that we need to ensure all generated token are strictly bigger than 
MINIMUM.
 * In particular we don't want MINIMUM to correspond to any key because the 
range (MINIMUM, X] doesn't
 * include MINIMUM but we use such range to select all data whose token is 
smaller than X.
 */
{code} 

See 
[here.|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L208-L213]

> Queries with a clause involving tokens and Long.MIN_VALUE include rows 
> incorrectly
> --
>
> Key: CASSANDRA-14684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: George Boyle
>Priority: Major
>
> [cqlsh 5.0.1 | Cassandra 2.2.9 | CQL spec 3.3.1 | Native protocol v4]
> When running a CQL query where we filter on a token compared to 
> -9223372036854775808 (the minimum value for a long), the filter appears to 
> have no effect.
> For example:
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) = -9223372036854775808 
> LIMIT 3;
> system.token(user_id)
>  ---
>   -9223371814601747988
>   -9223371814601747988
>   -9223371814601747988
> {code}
> It doesn't matter whether `=`, `<`, `<=`, `>` or `>=` are used in the 
> comparison, the results appear the same.
> In contrast, if using `Long.MIN_VALUE + 1`, it returns no results as 
> expected: 
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) <= 
> -9223372036854775807 LIMIT 3;
> system.token(user_id)
> ---
> (0 rows)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14684) Queries with a clause involving tokens and Long.MIN_VALUE include rows incorrectly

2018-08-31 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598764#comment-16598764
 ] 

Jeremy Hanna edited comment on CASSANDRA-14684 at 8/31/18 2:23 PM:
---

It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1 which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner which is an md5 based 
hash, it's 0 to 2^127-1 but I doubt you're using that.

However reading through the code, even though the [minimum value set for 
murmur3 is 
Long.MIN_VALUE|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L43],
 we exclude that for reasons described in the getToken() method.

{code}
/**
 * Generate the token of a key.
 * Note that we need to ensure all generated token are strictly bigger than 
MINIMUM.
 * In particular we don't want MINIMUM to correspond to any key because the 
range (MINIMUM, X] doesn't
 * include MINIMUM but we use such range to select all data whose token is 
smaller than X.
 */
{code} 

See 
[here.|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L208-L213]


was (Author: jeromatron):
It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1 which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner which is an md5 based 
hash, it's 0 to 2^127-1 but I doubt you're using that.

However reading through the code, even though the [minimum value set for 
murmur3 is 
Long.MIN_VALUE|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L43],
 we exclude that for reasons described in the getToken() method.

{code:java}
/**
 * Generate the token of a key.
 * Note that we need to ensure all generated token are strictly bigger than 
MINIMUM.
 * In particular we don't want MINIMUM to correspond to any key because the 
range (MINIMUM, X] doesn't
 * include MINIMUM but we use such range to select all data whose token is 
smaller than X.
 */
{code} 

See 
[here.|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L208-L213]

> Queries with a clause involving tokens and Long.MIN_VALUE include rows 
> incorrectly
> --
>
> Key: CASSANDRA-14684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: George Boyle
>Priority: Major
>
> [cqlsh 5.0.1 | Cassandra 2.2.9 | CQL spec 3.3.1 | Native protocol v4]
> When running a CQL query where we filter on a token compared to 
> -9223372036854775808 (the minimum value for a long), the filter appears to 
> have no effect.
> For example:
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) = -9223372036854775808 
> LIMIT 3;
> system.token(user_id)
>  ---
>   -9223371814601747988
>   -9223371814601747988
>   -9223371814601747988
> {code}
> It doesn't matter whether `=`, `<`, `<=`, `>` or `>=` are used in the 
> comparison, the results appear the same.
> In contrast, if using `Long.MIN_VALUE + 1`, it returns no results as 
> expected: 
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) <= 
> -9223372036854775807 LIMIT 3;
> system.token(user_id)
> ---
> (0 rows)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14684) Queries with a clause involving tokens and Long.MIN_VALUE include rows incorrectly

2018-08-31 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598764#comment-16598764
 ] 

Jeremy Hanna edited comment on CASSANDRA-14684 at 8/31/18 2:22 PM:
---

It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1 which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner which is an md5 based 
hash, it's 0 to 2^127-1 but I doubt you're using that.

However reading through the code, even though the [minimum value set for 
murmur3 is 
Long.MIN_VALUE|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L43],
 we exclude that for reasons described in the getToken() method.

{code:java}
/**
 * Generate the token of a key.
 * Note that we need to ensure all generated token are strictly bigger than 
MINIMUM.
 * In particular we don't want MINIMUM to correspond to any key because the 
range (MINIMUM, X] doesn't
 * include MINIMUM but we use such range to select all data whose token is 
smaller than X.
 */
{code} 

See 
[here.|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L208-L213]


was (Author: jeromatron):
It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1 which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner which is an md5 based 
hash, it's 0 to 2^127-1 but I doubt you're using that.

> Queries with a clause involving tokens and Long.MIN_VALUE include rows 
> incorrectly
> --
>
> Key: CASSANDRA-14684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: George Boyle
>Priority: Major
>
> [cqlsh 5.0.1 | Cassandra 2.2.9 | CQL spec 3.3.1 | Native protocol v4]
> When running a CQL query where we filter on a token compared to 
> -9223372036854775808 (the minimum value for a long), the filter appears to 
> have no effect.
> For example:
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) = -9223372036854775808 
> LIMIT 3;
> system.token(user_id)
>  ---
>   -9223371814601747988
>   -9223371814601747988
>   -9223371814601747988
> {code}
> It doesn't matter whether `=`, `<`, `<=`, `>` or `>=` are used in the 
> comparison, the results appear the same.
> In contrast, if using `Long.MIN_VALUE + 1`, it returns no results as 
> expected: 
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) <= 
> -9223372036854775807 LIMIT 3;
> system.token(user_id)
> ---
> (0 rows)
> {code}
>  






[jira] [Commented] (CASSANDRA-14684) Queries with a clause involving tokens and Long.MIN_VALUE include rows incorrectly

2018-08-31 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598764#comment-16598764
 ] 

Jeremy Hanna commented on CASSANDRA-14684:
--

It depends on the partitioner.  For the default murmur3 partitioner, the range 
is -2^63^ to +2^63^-1, which is Long.MIN_VALUE to Long.MAX_VALUE.  So it should 
work up to those boundaries.  For the random partitioner, which is an MD5-based 
hash, the range is 0 to 2^127^-1, but I doubt you're using that.

> Queries with a clause involving tokens and Long.MIN_VALUE include rows 
> incorrectly
> --
>
> Key: CASSANDRA-14684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: George Boyle
>Priority: Major
>
> [cqlsh 5.0.1 | Cassandra 2.2.9 | CQL spec 3.3.1 | Native protocol v4]
> When running a CQL query where we filter on a token compared to 
> -9223372036854775808 (the minimum value for a long), the filter appears to 
> have no effect.
> For example:
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) = -9223372036854775808 LIMIT 3;
> system.token(user_id)
>  ---
>   -9223371814601747988
>   -9223371814601747988
>   -9223371814601747988
> {code}
> It doesn't matter whether `=`, `<`, `<=`, `>` or `>=` is used in the 
> comparison; the results appear the same.
> In contrast, when using `Long.MIN_VALUE + 1`, it returns no results as 
> expected: 
> {code:java}
> SELECT token(user_id) FROM events WHERE token(user_id) <= -9223372036854775807 LIMIT 3;
> system.token(user_id)
> ---
> (0 rows)
> {code}
>  






[jira] [Updated] (CASSANDRA-14682) CASSANDRA-12296 changes the message for RF=1 only, it should display correct message for other RF values also

2018-08-30 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14682:
-
Summary: CASSANDRA-12296 changes the message for RF=1 only, it should 
display correct message for other RF values also  (was: AXAAS-12296 changes the 
message for RF=1 only, it should display correct message for other RF values 
also)

> CASSANDRA-12296 changes the message for RF=1 only, it should display correct 
> message for other RF values also
> -
>
> Key: CASSANDRA-14682
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14682
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shaurya Gupta
>Priority: Minor
>
> CASSANDRA-12296 changes the message for RF=1 only; it should display the 
> correct message for other RF values as well.
> I encountered the message:
> Exception: Unable to find sufficient sources for streaming range 
> (8586721931686955021,8587189655497704775] in keyspace testlcl_v12val2b
> when I tried to rebuild a keyspace from a DC that didn't have the replicas.
> It should display a more explanatory message, as is done as part of 
> CASSANDRA-12296.






[jira] [Updated] (CASSANDRA-14504) fqltool should open chronicle queue read only and a GC bug

2018-08-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14504:
-
Labels: fqltool  (was: )

> fqltool should open chronicle queue read only and a GC bug
> --
>
> Key: CASSANDRA-14504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14504
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
>  Labels: fqltool
> Fix For: 4.0
>
>
> There are two issues with fqltool.
> The first is that it doesn't open the chronicle queue read-only, so it won't 
> work if it doesn't have write permissions, and it's not clear whether it's 
> safe to open the queue for writing while the server is still appending.
> The next issue is that NativeBytesStore.toTemporaryDirectByteBuffer() returns 
> a ByteBuffer that doesn't strongly reference the memory it refers to, 
> resulting in it sometimes being reclaimed and containing the wrong data when 
> we go to read from it. At least that is the theory. The simple solution is to 
> use toByteArray(), and that seems to make it work consistently.
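
To illustrate the second hazard in isolation, here is a minimal, runnable 
sketch. ScratchStore is hypothetical and stands in for the Chronicle store; 
here the backing memory is reused rather than GC-reclaimed, but the effect on a 
retained "temporary" view is analogous:

{code:java}
import java.nio.ByteBuffer;

// Hypothetical illustration, not Chronicle's actual API: a store that reuses
// one scratch buffer for every read. A caller that retains the temporary view
// sees its contents change; a caller that copies does not.
class ScratchStore
{
    private final byte[] scratch = new byte[8];

    ByteBuffer toTemporaryView(long value)   // unsafe to retain
    {
        ByteBuffer buf = ByteBuffer.wrap(scratch);
        buf.putLong(0, value);
        return buf;
    }

    byte[] toByteArray(long value)           // safe: independent heap copy
    {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putLong(0, value);
        return buf.array();
    }
}

public class TemporaryBufferHazard
{
    public static void main(String[] args)
    {
        ScratchStore store = new ScratchStore();
        ByteBuffer retained = store.toTemporaryView(1L);
        byte[] copied = store.toByteArray(1L);
        store.toTemporaryView(2L);  // scratch buffer reused by a later read
        System.out.println(retained.getLong(0));                 // prints 2: view was clobbered
        System.out.println(ByteBuffer.wrap(copied).getLong(0));  // prints 1: copy is stable
    }
}
{code}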






[jira] [Updated] (CASSANDRA-12151) Audit logging for database activity

2018-08-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12151:
-
Labels: fqltool  (was: )

> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
>  Labels: fqltool
> Fix For: 4.0
>
> Attachments: 12151.txt, CASSANDRA_12151-benchmark.html, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> We would like a way to enable Cassandra to log database activity being done 
> on our server.
> It should show username, remote address, timestamp, action type, keyspace, 
> column family, and the query statement.
> It should also be able to log connection attempts and changes to 
> users/roles.
> I was thinking of making a new keyspace and inserting an entry for every 
> activity that occurs.
> Then it would be possible to query for specific activity, or for queries 
> targeting a specific keyspace and column family.






[jira] [Updated] (CASSANDRA-14671) Log the actual nowInSeconds used by queries in full query log

2018-08-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14671:
-
Labels: fqltool  (was: )

> Log the actual nowInSeconds used by queries in full query log
> -
>
> Key: CASSANDRA-14671
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14671
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
>  Labels: fqltool
> Fix For: 4.0.x
>
>
> FQL doesn't currently use the actual {{nowInSeconds}} value used for request 
> execution. It needs to, to allow for - in conjunction with CASSANDRA-14664 - 
> more deterministic playback tests.






[jira] [Updated] (CASSANDRA-14664) Allow providing and overriding nowInSeconds via native protocol

2018-08-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14664:
-
Labels: client-impacting fqltool protocolv5  (was: client-impacting 
protocolv5)

> Allow providing and overriding nowInSeconds via native protocol
> ---
>
> Key: CASSANDRA-14664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14664
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
>  Labels: client-impacting, fqltool, protocolv5
> Fix For: 4.0.x
>
>
> For FQL replay testing, to allow for deterministic and repeatable workload 
> replay comparisons, we need to be able to set custom nowInSeconds via native 
> protocol - primarily to control TTL expiration, both on read and write paths.






[jira] [Updated] (CASSANDRA-14675) Log the actual (if server-generated) timestamp used by queries in full query log

2018-08-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14675:
-
Labels: fqltool  (was: )

> Log the actual (if server-generated) timestamp used by queries in full query 
> log
> 
>
> Key: CASSANDRA-14675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14675
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
>  Labels: fqltool
> Fix For: 4.0.x
>
>
> FQL doesn't currently use the actual timestamp - in microseconds - used for 
> request execution when it has been server-generated. It needs to, to allow 
> for - in conjunction with CASSANDRA-14664 and CASSANDRA-14671 - deterministic 
> playback tests.






[jira] [Updated] (CASSANDRA-14673) Removing user defined type column results in ReadFailure due to CorruptSSTableException or IllegalStateException

2018-08-29 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14673:
-
Summary: Removing user defined type column results in ReadFailure due to 
CorruptSSTableException or IllegalStateException  (was: Removing user defined 
type column results in ReadFailure due to CorruptSSTableExceptio or 
IllegalStateException)

> Removing user defined type column results in ReadFailure due to 
> CorruptSSTableException or IllegalStateException
> 
>
> Key: CASSANDRA-14673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14673
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Horia Mocioi
>Priority: Minor
> Attachments: script
>
>
> Steps to reproduce:
>  # create keyspace
>  # create user defined type
>  # create a table that would use the udt
>  # insert some data
>  # drop the udt column
>  # restart cassandra
>  # query the table
> See the attached script for steps 1-5.
> When querying the table in cqlsh I got the following error:
> {code:java}
> ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] 
> message="Operation failed - received 0 responses and 1 failures" 
> info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 
> 'consistency': 'ONE'}{code}
> In system.log I get errors. The errors differ between runs, meaning that 
> sometimes I get:
> {code:java}
> WARN [ReadStage-1] 2018-08-29 10:22:30,229 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-1,10,main]: {}
> java.lang.RuntimeException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /.ccm/3113/node1/data0/my_ks/my_table-90f73700ab6411e8a72e45b164590124/mc-1-big-Data.db
>  at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2601)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_131]
>  at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
>  [apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.11.3.jar:3.11.3]
>  at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> Corrupted: 
> /.ccm/3113/node1/data0/my_ks/my_table-90f73700ab6411e8a72e45b164590124/mc-1-big-Data.db
>  at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:391)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:258)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:100)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:100)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:136)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:92)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
>  

[jira] [Updated] (CASSANDRA-14656) Full query log needs to log the keyspace

2018-08-27 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14656:
-
Labels: fqltool  (was: )

> Full query log needs to log the keyspace
> 
>
> Key: CASSANDRA-14656
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14656
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: fqltool
> Fix For: 4.x
>
>
> If the full query log is enabled and a set of clients have already executed 
> "USE <keyspace>", we can't figure out which keyspace the following queries 
> are executed against.
> We need this for CASSANDRA-14618.






[jira] [Updated] (CASSANDRA-14662) Refactor AuthCache

2018-08-22 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14662:
-
Labels: security  (was: )

> Refactor AuthCache
> --
>
> Key: CASSANDRA-14662
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14662
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Major
>  Labels: security
> Fix For: 4.x
>
>
> When building an LDAP IAuthenticator plugin I ran into a few issues when 
> trying to reuse the AuthCache similar to how PasswordAuthenticator implements 
> it. Most of the problems stemmed from the underlying cache being inaccessible 
> and not being able to override {{initCache}} properly.
> Anyway, I've had a stab at refactoring AuthCache with the following 
> improvements:
> # Make it possible to extend and override all necessary methods (initCache, 
> init, validate)
> # Make it possible to specify a {{CacheLoader}} rather than just a 
> {{Function}}, allowing you to have a get/load that throws exceptions.
> # Use AuthCache on its own rather than extending it for each use case 
> ({{invalidate(K)}} moved to be part of MBean)
> # Provided a builder that uses sane defaults so we don't have unnecessary 
> repeated code everywhere
> The refactor made all the extensions of AuthCache unnecessary, so I've 
> simplified those cases to use AuthCache and removed any classes extending 
> AuthCache. I also removed some noop compatibility classes that were marked to 
> be removed in 4.0.
> Also added some tests in AuthCacheTest.
> |[trunk|https://github.com/apache/cassandra/compare/trunk...kgreav:authcache]|
> |[utests|https://circleci.com/gh/kgreav/cassandra/206]|
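
For flavor, here is roughly what the CacheLoader-based direction enables; a 
sketch with hypothetical names (LdapRoleCache is illustrative, not part of the 
patch), assuming Caffeine on the classpath:

{code:java}
import java.util.Set;
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.CacheLoader;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

// Hypothetical sketch, not the actual patch: an auth cache whose loader may
// throw a checked exception (e.g. an LDAP lookup), which a plain
// Function-based loader cannot express.
public class LdapRoleCache
{
    private final LoadingCache<String, Set<String>> roles;

    public LdapRoleCache(CacheLoader<String, Set<String>> loader)
    {
        this.roles = Caffeine.newBuilder()
                             .expireAfterWrite(10, TimeUnit.SECONDS)
                             .maximumSize(1000)
                             .build(loader);
    }

    public Set<String> rolesFor(String user)
    {
        // get() wraps checked loader exceptions in an unchecked CompletionException
        return roles.get(user);
    }

    public void invalidate(String user)
    {
        roles.invalidate(user);
    }
}
{code}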






[jira] [Updated] (CASSANDRA-14631) Add RSS support for Cassandra blog

2018-08-21 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14631:
-
Labels: blog  (was: )

> Add RSS support for Cassandra blog
> --
>
> Key: CASSANDRA-14631
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14631
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Jacques-Henri Berthemet
>Assignee: Jeff Beck
>Priority: Major
>  Labels: blog
> Attachments: 14631-site.txt, Screen Shot 2018-08-17 at 5.32.08 
> PM.png, Screen Shot 2018-08-17 at 5.32.25 PM.png
>
>
> It would be convenient to add RSS support to the Cassandra blog:
> [http://cassandra.apache.org/blog/2018/08/07/faster_streaming_in_cassandra.html]
> And maybe also for other resources like new versions, but this ticket is 
> about the blog.
>  
> {quote}From: Scott Andreas
> Sent: Wednesday, August 08, 2018 6:53 PM
> To: [d...@cassandra.apache.org|mailto:d...@cassandra.apache.org]
> Subject: Re: Apache Cassandra Blog is now live
>  
> Please feel free to file a ticket (label: Documentation and Website).
>  
> It looks like Jekyll, the static site generator used to build the website, 
> has a plugin that generates Atom feeds if someone would like to work on 
> adding one: [https://github.com/jekyll/jekyll-feed]
> {quote}
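
For reference, the jekyll-feed README enables the plugin with a one-line 
addition to the site's _config.yml, plus the gem in the site's Gemfile 
(untested against the Cassandra site):

{noformat}
# _config.yml
plugins:
  - jekyll-feed
{noformat}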






[jira] [Updated] (CASSANDRA-14661) Blog Post: "Testing Apache Cassandra 4.0"

2018-08-21 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14661:
-
Labels: blog  (was: )

> Blog Post: "Testing Apache Cassandra 4.0"
> -
>
> Key: CASSANDRA-14661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: C. Scott Andreas
>Assignee: C. Scott Andreas
>Priority: Minor
>  Labels: blog
> Attachments: CASSANDRA-14661.diff, rendered.png
>
>
> This is a blog post highlighting some of the approaches being used to test 
> Apache Cassandra 4.0. The patch attached applies as an SVN diff to the 
> website repo (outside the project's primary Git repo).
> SVN patch containing the post and rendered screenshot attached.
>  
>  






[jira] [Updated] (CASSANDRA-14654) Reduce heap pressure during compactions

2018-08-17 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14654:
-
Component/s: Compaction

> Reduce heap pressure during compactions
> ---
>
> Key: CASSANDRA-14654
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14654
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Major
>
> Small-partition compactions are painfully slow, with a lot of overhead per 
> partition. There also tends to be an excess of objects created (i.e. 
> 200-700 MB/s) per compaction thread.
> EncodingStats walks through all the partitions, and with mergeWith it will 
> create a new one per partition as it walks the potentially millions of 
> partitions. In a test scenario of about 600-byte partitions and a couple 
> hundred MB of data, this accounted for ~16% of the heap pressure. Changing 
> this to instead mutably track the min values in an EncodingStats.Collector 
> brought this down considerably (but not 100%, since 
> UnfilteredRowIterator.stats() still creates one per partition).
> The KeyCacheKey makes a full copy of the underlying byte array in 
> ByteBufferUtil.getArray in its constructor. This is the dominating heap 
> pressure as there are more sstables. Just keeping the original completely 
> eliminates the current dominator of the compactions and also improves read 
> performance.
> A minor tweak is also included for operators whose compactions are behind on 
> low-read clusters: make the preemptive opening setting a hot property.
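
The first change follows a general pattern: replace a per-item immutable merge 
(one new object per partition) with a single mutable collector. A minimal 
sketch of the idea, with hypothetical names rather than the actual 
EncodingStats API:

{code:java}
// Hypothetical sketch of the optimization's shape, not the actual EncodingStats API.
final class StatsCollector
{
    private long minTimestamp = Long.MAX_VALUE;

    // O(1) mutable update per partition; no allocation.
    void update(long timestamp)
    {
        minTimestamp = Math.min(minTimestamp, timestamp);
    }

    long minTimestamp()
    {
        return minTimestamp;
    }
}

// Before: stats = stats.mergeWith(partitionStats) allocated a fresh object for
// each of potentially millions of partitions. After: one collector is mutated
// in place and a single stats object is built at the end.
{code}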






[jira] [Updated] (CASSANDRA-14652) Extend IAuthenticator to accept peer SSL certificates

2018-08-17 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14652:
-
Labels: Security  (was: )

> Extend IAuthenticator to accept peer SSL certificates
> -
>
> Key: CASSANDRA-14652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14652
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Security
> Fix For: 4.0
>
>
> This patch will extend the IAuthenticator interface to accept peer's SSL 
> certificates. This will allow the Authenticator implementations to perform 
> additional checks from the client, if so desired.






[jira] [Commented] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up

2018-08-16 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582668#comment-16582668
 ] 

Jeremy Hanna commented on CASSANDRA-8969:
-

Any more thoughts on this change to make sure people don't unwittingly cause 
heap problems by setting the timeout extremely high?

> Add indication in cassandra.yaml that rpc timeouts going too high will cause 
> memory build up
> 
>
> Key: CASSANDRA-8969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Minor
>  Labels: lhf
> Fix For: 3.11.x
>
> Attachments: 8969.txt
>
>
> It would be helpful to communicate that setting the rpc timeouts too high may 
> cause memory problems on the server, as it can become overloaded and has to 
> retain the in-flight requests in memory.  I'll get this done, but I'm adding 
> the ticket as a reminder.






[jira] [Updated] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions

2018-08-15 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14397:
-
Component/s: Compaction

> Stop compactions quicker when compacting wide partitions
> 
>
> Key: CASSANDRA-14397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.0
>
>
> We should allow compactions to be stopped when compacting wide partitions, 
> this will help when a user wants to run upgradesstables for example.






[jira] [Updated] (CASSANDRA-14388) Fix setting min/max compaction threshold with LCS

2018-08-15 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14388:
-
Labels: lcs  (was: )

> Fix setting min/max compaction threshold with LCS
> -
>
> Key: CASSANDRA-14388
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14388
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: lcs
> Fix For: 4.0
>
>
> To be able to actually set max/min_threshold in compaction options we need to 
> remove it from the options map when validating.






[jira] [Updated] (CASSANDRA-14388) Fix setting min/max compaction threshold with LCS

2018-08-15 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14388:
-
Component/s: Compaction

> Fix setting min/max compaction threshold with LCS
> -
>
> Key: CASSANDRA-14388
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14388
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: lcs
> Fix For: 4.0
>
>
> To be able to actually set max/min_threshold in compaction options we need to 
> remove it from the options map when validating.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14640) Set local partitioner for existing virtual tables

2018-08-13 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14640:
-
Labels: pull-request-available virtual-tables  (was: pull-request-available)

> Set local partitioner for existing virtual tables
> -
>
> Key: CASSANDRA-14640
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14640
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
>  Labels: pull-request-available, virtual-tables
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Updated] (CASSANDRA-14637) Allocate ReentrantLock on-demand in java11 AtomicBTreePartitionerBase

2018-08-10 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14637:
-
Labels: Java11  (was: )

> Allocate ReentrantLock on-demand in java11 AtomicBTreePartitionerBase
> ---
>
> Key: CASSANDRA-14637
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14637
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: Java11
> Fix For: 4.x
>
>
> As an intermediate step before CASSANDRA-14607, let's allocate the 
> ReentrantLock in the java11 version of {{AtomicBTreePartitionerBase}} on 
> demand rather than with every instance.






[jira] [Updated] (CASSANDRA-14634) Review Handling Crypto Rules and update ECCN page if needed

2018-08-09 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14634:
-
Labels: security  (was: )

> Review Handling Crypto Rules and update ECCN page if needed
> ---
>
> Key: CASSANDRA-14634
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14634
> Project: Cassandra
>  Issue Type: Task
>Reporter: Henri Yandell
>Priority: Blocker
>  Labels: security
>
> It is suggested in LEGAL-358 that Cassandra is containing/using cryptographic 
> functions and does not have an entry on the ECCN page ( 
> [http://www.apache.org/licenses/exports/] ).
> See [http://www.apache.org/dev/crypto.html] to review and confirm whether you 
> should add something to the ECCN page, and if needed, please do so.
> The text in LEGAL-358 was:
>  
> [~zznate] added a comment - 26/Dec/17 14:58
> Ok, I think I have this sorted. Our entry on that page will need to look like 
> this:
> {noformat}
> Product Name       Versions        ECCN    Controlled Source
> Apache Cassandra   development     5D002   ASF
>                    0.8 and later   5D002   ASF
> {noformat}
> We first added SSL support in 0.8 via CASSANDRA-1567
> We rely solely on the JDK functionality for all encryption.






[jira] [Commented] (CASSANDRA-14631) Add RSS support for Cassandra blog

2018-08-09 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575018#comment-16575018
 ] 

Jeremy Hanna commented on CASSANDRA-14631:
--

I wonder if there is a way to auto-generate a release blog post with the 
version, link, and the lines for that version from CHANGES.txt, including Jira 
ticket links.

> Add RSS support for Cassandra blog
> --
>
> Key: CASSANDRA-14631
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14631
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Jacques-Henri Berthemet
>Priority: Major
>
> It would be convenient to add RSS support to the Cassandra blog:
> [http://cassandra.apache.org/blog/2018/08/07/faster_streaming_in_cassandra.html]
> And maybe also for other resources like new versions, but this ticket is 
> about the blog.






[jira] [Updated] (CASSANDRA-14624) Website: Add blog post for Faster Streaming

2018-08-08 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14624:
-
Labels: blog  (was: )

> Website: Add blog post for Faster Streaming
> ---
>
> Key: CASSANDRA-14624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14624
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation and Website
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: blog
> Attachments: 14624.patch, faster-streaming-blog.patch
>
>
> Please add a new blog post entry on the Cassandra website. It describes the 
> recent work on performance optimizations in Cassandra related to streaming 
> enhancements.






[jira] [Updated] (CASSANDRA-14628) Clean up cache-related metrics

2018-08-08 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14628:
-
Labels: virtual-tables  (was: )

> Clean up cache-related metrics
> --
>
> Key: CASSANDRA-14628
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14628
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
>  Labels: virtual-tables
> Fix For: 4.0
>
>
> {{ChunkCache}} added {{CacheMissMetrics}} which is an almost exact duplicate 
> of pre-existing {{CacheMetrics}}. I believe it was done initially because the 
> authors thought there was no way to register hits with {{Caffeine}}, only 
> misses, but that's not quite true. All we need is to provide a 
> {{StatsCounter}} object when building the cache and update our metrics from 
> there.
> The patch removes the redundant code and streamlines chunk cache metrics to 
> use more idiomatic tracking.
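
The mechanism is Caffeine's recordStats(Supplier<StatsCounter>) hook. A minimal 
sketch of the wiring (hypothetical names, not the actual patch, and assuming a 
recent Caffeine version with the weight/removal-cause eviction signature):

{code:java}
import java.util.concurrent.atomic.LongAdder;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;
import com.github.benmanes.caffeine.cache.stats.CacheStats;
import com.github.benmanes.caffeine.cache.stats.StatsCounter;

// Hypothetical sketch: forward Caffeine's hit/miss callbacks into our own
// counters so a single metrics object sees both hits and misses.
final class ForwardingStatsCounter implements StatsCounter
{
    final LongAdder hits = new LongAdder();
    final LongAdder misses = new LongAdder();

    public void recordHits(int count) { hits.add(count); }
    public void recordMisses(int count) { misses.add(count); }
    public void recordLoadSuccess(long loadTime) {}
    public void recordLoadFailure(long loadTime) {}
    public void recordEviction(int weight, RemovalCause cause) {}
    public CacheStats snapshot() { return CacheStats.empty(); }

    // Wire the counter in when building the cache.
    static Cache<Long, byte[]> buildChunkCache(ForwardingStatsCounter metrics)
    {
        return Caffeine.newBuilder()
                       .maximumSize(1024)
                       .recordStats(() -> metrics)
                       .build();
    }
}
{code}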






[jira] [Updated] (CASSANDRA-14626) Expose buffer cache metrics in caches virtual table

2018-08-07 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14626:
-
Labels: virtual-tables  (was: )

> Expose buffer cache metrics in caches virtual table
> ---
>
> Key: CASSANDRA-14626
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14626
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Benjamin Lerer
>Assignee: Aleksey Yeschenko
>Priority: Minor
>  Labels: virtual-tables
> Fix For: 4.0
>
>
> As noted by [~blerer] in CASSANDRA-14538, we should expose buffer cache 
> metrics in the caches virtual table.






[jira] [Updated] (CASSANDRA-14621) Refactor CompactionStrategyManager

2018-08-03 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14621:
-
Description: CompactionStrategyManager grew a decent amount of duplicated 
code as part of CASSANDRA-9143, which added pendingRepairs alongside the 
repaired and unrepaired buckets. At this point, the logic that routes sstables 
between the different buckets, and the different partition range divisions has 
gotten a little complex, and editing it is tedious and error prone. With 
transient replication requiring yet another bucket, this seems like a good 
time to split some of the functionality of CSM into other classes, and make 
sstable routing a bit more generalized.  (was: CompactionStrategyManager grew a 
decent amount of duplicated code as part of CASSANDRA-91423, which added 
pendingRepairs alongside the repaired and unrepaired buckets. At this point, 
the logic that routes sstables between the different buckets, and the different 
partition range divisions has gotten a little complex, and editing it is 
tedious and error prone. With transient replication requiring yet another 
bucket for, this seems like a good time to split some of the functionality of 
CSM into other classes, and make sstable routing a bit more generalized.)

> Refactor CompactionStrategyManager
> --
>
> Key: CASSANDRA-14621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14621
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> CompactionStrategyManager grew a decent amount of duplicated code as part of 
> CASSANDRA-9143, which added pendingRepairs alongside the repaired and 
> unrepaired buckets. At this point, the logic that routes sstables between the 
> different buckets, and the different partition range divisions has gotten a 
> little complex, and editing it is tedious and error prone. With transient 
> replication requiring yet another bucket, this seems like a good time to 
> split some of the functionality of CSM into other classes, and make sstable 
> routing a bit more generalized.






[jira] [Updated] (CASSANDRA-14620) Make it possible for full query log to only record queries for a given subrange

2018-08-02 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14620:
-
Labels: fqltool  (was: )

> Make it possible for full query log to only record queries for a given 
> subrange
> ---
>
> Key: CASSANDRA-14620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14620
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Marcus Eriksson
>Priority: Major
>  Labels: fqltool
>
> To avoid having to record all queries for CASSANDRA-14618, we should allow 
> full query logging to only log queries in a given sub-range.






[jira] [Updated] (CASSANDRA-14619) Create fqltool compare command

2018-08-02 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14619:
-
Labels: fqltool  (was: )

> Create fqltool compare command
> --
>
> Key: CASSANDRA-14619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14619
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Marcus Eriksson
>Priority: Major
>  Labels: fqltool
> Fix For: 4.x
>
>
> We need a {{fqltool compare}} command that can take the recorded runs from 
> CASSANDRA-14618 and compare them; it should output any differences and 
> potentially all queries against the mismatching partition up until the 
> mismatch.






[jira] [Updated] (CASSANDRA-13983) Support a means of logging all queries as they were invoked

2018-08-02 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13983:
-
Labels: fqltool  (was: )

> Support a means of logging all queries as they were invoked
> ---
>
> Key: CASSANDRA-13983
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13983
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL, Observability, Testing, Tools
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
>  Labels: fqltool
> Fix For: 4.0
>
>
> For correctness testing it's useful to be able to capture production traffic 
> so that it can be replayed against both the old and new versions of Cassandra 
> while comparing the results.
> Implementing this functionality once inside the database is high performance 
> and presents less operational complexity.
> In [this patch|https://github.com/apache/cassandra/pull/169] there is an 
> implementation of a full query log that uses chronicle-queue (apache 
> licensed, the maven artifacts are labeled incorrectly in some cases, 
> dependencies are also apache licensed) to implement a rotating log of queries.
> * Single thread asynchronously writes log entries to disk to reduce impact on 
> query latency
> * Heap memory usage bounded by a weighted queue with configurable maximum 
> weight sitting in front of logging thread
> * If the weighted queue is full, producers can be blocked or samples can be 
> dropped
> * Disk utilization is bounded by deleting old log segments once a 
> configurable size is reached
> * The on disk serialization uses a flexible schema binary format 
> (chronicle-wire) making it easy to skip unrecognized fields, add new ones, 
> and omit old ones.
> * Can be enabled and configured via JMX, disabled, and reset (delete on disk 
> data), logging path is configurable via both JMX and YAML
> * Introduce new {{fqltool}} in /bin that currently implements {{Dump}} which 
> can dump in a human readable format full query logs as well as follow active 
> full query logs
> Follow up work:
> * Introduce new {{fqltool}} command Replay which can replay N full query logs 
> to two different clusters and compare the result and check for 
> inconsistencies. <- Actively working on getting this done
> * Log not just queries but their results to facilitate a comparison between 
> the original query result and the replayed result. <- Really just don't have 
> specific use case at the moment
> * "Consistent" query logging allowing replay to fully replicate the original 
> order of execution and completion even in the face of races (including CAS). 
> <- This is more speculative
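
The bounded-weight handoff described above can be sketched with standard 
java.util.concurrent pieces. This is illustrative only; the real implementation 
lives in the linked patch:

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;

// Illustrative-only sketch of a weighted queue in front of a single logging
// thread; not the patch's actual classes.
final class WeightedLogQueue
{
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final Semaphore weight;

    WeightedLogQueue(int maxWeightBytes)
    {
        this.weight = new Semaphore(maxWeightBytes);
    }

    // Producer: blocks when the total enqueued weight would exceed the bound,
    // keeping heap usage bounded (dropping instead of blocking is the other policy).
    void offer(byte[] entry) throws InterruptedException
    {
        weight.acquire(entry.length);
        queue.put(entry);
    }

    // Single consumer thread: drains entries and releases their weight.
    byte[] take() throws InterruptedException
    {
        byte[] entry = queue.take();
        weight.release(entry.length);
        return entry;
    }
}
{code}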






[jira] [Updated] (CASSANDRA-14618) Create fqltool replay command

2018-08-02 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14618:
-
Labels: fqltool  (was: )

> Create fqltool replay command
> -
>
> Key: CASSANDRA-14618
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14618
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Marcus Eriksson
>Priority: Major
>  Labels: fqltool
> Fix For: 4.x
>
>
> Make it possible to replay the full query logs from CASSANDRA-13983 against 
> one or several clusters. The goal is to be able to compare different runs of 
> production traffic against different versions/configurations of Cassandra.
> * It should be possible to take logs from several machines and replay them in 
> "order" by the timestamps recorded
> * Record the results from each run to be able to compare different runs 
> (against different clusters/versions/etc)
> * If {{fqltool replay}} is run against 2 or more clusters, the results should 
> be compared as we go
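
Merging per-node logs back into timestamp order is a classic k-way merge. A 
minimal sketch (LogEntry and Head are hypothetical stand-ins, not fqltool's 
actual types):

{code:java}
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch of timestamp-ordered replay across several recorded logs.
final class ReplayMerger
{
    static final class LogEntry
    {
        final long timestampMillis;
        final String query;
        LogEntry(long timestampMillis, String query)
        {
            this.timestampMillis = timestampMillis;
            this.query = query;
        }
    }

    static final class Head
    {
        final LogEntry entry;
        final Iterator<LogEntry> source;
        Head(LogEntry entry, Iterator<LogEntry> source)
        {
            this.entry = entry;
            this.source = source;
        }
    }

    // Classic k-way merge: always replay the globally earliest remaining entry.
    static void replay(List<Iterator<LogEntry>> logs)
    {
        PriorityQueue<Head> heads =
            new PriorityQueue<>(Comparator.comparingLong((Head h) -> h.entry.timestampMillis));
        for (Iterator<LogEntry> log : logs)
            if (log.hasNext())
                heads.add(new Head(log.next(), log));

        while (!heads.isEmpty())
        {
            Head next = heads.poll();
            System.out.println(next.entry.query);  // stand-in for executing the query
            if (next.source.hasNext())
                heads.add(new Head(next.source.next(), next.source));
        }
    }
}
{code}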






[jira] [Updated] (CASSANDRA-6553) Benchmark counter improvements (counters++)

2018-08-02 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-6553:

Labels: counters qa-resolved  (was: qa-resolved)

> Benchmark counter improvements (counters++)
> ---
>
> Key: CASSANDRA-6553
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6553
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ryan McGuire
>Assignee: Russ Hatch
>Priority: Major
>  Labels: counters, qa-resolved
> Fix For: 2.1 beta2
>
> Attachments: 6553.txt, 6553.uber.quorum.bdplab.read.png, 
> 6553.uber.quorum.bdplab.write.png, high_cl_one.png, high_cl_quorum.png, 
> logs.tar.gz, low_cl_one.png, low_cl_quorum.png, tracing.txt, uber_cl_one.png, 
> uber_cl_quorum.png
>
>
> Benchmark the difference in performance between CASSANDRA-6504 and trunk.
> * Updating totally unrelated counters (different partitions)
> * Updating the same counters a lot (same cells in the same partition)
> * Different cells in the same few partitions (hot counter partition)
> benchmark: 
> https://github.com/apache/cassandra/tree/1218bcacba7edefaf56cf8440d0aea5794c89a1e
>  (old counters)
> compared to: 
> https://github.com/apache/cassandra/tree/714c423360c36da2a2b365efaf9c5c4f623ed133
>  (new counters)
> So far, the above changes should only affect the write path.






[jira] [Updated] (CASSANDRA-5942) bootstrapping new node after upgrading cluster causes counter columns to randomly have incorrect values

2018-08-02 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-5942:

Labels: counters  (was: )

> bootstrapping new node after upgrading cluster causes counter columns to 
> randomly have incorrect values
> ---
>
> Key: CASSANDRA-5942
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5942
> Project: Cassandra
>  Issue Type: Bug
> Environment: java version "1.7.0_25"
> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
>Reporter: Daniel Meyer
>Assignee: Russ Hatch
>Priority: Major
>  Labels: counters
> Attachments: logs.tar, upgrade_through_versions_test.py
>
>
> Running the latest version of upgrade_through_versions_test will randomly 
> fail at a rate of about 1 out of 5 runs due to an incorrect counter value.  A 
> slightly modified version of the test is attached for reference.  This 
> version has trunk eliminated from the versions list and an extra debugging 
> statement.
> The problem occurs after upgrading to the 2.0 branch from the 1.2 branch and 
> after bootstrapping a new node to the cluster.  The best way to repro is to 
> run the test in a loop 5 to 10 times. 
> Be sure to set PRINT_DEBUG env variable to true and run test with --nocapture 
> to see the debug output.  Logs are also included.






[jira] [Updated] (CASSANDRA-6504) counters++

2018-08-02 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-6504:

Labels: counters  (was: )

> counters++
> --
>
> Key: CASSANDRA-6504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6504
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
>  Labels: counters
> Fix For: 2.1 beta1
>
>
> Continuing CASSANDRA-4775 here.
> We are changing the counter write path to explicitly 
> lock-read-modify-unlock-replicate, thus getting rid of the previously used 
> distinction between 'local' (deltas) and 'remote' shards. Unfortunately, we 
> can't simply start using 'remote' shards exclusively, since shard merge rules 
> prioritize the 'local' shards. That is why we are introducing a third 
> shard type - 'global', the only shard type to be used in 2.1+.
> The updated merge rules are going to look like this:
> global + global = keep the shard with the highest logical clock
> global + local or remote = keep the global one
> local + local = sum counts (and logical clock)
> local + remote = keep the local one
> remote + remote = keep the shard with highest logical clock
> This is required for backward compatibility with pre-2.1 counters. To make 
> 2.0-2.1 live upgrade possible, 'global' shard merge logic will have to be 
> backported to 2.0. 2.0 will not produce them, but will be able to understand 
> the global shards coming from the 2.1 nodes during the live upgrade. See 
> CASSANDRA-6505.
> Other changes introduced in this issue:
> 1. replicate_on_write is gone. From now on we only avoid replication at RF 1.
> 2. REPLICATE_ON_WRITE stage is gone
> 3. counter mutations are running in their own COUNTER_MUTATION stage now
> 4. counter mutations have a separate counter_write_request_timeout setting
> 5. mergeAndRemoveOldShards() code is gone, for now, until/unless a better 
> solution is found
> 6. we only replicate the fresh global shard now, not the complete 
> (potentially quite large) counter context
> 7. to help with concurrency and reduce lock contention, we cache node's 
> global shards in a new counter cache ({cf id, partition key, cell name} -> 
> {count, clock}). The cache is only used by counter writes, to help with 'hot' 
> counters being simultaneously updated.
> Improvements to be handled by separate JIRA issues:
> 1. Split counter context into separate cells - one shard per cell. See 
> CASSANDRA-6506. This goes into either 2.1 or 3.0.
> Potential improvements still being debated:
> 1. Coalesce the mutations in COUNTER_MUTATION stage if they share the same 
> partition key, and apply them together, to improve the locking situation when 
> updating different counter cells in one partition. See CASSANDRA-6508. Will 
> go into 2.1 or 3.0, if deemed beneficial.
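
The updated merge rules above are compact enough to express directly. An 
illustrative-only encoding (hypothetical Shard type, not the actual counter 
context code):

{code:java}
// Illustrative-only encoding of the merge rules above.
enum ShardType { GLOBAL, LOCAL, REMOTE }

final class Shard
{
    final ShardType type;
    final long clock;
    final long count;

    Shard(ShardType type, long clock, long count)
    {
        this.type = type;
        this.clock = clock;
        this.count = count;
    }

    static Shard merge(Shard a, Shard b)
    {
        // global + local or remote = keep the global one
        if (a.type == ShardType.GLOBAL && b.type != ShardType.GLOBAL) return a;
        if (b.type == ShardType.GLOBAL && a.type != ShardType.GLOBAL) return b;
        // local + local = sum counts (and logical clock)
        if (a.type == ShardType.LOCAL && b.type == ShardType.LOCAL)
            return new Shard(ShardType.LOCAL, a.clock + b.clock, a.count + b.count);
        // local + remote = keep the local one
        if (a.type == ShardType.LOCAL) return a;
        if (b.type == ShardType.LOCAL) return b;
        // global + global, remote + remote = keep the highest logical clock
        return a.clock >= b.clock ? a : b;
    }
}
{code}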






[jira] [Updated] (CASSANDRA-6506) counters++ split counter context shards into separate cells

2018-08-02 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-6506:

Labels: counters  (was: )

> counters++ split counter context shards into separate cells
> ---
>
> Key: CASSANDRA-6506
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6506
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Priority: Major
>  Labels: counters
> Fix For: 4.x
>
>
> This change is related to, but somewhat orthogonal to CASSANDRA-6504.
> Currently all the shard tuples for a given counter cell are packed, in sorted 
> order, in one binary blob. Thus reconciling N counter cells requires 
> allocating a new byte buffer capable of holding the union of two 
> contexts' shards N-1 times.
> For writes, in post CASSANDRA-6504 world, it also means reading more data 
> than we have to (the complete context, when all we need is the local node's 
> global shard).
> Splitting the context into separate cells, one cell per shard, will help to 
> improve this. We did a similar thing with super columns for CASSANDRA-3237. 
> Incidentally, doing this split is now possible thanks to CASSANDRA-3237.
> Doing this would also simplify counter reconciliation logic. Getting rid of 
> old contexts altogether can be done trivially with upgradesstables.
> In fact, we should be able to put the logical clock into the cell's 
> timestamp, and use regular Cell-s and regular Cell reconcile() logic for the 
> shards, especially once we get rid of the local/remote shards some time in 
> the future (until then we still have to differentiate between 
> global/remote/local shards and their priority rules).






[jira] [Commented] (CASSANDRA-14615) Node status turns to UN although thrift and native transport port are not open to listen

2018-08-01 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565471#comment-16565471
 ] 

Jeremy Hanna commented on CASSANDRA-14615:
--

There are two characters that define a status: Up/Down in the first position 
and the node's cluster membership state in the second (Normal, Leaving, 
Joining, Moving).  Up/Normal just means that the server process is running and 
is a member of the cluster - taking responsibility for token ranges within 
that datacenter.  It doesn't have anything to do with rpc access.  What are 
you trying to do with the information?  There might be an alternate way to get 
the information you're after.
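
For example, nodetool status spells the legend out at the top of its output 
(abridged and illustrative; addresses and IDs are made up):

{noformat}
$ nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load      Tokens  Owns   Host ID  Rack
UN  10.0.0.1   1.1 GiB   256     33.3%  ...      rack1
{noformat}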

> Node status turns to UN although thrift and native transport port are not 
> open to listen
> 
>
> Key: CASSANDRA-14615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Anshu Vajpayee
>Priority: Critical
>  Labels: startup, startup-time
>
> During startup, node status turns to UN before the thrift and native 
> transport ports are open.  It is misleading. 
> As per our understanding, UN means the node is up, running fine, and ready 
> for clients to connect. 
> It would be more appropriate if the node showed UN status only when the ports 
> are open for client connections. 
>  






[jira] [Updated] (CASSANDRA-14605) Major compaction of LCS tables very slow

2018-07-26 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14605:
-
Labels: lcs performance  (was: performance)

> Major compaction of LCS tables very slow
> 
>
> Key: CASSANDRA-14605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14605
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: AWS, i3.4xlarge instance (very fast local nvme storage), 
> Linux 4.13
> Cassandra 3.0.16
>Reporter: Joseph Lynch
>Priority: Minor
>  Labels: lcs, performance
> Attachments: slow_major_compaction_lcs.svg
>
>
> We've recently started deploying 3.0.16 more heavily in production and today 
> I noticed that full compaction of LCS tables takes a much longer time than it 
> should. In particular it appears to be faster to convert a large dataset to 
> STCS, run full compaction, and then convert it to LCS (with re-leveling) than 
> it is to just run full compaction on LCS (with re-leveling).
> I was able to get a CPU flame graph showing 50% of the major compaction's cpu 
> time being spent in 
> [{{SSTableRewriter::maybeReopenEarly}}|https://github.com/apache/cassandra/blob/6ba2fb9395226491872b41312d978a169f36fcdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java#L184]
>  calling 
> [{{SSTableRewriter::moveStarts}}|https://github.com/apache/cassandra/blob/6ba2fb9395226491872b41312d978a169f36fcdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java#L223].
> I've attached the flame graph here which was generated by running Cassandra 
> using {{-XX:+PreserveFramePointer}}, then using jstack to get the compaction 
> native thread id (nid) which I then used perf to get on cpu time:
> {noformat}
> perf record -t <nid> -o <output_file> -F 49 -g sleep 60 >/dev/null
> {noformat}
> I took this data and collapsed it using the steps described in [Brendan 
> Gregg's java in flames 
> blogpost|https://medium.com/netflix-techblog/java-in-flames-e763b3d32166] 
> (Instructions section) to generate the graph.
> The result is that, at least on this dataset (700GB of data compressed, 
> 2.2TB uncompressed), we are spending 50% of our cpu time in {{moveStarts}}, 
> and I am not sure we need to be doing that as frequently as we are. I'll see 
> if I can come up with a clean reproduction to confirm whether it's a general 
> problem or just this particular dataset.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14608) Confirm correctness of windows scripts post-9608

2018-07-26 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14608:
-
Labels: Windows  (was: )

> Confirm correctness of windows scripts post-9608
> 
>
> Key: CASSANDRA-14608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Jason Brown
>Priority: Blocker
>  Labels: Windows
> Fix For: 4.0
>
>
> In CASSANDRA-9608, we chose to defer making all the changes to Windows 
> scripts. This ticket is to ensure that we do that work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14608) Confirm correctness of windows scripts post-9608

2018-07-26 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14608:
-
Environment: Windows

> Confirm correctness of windows scripts post-9608
> 
>
> Key: CASSANDRA-14608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14608
> Project: Cassandra
>  Issue Type: Task
> Environment: Windows
>Reporter: Jason Brown
>Priority: Blocker
>  Labels: Windows
> Fix For: 4.0
>
>
> In CASSANDRA-9608, we chose to defer making all the changes to Windows 
> scripts. This ticket is to ensure that we do that work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14586) Performant range containment check for SSTables

2018-07-26 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14586:
-
Description: Related to CASSANDRA-14556, we would like to make the range 
containment check performant. Right now we iterate over all partition keys in 
the SSTables and determine the eligibility for Zero Copy streaming. This ticket 
is to explore ways to make it performant by storing information in the 
SSTable's Metadata.  (was: Related to 14556, we would like to make the range 
containment check performant. Right now we iterate over all partition keys in 
the SSTables and determine the eligibility for Zero Copy streaming. This ticket 
is to explore ways to make it performant by storing information in the 
SSTable's Metadata.)
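
As a rough illustration of the intended optimization: with min/max tokens kept 
in the SSTable metadata, the containment check becomes O(requested ranges) 
instead of O(partition keys).  A minimal sketch with hypothetical names and a 
simplified, non-wrapping range (Cassandra's real range semantics also handle 
wrap-around):

{code:java}
import java.util.List;

final class RangeContainmentSketch
{
    // Hypothetical stand-in for a token range: (left, right] with no wrap-around.
    static final class TokenRange
    {
        final long left;   // exclusive
        final long right;  // inclusive

        TokenRange(long left, long right) { this.left = left; this.right = right; }

        boolean contains(long first, long last)
        {
            return first > left && last <= right;
        }
    }

    // True when the SSTable's [firstToken, lastToken] span lies wholly inside
    // any single requested range, so every partition in it is streamable as-is.
    static boolean fullyContained(long firstToken, long lastToken, List<TokenRange> requested)
    {
        for (TokenRange r : requested)
            if (r.contains(firstToken, lastToken))
                return true;
        return false;
    }
}
{code}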

> Performant range containment check for SSTables
> ---
>
> Key: CASSANDRA-14586
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14586
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
>
> Related to CASSANDRA-14556, we would like to make the range containment check 
> performant. Right now we iterate over all partition keys in the SSTables and 
> determine the eligibility for Zero Copy streaming. This ticket is to explore 
> ways to make it performant by storing information in the SSTable's Metadata.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14573) Expose settings in virtual table

2018-07-24 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554590#comment-16554590
 ] 

Jeremy Hanna commented on CASSANDRA-14573:
--

Would we want to include a column indicating whether the value was explicitly 
set (as opposed to a default)?
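
One way such a flag could be derived - purely a sketch, with a hypothetical 
stand-in for the real Config class - is to compare the loaded settings against 
a freshly constructed defaults object via reflection:

{code:java}
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

final class ExplicitlySetSketch
{
    // Hypothetical stand-in for a settings POJO with public fields.
    static final class Settings
    {
        public boolean hinted_handoff_enabled = true;
        public int compaction_throughput_mb_per_sec = 16;
    }

    // Maps each setting name to true when the loaded value differs from the
    // compiled-in default.
    static Map<String, Boolean> explicitlySet(Settings loaded) throws IllegalAccessException
    {
        Settings defaults = new Settings();
        Map<String, Boolean> flags = new LinkedHashMap<>();
        for (Field f : Settings.class.getFields())
            flags.put(f.getName(), !Objects.equals(f.get(loaded), f.get(defaults)));
        return flags;
    }
}
{code}

One caveat with the diff-against-defaults approach: it cannot distinguish a 
value explicitly set to its default from one never set at all, so recording 
the flag at config-load time would be more accurate.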

> Expose settings in virtual table
> 
>
> Key: CASSANDRA-14573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14573
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
>  Labels: virtual-tables
>
> Allow both viewing what the settings are (currently impossible for some) and 
> changing some settings.
> Example:
> {code:java}
> UPDATE system_info.settings SET value = 'false' WHERE setting = 
> 'hinted_handoff_enabled';
> SELECT * FROM system_info.settings WHERE writable = True;
>  setting                                                | value      | writable
> --------------------------------------------------------+------------+----------
>                         batch_size_fail_threshold_in_kb |         50 |     True
>                         batch_size_warn_threshold_in_kb |          5 |     True
>                            cas_contention_timeout_in_ms |       1000 |     True
>                        compaction_throughput_mb_per_sec |         16 |     True
>                                   concurrent_compactors |          2 |     True
>                                  concurrent_validations | 2147483647 |     True
>                     counter_write_request_timeout_in_ms |       5000 |     True
>                                  hinted_handoff_enabled |      false |     True
>                           hinted_handoff_throttle_in_kb |       1024 |     True
>                                     incremental_backups |      false |     True
>   inter_dc_stream_throughput_outbound_megabits_per_sec  |        200 |     True
>                                   phi_convict_threshold |        8.0 |     True
>                             range_request_timeout_in_ms |      10000 |     True
>                              read_request_timeout_in_ms |       5000 |     True
>                                   request_timeout_in_ms |      10000 |     True
>             stream_throughput_outbound_megabits_per_sec |        200 |     True
>                             tombstone_failure_threshold |     100000 |     True
>                                tombstone_warn_threshold |       1000 |     True
>                          truncate_request_timeout_in_ms |      60000 |     True
>                             write_request_timeout_in_ms |       2000 |     True
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14571) Fix race condition in MV build/propagate when there is existing data in the base table

2018-07-24 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14571:
-
Component/s: Materialized Views

> Fix race condition in MV build/propagate when there is existing data in the 
> base table
> --
>
> Key: CASSANDRA-14571
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14571
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 4.0
>
>
> CASSANDRA-13426 exposed a race in MV initialisation and building, which now 
> consistently breaks 
> {{materialized_views_test.py::TestMaterializedViews::test_populate_mv_after_insert_wide_rows}}.
> CASSANDRA-14168 is also directly related.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14572) Expose all table metrics in virtual table

2018-07-24 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14572:
-
Labels: virtual-tables  (was: )

> Expose all table metrics in virtual table
> -
>
> Key: CASSANDRA-14572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14572
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
>  Labels: virtual-tables
>
> While we want a number of virtual tables to display data in a way that's as 
> polished and intuitive as nodetool, there is also a lot of value in exposing 
> the metrics we have via CQL instead of JMX. This is more for tooling and for 
> ad hoc advanced users who know exactly what they are looking for.
> *Schema:*
> The initial idea is to expose data keyed by {{((keyspace, table), metric)}} 
> with a column for each metric value. We could instead use a Map or UDT, which 
> can be a bit more specific to each metric type: there could be a 
> {{metric_type}} column and then a UDT filled in for each metric type, or a 
> single value in more of a Map style. I am proposing the per-metric column 
> approach though, since with {{ALLOW FILTERING}} it allows more extensive 
> query capabilities.
> *Implementations:*
> * Use reflection to grab all the metrics from TableMetrics (see the 
> CASSANDRA-7622 impl). This is easiest and least abrasive towards new metric 
> implementors... but it's reflection, and kind of a bad idea.
> * Add a hook in TableMetrics that registers each metric with this virtual 
> table as it is created.
> * Pull from the CassandraMetrics registry (either via a reporter, or by 
> iterating through the metrics on each read of the virtual table). A sketch 
> of this option follows below.
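
As a rough sketch of the third option (assuming only that Dropwizard 
metrics-core is on the classpath, as it is in Cassandra), a read of the 
virtual table could walk the shared registry like this - names here are 
hypothetical:

{code:java}
import com.codahale.metrics.Counter;
import com.codahale.metrics.Gauge;
import com.codahale.metrics.Metric;
import com.codahale.metrics.MetricRegistry;

import java.util.Map;

final class MetricsWalkSketch
{
    // Emits one (name, value) pair per gauge/counter; a real virtual table
    // would parse the metric name back into (keyspace, table, metric) columns.
    static void walk(MetricRegistry registry)
    {
        for (Map.Entry<String, Metric> e : registry.getMetrics().entrySet())
        {
            Metric m = e.getValue();
            if (m instanceof Gauge)
                System.out.println(e.getKey() + " = " + ((Gauge<?>) m).getValue());
            else if (m instanceof Counter)
                System.out.println(e.getKey() + " = " + ((Counter) m).getCount());
        }
    }
}
{code}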



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org


