[jira] [Commented] (CASSANDRA-17763) Allow node to serve traffic only once compactions settle

2022-07-23 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570304#comment-17570304
 ] 

Romain Hardouin commented on CASSANDRA-17763:
-

I also prefer a YAML option but I thought the idea would have been rejected.

> Allow node to serve traffic only once compactions settle
> 
>
> Key: CASSANDRA-17763
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17763
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local/Compaction
>Reporter: Gil Ganz
>Priority: Normal
> Fix For: 4.x
>
>
> Today, when a node joins the cluster, it starts serving traffic as soon as 
> data streaming completes, but there may be many pending compactions at that 
> point, so reads that hit the extra sstables during that time are slower and 
> load the server. In some cases performance is so bad it can bring the 
> application down. 
> Today I work around this by stopping native transport on the node after the 
> join finishes and re-enabling it once pending compactions drop below a 
> certain threshold. It would be nice to have that as part of the join 
> process: only consider the join complete once there are not too many 
> pending compactions. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17763) Allow node to serve traffic only once compactions settle

2022-07-22 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570101#comment-17570101
 ] 

Romain Hardouin commented on CASSANDRA-17763:
-

I like the idea because we use the same workaround (stopping native interface).

But what is a _good_ threshold? I tried to figure out an automatic threshold 
based on available resources (e.g. number of cores) but it's way too brittle. 
There is no one-size-fits-all solution (clock speed, disk I/O bandwidth, read 
latency SLO, ...). Before starting the native interface again, we compare the 
pending compactions metric with the other nodes in the same data center, 
right? So this is not a decision the node can make from its own metrics 
alone; we have to get an idea of the value to set ahead of time for a given 
data center.

Therefore, I think the pending compactions threshold would be defined using a 
new (command line) parameter.

Other ideas?
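The workaround both of us use can be sketched as a small monitor. This is a 
hypothetical helper, not an existing tool: it assumes {{nodetool 
compactionstats}} output containing a "pending tasks: N" line, and that 
{{nodetool enablebinary}} is what re-enables native transport.

```python
import re

def pending_compactions(compactionstats_output):
    """Extract the pending task count from `nodetool compactionstats` output."""
    match = re.search(r"pending tasks:\s*(\d+)", compactionstats_output)
    if match is None:
        raise ValueError("no 'pending tasks' line found")
    return int(match.group(1))

def should_enable_binary(compactionstats_output, threshold):
    """True once pending compactions have settled at or below the threshold."""
    return pending_compactions(compactionstats_output) <= threshold
```

A wrapper script would poll `nodetool compactionstats` (e.g. via 
`subprocess.check_output`), feed the text to `should_enable_binary`, and run 
`nodetool enablebinary` once it returns True.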

 

> Allow node to serve traffic only once compactions settle
> 
>
> Key: CASSANDRA-17763
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17763
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local/Compaction
>Reporter: Gil Ganz
>Priority: Normal
> Fix For: 4.x
>
>
> Today, when a node joins the cluster, it starts serving traffic as soon as 
> data streaming completes, but there may be many pending compactions at that 
> point, so reads that hit the extra sstables during that time are slower and 
> load the server. In some cases performance is so bad it can bring the 
> application down. 
> Today I work around this by stopping native transport on the node after the 
> join finishes and re-enabling it once pending compactions drop below a 
> certain threshold. It would be nice to have that as part of the join 
> process: only consider the join complete once there are not too many 
> pending compactions. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)




[jira] [Comment Edited] (CASSANDRA-17241) Improve audit logging documentation for production

2022-01-06 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17470173#comment-17470173
 ] 

Romain Hardouin edited comment on CASSANDRA-17241 at 1/6/22, 8:20 PM:
--

I've pushed a PR with information and system.log examples to help users when 
Googling warnings like:
 * Failed to shrink file as it exists no longer
 * Overriding roll cycle from xxx to yyy

Some Chronicle Queue 5.20.123 links:
 * SingleChronicleQueueBuilder 
[overrideRollCycle()|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/SingleChronicleQueueBuilder.java#L477]
 log
 * Inference when [metadata is 
deleted|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/SingleChronicleQueueBuilder.java#L446-L447]
 * JVM properties 
[chronicle.queue.synchronousFileShrinking|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/QueueFileShrinkManager.java#L36-L37]
 * Roll cycles 
[values|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/RollCycles.java#L30]


was (Author: rha):
I've pushed a PR with information and system.log examples to help users when 
Googling warnings like:
* Failed to shrink file as it exists no longer
* Overriding roll cycle from xxx to yyy

Some Chronicle Queue links:
* SingleChronicleQueueBuilder 
[overrideRollCycle()|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/SingleChronicleQueueBuilder.java#L477]
 log
* Inference when  [metadata is 
deleted|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/SingleChronicleQueueBuilder.java#L446-L447]
*  JVM properties 
[chronicle.queue.synchronousFileShrinking|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/QueueFileShrinkManager.java#L36-L37]

> Improve audit logging documentation for production
> --
>
> Key: CASSANDRA-17241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17241
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Normal
> Fix For: 4.0.x
>
>
> After using audit logging in production, it turns out that documentation is 
> lacking some important information.
> Namely:
>  * {{roll_cycle}} can be overridden by Chronicle Queue
>  * {{max_log_size}} is ignored if {{archive_command}} option is set
>  * {{archive_command}} prevents Chronicle Queue Shrink Manager from working
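
For reference, the three gotchas above all live under {{audit_logging_options}} 
in cassandra.yaml; a sketch (option names as of Cassandra 4.x, values 
illustrative, with the documented caveats as comments):

```yaml
audit_logging_options:
    enabled: true
    logger:
      - class_name: BinAuditLogger
    # May be overridden by Chronicle Queue at startup; watch system.log for
    # "Overriding roll cycle from xxx to yyy".
    roll_cycle: HOURLY
    # Ignored as soon as archive_command is set:
    max_log_size: 17179869184
    # Setting this prevents the Chronicle Queue Shrink Manager from working:
    # archive_command: "/path/to/archive.sh %path"
```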



--
This message was sent by Atlassian Jira
(v8.20.1#820001)




[jira] [Commented] (CASSANDRA-17241) Improve audit logging documentation for production

2022-01-06 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17470173#comment-17470173
 ] 

Romain Hardouin commented on CASSANDRA-17241:
-

I've pushed a PR with information and system.log examples to help users when 
Googling warnings like:
* Failed to shrink file as it exists no longer
* Overriding roll cycle from xxx to yyy

Some Chronicle Queue links:
* SingleChronicleQueueBuilder 
[overrideRollCycle()|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/SingleChronicleQueueBuilder.java#L477]
 log
* Inference when  [metadata is 
deleted|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/SingleChronicleQueueBuilder.java#L446-L447]
*  JVM properties 
[chronicle.queue.synchronousFileShrinking|https://github.com/OpenHFT/Chronicle-Queue/blob/chronicle-queue-5.20.123/src/main/java/net/openhft/chronicle/queue/impl/single/QueueFileShrinkManager.java#L36-L37]

> Improve audit logging documentation for production
> --
>
> Key: CASSANDRA-17241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17241
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Normal
> Fix For: 4.0.x
>
>
> After using audit logging in production, it turns out that documentation is 
> lacking some important information.
> Namely:
>  * {{roll_cycle}} can be overridden by Chronicle Queue
>  * {{max_log_size}} is ignored if {{archive_command}} option is set
>  * {{archive_command}} prevents Chronicle Queue Shrink Manager from working



--
This message was sent by Atlassian Jira
(v8.20.1#820001)




[jira] [Updated] (CASSANDRA-17241) Improve audit logging documentation for production

2022-01-06 Thread Romain Hardouin (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-17241:

Description: 
After using audit logging in production, it turns out that documentation is 
lacking some important information.

Namely:
 * {{roll_cycle}} can be overridden by Chronicle Queue
 * {{max_log_size}} is ignored if {{archive_command}} option is set
 * {{archive_command}} prevents Chronicle Queue Shrink Manager from working


  was:
After using audit logging in production, it turns out that documentation is 
lacking some important information.

Namely:
 * {{roll_cycle}} cannot be changed just by modifying cassandra.yaml
 * {{max_log_size}} is ignored if {{archive_command}} option is set
 * {{archive_command}} prevents Chronicle Queue Shrink Manager from working



> Improve audit logging documentation for production
> --
>
> Key: CASSANDRA-17241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17241
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Normal
>
> After using audit logging in production, it turns out that documentation is 
> lacking some important information.
> Namely:
>  * {{roll_cycle}} can be overridden by Chronicle Queue
>  * {{max_log_size}} is ignored if {{archive_command}} option is set
>  * {{archive_command}} prevents Chronicle Queue Shrink Manager from working



--
This message was sent by Atlassian Jira
(v8.20.1#820001)




[jira] [Created] (CASSANDRA-17241) Improve audit logging documentation for production

2022-01-06 Thread Romain Hardouin (Jira)
Romain Hardouin created CASSANDRA-17241:
---

 Summary: Improve audit logging documentation for production
 Key: CASSANDRA-17241
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17241
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation/Website
Reporter: Romain Hardouin
Assignee: Romain Hardouin


After using audit logging in production, it turns out that documentation is 
lacking some important information.

Namely:
 * {{roll_cycle}} cannot be changed just by modifying cassandra.yaml
 * {{max_log_size}} is ignored if {{archive_command}} option is set
 * {{archive_command}} prevents Chronicle Queue Shrink Manager from working




--
This message was sent by Atlassian Jira
(v8.20.1#820001)




[jira] [Commented] (CASSANDRA-16380) KeyCache load performance issue during startup

2021-01-14 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264741#comment-17264741
 ] 

Romain Hardouin commented on CASSANDRA-16380:
-

Looks like a duplicate of https://issues.apache.org/jira/browse/CASSANDRA-14898

 

> KeyCache load performance issue during startup
> --
>
> Key: CASSANDRA-16380
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16380
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Venkata Harikrishna Nukala
>Assignee: Venkata Harikrishna Nukala
>Priority: Normal
>
> Cassandra startup is blocked while the key cache loads.
> From org.apache.cassandra.service.CassandraDaemon#setup method:
> {code:java}
> try
> {
>  loadRowAndKeyCacheAsync().get();
> }
> catch (Throwable t)
> {
>  JVMStabilityInspector.inspectThrowable(t);
>  logger.warn("Error loading key or row cache", t);
> }{code}
> The key cache {{deserialize}} method fetches all CANONICAL sstables and 
> picks one of them for each entry: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/CacheService.java#L447.
>  When the key cache is relatively big and there are lots of sstables (in 
> the thousands), loading the key cache takes a lot of time.
> Key cache loading performance can be improved, and a timeout could be added.
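
The per-entry scan described above can be avoided by indexing the sstables 
once. An illustrative Python sketch only; Cassandra's real loader is Java and 
works with sstable descriptors, so the "generation" dicts here are 
hypothetical stand-ins:

```python
def build_sstable_index(sstables):
    """Index the live sstables once, instead of scanning them per cache entry."""
    return {s["generation"]: s for s in sstables}

def load_key_cache(entries, sstables):
    """Resolve each cached key to its sstable with an O(1) dict lookup."""
    by_generation = build_sstable_index(sstables)  # built a single time
    loaded = []
    for entry in entries:
        sstable = by_generation.get(entry["generation"])
        if sstable is not None:  # entry refers to a dropped/compacted sstable
            loaded.append((entry["key"], sstable))
    return loaded
```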



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (CASSANDRA-16314) nodetool cleanup not working

2020-12-07 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245134#comment-17245134
 ] 

Romain Hardouin commented on CASSANDRA-16314:
-

Hi [~dnamicro], there is nothing wrong with cleanup. Since you didn't add new 
nodes to the cluster, the command has nothing to do.
{quote}nodetool cleanup - Triggers the immediate cleanup of keys *no longer 
belonging to a node*.
{quote}
[https://cassandra.apache.org/doc/latest/tools/nodetool/cleanup.html]
{quote}
h2. Cleanup data after range movements

As a safety measure, Cassandra does not automatically remove data from nodes 
that “lose” part of their token range due to a range movement operation 
(bootstrap, move, replace). Run *{{nodetool cleanup}}* on the nodes that lost 
ranges to the joining node when you are satisfied the new node is up and 
working. If you do not do this the old data will still be counted against the 
load on that node.
{quote}
[https://cassandra.apache.org/doc/latest/operating/topo_changes.html]

 

 

 

> nodetool cleanup not working
> 
>
> Key: CASSANDRA-16314
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16314
> Project: Cassandra
>  Issue Type: Bug
>Reporter: AaronTrazona
>Priority: Normal
> Attachments: image-2020-12-07-09-23-02-002.png, 
> image-2020-12-07-09-23-33-788.png, image-2020-12-07-09-24-54-453.png, 
> image-2020-12-07-09-26-28-702.png
>
>
> Hi,
>
> After setting up the 3 clusters, I want to free up disk space on my first 
> cluster since the previous data is still there.
> This is the nodetool status before running nodetool cleanup:
> !image-2020-12-07-09-23-02-002.png!
> When I run nodetool cleanup:
> !image-2020-12-07-09-23-33-788.png!
> After running nodetool cleanup, I check whether the node freed up the 
> space. This is the result:
> !image-2020-12-07-09-24-54-453.png!
> It seems that nodetool cleanup is not working well.
> Cassandra and Java versions:
> !image-2020-12-07-09-26-28-702.png!
> Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (CASSANDRA-16069) Loss of functionality around null clustering when dropping compact storage

2020-08-24 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183171#comment-17183171
 ] 

Romain Hardouin commented on CASSANDRA-16069:
-

IMHO Solution #3 seems the safest, provided it's clearly documented.

Dropping COMPACT STORAGE is not something users do in production without 
testing. They must ensure that apps/services work without errors.

If a service relies on this brittle "feature", it will still be able to access 
the data using a slice query. There is no data unavailability, which is the 
most important thing, I think.

On top of that, it's not a sneaky change with a silent error: INSERT and 
UPDATE will throw InvalidRequest, which should surface during tests.

> Loss of functionality around null clustering when dropping compact storage
> --
>
> Key: CASSANDRA-16069
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16069
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/CQL
>Reporter: Sylvain Lebresne
>Priority: Normal
>
> For backward compatibility reasons[1], it is allowed to insert rows where 
> some of the clustering columns are {{null}} for compact tables. That support 
> is a tad limited/inconsistent[2] but essentially you can do:
> {noformat}
> cqlsh:ks> CREATE TABLE t (k int, c1 int, c2 int, v int, PRIMARY KEY (k, c1, 
> c2)) WITH COMPACT STORAGE;
> cqlsh:ks> INSERT INTO t(k, c1, v) VALUES (1, 1, 1);
> cqlsh:ks> SELECT * FROM t;
>  k | c1 | c2   | v
> ---++--+---
>  1 |  1 | null | 1
> (1 rows)
> cqlsh:ks> UPDATE t SET v = 2 WHERE k = 1 AND c1 = 1;
> cqlsh:ks> SELECT * FROM t;
>  k | c1 | c2   | v
> ---++--+---
>  1 |  1 | null | 2
> (1 rows)
> {noformat}
> This is not allowed on non-compact tables however:
> {noformat}
> cqlsh:ks> CREATE TABLE t2 (k int, c1 int, c2 int, v int, PRIMARY KEY (k, c1, 
> c2));
> cqlsh:ks> INSERT INTO t2(k, c1, v) VALUES (1, 1, 1);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Some 
> clustering keys are missing: c2"
> cqlsh:ks> UPDATE t2 SET v = 2 WHERE k = 1 AND c1 = 1;
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Some 
> clustering keys are missing: c2"
> {noformat}
> Which means that a user with a compact table that relies on this will not 
> be able to use {{DROP COMPACT STORAGE}}.
> Which is a problem for the 4.0 upgrade story. Problem to which we need an 
> answer.
>  
> 
> [1]: the underlying {{CompositeType}} used by such tables allows providing 
> only a prefix of components, so thrift users could have used such 
> functionality. We thus had to support it in CQL, or those users wouldn't 
> have been able to upgrade to CQL easily.
> [2]: building on the example above, the value for {{c2}} is essentially 
> {{null}}, yet none of the following is currently allowed:
> {noformat}
> cqlsh:ks> INSERT INTO t(k, c1, c2, v) VALUES (1, 1, null, 1);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Invalid 
> null value in condition for column c2"
> cqlsh:ks> UPDATE t SET v = 2 WHERE k = 1 AND c1 = 1 AND c2 = null;
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Invalid 
> null value in condition for column c2"
> cqlsh:ks> SELECT * FROM c WHERE k = 1 AND c1 = 1 AND c2 = null;
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Invalid 
> null value in condition for column c2"
> {noformat}
> Not only is that unintuitive/inconsistent, but the {{SELECT}} one means there 
> is no way to select only the row. You can skip specifying {{c2}} in the 
> {{SELECT}}, but this become a slice selection essentially, as shown below:
> {noformat}
> cqlsh:ks> INSERT INTO ct(k, c1, c2, v) VALUES (1, 1, 1, 1);
> cqlsh:ks> SELECT * FROM ct WHERE k = 1 AND c1 = 1;
>  k | c1 | c2   | v
> ---++--+---
>  1 |  1 | null | 1
>  1 |  1 |1 | 1
> (2 rows)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (CASSANDRA-15406) Add command to show the progress of data streaming and index build

2019-11-09 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970753#comment-16970753
 ] 

Romain Hardouin commented on CASSANDRA-15406:
-

Hi [~maxwellguo], you can see which mode a node is in from the first line of 
{{nodetool netstats}} output, e.g. "Mode: NORMAL".

Mode can be: [STARTING, NORMAL, JOINING, LEAVING, DECOMMISSIONED, MOVING, 
DRAINING, 
DRAINED|https://github.com/apache/cassandra/blob/e4287d04feaa168802168352412d5d74fc7faae4/src/java/org/apache/cassandra/service/StorageService.java#L219]

When a node is bootstrapping (JOINING) you can get an idea of the progress 
with:
{code:java}
 nodetool netstats -H | grep -v 100%{code}
I'm not saying that's perfect; a more user-friendly command like {{nodetool 
progress}} (or a {{netstats}} option) with a global percentage could be a 
good idea.
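
Building on the grep trick, a global percentage could be computed from the 
same output. A sketch, assuming session summary lines like "Receiving 5 
files, 1000000 bytes total. Already received 2 files, 250000 bytes total" 
(the exact wording varies across Cassandra versions):

```python
import re

# Assumed `nodetool netstats` session summary format (version-dependent).
SESSION = re.compile(
    r"(?:Receiving|Sending)\s+\d+\s+files,\s+(\d+)\s+bytes total\."
    r"\s+Already\s+(?:received|sent)\s+\d+\s+files,\s+(\d+)\s+bytes total"
)

def overall_progress(netstats_output):
    """Global streaming percentage across every session found in the output."""
    total = done = 0
    for match in SESSION.finditer(netstats_output):
        total += int(match.group(1))  # bytes expected for this session
        done += int(match.group(2))   # bytes already transferred
    return 100.0 * done / total if total else 100.0
```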

> Add command to show the progress of data streaming and index build 
> ---
>
> Key: CASSANDRA-15406
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15406
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Streaming, Legacy/Streaming and Messaging, 
> Tool/nodetool
>Reporter: maxwellguo
>Priority: Normal
>
> I think we should supply a command to show the progress of streaming when 
> we do a bootstrap/move/decommission/removenode operation. While data 
> streaming is in progress, nobody knows which step the program is in, so a 
> command to show the joining/leaving node's progress is needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (CASSANDRA-15349) Add “Going away” message to the client protocol

2019-10-10 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16948279#comment-16948279
 ] 

Romain Hardouin commented on CASSANDRA-15349:
-

It would be really nice  for smooth rolling restarts (y)

> Add “Going away” message to the client protocol
> ---
>
> Key: CASSANDRA-15349
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15349
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Messaging/Client
>Reporter: Alex Petrov
>Priority: Normal
>  Labels: client-impacting
>
> Add “Going away” message that allows node to announce its shutdown and let 
> clients gracefully shutdown and not attempt further requests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (CASSANDRA-15301) DDL operation during backup process after made snaphot ,restore will lost data

2019-09-12 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928807#comment-16928807
 ] 

Romain Hardouin commented on CASSANDRA-15301:
-

Maybe adding incremental backups to daily snapshots could fit your needs. You 
can build something similar to _tablesnap_ that uploads each sstable under 
the /backups directory somewhere and then processes them to do whatever you 
need. 
 See:
 * 
[https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/operations/opsBackupIncremental.html]
 * 
[https://github.com/apache/cassandra/blob/cassandra-3.11/conf/cassandra.yaml#L775]
 * [http://cassandra.apache.org/doc/latest/tools/nodetool/enablebackup.html]
 * tablesnap: [https://github.com/JeremyGrosser/tablesnap] 

 

 

> DDL operation during backup process after made snaphot ,restore will lost 
> data 
> ---
>
> Key: CASSANDRA-15301
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15301
> Project: Cassandra
>  Issue Type: Bug
>Reporter: maxwellguo
>Priority: Normal
>
> As described in the document 
> https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/operations/opsBackupRestore.html
>  on the steps to do backup and restore: we take a snapshot and export the 
> table files, but during this process a new keyspace/table was created. The 
> table files can be exported to the new cluster, but when the snapshot and 
> files are restored, the newly added table won't be loaded, so the newly 
> added table's data cannot be read. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Commented] (CASSANDRA-15301) DDL operation during backup process after made snaphot ,restore will lost data

2019-09-04 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922293#comment-16922293
 ] 

Romain Hardouin commented on CASSANDRA-15301:
-

There is no bug to fix; a snapshot is just a bunch of hard links.
If you want to back up/restore the new table you need to take another, newer, 
snapshot: either a system-wide one (no keyspace specified) or a specific one 
targeting only the new table.

> DDL operation during backup process after made snaphot ,restore will lost 
> data 
> ---
>
> Key: CASSANDRA-15301
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15301
> Project: Cassandra
>  Issue Type: Bug
>Reporter: maxwellguo
>Priority: Normal
>
> As described in the document 
> https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/operations/opsBackupRestore.html
>  on the steps to do backup and restore: we take a snapshot and export the 
> table files, but during this process a new keyspace/table was created. The 
> table files can be exported to the new cluster, but when the snapshot and 
> files are restored, the newly added table won't be loaded, so the newly 
> added table's data cannot be read. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Commented] (CASSANDRA-15301) DDL operation during backup process after made snaphot ,restore will lost data

2019-09-04 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922245#comment-16922245
 ] 

Romain Hardouin commented on CASSANDRA-15301:
-

Hi [~maxwellguo], here is what I understood:

1. You took a snapshot at time T
2. Snapshot data are exported 
3. During the export you created a new table
4. You restored snapshotted data in a new cluster
5. You expected that new table data were part of the new, restored, cluster

The snapshot taken at time T only contains data created <= T, so you can't get 
newer data.

I think I missed something, it's not clear to me what data have been exported.

When you need help regarding a process or a behavior you should use the "Users" 
mailing list or Slack "cassandra" channel 
http://cassandra.apache.org/doc/latest/contactus.html

> DDL operation during backup process after made snaphot ,restore will lost 
> data 
> ---
>
> Key: CASSANDRA-15301
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15301
> Project: Cassandra
>  Issue Type: Bug
>Reporter: maxwellguo
>Priority: Normal
>
> As described in the document 
> https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/operations/opsBackupRestore.html
>  on the steps to do backup and restore: we take a snapshot and export the 
> table files, but during this process a new keyspace/table was created. The 
> table files can be exported to the new cluster, but when the snapshot and 
> files are restored, the newly added table won't be loaded, so the newly 
> added table's data cannot be read. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Comment Edited] (CASSANDRA-15296) ZstdCompressor compression_level setting

2019-09-03 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920975#comment-16920975
 ] 

Romain Hardouin edited comment on CASSANDRA-15296 at 9/3/19 6:21 PM:
-

 TLDR: the compression level interval is {{[-(1<<17), 22]}} i.e. {{[-131072, 
22]}}

Looking at the Zstandard C sources, the maximum is 22, not 2 (it's a typo):
{code:c}
#define ZSTD_MAX_CLEVEL 22
int ZSTD_maxCLevel(void) { return ZSTD_MAX_CLEVEL; }
int ZSTD_minCLevel(void) { return (int)-ZSTD_TARGETLENGTH_MAX; }
{code}
[https://github.com/facebook/zstd/blob/519834738228cc810b48a2d44ff295b845842af4/lib/compress/zstd_compress.c#L3823]

 
 {{ZSTD_TARGETLENGTH_MAX}} is defined as:
{code:c}
#define ZSTD_TARGETLENGTH_MAX   ZSTD_BLOCKSIZE_MAX
{code}
{code:c}
#define ZSTD_BLOCKSIZELOG_MAX  17
#define ZSTD_BLOCKSIZE_MAX (1<<ZSTD_BLOCKSIZELOG_MAX)
{code}
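
Putting the macros together, the advertised interval can be checked with a 
few lines of Python (plain arithmetic on the constants quoted above):

```python
# Direct transcription of the Zstandard macros.
ZSTD_BLOCKSIZELOG_MAX = 17
ZSTD_BLOCKSIZE_MAX = 1 << ZSTD_BLOCKSIZELOG_MAX
ZSTD_TARGETLENGTH_MAX = ZSTD_BLOCKSIZE_MAX

min_clevel = -ZSTD_TARGETLENGTH_MAX  # ZSTD_minCLevel()
max_clevel = 22                      # ZSTD_MAX_CLEVEL

assert (min_clevel, max_clevel) == (-131072, 22)
```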

> ZstdCompressor compression_level setting
> 
>
> Key: CASSANDRA-15296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15296
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies, Feature/Compression
>Reporter: DeepakVohra
>Assignee: Romain Hardouin
>Priority: Low
> Attachments: 15296-trunk.txt
>
>
> The DEFAULT_COMPRESSION_LEVEL for ZstdCompressor is set to 3, but its range 
> for compression_level is indicated to be between -131072 and 2.  The default 
> value is outside the range. Is it by design or a bug?
> {code:java}
> - ``compression_level`` is only applicable for ``ZstdCompressor`` and accepts 
> values between ``-131072`` and ``2``.
> // Compressor Defaults
> public static final int DEFAULT_COMPRESSION_LEVEL = 3;
> {code}
> https://github.com/apache/cassandra/commit/dccf53061a61e7c632669c60cd94626e405518e9



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Commented] (CASSANDRA-15296) ZstdCompressor compression_level setting

2019-09-02 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920991#comment-16920991
 ] 

Romain Hardouin commented on CASSANDRA-15296:
-

Patch to fix the typo and add information about compression levels.

> ZstdCompressor compression_level setting
> 
>
> Key: CASSANDRA-15296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15296
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies, Feature/Compression
>Reporter: DeepakVohra
>Assignee: Romain Hardouin
>Priority: Normal
> Attachments: 15296-trunk.txt
>
>
> The DEFAULT_COMPRESSION_LEVEL for ZstdCompressor is set to 3, but its range 
> for compression_level is indicated to be between -131072 and 2.  The default 
> value is outside the range. Is it by design or a bug?
> {code:java}
> - ``compression_level`` is only applicable for ``ZstdCompressor`` and accepts 
> values between ``-131072`` and ``2``.
> // Compressor Defaults
> public static final int DEFAULT_COMPRESSION_LEVEL = 3;
> {code}
> https://github.com/apache/cassandra/commit/dccf53061a61e7c632669c60cd94626e405518e9



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Updated] (CASSANDRA-15296) ZstdCompressor compression_level setting

2019-09-02 Thread Romain Hardouin (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-15296:

Attachment: 15296-trunk.txt

> ZstdCompressor compression_level setting
> 
>
> Key: CASSANDRA-15296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15296
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies, Feature/Compression
>Reporter: DeepakVohra
>Priority: Normal
> Attachments: 15296-trunk.txt
>
>
> The DEFAULT_COMPRESSION_LEVEL for ZstdCompressor is set to 3, but its range 
> for compression_level is indicated to be between -131072 and 2.  The default 
> value is outside the range. Is it by design or a bug?
> {code:java}
> - ``compression_level`` is only applicable for ``ZstdCompressor`` and accepts 
> values between ``-131072`` and ``2``.
> // Compressor Defaults
> public static final int DEFAULT_COMPRESSION_LEVEL = 3;
> {code}
> https://github.com/apache/cassandra/commit/dccf53061a61e7c632669c60cd94626e405518e9






[jira] [Assigned] (CASSANDRA-15296) ZstdCompressor compression_level setting

2019-09-02 Thread Romain Hardouin (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin reassigned CASSANDRA-15296:
---

Assignee: Romain Hardouin

> ZstdCompressor compression_level setting
> 
>
> Key: CASSANDRA-15296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15296
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies, Feature/Compression
>Reporter: DeepakVohra
>Assignee: Romain Hardouin
>Priority: Normal
> Attachments: 15296-trunk.txt
>
>
> The DEFAULT_COMPRESSION_LEVEL for ZstdCompressor is set to 3, but its range 
> for compression_level is indicated to be between -131072 and 2.  The default 
> value is outside the range. Is it by design or a bug?
> {code:java}
> - ``compression_level`` is only applicable for ``ZstdCompressor`` and accepts 
> values between ``-131072`` and ``2``.
> // Compressor Defaults
> public static final int DEFAULT_COMPRESSION_LEVEL = 3;
> {code}
> https://github.com/apache/cassandra/commit/dccf53061a61e7c632669c60cd94626e405518e9






[jira] [Comment Edited] (CASSANDRA-15296) ZstdCompressor compression_level setting

2019-09-02 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920975#comment-16920975
 ] 

Romain Hardouin edited comment on CASSANDRA-15296 at 9/2/19 5:03 PM:
-

TLDR: the compression level interval is {{[-(2^17), 22]}} i.e. {{[-131072, 22]}}.

Looking at the Zstandard C sources, the maximum is 22, not 2 (it's a typo):
{code:c}
#define ZSTD_MAX_CLEVEL 22
int ZSTD_maxCLevel(void) { return ZSTD_MAX_CLEVEL; }
int ZSTD_minCLevel(void) { return (int)-ZSTD_TARGETLENGTH_MAX; }
{code}
[https://github.com/facebook/zstd/blob/519834738228cc810b48a2d44ff295b845842af4/lib/compress/zstd_compress.c#L3823]

 
 {{ZSTD_TARGETLENGTH_MAX}} is defined as:
{code:c}
#define ZSTD_TARGETLENGTH_MAX    ZSTD_BLOCKSIZE_MAX
{code}
{code:c}
#define ZSTD_BLOCKSIZELOG_MAX  17
#define ZSTD_BLOCKSIZE_MAX (1<<ZSTD_BLOCKSIZELOG_MAX)
{code}

> ZstdCompressor compression_level setting
> 
>
> Key: CASSANDRA-15296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15296
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies, Feature/Compression
>Reporter: DeepakVohra
>Priority: Normal
>
> The DEFAULT_COMPRESSION_LEVEL for ZstdCompressor is set to 3, but its range 
> for compression_level is indicated to be between -131072 and 2.  The default 
> value is outside the range. Is it by design or a bug?
> {code:java}
> - ``compression_level`` is only applicable for ``ZstdCompressor`` and accepts 
> values between ``-131072`` and ``2``.
> // Compressor Defaults
> public static final int DEFAULT_COMPRESSION_LEVEL = 3;
> {code}
> https://github.com/apache/cassandra/commit/dccf53061a61e7c632669c60cd94626e405518e9






[jira] [Commented] (CASSANDRA-15296) ZstdCompressor compression_level setting

2019-09-02 Thread Romain Hardouin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920975#comment-16920975
 ] 

Romain Hardouin commented on CASSANDRA-15296:
-

TLDR: the compression level interval is {{[-(2^17), 22]}} i.e. {{[-131072, 22]}}.

Looking at the Zstandard C sources, the maximum is indeed 22, not 2 (it's a typo):
{code:c}
#define ZSTD_MAX_CLEVEL 22
int ZSTD_maxCLevel(void) { return ZSTD_MAX_CLEVEL; }
int ZSTD_minCLevel(void) { return (int)-ZSTD_TARGETLENGTH_MAX; }
{code}
[https://github.com/facebook/zstd/blob/519834738228cc810b48a2d44ff295b845842af4/lib/compress/zstd_compress.c#L3823]

 
 {{ZSTD_TARGETLENGTH_MAX}} is defined as:
{code:c}
#define ZSTD_TARGETLENGTH_MAX    ZSTD_BLOCKSIZE_MAX
{code}
{code:c}
#define ZSTD_BLOCKSIZELOG_MAX  17
#define ZSTD_BLOCKSIZE_MAX (1<<ZSTD_BLOCKSIZELOG_MAX)
{code}

> ZstdCompressor compression_level setting
> 
>
> Key: CASSANDRA-15296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15296
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies, Feature/Compression
>Reporter: DeepakVohra
>Priority: Normal
>
> The DEFAULT_COMPRESSION_LEVEL for ZstdCompressor is set to 3, but its range 
> for compression_level is indicated to be between -131072 and 2.  The default 
> value is outside the range. Is it by design or a bug?
> {code:java}
> - ``compression_level`` is only applicable for ``ZstdCompressor`` and accepts 
> values between ``-131072`` and ``2``.
> // Compressor Defaults
> public static final int DEFAULT_COMPRESSION_LEVEL = 3;
> {code}
> https://github.com/apache/cassandra/commit/dccf53061a61e7c632669c60cd94626e405518e9






[jira] [Comment Edited] (CASSANDRA-15157) Missing commands in nodetool help

2019-06-17 Thread Romain Hardouin (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864157#comment-16864157
 ] 

Romain Hardouin edited comment on CASSANDRA-15157 at 6/17/19 7:22 AM:
--

{{cfXXX}} commands have been deprecated for a long time and the {{tableXXX}} 
commands should be used instead.

You will notice that {{tablestats}} and {{tablehistograms}} are part of the 
"nodetool help" output. Maybe your question is how to list aliases? You can't, 
unless you look at the sources, e.g. 
[https://github.com/apache/cassandra/blob/033b30f869ea8a3171c22cbb3c5c517ce2a0fd59/src/java/org/apache/cassandra/tools/nodetool/CfStats.java#L25]

A grep (well, rg) shows that only two commands are hidden:
{code:java}
▶ rg 'hidden = true'
src/java/org/apache/cassandra/tools/nodetool/CfStats.java
25:@Command(name = "cfstats", hidden = true, description = "Print statistics on 
tables")

src/java/org/apache/cassandra/tools/nodetool/CfHistograms.java
25:@Command(name = "cfhistograms", hidden = true, description = "Print 
statistic histograms for a given column family")

{code}


was (Author: rha):
{{cfXXX}} commands are deprecated from a long time ago and {{tableXXX}} 
commands should be used instead.

You will notice that {{tablestats}} or {{tablehistograms}} are part of the 
"nodetool help" output. Maybe your question is how to list aliases? You can't 
unless if look at sources e.g. 
[https://github.com/apache/cassandra/blob/033b30f869ea8a3171c22cbb3c5c517ce2a0fd59/src/java/org/apache/cassandra/tools/nodetool/CfStats.java#L25]

A grep (well, rg) shows that only two commands are hidden:
{code:java}
▶ rg 'hidden = true'
src/java/org/apache/cassandra/tools/nodetool/CfStats.java
25:@Command(name = "cfstats", hidden = true, description = "Print statistics on 
tables")

src/java/org/apache/cassandra/tools/nodetool/CfHistograms.java
25:@Command(name = "cfhistograms", hidden = true, description = "Print 
statistic histograms for a given column family")

{code}

> Missing commands in nodetool help
> -
>
> Key: CASSANDRA-15157
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15157
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Yakir Gibraltar
>Priority: Normal
>
> Hi, how to get *all* available commands of nodetool? like cfhistograms, 
> cfstats, etc.
> "nodetool help" does not return all commands.
> for example:
> {code}
> [root@ctaz001 ~]# nodetool version
> ReleaseVersion: 3.11.4
> [root@ctaz001 ~]# nodetool help | grep cfh | wc -l
> 0
> [root@ctaz001 ~]# nodetool help
> usage: nodetool [(-p  | --port )]
> [(-u  | --username )]
> [(-pw  | --password )]
> [(-pwf  | --password-file )]
> [(-h  | --host )]  []
> The most commonly used nodetool commands are:
> assassinate  Forcefully remove a dead node without 
> re-replicating any data.  Use as a last resort if you cannot removenode
> bootstrapMonitor/manage node's bootstrap process
> cleanup  Triggers the immediate cleanup of keys no 
> longer belonging to a node. By default, clean all keyspaces
> clearsnapshotRemove the snapshot with the given name from 
> the given keyspaces. If no snapshotName is specified we will remove all 
> snapshots
> compact  Force a (major) compaction on one or more 
> tables or user-defined compaction on given SSTables
> compactionhistoryPrint history of compaction
> compactionstats  Print statistics on compactions
> decommission Decommission the *node I am connecting to*
> describecluster  Print the name, snitch, partitioner and 
> schema version of a cluster
> describering Shows the token ranges info of a given 
> keyspace
> disableautocompactionDisable autocompaction for the given 
> keyspace and table
> disablebackupDisable incremental backup
> disablebinaryDisable native transport (binary protocol)
> disablegossipDisable gossip (effectively marking the node 
> down)
> disablehandoff   Disable storing hinted handoffs
> disablehintsfordcDisable hints for a data center
> disablethriftDisable thrift server
> drainDrain the node (stop accepting writes and 
> flush all tables)
> enableautocompaction Enable autocompaction for the given keyspace 
> and table
> enablebackup Enable incremental backup
> enablebinary Reenable native transport (binary protocol)
> enablegossip Reenable gossip
> enablehandoffReenable future hints 

[jira] [Commented] (CASSANDRA-15157) Missing commands in nodetool help

2019-06-14 Thread Romain Hardouin (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864157#comment-16864157
 ] 

Romain Hardouin commented on CASSANDRA-15157:
-

{{cfXXX}} commands have been deprecated for a long time and the {{tableXXX}} 
commands should be used instead.

You will notice that {{tablestats}} and {{tablehistograms}} are part of the 
"nodetool help" output. Maybe your question is how to list aliases? You can't, 
unless you look at the sources, e.g. 
[https://github.com/apache/cassandra/blob/033b30f869ea8a3171c22cbb3c5c517ce2a0fd59/src/java/org/apache/cassandra/tools/nodetool/CfStats.java#L25]

A grep (well, rg) shows that only two commands are hidden:
{code:java}
▶ rg 'hidden = true'
src/java/org/apache/cassandra/tools/nodetool/CfStats.java
25:@Command(name = "cfstats", hidden = true, description = "Print statistics on 
tables")

src/java/org/apache/cassandra/tools/nodetool/CfHistograms.java
25:@Command(name = "cfhistograms", hidden = true, description = "Print 
statistic histograms for a given column family")

{code}

> Missing commands in nodetool help
> -
>
> Key: CASSANDRA-15157
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15157
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Yakir Gibraltar
>Priority: Normal
>
> Hi, how to get *all* available commands of nodetool? like cfhistograms, 
> cfstats, etc.
> "nodetool help" does not return all commands.
> for example:
> {code}
> [root@ctaz001 ~]# nodetool version
> ReleaseVersion: 3.11.4
> [root@ctaz001 ~]# nodetool help | grep cfh | wc -l
> 0
> [root@ctaz001 ~]# nodetool help
> usage: nodetool [(-p  | --port )]
> [(-u  | --username )]
> [(-pw  | --password )]
> [(-pwf  | --password-file )]
> [(-h  | --host )]  []
> The most commonly used nodetool commands are:
> assassinate  Forcefully remove a dead node without 
> re-replicating any data.  Use as a last resort if you cannot removenode
> bootstrapMonitor/manage node's bootstrap process
> cleanup  Triggers the immediate cleanup of keys no 
> longer belonging to a node. By default, clean all keyspaces
> clearsnapshotRemove the snapshot with the given name from 
> the given keyspaces. If no snapshotName is specified we will remove all 
> snapshots
> compact  Force a (major) compaction on one or more 
> tables or user-defined compaction on given SSTables
> compactionhistoryPrint history of compaction
> compactionstats  Print statistics on compactions
> decommission Decommission the *node I am connecting to*
> describecluster  Print the name, snitch, partitioner and 
> schema version of a cluster
> describering Shows the token ranges info of a given 
> keyspace
> disableautocompactionDisable autocompaction for the given 
> keyspace and table
> disablebackupDisable incremental backup
> disablebinaryDisable native transport (binary protocol)
> disablegossipDisable gossip (effectively marking the node 
> down)
> disablehandoff   Disable storing hinted handoffs
> disablehintsfordcDisable hints for a data center
> disablethriftDisable thrift server
> drainDrain the node (stop accepting writes and 
> flush all tables)
> enableautocompaction Enable autocompaction for the given keyspace 
> and table
> enablebackup Enable incremental backup
> enablebinary Reenable native transport (binary protocol)
> enablegossip Reenable gossip
> enablehandoffReenable future hints storing on the current 
> node
> enablehintsfordc Enable hints for a data center that was 
> previsouly disabled
> enablethrift Reenable thrift server
> failuredetector  Shows the failure detector information for 
> the cluster
> flushFlush one or more tables
> garbagecollect   Remove deleted data from one or more tables
> gcstats  Print GC Statistics
> getcompactionthreshold   Print min and max compaction thresholds for 
> a given table
> getcompactionthroughput  Print the MB/s throughput cap for compaction 
> in the system
> getconcurrentcompactors  Get the number of concurrent compactors in 
> the system.
> getendpoints Print the end points that owns the key
> getinterdcstreamthroughput   Print the Mb/s throughput cap for 
> inter-datacenter streaming in the system
> 

[jira] [Commented] (CASSANDRA-14987) Add metrics for interrupted compactions

2019-02-15 Thread Romain Hardouin (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769332#comment-16769332
 ] 

Romain Hardouin commented on CASSANDRA-14987:
-

Hi, should compactions interrupted while stopping the node be counted? That would 
add some noise, though it's not a big deal.

> Add metrics for interrupted compactions
> ---
>
> Key: CASSANDRA-14987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14987
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction
>Reporter: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> To be able to track how often we interrupt compactions (for example when 
> starting anticompactions) we should add a few metrics;
> * number of interrupted compactions
> * number of bytes written by interrupted compactions
> We currently count an interrupted compaction as completed, which should 
> change with this






[jira] [Commented] (CASSANDRA-14933) allocate_tokens_for_local_replication_factor

2018-12-14 Thread Romain Hardouin (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16721216#comment-16721216
 ] 

Romain Hardouin commented on CASSANDRA-14933:
-

Hi [~alokamvenki],

This bug tracker is primarily used by contributors of the Apache Cassandra 
project toward development of the database itself. For operational concerns / 
questions, can you reach out to the user's list or public IRC channel for 
support? A member of the community may be able to help.

Here's a page with information on the best channels for support: 
[http://cassandra.apache.org/community/]

> allocate_tokens_for_local_replication_factor
> 
>
> Key: CASSANDRA-14933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14933
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: venky
>Priority: Major
>
> Is Apache Cassandra supports 
> {color:#FF}*allocate_tokens_for_local_replication_factor* {color}**?
> Currently, We are using DSE with allocate_tokens_for_local_replication_factor 
> and We are in the plan to move from DSE to Apache.
> Is it available in Apache Cassandra or is there any plans to implement in 
> Apache Cassandra?






[jira] [Commented] (CASSANDRA-14647) Reading cardinality from Statistics.db failed

2018-09-04 Thread Romain Hardouin (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603303#comment-16603303
 ] 

Romain Hardouin commented on CASSANDRA-14647:
-

After removing {{EstimatedPartitionCount}} from Datadog's cassandra.yaml on one 
node, I haven't seen any warnings for 4 days.

[~nezdali] can you double-check that your change was effective (e.g. the 
monitoring service was restarted)? 

> Reading cardinality from Statistics.db failed
> -
>
> Key: CASSANDRA-14647
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14647
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Clients are doing only writes with Local One, cluster 
> consist of 3 regions with RF3.
> Storage is configured wth jbod/XFS on 10 x 1Tb disks
> IOPS limit for each disk 500 (total 5000 iops)
> Bandwith for each disk 60mb/s (600 total)
> OS is Debian linux.
>Reporter: Vitali Djatsuk
>Priority: Major
> Fix For: 3.0.x
>
> Attachments: cassandra_compaction_pending_tasks_7days.png
>
>
> There is some issue with sstable metadata which is visible in system.log, the 
> messages says:
> {noformat}
> WARN  [Thread-6] 2018-07-25 07:12:47,928 SSTableReader.java:249 - Reading 
> cardinality from Statistics.db failed for 
> /opt/data/disk5/data/keyspace/table/mc-big-Data.db.{noformat}
> Although there is no such file. 
> The message has appeared after i've changed the compaction strategy from 
> SizeTiered to Leveled. Compaction strategy has been changed region by region 
> (total 3 regions) and it has coincided with the double client write traffic 
> increase.
>  I have tried to run nodetool scrub to rebuilt the sstable, but that does not 
> fix the issue.
> So very hard to define the steps to reproduce, probably it will be:
>  # run stress tool with write traffic
>  # under load change compaction strategy from SireTiered to Leveled for the 
> bunch of hosts
>  # add more write traffic
> Reading the code it is said that if this metadata is broken, then "estimating 
> the keys will be done using index summary". 
>  
> [https://github.com/apache/cassandra/blob/cassandra-3.0.17/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L247]
>   






[jira] [Commented] (CASSANDRA-14647) Reading cardinality from Statistics.db failed

2018-08-31 Thread Romain Hardouin (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598934#comment-16598934
 ] 

Romain Hardouin commented on CASSANDRA-14647:
-

I do query the partition count through the Datadog JMX agent, so it happens all 
the time. I'll try disabling it, although it didn't work for [~nezdali].

> Reading cardinality from Statistics.db failed
> -
>
> Key: CASSANDRA-14647
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14647
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Clients are doing only writes with Local One, cluster 
> consist of 3 regions with RF3.
> Storage is configured wth jbod/XFS on 10 x 1Tb disks
> IOPS limit for each disk 500 (total 5000 iops)
> Bandwith for each disk 60mb/s (600 total)
> OS is Debian linux.
>Reporter: Vitali Djatsuk
>Priority: Major
> Fix For: 3.0.x
>
> Attachments: cassandra_compaction_pending_tasks_7days.png
>
>
> There is some issue with sstable metadata which is visible in system.log, the 
> messages says:
> {noformat}
> WARN  [Thread-6] 2018-07-25 07:12:47,928 SSTableReader.java:249 - Reading 
> cardinality from Statistics.db failed for 
> /opt/data/disk5/data/keyspace/table/mc-big-Data.db.{noformat}
> Although there is no such file. 
> The message has appeared after i've changed the compaction strategy from 
> SizeTiered to Leveled. Compaction strategy has been changed region by region 
> (total 3 regions) and it has coincided with the double client write traffic 
> increase.
>  I have tried to run nodetool scrub to rebuilt the sstable, but that does not 
> fix the issue.
> So very hard to define the steps to reproduce, probably it will be:
>  # run stress tool with write traffic
>  # under load change compaction strategy from SireTiered to Leveled for the 
> bunch of hosts
>  # add more write traffic
> Reading the code it is said that if this metadata is broken, then "estimating 
> the keys will be done using index summary". 
>  
> [https://github.com/apache/cassandra/blob/cassandra-3.0.17/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L247]
>   






[jira] [Commented] (CASSANDRA-14647) Reading cardinality from Statistics.db failed

2018-08-17 Thread Romain Hardouin (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583927#comment-16583927
 ] 

Romain Hardouin commented on CASSANDRA-14647:
-

This is not due to STCS -> LCS. I have the same behavior on one cluster with 
LCS and heavy writes. STCS has never been configured on it.

> Reading cardinality from Statistics.db failed
> -
>
> Key: CASSANDRA-14647
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14647
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Clients are doing only writes with Local One, cluster 
> consist of 3 regions with RF3.
> Storage is configured wth jbod/XFS on 10 x 1Tb disks
> IOPS limit for each disk 500 (total 5000 iops)
> Bandwith for each disk 60mb/s (600 total)
> OS is Debian linux.
>Reporter: Vitali Djatsuk
>Priority: Major
> Fix For: 3.0.x
>
> Attachments: cassandra_compaction_pending_tasks_7days.png
>
>
> There is some issue with sstable metadata which is visible in system.log, the 
> messages says:
> {noformat}
> WARN  [Thread-6] 2018-07-25 07:12:47,928 SSTableReader.java:249 - Reading 
> cardinality from Statistics.db failed for 
> /opt/data/disk5/data/keyspace/table/mc-big-Data.db.{noformat}
> Although there is no such file. 
> The message has appeared after i've changed the compaction strategy from 
> SizeTiered to Leveled. Compaction strategy has been changed region by region 
> (total 3 regions) and it has coincided with the double client write traffic 
> increase.
>  I have tried to run nodetool scrub to rebuilt the sstable, but that does not 
> fix the issue.
> So very hard to define the steps to reproduce, probably it will be:
>  # run stress tool with write traffic
>  # under load change compaction strategy from SireTiered to Leveled for the 
> bunch of hosts
>  # add more write traffic
> Reading the code it is said that if this metadata is broken, then "estimating 
> the keys will be done using index summary". 
>  
> [https://github.com/apache/cassandra/blob/cassandra-3.0.17/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L247]
>   






[jira] [Commented] (CASSANDRA-14361) Allow SimpleSeedProvider to resolve multiple IPs per DNS name

2018-04-06 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428457#comment-16428457
 ] 

Romain Hardouin commented on CASSANDRA-14361:
-

{quote}Caching behavior remains the same, given operators relying on hostnames
{quote}
What I meant is that having this feature could motivate operators to use DNS, so 
they must be aware of this setting and set it explicitly. 

I've read the Oracle documentation, but the Java security file is not very explicit:
{noformat}
# default value is forever (FOREVER). For security reasons, this
# caching is made forever when a security manager is set. When a security
# manager is not set, the default behavior in this implementation
# is to cache for 30 seconds.
#
# NOTE: setting this to anything other than the default value can have
#   serious security implications. Do not set it unless
#   you are sure you are not exposed to DNS spoofing attack.
#
#networkaddress.cache.ttl=-1
{noformat}

"{{default value is forever (FOREVER)}}" is misleading.
That's why having CASSANDRA-14364 is nice.
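
For operators who prefer setting the TTL programmatically rather than editing the {{java.security}} file, a minimal sketch (the class name is hypothetical; {{Security.setProperty}} must run before the JVM performs its first name lookup):

```java
import java.security.Security;

public class DnsCacheTtl {
    public static void main(String[] args) {
        // "30" caches successful lookups for 30 seconds; "0" disables caching;
        // "-1" caches forever, which is the default when a SecurityManager is set.
        Security.setProperty("networkaddress.cache.ttl", "30");
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```

The same effect is often achieved with the legacy {{sun.net.inetaddr.ttl}} system property, but {{networkaddress.cache.ttl}} is the documented knob.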

> Allow SimpleSeedProvider to resolve multiple IPs per DNS name
> -
>
> Key: CASSANDRA-14361
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14361
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Ben Bromhead
>Assignee: Ben Bromhead
>Priority: Minor
> Fix For: 4.0
>
>
> Currently SimpleSeedProvider can accept a comma separated string of IPs or 
> hostnames as the set of Cassandra seeds. hostnames are resolved via 
> InetAddress.getByName, which will only return the first IP associated with an 
> A,  or CNAME record.
> By changing to InetAddress.getAllByName, existing behavior is preserved, but 
> now Cassandra can discover multiple IP address per record, allowing seed 
> discovery by DNS to be a little easier.
> Some examples of improved workflows with this change include: 
>  * specify the DNS name of a headless service in Kubernetes which will 
> resolve to all IP addresses of pods within that service. 
>  * seed discovery for multi-region clusters via AWS route53, AzureDNS etc
>  * Other common DNS service discovery mechanisms.
> The only behavior this is likely to impact would be where users are relying 
> on the fact that getByName only returns a single IP address.
> I can't imagine any scenario where that is a sane choice. Even when that 
> choice has been made, it only impacts the first startup of Cassandra and 
> would not be on any critical path.






[jira] [Commented] (CASSANDRA-14361) Allow SimpleSeedProvider to resolve multiple IPs per DNS name

2018-04-04 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425244#comment-16425244
 ] 

Romain Hardouin commented on CASSANDRA-14361:
-

The JVM property {{networkaddress.cache.ttl}} must be set, otherwise operators 
will have to do a rolling restart of the cluster each time the seed list changes 
(unless the default is not {{-1}} on their platform).

> Allow SimpleSeedProvider to resolve multiple IPs per DNS name
> -
>
> Key: CASSANDRA-14361
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14361
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Ben Bromhead
>Assignee: Ben Bromhead
>Priority: Minor
> Fix For: 4.0
>
>
> Currently SimpleSeedProvider can accept a comma separated string of IPs or 
> hostnames as the set of Cassandra seeds. hostnames are resolved via 
> InetAddress.getByName, which will only return the first IP associated with an 
> A,  or CNAME record.
> By changing to InetAddress.getAllByName, existing behavior is preserved, but 
> now Cassandra can discover multiple IP address per record, allowing seed 
> discovery by DNS to be a little easier.
> Some examples of improved workflows with this change include: 
>  * specify the DNS name of a headless service in Kubernetes which will 
> resolve to all IP addresses of pods within that service. 
>  * seed discovery for multi-region clusters via AWS route53, AzureDNS etc
>  * Other common DNS service discovery mechanisms.
> The only behavior this is likely to impact would be where users are relying 
> on the fact that getByName only returns a single IP address.
> I can't imagine any scenario where that is a sane choice. Even when that 
> choice has been made, it only impacts the first startup of Cassandra and 
> would not be on any critical path.
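
A minimal sketch of the {{getByName}} vs {{getAllByName}} difference described above (the class name is hypothetical; {{localhost}} stands in for a real seed DNS name):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class SeedResolution {
    public static void main(String[] args) throws UnknownHostException {
        // getByName keeps only the first address of the record...
        InetAddress first = InetAddress.getByName("localhost");
        // ...while getAllByName returns every A/AAAA entry, so a single
        // DNS name (e.g. a Kubernetes headless service) can enumerate
        // several seed nodes.
        InetAddress[] all = InetAddress.getAllByName("localhost");
        System.out.println(first.getHostAddress() + " is one of " + all.length + " address(es)");
    }
}
```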






[jira] [Commented] (CASSANDRA-14293) Speculative Retry Policy Should Support Specifying MIN/MAX of 2 PERCENTILE and FIXED Policies

2018-03-08 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16391077#comment-16391077
 ] 

Romain Hardouin commented on CASSANDRA-14293:
-

Useful feature, thanks!

Just a reminder (because the ticket doesn't mention it): NEWS.txt should 
mention that NONE is deprecated in favor of NEVER.

> Speculative Retry Policy Should Support Specifying MIN/MAX of 2 PERCENTILE 
> and FIXED Policies
> -
>
> Key: CASSANDRA-14293
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14293
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>Priority: Major
>
> Currently the Speculative Retry Policy takes a single string as a parameter, 
> this can be NONE, ALWAYS, 99PERCENTILE (PERCENTILE), 50MS (CUSTOM).
> The problem we have is when a single host goes into a bad state this drags up 
> the percentiles. This means if we are set to use p99 alone, we might not 
> speculate when we intended to to because the value at the specified 
> percentile has gone so high.
> As a fix we need to have support for something like min(99percentile,50ms)
> this means if the normal p99 for the table is <50ms, we will still speculate 
> at this value and not drag the happy path tail latencies up... but if the 
> p99th goes above what we know we should never exceed we use that instead.






[jira] [Created] (CASSANDRA-14218) Deprecate Throwables.propagate usage

2018-02-07 Thread Romain Hardouin (JIRA)
Romain Hardouin created CASSANDRA-14218:
---

 Summary: Deprecate Throwables.propagate usage
 Key: CASSANDRA-14218
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14218
 Project: Cassandra
  Issue Type: Improvement
  Components: Libraries
Reporter: Romain Hardouin


Google decided to deprecate the guava {{Throwables.propagate}} method:
 * [Why we deprecated 
Throwables.propagate|https://github.com/google/guava/wiki/Why-we-deprecated-Throwables.propagate]
 * [Documentation 
update|https://github.com/google/guava/wiki/ThrowablesExplained/_compare/92190ee7e37d334fa5fcdb6db8d0f43a2fdf02e1...226a3060445716d479981e606f589c99eee517ca]

We have 35 occurrences in trunk:
{code:java}
$ rg -c 'Throwables.propagate' *
src/java/org/apache/cassandra/streaming/StreamReader.java:1
src/java/org/apache/cassandra/streaming/StreamTransferTask.java:1
src/java/org/apache/cassandra/db/SnapshotDetailsTabularData.java:1
src/java/org/apache/cassandra/db/Memtable.java:1
src/java/org/apache/cassandra/db/ColumnFamilyStore.java:4
src/java/org/apache/cassandra/cache/ChunkCache.java:2
src/java/org/apache/cassandra/utils/WrappedRunnable.java:1
src/java/org/apache/cassandra/hints/Hint.java:1
src/java/org/apache/cassandra/tools/LoaderOptions.java:1
src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java:1
src/java/org/apache/cassandra/streaming/management/ProgressInfoCompositeData.java:3
src/java/org/apache/cassandra/streaming/management/StreamStateCompositeData.java:2
src/java/org/apache/cassandra/streaming/management/StreamSummaryCompositeData.java:2
src/java/org/apache/cassandra/streaming/compress/CompressedStreamReader.java:1
src/java/org/apache/cassandra/db/compaction/Scrubber.java:1
src/java/org/apache/cassandra/db/compaction/Verifier.java:1
src/java/org/apache/cassandra/db/compaction/CompactionHistoryTabularData.java:1
src/java/org/apache/cassandra/db/compaction/Upgrader.java:1
src/java/org/apache/cassandra/io/compress/CompressionMetadata.java:1
src/java/org/apache/cassandra/streaming/management/SessionCompleteEventCompositeData.java:2
src/java/org/apache/cassandra/io/sstable/SSTableSimpleWriter.java:1
src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java:1
src/java/org/apache/cassandra/streaming/management/SessionInfoCompositeData.java:3
src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java:1
{code}
I don't know if we want to remove all usages, but we should at least check the 
author's intention for each usage and refactor if needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (CASSANDRA-14198) Nodetool command to list out all the connected users

2018-01-30 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344787#comment-16344787
 ] 

Romain Hardouin commented on CASSANDRA-14198:
-

Duplicate of https://issues.apache.org/jira/browse/CASSANDRA-13665 ?

> Nodetool command to list out all the connected users
> 
>
> Key: CASSANDRA-14198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14198
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Major
>
> Create a nodetool command to list all the connected users at a given 
> time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (CASSANDRA-14102) Vault support for transparent data encryption

2018-01-10 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320368#comment-16320368
 ] 

Romain Hardouin commented on CASSANDRA-14102:
-

I meant EncryptionContext use. Thanks for your feedback! 

> Vault support for transparent data encryption
> -
>
> Key: CASSANDRA-14102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14102
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>  Labels: encryption
> Fix For: 4.x
>
>
> Transparent data encryption provided by CASSANDRA-9945 can currently be used 
> for commitlog and hints. The default {{KeyProvider}} implementation that we 
> ship allows using a local keystore for storing and retrieving keys. Thanks 
> to the pluggable handling of the {{KeyStore}} provider and basic Vault 
> related classes introduced in CASSANDRA-13971, a Vault based implementation 
> can be provided as {{KeyProvider}} as well. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-14102) Vault support for transparent data encryption

2018-01-10 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319903#comment-16319903
 ] 

Romain Hardouin commented on CASSANDRA-14102:
-

It's a nice feature. Out of curiosity, did you run any benchmarks to measure 
the performance impact?

> Vault support for transparent data encryption
> -
>
> Key: CASSANDRA-14102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14102
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>  Labels: encryption
> Fix For: 4.x
>
>
> Transparent data encryption provided by CASSANDRA-9945 can currently be used 
> for commitlog and hints. The default {{KeyProvider}} implementation that we 
> ship allows using a local keystore for storing and retrieving keys. Thanks 
> to the pluggable handling of the {{KeyStore}} provider and basic Vault 
> related classes introduced in CASSANDRA-13971, a Vault based implementation 
> can be provided as {{KeyProvider}} as well. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-14086) Cassandra cluster load balancing

2017-12-04 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277028#comment-16277028
 ] 

Romain Hardouin commented on CASSANDRA-14086:
-

Hi Denis,

The Cassandra mailing list or IRC is more appropriate for this kind of question: 
http://cassandra.apache.org/community/
I'm not a fan of having discrepancies between C* nodes, but if you have no other 
choice and no data yet in your cluster, then you can adjust 
{{num_tokens}} according to hardware specs (watch out for disk space!): 
https://github.com/apache/cassandra/blob/cassandra-3.0/conf/cassandra.yaml#L25
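Sizing {{num_tokens}} by hardware is just a proportional scale of a baseline. A back-of-the-envelope sketch (the 256 baseline matches the cassandra.yaml default linked above; the capacity ratio is an assumption you must estimate yourself):

```python
def num_tokens_for(node_capacity, baseline_capacity, baseline_tokens=256):
    """Scale the vnode count with relative hardware capacity.

    More tokens means a proportionally larger share of the ring, hence
    more data and more load on that node -- including disk usage.
    """
    return max(1, round(baseline_tokens * node_capacity / baseline_capacity))

# Two-socket nodes roughly 3x the capacity of the one-socket baseline.
assert num_tokens_for(3.0, 1.0) == 768
assert num_tokens_for(1.0, 1.0) == 256
```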

> Cassandra cluster load balancing
> 
>
> Key: CASSANDRA-14086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14086
> Project: Cassandra
>  Issue Type: Task
>  Components: Core
> Environment: I've got a Cassandra cluster with 46 nodes; nineteen of 
> them are one-socket servers and the rest are two-socket servers nearly 3 
> times more powerful than the one-socket ones. The cluster has a plain structure 
> with one DC, one rack, and SimpleSnitch on top of that.
>Reporter: Denis Horbunov
>
> Hi there! I've got a Cassandra cluster with 46 nodes; nineteen of them are 
> one-socket servers and the rest are two-socket servers nearly 3 times more 
> powerful than the one-socket ones. The cluster has a plain structure with one DC, 
> one rack, and SimpleSnitch on top of that. How could I balance the cluster so 
> that the two-socket servers take more load than the one-socket ones? I apologize 
> if I don't word my question correctly; this is my first ticket on Cassandra. 
> Thanks in advance!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Comment Edited] (CASSANDRA-13855) URL Seed provider

2017-09-08 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158248#comment-16158248
 ] 

Romain Hardouin edited comment on CASSANDRA-13855 at 9/8/17 7:45 AM:
-

[~appodictic] suggested a way to do that in 
https://issues.apache.org/jira/browse/CASSANDRA-12627


was (Author: rha):
[~appodictic] suggested to do that in 
https://issues.apache.org/jira/browse/CASSANDRA-12627

> URL Seed provider
> -
>
> Key: CASSANDRA-13855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Minor
>  Labels: lhf
>
> Seems like including a dead simple seed provider that can fetch from a URL, 1 
> line per seed, would be useful.
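The parsing side of the "1 line per seed" format is simple; a sketch of what such a provider would do with the fetched body (illustrative only, not the eventual implementation):

```python
def parse_seeds(body):
    """One seed address per line; ignore blank lines and stray whitespace."""
    return [line.strip() for line in body.splitlines() if line.strip()]

assert parse_seeds("10.0.0.1\n10.0.0.2\n\n") == ["10.0.0.1", "10.0.0.2"]
```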



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13855) URL Seed provider

2017-09-08 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158248#comment-16158248
 ] 

Romain Hardouin commented on CASSANDRA-13855:
-

[~appodictic] suggested to do that in 
https://issues.apache.org/jira/browse/CASSANDRA-12627

> URL Seed provider
> -
>
> Key: CASSANDRA-13855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Minor
>  Labels: lhf
>
> Seems like including a dead simple seed provider that can fetch from a URL, 1 
> line per seed, would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-11363) High Blocked NTR When Connecting

2017-08-30 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147483#comment-16147483
 ] 

Romain Hardouin commented on CASSANDRA-11363:
-

[~sadagopan88] When using Open Source Apache Cassandra you have to specify it 
in {{cassandra-env.sh}}:
{code}
JVM_OPTS="$JVM_OPTS -Dcassandra.max_queued_native_transport_requests=1024"
{code}
I don't know if DSE sets this to something different from the default value 
(128). You can ask DataStax.
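The effect of that bounded queue can be modelled with a toy example (illustration only: the real NTR pool blocks the producer thread rather than rejecting outright, and the real default bound is 128):

```python
from queue import Full, Queue

ntr_queue = Queue(maxsize=2)  # stand-in for max_queued_native_transport_requests

all_time_blocked = 0          # analogous to "All time blocked" in nodetool tpstats
for request in range(5):
    try:
        ntr_queue.put_nowait(request)
    except Full:
        all_time_blocked += 1  # queue is full, the request cannot be enqueued

assert all_time_blocked == 3   # 2 requests queued, 3 blocked
```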

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: T Jake Luciani
> Fix For: 2.1.16, 2.2.8, 3.0.10, 3.10
>
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack, 
> max_queued_ntr_property.txt, thread-queue-2.1.txt
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there are between 600 and 800 client connections on each machine, and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CASSANDRA-13785) Compaction fails for SSTables with large number of keys

2017-08-25 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141336#comment-16141336
 ] 

Romain Hardouin commented on CASSANDRA-13785:
-

Thanks!

> Compaction fails for SSTables with large number of keys
> ---
>
> Key: CASSANDRA-13785
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13785
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>
> Every few minutes there are "LEAK DETECTED" messages in the log:
> {noformat}
> ERROR [Reference-Reaper:1] 2017-08-18 17:18:40,357 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3ed22d7) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@1022568824:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> ERROR [Reference-Reaper:1] 2017-08-18 17:20:49,693 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6470405b) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@97898152:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> ERROR [Reference-Reaper:1] 2017-08-18 17:22:38,519 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6fc4af5f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@1247404854:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> {noformat}
> Debugged the issue and found it's triggered by failed compactions: if the 
> compacted SSTable has more than ~54M ({{Integer.MAX_VALUE / 40}}) keys, it will 
> fail to create the IndexSummary: 
> [IndexSummary:84|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummary.java#L84].
> Compaction retries every few minutes and keeps failing.
> The root cause is that while [creating 
> SafeMemoryWriter|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L112]
>  with {{> Integer.MAX_VALUE}} space, it returns the trailing 
> {{Integer.MAX_VALUE}} space 
> [SafeMemoryWriter.java:83|https://github.com/apache/cassandra/blob/6a1b1f26b7174e8c9bf86a96514ab626ce2a4117/src/java/org/apache/cassandra/io/util/SafeMemoryWriter.java#L83],
>  which makes the first 
> [entries.length()|https://github.com/apache/cassandra/blob/6a1b1f26b7174e8c9bf86a96514ab626ce2a4117/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L173]
>  not 0. So the assert fails here: 
> [IndexSummary:84|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummary.java#L84]
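The overflow boundary described above can be checked with plain arithmetic (a sketch; treating the divisor 40 as a per-entry byte footprint is my assumption, not something the ticket states):

```python
INT_MAX = 2**31 - 1            # Java's Integer.MAX_VALUE
ENTRY_BYTES = 40               # assumed per-entry footprint implied by the divisor
MAX_NUM_ENTRIES = INT_MAX // ENTRY_BYTES   # about 53.7 million keys

def summary_bytes(num_keys):
    """Space the summary would need at ENTRY_BYTES per key."""
    return num_keys * ENTRY_BYTES

# At the cap, the summary still fits in a 32-bit-addressable region.
assert summary_bytes(MAX_NUM_ENTRIES) <= INT_MAX
# One more key pushes the required space past Integer.MAX_VALUE.
assert summary_bytes(MAX_NUM_ENTRIES + 1) > INT_MAX
```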



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13785) Compaction fails for SSTables with large number of keys

2017-08-24 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139706#comment-16139706
 ] 

Romain Hardouin commented on CASSANDRA-13785:
-

{code}
private static final int MAX_NUM_ENTRIES = Integer.MAX_VALUE / 40;
{code}

[~jay.zhuang] How did you find {{40}}? Maybe it would be wise to give this number 
a named constant?

> Compaction fails for SSTables with large number of keys
> ---
>
> Key: CASSANDRA-13785
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13785
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>
> Every few minutes there are "LEAK DETECTED" messages in the log:
> {noformat}
> ERROR [Reference-Reaper:1] 2017-08-18 17:18:40,357 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3ed22d7) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@1022568824:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> ERROR [Reference-Reaper:1] 2017-08-18 17:20:49,693 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6470405b) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@97898152:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> ERROR [Reference-Reaper:1] 2017-08-18 17:22:38,519 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6fc4af5f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@1247404854:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> {noformat}
> Debugged the issue and found it's triggered by failed compactions: if the 
> compacted SSTable has more than ~54M ({{Integer.MAX_VALUE / 40}}) keys, it will 
> fail to create the IndexSummary: 
> [IndexSummary:84|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummary.java#L84].
> Compaction retries every few minutes and keeps failing.
> The root cause is that while [creating 
> SafeMemoryWriter|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L112]
>  with {{> Integer.MAX_VALUE}} space, it returns the trailing 
> {{Integer.MAX_VALUE}} space 
> [SafeMemoryWriter.java:83|https://github.com/apache/cassandra/blob/6a1b1f26b7174e8c9bf86a96514ab626ce2a4117/src/java/org/apache/cassandra/io/util/SafeMemoryWriter.java#L83],
>  which makes the first 
> [entries.length()|https://github.com/apache/cassandra/blob/6a1b1f26b7174e8c9bf86a96514ab626ce2a4117/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L173]
>  not 0. So the assert fails here: 
> [IndexSummary:84|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummary.java#L84]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Comment Edited] (CASSANDRA-13779) issue with pycharm datastax cassandra driver

2017-08-19 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134107#comment-16134107
 ] 

Romain Hardouin edited comment on CASSANDRA-13779 at 8/19/17 4:37 PM:
--

Hi, this error happens when there is a mismatch between Cassandra version and 
Driver version. This is not related to PyCharm. Be sure to check [Python 
DataStax driver compatibility 
matrix|https://docs.datastax.com/en/developer/driver-matrix/doc/pythonDrivers.html],
 [Java DataStax driver compatibility 
matrix|https://docs.datastax.com/en/developer/java-driver/3.3/manual/native_protocol/#compatibility-matrix],
 etc.

(Note that DataStax Driver has its own bug tracker for 
[Python|https://datastax-oss.atlassian.net/projects/PYTHON/issues/], 
[Java|https://datastax-oss.atlassian.net/projects/JAVA/summary], etc.)


was (Author: rha):
Hi, this error happens when there is a mismatch between Cassandra version and 
Driver version. This is not related to PyCharm. Be sure to check [DataStax 
driver compatibility 
matrix|https://docs.datastax.com/en/developer/java-driver/3.3/manual/native_protocol/#compatibility-matrix].

(Note that DataStax Driver has its own bug tracker: 
https://datastax-oss.atlassian.net/projects/JAVA/summary )

> issue with pycharm datastax cassandra driver
> 
>
> Key: CASSANDRA-13779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13779
> Project: Cassandra
>  Issue Type: Bug
>Reporter: venkatesulu
>Priority: Minor
>
> [Server error] message="io.netty.handler.codec.DecoderException: 
> org.apache.cassandra.transport.ProtocolException: Invalid or unsupported 
> protocol version: 4"',)})



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13779) issue with pycharm datastax cassandra driver

2017-08-19 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134107#comment-16134107
 ] 

Romain Hardouin commented on CASSANDRA-13779:
-

Hi, this error happens when there is a mismatch between Cassandra version and 
Driver version. This is not related to PyCharm. Be sure to check [DataStax 
driver compatibility 
matrix|https://docs.datastax.com/en/developer/java-driver/3.3/manual/native_protocol/#compatibility-matrix].

(Note that DataStax Driver has its own bug tracker: 
https://datastax-oss.atlassian.net/projects/JAVA/summary )

> issue with pycharm datastax cassandra driver
> 
>
> Key: CASSANDRA-13779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13779
> Project: Cassandra
>  Issue Type: Bug
>Reporter: venkatesulu
>Priority: Minor
>
> [Server error] message="io.netty.handler.codec.DecoderException: 
> org.apache.cassandra.transport.ProtocolException: Invalid or unsupported 
> protocol version: 4"',)})



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13778) Enable Direct I/O for non-system SStables operations

2017-08-19 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134101#comment-16134101
 ] 

Romain Hardouin commented on CASSANDRA-13778:
-

Hi, could you share your benchmark results and a patch? Thanks.

> Enable Direct I/O for non-system SStables operations
> 
>
> Key: CASSANDRA-13778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13778
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Carlos Abad
>  Labels: performance
> Fix For: 4.x
>
>
> Following the lead of other databases (like RocksDB), enable Cassandra to 
> bypass the Linux page cache by using Direct I/O. By enabling this 
> functionality in the 4 main data paths (read/write, un/compressed), our 
> internal testing at Intel shows that using Direct I/O increases performance 
> (latency and throughput) considerably.
> In this implementation not all disk accesses bypass the OS page cache; only 
> those targeting non-system SSTable data files do. Disk accesses to 
> SSTable metadata files (index, CRC, system tables, etc.) still benefit from 
> the OS page cache.
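Direct I/O brings alignment constraints that the page-cache path hides: offsets, lengths, and buffer addresses must be aligned to the device block size. A sketch of the length rounding this implies (4 KiB alignment is an assumption; the real requirement depends on the filesystem and device):

```python
ALIGN = 4096  # assumed O_DIRECT alignment granularity (4 KiB)

def aligned_length(nbytes, align=ALIGN):
    """Round a read/write length up to the alignment direct I/O demands."""
    return (nbytes + align - 1) // align * align

assert aligned_length(1) == 4096      # a tiny read still costs a full block
assert aligned_length(4096) == 4096   # already aligned
assert aligned_length(5000) == 8192   # rounded up to the next block
```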



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-08-11 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: (was: 12758-trunk.patch)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean exposing this value, which would 
> allow one to:
>  
> 1. Be sure the value has been set
> 2. Plot the value in a monitoring application to correlate it with 
> other graphs when we make changes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-08-11 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: (was: 12758-3.0.patch)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean exposing this value, which would 
> allow one to:
>  
> 1. Be sure the value has been set
> 2. Plot the value in a monitoring application to correlate it with 
> other graphs when we make changes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-08-11 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123323#comment-16123323
 ] 

Romain Hardouin commented on CASSANDRA-12758:
-

Rebased on trunk, build successful 
https://circleci.com/gh/rhardouin/cassandra/18
Let's include it in trunk only; anyway, it's trivial to backport if someone needs 
it in production.

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean exposing this value, which would 
> allow one to:
>  
> 1. Be sure the value has been set
> 2. Plot the value in a monitoring application to correlate it with 
> other graphs when we make changes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-08-09 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Summary: Expose tasks queue length via JMX  (was: Expose NTR tasks queue 
length via JMX)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean exposing this value, which would 
> allow one to:
>  
> 1. Be sure the value has been set
> 2. Plot the value in a monitoring application to correlate it with 
> other graphs when we make changes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Updated] (CASSANDRA-12758) Expose NTR tasks queue length via JMX

2017-08-09 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Summary: Expose NTR tasks queue length via JMX  (was: Expose tasks queue 
length via JMX)

> Expose NTR tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean exposing this value, which would 
> allow one to:
>  
> 1. Be sure the value has been set
> 2. Plot the value in a monitoring application to correlate it with 
> other graphs when we make changes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Comment Edited] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-20 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094756#comment-16094756
 ] 

Romain Hardouin edited comment on CASSANDRA-13699 at 7/20/17 4:30 PM:
--

I see random failures/errors in CircleCI. 
EDIT: https://circleci.com/gh/rhardouin/cassandra/9 is successful.


was (Author: rha):
I see random failures/errors in CircleCI

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Commented] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-20 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094756#comment-16094756
 ] 

Romain Hardouin commented on CASSANDRA-13699:
-

I see random failures/errors in CircleCI

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-20 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Attachment: 13699-trunk.txt

Added CHANGES.txt entry and updated commit message

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-20 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Attachment: (was: 13699-trunk.txt)

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-20 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Attachment: (was: 13699-trunk.txt)

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-20 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Attachment: 13699-trunk.txt

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Comment Edited] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-19 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093870#comment-16093870
 ] 

Romain Hardouin edited comment on CASSANDRA-13699 at 7/19/17 10:01 PM:
---

Thanks for the review. I fixed coding style while you wrote your comment, it 
should be correct. I triggered a build here 
https://circleci.com/gh/rhardouin/cassandra/3


was (Author: rha):
Thanks for the review. I fixed coding style while you wrote your comment, it 
should be correct. I trigged a build here 
https://circleci.com/gh/rhardouin/cassandra/3

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Commented] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-19 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093870#comment-16093870
 ] 

Romain Hardouin commented on CASSANDRA-13699:
-

Thanks for the review. I fixed coding style while you wrote your comment, it 
should be correct. I trigged a build here 
https://circleci.com/gh/rhardouin/cassandra/3

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-19 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Flags:   (was: Patch)

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-19 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Attachment: 13699-trunk.txt

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-19 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Attachment: (was: 13699-trunk.txt)

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-19 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Attachment: 13699-trunk.txt

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Updated] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-19 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13699:

Attachment: (was: 
0001-Allow-to-set-batch_size_warn_threshold_in_kb-via-JMX.patch)

> Allow to set batch_size_warn_threshold_in_kb via JMX
> 
>
> Key: CASSANDRA-13699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13699-trunk.txt
>
>
> We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
> {{batch_size_warn_threshold_in_kb}}. 
> The patch allows setting it dynamically and adds an INFO log for both
> thresholds.






[jira] [Created] (CASSANDRA-13699) Allow to set batch_size_warn_threshold_in_kb via JMX

2017-07-19 Thread Romain Hardouin (JIRA)
Romain Hardouin created CASSANDRA-13699:
---

 Summary: Allow to set batch_size_warn_threshold_in_kb via JMX
 Key: CASSANDRA-13699
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13699
 Project: Cassandra
  Issue Type: Improvement
Reporter: Romain Hardouin
Priority: Minor
 Fix For: 4.x
 Attachments: 
0001-Allow-to-set-batch_size_warn_threshold_in_kb-via-JMX.patch

We can set {{batch_size_fail_threshold_in_kb}} via JMX but not 
{{batch_size_warn_threshold_in_kb}}. 

The patch allows setting it dynamically and adds an INFO log for both thresholds.






[jira] [Comment Edited] (CASSANDRA-13625) Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9

2017-06-21 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057768#comment-16057768
 ] 

Romain Hardouin edited comment on CASSANDRA-13625 at 6/21/17 4:17 PM:
--

Ok! I understood {{>= 2.2.9}} and not {{>= 2.2.9, < 3}}


was (Author: rha):
Ok! I understood >= 2.2.9 

> Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9
> --
>
> Key: CASSANDRA-13625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13625
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joaquin Casares
>  Labels: lhf
> Fix For: 2.2.10
>
>
> {{max_value_size_in_mb}} is currently in the 2.2.9 cassandra.yaml, but the
> codebase does not reference the config anywhere:
> https://github.com/apache/cassandra/blob/cassandra-2.2.9/conf/cassandra.yaml#L888-L891
> CASSANDRA-9530, which introduced {{max_value_size_in_mb}}, has its Fix
> Version/s marked as 3.0.7, 3.7, and 3.8.
> Let's remove {{max_value_size_in_mb}} from cassandra.yaml.
> {noformat}
> ~/repos/cassandra[(HEAD detached at cassandra-2.2.9)] (joaquin)$ grep -r
> max_value_size_in_mb .
> conf/cassandra.yaml:# max_value_size_in_mb: 256
> {noformat}






[jira] [Commented] (CASSANDRA-13625) Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9

2017-06-21 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057768#comment-16057768
 ] 

Romain Hardouin commented on CASSANDRA-13625:
-

Ok! I understood >= 2.2.9 

> Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9
> --
>
> Key: CASSANDRA-13625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13625
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joaquin Casares
>  Labels: lhf
> Fix For: 2.2.10
>
>
> {{max_value_size_in_mb}} is currently in the 2.2.9 cassandra.yaml, but the
> codebase does not reference the config anywhere:
> https://github.com/apache/cassandra/blob/cassandra-2.2.9/conf/cassandra.yaml#L888-L891
> CASSANDRA-9530, which introduced {{max_value_size_in_mb}}, has its Fix
> Version/s marked as 3.0.7, 3.7, and 3.8.
> Let's remove {{max_value_size_in_mb}} from cassandra.yaml.
> {noformat}
> ~/repos/cassandra[(HEAD detached at cassandra-2.2.9)] (joaquin)$ grep -r
> max_value_size_in_mb .
> conf/cassandra.yaml:# max_value_size_in_mb: 256
> {noformat}






[jira] [Commented] (CASSANDRA-13625) Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9

2017-06-21 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057223#comment-16057223
 ] 

Romain Hardouin commented on CASSANDRA-13625:
-

Hi, I don't understand why you want to remove this setting; it's still used
in trunk:

{code}
$ grep -IRn getMaxValueSize
test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java:86:
maxValueSize = DatabaseDescriptor.getMaxValueSize();
test/unit/org/apache/cassandra/io/sstable/SSTableWriterTestBase.java:84:
maxValueSize = DatabaseDescriptor.getMaxValueSize();
test/unit/org/apache/cassandra/io/sstable/SSTableCorruptionDetectionTest.java:91:
maxValueSize = DatabaseDescriptor.getMaxValueSize();
src/java/org/apache/cassandra/db/ClusteringPrefix.java:377: 
   : (isEmpty(header, offset) ? ByteBufferUtil.EMPTY_BYTE_BUFFER : 
types.get(offset).readValue(in, DatabaseDescriptor.getMaxValueSize()));
src/java/org/apache/cassandra/db/ClusteringPrefix.java:531: 
 : (Serializer.isEmpty(nextHeader, i) ? ByteBufferUtil.EMPTY_BYTE_BUFFER : 
serializationHeader.clusteringTypes().get(i).readValue(in, 
DatabaseDescriptor.getMaxValueSize()));
src/java/org/apache/cassandra/db/rows/Cell.java:246:value = 
header.getType(column).readValue(in, DatabaseDescriptor.getMaxValueSize());
src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java:1056:  
  DecoratedKey key = 
metadata.partitioner.decorateKey(metadata.partitionKeyType.readValue(in, 
DatabaseDescriptor.getMaxValueSize()));
src/java/org/apache/cassandra/config/DatabaseDescriptor.java:1054:public 
static int getMaxValueSize()
{code}

{{getMaxValueSize()}} occurrences: 
https://github.com/apache/cassandra/search?q=getMaxValueSize

> Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9
> --
>
> Key: CASSANDRA-13625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13625
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joaquin Casares
>  Labels: lhf
> Fix For: 2.2.10
>
>
> {{max_value_size_in_mb}} is currently in the 2.2.9 cassandra.yaml, but the
> codebase does not reference the config anywhere:
> https://github.com/apache/cassandra/blob/cassandra-2.2.9/conf/cassandra.yaml#L888-L891
> CASSANDRA-9530, which introduced {{max_value_size_in_mb}}, has its Fix
> Version/s marked as 3.0.7, 3.7, and 3.8.
> Let's remove {{max_value_size_in_mb}} from cassandra.yaml.
> {noformat}
> ~/repos/cassandra[(HEAD detached at cassandra-2.2.9)] (joaquin)$ grep -r
> max_value_size_in_mb .
> conf/cassandra.yaml:# max_value_size_in_mb: 256
> {noformat}






[jira] [Commented] (CASSANDRA-13610) Add the ability to only scrub one file in Cassandra

2017-06-16 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051534#comment-16051534
 ] 

Romain Hardouin commented on CASSANDRA-13610:
-

We encounter the same behavior. I have to run {{nodetool scrub --skip-corrupted}}
to fix it.

> Add the ability to only scrub one file in Cassandra
> ---
>
> Key: CASSANDRA-13610
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13610
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 4.x
>
>
> In our production clusters, we sometimes see several corrupted files on C*
> servers, and we use `nodetool scrub` to rebuild the tables.
> However, out of the 1000+ sstables, usually just a few (fewer than 10) are
> corrupted. It would be useful to enhance `nodetool scrub` to scrub only
> certain sstable files, which would be much more efficient than rebuilding the
> whole table.
> One engineer on my team is working on this.






[jira] [Commented] (CASSANDRA-13572) describecluster shows sub-snitch for DynamicEndpointSnitch

2017-06-05 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036653#comment-16036653
 ] 

Romain Hardouin commented on CASSANDRA-13572:
-

Duplicate of CASSANDRA-13528

> describecluster shows sub-snitch for DynamicEndpointSnitch
> --
>
> Key: CASSANDRA-13572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13572
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: nodetool
> Fix For: 4.0
>
>
> {{nodetool describecluster}} only shows the first-level snitch name; if
> DynamicSnitch is enabled, it doesn't give the sub-snitch name, which is also
> very useful. For example:
> {noformat}
> Cluster Information:
> Name: Test Cluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> 59a1610b-0384-337c-a2c5-9c8efaba12be: [127.0.0.1]
> {noformat}
> It would be better to show the sub-snitch name if it's DynamicSnitch.






[jira] [Comment Edited] (CASSANDRA-13528) nodetool describeclusters shows different snitch info as to what is configured.

2017-05-14 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009739#comment-16009739
 ] 

Romain Hardouin edited comment on CASSANDRA-13528 at 5/14/17 1:30 PM:
--

Actually {{DynamicEndpointSnitch}} is a wrapper, so it wraps {{EC2Snitch}}
here. When DES is disabled, {{nodetool describecluster}} gives you what you
expect. I agree it's neither very user friendly nor very useful.

Something like this would be more helpful:
{code}
Snitch: 
DynamicEndpointSnitch: (enabled|disabled)
{code}





was (Author: rha):
Actually {{DynamicEndpointSnitch}} is a wrapper, so it wraps EC2Snitch here. 
When DES is disabled {{nodetool describecluster}} gives you what you expect.  I 
agree it's not very user friendly nor very useful.

Something like this would be more helpful:
{code}
Snitch: 
DynamicEndpointSnitch: (enabled|disabled)
{code}




> nodetool describeclusters shows different snitch info as to what is 
> configured.
> ---
>
> Key: CASSANDRA-13528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13528
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paul Villacorta
>Priority: Minor
> Attachments: Screen Shot 2017-05-12 at 14.15.04.png
>
>
> I couldn't find any similar issue, so I'm creating one.
> I noticed that nodetool describecluster shows different snitch information
> from what is set in the configuration file.
> My setup is hosted in AWS and I am using Ec2Snitch.
> cassandra@cassandra3$ nodetool describecluster
> Cluster Information:
>   Name: testv3
>   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>   fc6e8656-ee7a-341b-9782-b569d1fd1a51: 
> [10.0.3.61,10.0.3.62,10.0.3.63]
> I checked via MX4J and it shows the same. I haven't verified with a
> different snitch, though. I am using 2.2.6 and above, and 3.0.x.






[jira] [Commented] (CASSANDRA-13528) nodetool describeclusters shows different snitch info as to what is configured.

2017-05-14 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009739#comment-16009739
 ] 

Romain Hardouin commented on CASSANDRA-13528:
-

Actually {{DynamicEndpointSnitch}} is a wrapper, so it wraps EC2Snitch here.
When DES is disabled, {{nodetool describecluster}} gives you what you expect. I
agree it's neither very user friendly nor very useful.

Something like this would be more helpful:
{code}
Snitch: 
DynamicEndpointSnitch: (enabled|disabled)
{code}
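The wrapper relationship can be illustrated with a toy model. This is purely an illustration of the comment above: all class and method names here (Snitch, DynamicSnitchWrapper, displayName) are invented, not Cassandra's real internals; it only shows how the proposed output, sub-snitch plus wrapper state, could be produced.

```java
// Toy model: the dynamic snitch wraps a concrete snitch, and the more helpful
// describecluster output reports both the sub-snitch and the wrapper's state.
interface Snitch
{
    String displayName();
}

class SimpleSnitch implements Snitch
{
    private final String name;
    SimpleSnitch(String name) { this.name = name; }
    public String displayName() { return name; }
}

class DynamicSnitchWrapper implements Snitch
{
    private final Snitch subsnitch;
    private final boolean enabled;

    DynamicSnitchWrapper(Snitch subsnitch, boolean enabled)
    {
        this.subsnitch = subsnitch;
        this.enabled = enabled;
    }

    // Surfaces the wrapped snitch instead of hiding it behind the wrapper name.
    public String displayName()
    {
        return subsnitch.displayName()
               + "\nDynamicEndpointSnitch: " + (enabled ? "enabled" : "disabled");
    }
}

public class SnitchDemo
{
    public static void main(String[] args)
    {
        System.out.println(new DynamicSnitchWrapper(new SimpleSnitch("Ec2Snitch"), true).displayName());
    }
}
```

With the dynamic snitch enabled this prints the sub-snitch name on one line and the wrapper state on the next, which is the shape of output proposed in the comment.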




> nodetool describeclusters shows different snitch info as to what is 
> configured.
> ---
>
> Key: CASSANDRA-13528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13528
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paul Villacorta
>Priority: Minor
> Attachments: Screen Shot 2017-05-12 at 14.15.04.png
>
>
> I couldn't find any similar issue, so I'm creating one.
> I noticed that nodetool describecluster shows different snitch information
> from what is set in the configuration file.
> My setup is hosted in AWS and I am using Ec2Snitch.
> cassandra@cassandra3$ nodetool describecluster
> Cluster Information:
>   Name: testv3
>   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>   fc6e8656-ee7a-341b-9782-b569d1fd1a51: 
> [10.0.3.61,10.0.3.62,10.0.3.63]
> I checked via MX4J and it shows the same. I haven't verified with a
> different snitch, though. I am using 2.2.6 and above, and 3.0.x.






[jira] [Commented] (CASSANDRA-13494) Check at what time cassandra was started on a node

2017-05-05 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15998714#comment-15998714
 ] 

Romain Hardouin commented on CASSANDRA-13494:
-

You can also use {{nodetool info}} to check the uptime.

> Check at what time cassandra was started on a node
> --
>
> Key: CASSANDRA-13494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13494
> Project: Cassandra
>  Issue Type: Task
>Reporter: Tamar Nirenberg
>Priority: Minor
>
> Hi,
> I am quite new to Cassandra, and I was wondering how I can check if and when
> Cassandra was started on a specific node.
> Are there certain words I should look for in the log file?
> Or is there another tool to check it?
> Thanks,
> Tamar






[jira] [Commented] (CASSANDRA-11348) Compaction Filter in Cassandra

2017-05-02 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15993091#comment-15993091
 ] 

Romain Hardouin commented on CASSANDRA-11348:
-

I have needed this feature before; it is interesting and useful.

The concern when purging data (i.e., physically deleting it) is repair (read
repair or maintenance repair): we must use tombstones to avoid data
resurrection. One of my use cases is to purge counters that are older than a
specific timestamp.

I also imagined a filter that puts tombstones on data and produces backup
sstables corresponding to the tombstoned data. This would allow easy
restoration in case of problems.

Another use case would be to plug in a small filter that just logs keys when a
partition is larger than a specific size but smaller than
{{compaction_large_partition_warning_threshold_mb}}. It would make it easy to
find outliers.
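As a hypothetical sketch of what such a hook could look like (all names are invented and the data model is deliberately toy-sized, not Cassandra's real internals), a filter can purge old counters by tombstoning them rather than deleting them outright, which keeps repair from resurrecting the data:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a compaction filter modeled on RocksDB's hook: each cell seen
// during compaction is offered to a filter that can keep it or replace it
// with a tombstone. Names (Decision, CompactionFilter, Cell) are invented.
public class CompactionFilterSketch
{
    enum Decision { KEEP, TOMBSTONE }

    interface CompactionFilter
    {
        Decision filter(String key, long writeTimeMillis);
    }

    static class Cell
    {
        final String key;
        final long writeTimeMillis;
        final boolean tombstone;

        Cell(String key, long writeTimeMillis, boolean tombstone)
        {
            this.key = key;
            this.writeTimeMillis = writeTimeMillis;
            this.tombstone = tombstone;
        }
    }

    // Purged cells become tombstones instead of disappearing, so repair
    // cannot copy the old value back from another replica.
    static List<Cell> compact(List<Cell> input, CompactionFilter f)
    {
        List<Cell> out = new ArrayList<>();
        for (Cell c : input)
        {
            boolean purge = f.filter(c.key, c.writeTimeMillis) == Decision.TOMBSTONE;
            out.add(purge ? new Cell(c.key, c.writeTimeMillis, true) : c);
        }
        return out;
    }

    // Use case from the comment: purge counters older than a cutoff timestamp.
    static List<Cell> demo()
    {
        long cutoff = 1_000L;
        CompactionFilter purgeOld =
            (key, ts) -> ts < cutoff ? Decision.TOMBSTONE : Decision.KEEP;
        List<Cell> cells = new ArrayList<>();
        cells.add(new Cell("old-counter", 500L, false));
        cells.add(new Cell("fresh-counter", 2_000L, false));
        return compact(cells, purgeOld);
    }

    public static void main(String[] args)
    {
        for (Cell c : demo())
            System.out.println(c.key + " tombstone=" + c.tombstone);
    }
}
```

The logging-only use case would be an even smaller filter that always returns KEEP and just records oversized keys as a side effect.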

> Compaction Filter in Cassandra
> --
>
> Key: CASSANDRA-11348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11348
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 4.x
>
>
> RocksDB has a feature called "Compaction Filter" that allows the application
> to modify/delete a key-value pair during background compaction.
> https://github.com/facebook/rocksdb/blob/v4.1/include/rocksdb/options.h#L201-L226
> It could be valuable to implement this feature in C* as well.






[jira] [Commented] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2017-05-02 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15992489#comment-15992489
 ] 

Romain Hardouin commented on CASSANDRA-11720:
-

It would be nice to have that in nodetool. I use jmxterm to do that currently:
{code}
echo "set -b org.apache.cassandra.db:type=StorageProxy MaxHintWindow " | 
java -jar /path/to/jmxterm.jar -l 127.0.0.1:7199 -u ... -p ...
{code}
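The same call can be made with the JDK's javax.management API alone. The MBean name and the MaxHintWindow attribute are taken from the jmxterm command above; authentication is omitted for brevity, and the connection is only attempted when the method is invoked against a live node.

```java
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sets StorageProxy's MaxHintWindow attribute over JMX, mirroring the
// jmxterm one-liner above.
public class SetMaxHintWindow
{
    static ObjectName storageProxy() throws Exception
    {
        return new ObjectName("org.apache.cassandra.db:type=StorageProxy");
    }

    // Connects to a live node; call only with a reachable JMX endpoint.
    static void set(String host, int port, int valueInMs) throws Exception
    {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        try (JMXConnector c = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = c.getMBeanServerConnection();
            mbs.setAttribute(storageProxy(), new Attribute("MaxHintWindow", valueInMs));
        }
    }

    public static void main(String[] args) throws Exception
    {
        // Offline check only: the MBean coordinates parse correctly.
        System.out.println(storageProxy());
    }
}
```

A nodetool subcommand would wrap exactly this kind of call, which is what the attached patch proposes.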

Some comments:

In {{SetMaxHintWindow}} it should be {{value_in_ms}}:
{code}
usage = ""
{code}

In {{GetMaxHintWindow}} I don't understand "of the given type" here:
{code}
@Command(name = "getmaxhintwindow", description = "Print the max hint window of 
the given type in ms")
{code}
Maybe just {{Print max hint window in ms}}?
Thanks

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Assignee: Hiroyuki Nishi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change
> doesn't have to be persisted anywhere. I'm thinking of something similar to
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.






[jira] [Commented] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980947#comment-15980947
 ] 

Romain Hardouin commented on CASSANDRA-12758:
-

I removed the 2.x patches - it was six months ago - and I updated the patches 
to apply cleanly on 3.0 and trunk. 
I didn't put an entry in trunk's CHANGES.txt because I don't know where the 
line is supposed to be added.

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes
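Exposing such a value takes only a few lines with the JDK's platform MBean server. The sketch below uses made-up names (org.example, NativeTransportMXBean, a fixed queue length), not the patch's actual classes:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class QueueLengthExample {
    // MXBean interface: the getter becomes a readable JMX attribute.
    public interface NativeTransportMXBean {
        int getMaxQueuedRequests();
    }

    public static class NativeTransport implements NativeTransportMXBean {
        private final int maxQueued;
        public NativeTransport(int maxQueued) { this.maxQueued = maxQueued; }
        public int getMaxQueuedRequests() { return maxQueued; }
    }

    // Register the MBean (idempotently) and read the attribute back,
    // exactly as a monitoring agent would over JMX.
    public static int registerAndRead() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("org.example:type=NativeTransport");
        if (!server.isRegistered(name))
            server.registerMBean(new NativeTransport(1024), name);
        return (Integer) server.getAttribute(name, "MaxQueuedRequests");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(registerAndRead()); // prints 1024
    }
}
```

Once registered, jconsole or jmxterm can read the same attribute remotely, which is the point of the ticket: the configured queue length becomes observable.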





[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: (was: 12758-2.2.patch)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes





[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: (was: 12758-2.1.patch)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes





[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Fix Version/s: (was: 2.2.x)
   (was: 2.1.x)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes





[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: 12758-trunk.patch
12758-3.0.patch

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-2.1.patch, 12758-2.2.patch, 12758-3.0.patch, 
> 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes





[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: (was: 12758-trunk.patch)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-2.1.patch, 12758-2.2.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes





[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: (was: 0001-Add-MBean-to-monitor-max-queued-tasks_2.2.patch)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-2.1.patch, 12758-2.2.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes





[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: (was: 
0001-Add-MBean-to-monitor-max-queued-tasks_trunk.patch)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-2.1.patch, 12758-2.2.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes





[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-04-24 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Attachment: (was: 0001-Add-MBean-to-monitor-max-queued-tasks_2.1.patch)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: 0001-Add-MBean-to-monitor-max-queued-tasks_2.2.patch, 
> 0001-Add-MBean-to-monitor-max-queued-tasks_trunk.patch, 12758-2.1.patch, 
> 12758-2.2.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean which exposes this value, which would 
> allow one to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes





[jira] [Commented] (CASSANDRA-13315) Semantically meaningful Consistency Levels

2017-03-10 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905471#comment-15905471
 ] 

Romain Hardouin commented on CASSANDRA-13315:
-

If we keep all current CLs and it can help newcomers, then why not. 
That said, it won't prevent the most common mistake I've seen: RF=2 with 
LOCAL_QUORUM. "What? My setup is not fault tolerant?"

> Semantically meaningful Consistency Levels
> --
>
> Key: CASSANDRA-13315
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13315
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ryan Svihla
>
> New users really struggle with consistency levels and fall into a large number 
> of tarpits trying to decide on the right one.
> 1. There are a LOT of consistency levels and it's up to the end user to 
> reason about which combinations are valid and whether they really do what the 
> user intends. Is there any reason why writing at ALL and reading at CL TWO is 
> better than reading at CL ONE? 
> 2. They require a good understanding of failure modes to do well. It's not 
> uncommon for people to use CL one and wonder why their data is missing.
> 3. The serial consistency level "bucket" is confusing to even write about and 
> easy to get wrong even for experienced users.
> So I propose the following steps (EDIT based on Jonathan's comment):
> 1. Remove the separate "serial consistency" bucket and just have all 
> consistency levels in one bucket to set; conditions are still required for 
> SERIAL/LOCAL_SERIAL
> 2. Add 3 new consistency levels pointing to existing ones that convey 
> intent much more cleanly:
> EDIT: better names based on comments.
>* EVENTUALLY = LOCAL_ONE reads and writes
>* STRONG = LOCAL_QUORUM reads and writes
>* SERIAL = LOCAL_SERIAL reads and writes (though a ton of folks don't know 
> what SERIAL means, which is why I suggested TRANSACTIONAL even if it's not as 
> correct as I'd like)
> For global versions of these I propose keeping the old ones around; they're 
> rarely used in the field except by accident or by particularly opinionated and 
> advanced users.
> Drivers should put the new consistency levels in a new package and docs 
> should be updated to suggest their use. Likewise setting default CL should 
> only provide those three settings and apply them to reads and writes at the 
> same time.
> I'm going to suggest CQLSH should default to HIGHLY_CONSISTENT. New sysadmins 
> get surprised by this frequently and I can think of a couple of very major 
> escalations because people were confused about what the default behavior was.
> The benefit of all this change is that we greatly shrink the surface area one 
> has to understand when learning Cassandra, and we have far fewer bad initial 
> experiences and surprises. New users will be able to wrap their brains around 
> those 3 ideas more readily than "what happens when I have RF2, QUORUM writes 
> and ONE reads". Advanced users still get access to all the levels, while new 
> users don't have to learn all the ins and outs of distributed theory just to 
> write data and be able to read it back.
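The three-alias idea above could be as thin as an enum that delegates to existing levels. A sketch with hypothetical names, not any driver's actual API:

```java
public class ConsistencyAliases {
    // Beginner-facing names mapped onto the existing consistency levels
    // named in the proposal (EVENTUALLY/STRONG/SERIAL).
    enum Alias {
        EVENTUALLY("LOCAL_ONE"),
        STRONG("LOCAL_QUORUM"),
        SERIAL("LOCAL_SERIAL");

        final String delegate;
        Alias(String delegate) { this.delegate = delegate; }
    }

    // Resolve an alias to the underlying consistency level name.
    public static String resolve(String alias) {
        return Alias.valueOf(alias).delegate;
    }

    public static void main(String[] args) {
        System.out.println(resolve("STRONG")); // prints LOCAL_QUORUM
    }
}
```

A driver could keep the full enum for advanced users while documentation steers newcomers to the three aliases only.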





[jira] [Comment Edited] (CASSANDRA-13289) Make it possible to monitor an ideal consistency level separate from actual consistency level

2017-03-07 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898858#comment-15898858
 ] 

Romain Hardouin edited comment on CASSANDRA-13289 at 3/7/17 9:07 AM:
-

bq. Yes you can set it via JMX.
Great, thanks!

Typo in cassandra.yaml: {{requested by each each write}}
Also, I don't see where {{import javax.xml.crypto.Data}} is used in 
StorageProxy.  


was (Author: rha):
> Yes you can set it via JMX.
Great, thanks!

Typo in cassandra.yaml: {{requested by each each write}}

> Make it possible to monitor an ideal consistency level separate from actual 
> consistency level
> -
>
> Key: CASSANDRA-13289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13289
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 4.0
>
>
> As an operator there are several issues related to multi-datacenter 
> replication and consistency you may want to have more information on from 
> your production database.
> For instance: if your application writes at LOCAL_QUORUM, how often are those 
> writes failing to achieve EACH_QUORUM at other data centers. If you failed 
> your application over to one of those data centers roughly how inconsistent 
> might it be given the number of writes that didn't propagate since the last 
> incremental repair?
> You might also want to know roughly what the latency of writes would be if 
> you switched to a different consistency level. For instance you are writing 
> at LOCAL_QUORUM and want to know what would happen if you switched to 
> EACH_QUORUM.
> The proposed change is to allow an ideal_consistency_level to be specified in 
> cassandra.yaml as well as get/set via JMX. If no ideal consistency level is 
> specified no additional tracking is done.
> If an ideal consistency level is specified then the 
> {{AbstractWriteResponesHandler}} will contain a delegate WriteResponseHandler 
> that tracks whether the ideal consistency level is met before a write times 
> out. It also tracks the latency for achieving the ideal CL  of successful 
> writes.
> These two metrics would be reported on a per keyspace basis.
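The delegate bookkeeping described above amounts to counting acks against two thresholds: the requested CL and the ideal CL. A toy sketch of that idea (not Cassandra's actual AbstractWriteResponseHandler):

```java
public class IdealClTracker {
    private final int requiredAcks; // e.g. LOCAL_QUORUM in the local DC
    private final int idealAcks;    // e.g. EACH_QUORUM across all DCs
    private int acks;

    public IdealClTracker(int requiredAcks, int idealAcks) {
        this.requiredAcks = requiredAcks;
        this.idealAcks = idealAcks;
    }

    // Called once per replica acknowledgment.
    public void onAck() { acks++; }

    // The write succeeds when the requested CL is met...
    public boolean requestedClMet() { return acks >= requiredAcks; }

    // ...while we separately record whether the ideal CL was also reached,
    // which is the metric the ticket wants reported per keyspace.
    public boolean idealClMet() { return acks >= idealAcks; }

    public static void main(String[] args) {
        // 2 acks needed locally, 4 needed for the ideal multi-DC level.
        IdealClTracker t = new IdealClTracker(2, 4);
        t.onAck(); t.onAck(); t.onAck();
        System.out.println(t.requestedClMet() + " " + t.idealClMet()); // true false
    }
}
```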





[jira] [Commented] (CASSANDRA-13289) Make it possible to monitor an ideal consistency level separate from actual consistency level

2017-03-06 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898858#comment-15898858
 ] 

Romain Hardouin commented on CASSANDRA-13289:
-

> Yes you can set it via JMX.
Great, thanks!

Typo in cassandra.yaml: {{requested by each each write}}

> Make it possible to monitor an ideal consistency level separate from actual 
> consistency level
> -
>
> Key: CASSANDRA-13289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13289
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 4.0
>
>
> As an operator there are several issues related to multi-datacenter 
> replication and consistency you may want to have more information on from 
> your production database.
> For instance: if your application writes at LOCAL_QUORUM, how often are those 
> writes failing to achieve EACH_QUORUM at other data centers. If you failed 
> your application over to one of those data centers roughly how inconsistent 
> might it be given the number of writes that didn't propagate since the last 
> incremental repair?
> You might also want to know roughly what the latency of writes would be if 
> you switched to a different consistency level. For instance you are writing 
> at LOCAL_QUORUM and want to know what would happen if you switched to 
> EACH_QUORUM.
> The proposed change is to allow an ideal_consistency_level to be specified in 
> cassandra.yaml as well as get/set via JMX. If no ideal consistency level is 
> specified no additional tracking is done.
> If an ideal consistency level is specified then the 
> {{AbstractWriteResponesHandler}} will contain a delegate WriteResponseHandler 
> that tracks whether the ideal consistency level is met before a write times 
> out. It also tracks the latency for achieving the ideal CL  of successful 
> writes.
> These two metrics would be reported on a per keyspace basis.





[jira] [Commented] (CASSANDRA-13289) Make it possible to monitor an ideal consistency level separate from actual consistency level

2017-03-03 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894255#comment-15894255
 ] 

Romain Hardouin commented on CASSANDRA-13289:
-

Very interesting! 
Do you have an idea of the overhead that this monitoring layer would add? If 
this adds too much overhead we should be able to enable/disable it at runtime 
via nodetool.

> Make it possible to monitor an ideal consistency level separate from actual 
> consistency level
> -
>
> Key: CASSANDRA-13289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13289
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>
> As an operator there are several issues related to multi-datacenter 
> replication and consistency you may want to have more information on from 
> your production database.
> For instance: if your application writes at LOCAL_QUORUM, how often are those 
> writes failing to achieve EACH_QUORUM at other data centers. If you failed 
> your application over to one of those data centers roughly how inconsistent 
> might it be given the number of writes that didn't propagate since the last 
> incremental repair?
> You might also want to know roughly what the latency of writes would be if 
> you switched to a different consistency level. For instance you are writing 
> at LOCAL_QUORUM and want to know what would happen if you switched to 
> EACH_QUORUM.
> The proposed change is to allow an ideal_consistency_level to be specified in 
> cassandra.yaml as well as get/set via JMX. If no ideal consistency level is 
> specified no additional tracking is done.
> If an ideal consistency level is specified then the 
> {{AbstractWriteResponesHandler}} will contain a delegate WriteResponseHandler 
> that tracks whether the ideal consistency level is met before a write times 
> out. It also tracks the latency for achieving the ideal CL  of successful 
> writes.
> These two metrics would be reported on a per keyspace basis.





[jira] [Commented] (CASSANDRA-13279) Table default settings file

2017-03-01 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890121#comment-15890121
 ] 

Romain Hardouin commented on CASSANDRA-13279:
-

In my mind this kind of file is supposed to be handled by Chef/Puppet/Ansible 
so I don't think it would be a problem. 
But I understand your point of view so feel free to close this ticket.

> Table default settings file
> ---
>
> Key: CASSANDRA-13279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13279
> Project: Cassandra
>  Issue Type: Wish
>  Components: Configuration
>Reporter: Romain Hardouin
>Priority: Minor
>  Labels: config, documentation
>
> Following CASSANDRA-13241 we often see that there is no one-size-fits-all 
> value for settings. We can't find a sweet spot for every use case.
> It's true for settings in cassandra.yaml but as [~brstgt] said for 
> {{chunk_length_in_kb}}: "this is somewhat hidden for the average user". 
> Many table settings are somewhat hidden for the average user. Some people 
> will think RTFM but if a file - say tables.yaml - contains default values for 
> table settings, more people would pay attention to them. And of course this 
> file could contain useful comments and guidance. 
> Example with SSTable compression options:
> {code}
> # General comments about sstable compression
> compression:
> # First of all: explain what it is. We split each SSTable into chunks, 
> etc.
> # Explain when users should lower this value (e.g. 4) or when a higher 
> value like 64 or 128 is recommended.
> # Explain the trade-off between read latency and off-heap compression 
> metadata size.
> chunk_length_in_kb: 16
> 
> # List of available compressors: LZ4Compressor, SnappyCompressor, and 
> DeflateCompressor
> # Explain trade-offs, some specific use cases (e.g. archives), etc.
> class: 'LZ4Compressor'
> 
> # If you want to disable compression by default, uncomment the following 
> line
> #enabled: false
> {code}
> So instead of hard coded values we would end up with something like 
> TableConfig + TableDescriptor à la Config + DatabaseDescriptor.





[jira] [Comment Edited] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 4kb

2017-02-28 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887728#comment-15887728
 ] 

Romain Hardouin edited comment on CASSANDRA-13241 at 2/28/17 10:25 AM:
---

I created CASSANDRA-13279 because it's a broader problem IMHO.
I don't say we should stay with 64KB. Maybe 8KB i.e. 1GB of compression 
metadata per TB  would be a good trade-off.


was (Author: rha):
I created CASSANDRA-13279 because it's a broader problem IMHO.
I don't say we should stay with 64KB. Maybe 8KB i.e. 1GB per TB  would be a 
good trade-off.

> Lower default chunk_length_in_kb from 64kb to 4kb
> -
>
> Key: CASSANDRA-13241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Benjamin Roth
>
> Having a too low chunk size may result in some wasted disk space. A too high 
> chunk size may lead to massive overreads and may have a critical impact on 
> overall system performance.
> In my case, the default chunk size led to peak read IOs of up to 1GB/s and 
> avg reads of 200MB/s. After lowering the chunk size (of course aligned with 
> read ahead), the avg read IO went below 20 MB/s, more like 10-15MB/s.
> The risk of (physical) overreads is increasing with lower (page cache size) / 
> (total data size) ratio.
> High chunk sizes are mostly appropriate for bigger payloads per request but 
> if the model consists mostly of small rows or small resultsets, the read 
> overhead with 64kb chunk size is insanely high. This applies for example for 
> (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insights what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows, that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J
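The "1GB of compression metadata per TB" figure for 8KB chunks quoted in the comment checks out if each chunk costs an 8-byte offset entry (an assumption implied by that ratio, not stated in this thread). A quick check:

```java
public class ChunkMetadata {
    // Estimate off-heap compression metadata for a given data size and chunk
    // length, assuming 8 bytes of offset metadata (one long) per chunk.
    static long metadataBytes(long dataBytes, long chunkBytes) {
        long chunks = (dataBytes + chunkBytes - 1) / chunkBytes; // ceiling division
        return chunks * 8L;
    }

    public static void main(String[] args) {
        long tib = 1L << 40; // 1 TiB of data
        System.out.println(metadataBytes(tib, 8 * 1024));  // 1073741824 = 1 GiB
        System.out.println(metadataBytes(tib, 64 * 1024)); // 134217728 = 128 MiB
    }
}
```

So halving the chunk size doubles the metadata: the default 64KB costs 128MiB per TiB, while 8KB costs 1GiB per TiB, which is the trade-off against read amplification the ticket discusses.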





[jira] [Comment Edited] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 4kb

2017-02-28 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887728#comment-15887728
 ] 

Romain Hardouin edited comment on CASSANDRA-13241 at 2/28/17 10:24 AM:
---

I created CASSANDRA-13279 because it's a broader problem IMHO.
I don't say we should stay with 64KB. Maybe 8KB i.e. 1GB per TB  would be a 
good trade-off.


was (Author: rha):
I created https://issues.apache.org/jira/browse/CASSANDRA-13279 because it's a 
broader problem IMHO.
I don't say we should stay with 64KB. Maybe 8KB i.e. 1GB per TB  would be a 
good trade-off.

> Lower default chunk_length_in_kb from 64kb to 4kb
> -
>
> Key: CASSANDRA-13241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Benjamin Roth
>
> Having a too low chunk size may result in some wasted disk space. A too high 
> chunk size may lead to massive overreads and may have a critical impact on 
> overall system performance.
> In my case, the default chunk size led to peak read IOs of up to 1GB/s and 
> avg reads of 200MB/s. After lowering the chunk size (of course aligned with 
> read ahead), the avg read IO went below 20 MB/s, more like 10-15MB/s.
> The risk of (physical) overreads is increasing with lower (page cache size) / 
> (total data size) ratio.
> High chunk sizes are mostly appropriate for bigger payloads per request but 
> if the model consists mostly of small rows or small resultsets, the read 
> overhead with 64kb chunk size is insanely high. This applies for example for 
> (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insights what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows, that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J





[jira] [Commented] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 4kb

2017-02-28 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887728#comment-15887728
 ] 

Romain Hardouin commented on CASSANDRA-13241:
-

I created https://issues.apache.org/jira/browse/CASSANDRA-13279 because it's a 
broader problem IMHO.
I don't say we should stay with 64KB. Maybe 8KB i.e. 1GB per TB  would be a 
good trade-off.

> Lower default chunk_length_in_kb from 64kb to 4kb
> -
>
> Key: CASSANDRA-13241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Benjamin Roth
>
> Having a too low chunk size may result in some wasted disk space. A too high 
> chunk size may lead to massive overreads and may have a critical impact on 
> overall system performance.
> In my case, the default chunk size led to peak read IOs of up to 1GB/s and 
> avg reads of 200MB/s. After lowering the chunk size (of course aligned with 
> read ahead), the avg read IO went below 20 MB/s, more like 10-15MB/s.
> The risk of (physical) overreads is increasing with lower (page cache size) / 
> (total data size) ratio.
> High chunk sizes are mostly appropriate for bigger payloads per request but 
> if the model consists mostly of small rows or small resultsets, the read 
> overhead with 64kb chunk size is insanely high. This applies for example for 
> (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insights what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows, that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J





[jira] [Updated] (CASSANDRA-13279) Table default settings file

2017-02-28 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-13279:

Summary: Table default settings file  (was: Table settings file)

> Table default settings file
> ---
>
> Key: CASSANDRA-13279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13279
> Project: Cassandra
>  Issue Type: Wish
>  Components: Configuration
>Reporter: Romain Hardouin
>Priority: Minor
>  Labels: config, documentation
>
> Following CASSANDRA-13241 we often see that there is no one-size-fits-all 
> value for settings. We can't find a sweet spot for every use case.
> It's true for settings in cassandra.yaml but as [~brstgt] said for 
> {{chunk_length_in_kb}}: "this is somewhat hidden for the average user". 
> Many table settings are somewhat hidden for the average user. Some people 
> will think RTFM but if a file - say tables.yaml - contains default values for 
> table settings, more people would pay attention to them. And of course this 
> file could contain useful comments and guidance. 
> Example with SSTable compression options:
> {code}
> # General comments about sstable compression
> compression:
> # First of all: explain what it is. We split each SSTable into chunks, 
> etc.
> # Explain when users should lower this value (e.g. 4) or when a higher 
> value like 64 or 128 is recommended.
> # Explain the trade-off between read latency and off-heap compression 
> metadata size.
> chunk_length_in_kb: 16
> 
> # List of available compressors: LZ4Compressor, SnappyCompressor, and 
> DeflateCompressor
> # Explain trade-offs, some specific use cases (e.g. archives), etc.
> class: 'LZ4Compressor'
> 
> # If you want to disable compression by default, uncomment the following 
> line
> #enabled: false
> {code}
> So instead of hard coded values we would end up with something like 
> TableConfig + TableDescriptor à la Config + DatabaseDescriptor.





[jira] [Created] (CASSANDRA-13279) Table settings file

2017-02-28 Thread Romain Hardouin (JIRA)
Romain Hardouin created CASSANDRA-13279:
---

 Summary: Table settings file
 Key: CASSANDRA-13279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13279
 Project: Cassandra
  Issue Type: Wish
  Components: Configuration
Reporter: Romain Hardouin
Priority: Minor


Following CASSANDRA-13241 we often see that there is no one-size-fits-all value 
for settings. We can't find a sweet spot for every use case.

It's true for settings in cassandra.yaml but as [~brstgt] said for 
{{chunk_length_in_kb}}: "this is somewhat hidden for the average user". 
Many table settings are somewhat hidden for the average user. Some people will 
think RTFM but if a file - say tables.yaml - contains default values for table 
settings, more people would pay attention to them. And of course this file 
could contain useful comments and guidance. 

Example with SSTable compression options:

{code}
# General comments about sstable compression
compression:

# First of all: explain what it is. We split each SSTable into chunks, etc.
# Explain when users should lower this value (e.g. 4) or when a higher 
value like 64 or 128 are recommended.
# Explain the trade-off between read latency and off-heap compression 
metadata size.
chunk_length_in_kb: 16

# List of available compressors: LZ4Compressor, SnappyCompressor, and 
DeflateCompressor
# Explain trade-offs, some specific use cases (e.g. archives), etc.
class: 'LZ4Compressor'

# If you want to disable compression by default, uncomment the following 
line
#enabled: false
{code}

So instead of hard-coded values we would end up with something like TableConfig 
+ TableDescriptor à la Config + DatabaseDescriptor.
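
The proposal amounts to a two-layer merge: hard-coded defaults, overridden by a site-wide tables.yaml, overridden in turn by per-table DDL options. A minimal Python sketch of that precedence order, reusing the option names from the example above (the function names and merge logic are illustrative, not an existing Cassandra API; a dict stands in for the parsed YAML):

```python
# Hypothetical TableConfig sketch: three layers of table option defaults.
# Precedence (lowest to highest): hard-coded -> tables.yaml -> per-table DDL.

HARD_CODED_DEFAULTS = {
    "compression": {"chunk_length_in_kb": 64, "class": "LZ4Compressor", "enabled": True},
}

def load_table_defaults(site_overrides):
    """Merge site-wide overrides (as parsed from a tables.yaml) over hard-coded defaults."""
    merged = {section: dict(opts) for section, opts in HARD_CODED_DEFAULTS.items()}
    for section, opts in site_overrides.items():
        merged.setdefault(section, {}).update(opts)
    return merged

def effective_options(site_overrides, per_table):
    """Per-table DDL options take precedence over both default layers."""
    merged = load_table_defaults(site_overrides)
    for section, opts in per_table.items():
        merged.setdefault(section, {}).update(opts)
    return merged

# tables.yaml sets a site-wide 16 KB chunk; one table explicitly asks for 4 KB.
site = {"compression": {"chunk_length_in_kb": 16}}
table = {"compression": {"chunk_length_in_kb": 4}}
print(effective_options(site, table)["compression"]["chunk_length_in_kb"])  # -> 4
print(load_table_defaults(site)["compression"]["chunk_length_in_kb"])       # -> 16
```

Operators who never touch tables.yaml would keep today's behavior, since the site layer is empty by default.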



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13277) Duplicate results with secondary index on static column

2017-02-27 Thread Romain Hardouin (JIRA)
Romain Hardouin created CASSANDRA-13277:
---

 Summary: Duplicate results with secondary index on static column
 Key: CASSANDRA-13277
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13277
 Project: Cassandra
  Issue Type: Bug
Reporter: Romain Hardouin


As a follow up of 
http://www.mail-archive.com/user@cassandra.apache.org/msg50816.html 

Duplicate results appear with a secondary index on a static column when RF > 1.
The number of results varies depending on the consistency level.

Here is a CCM session to reproduce the issue:
{code}
romain@debian:~$ ccm create 39 -n 3 -v 3.9 -s
Current cluster is now: 39
romain@debian:~$ ccm node1 cqlsh
Connected to 39 at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 2};
cqlsh> CREATE TABLE test.idx_static (id text, id2 bigint static, added 
timestamp, source text static, dest text, primary key (id, added));
cqlsh> CREATE index ON test.idx_static (id2);
cqlsh> INSERT INTO test.idx_static (id, id2, added, source, dest) values 
('id1', 22,'2017-01-28', 'src1', 'dst1');
cqlsh> SELECT * FROM test.idx_static where id2=22;

 id  | added   | id2 | source | dest
-+-+-++--
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1

(2 rows)
cqlsh> CONSISTENCY ALL 
Consistency level set to ALL.
cqlsh> SELECT * FROM test.idx_static where id2=22;

 id  | added   | id2 | source | dest
-+-+-++--
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1

(3 rows)
{code}

When RF matches the number of nodes, it works as expected.

Example with RF=3 and 3 nodes:
{code}
romain@debian:~$ ccm create 39 -n 3 -v 3.9 -s
Current cluster is now: 39

romain@debian:~$ ccm node1 cqlsh
Connected to 39 at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.

cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 3};
cqlsh> CREATE TABLE test.idx_static (id text, id2 bigint static, added 
timestamp, source text static, dest text, primary key (id, added));
cqlsh> CREATE index ON test.idx_static (id2);
cqlsh> INSERT INTO test.idx_static (id, id2, added, source, dest) values 
('id1', 22,'2017-01-28', 'src1', 'dst1');
cqlsh> SELECT * FROM test.idx_static where id2=22;

 id  | added   | id2 | source | dest
-+-+-++--
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1

(1 rows)
cqlsh> CONSISTENCY all
Consistency level set to ALL.
cqlsh> SELECT * FROM test.idx_static where id2=22;

 id  | added   | id2 | source | dest
-+-+-++--
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1

(1 rows)
{code}

Example with RF = 2 and 2 nodes:

{code}
romain@debian:~$ ccm create 39 -n 2 -v 3.9 -s
Current cluster is now: 39
romain@debian:~$ ccm node1 cqlsh
Connected to 39 at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 2};
cqlsh> CREATE TABLE test.idx_static (id text, id2 bigint static, added 
timestamp, source text static, dest text, primary key (id, added));
cqlsh> INSERT INTO test.idx_static (id, id2, added, source, dest) values 
('id1', 22,'2017-01-28', 'src1', 'dst1');
cqlsh> CREATE index ON test.idx_static (id2);
cqlsh> INSERT INTO test.idx_static (id, id2, added, source, dest) values 
('id1', 22,'2017-01-28', 'src1', 'dst1');
cqlsh> SELECT * FROM test.idx_static where id2=22;

 id  | added   | id2 | source | dest
-+-+-++--
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1

(1 rows)
cqlsh> CONSISTENCY ALL 
Consistency level set to ALL.
cqlsh> SELECT * FROM test.idx_static where id2=22;

 id  | added   | id2 | source | dest
-+-+-++--
 id1 | 2017-01-27 23:00:00.00+ |  22 |   src1 | dst1

(1 rows)
{code}
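
Until the bug is fixed, a client-side workaround is possible: every duplicate carries the same primary key, so deduplicating result rows on (id, added) collapses the per-replica copies. A hedged sketch, with row dictionaries as an assumed driver representation rather than any specific driver's API:

```python
# Client-side deduplication sketch for the duplicate-rows repro above.
# Rows are assumed to be dicts keyed by column name, matching the table
# idx_static (id, id2, added, source, dest) with primary key (id, added).

def dedup_rows(rows):
    seen = set()
    out = []
    for row in rows:
        key = (row["id"], row["added"])  # the full primary key
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

# Two copies of the same logical row, as returned at CONSISTENCY ONE with RF=2.
rows = [
    {"id": "id1", "added": "2017-01-27 23:00:00", "id2": 22, "source": "src1", "dest": "dst1"},
    {"id": "id1", "added": "2017-01-27 23:00:00", "id2": 22, "source": "src1", "dest": "dst1"},
]
print(len(dedup_rows(rows)))  # -> 1
```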



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 4kb

2017-02-22 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878256#comment-15878256
 ] 

Romain Hardouin edited comment on CASSANDRA-13241 at 2/22/17 2:03 PM:
--

Compression metadata took lots of RAM (>1.2 GB per node) on a several-TB table 
with 33 billion partitions. On other tables, compression metadata size stayed 
in the order of MB (say, from 10 to 100 MB). I agree that in most cases 4 KB 
should be much better than 64 KB.


was (Author: rha):
Compression metadata took lots of RAM (>1.2 GB per node) on a several TB tables 
with 33 billions partitions. On other tables metadata compression size stayed 
in order of MB (say from 10 to 100 MB). I agree that in most cases 4kb should 
be much better than 64kb.

> Lower default chunk_length_in_kb from 64kb to 4kb
> -
>
> Key: CASSANDRA-13241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Benjamin Roth
>
> Having a too low chunk size may result in some wasted disk space. A too high 
> chunk size may lead to massive overreads and may have a critical impact on 
> overall system performance.
> In my case, the default chunk size led to peak read IOs of up to 1GB/s and 
> avg reads of 200MB/s. After lowering chunksize (of course aligned with read 
> ahead), the avg read IO went below 20 MB/s, rather 10-15MB/s.
> The risk of (physical) overreads is increasing with lower (page cache size) / 
> (total data size) ratio.
> High chunk sizes are mostly appropriate for bigger payloads per request but 
> if the model consists rather of small rows or small resultsets, the read 
> overhead with 64kb chunk size is insanely high. This applies for example for 
> (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insights what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows, that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 4kb

2017-02-22 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878256#comment-15878256
 ] 

Romain Hardouin commented on CASSANDRA-13241:
-

Compression metadata took lots of RAM (>1.2 GB per node) on a several-TB table 
with 33 billion partitions. On other tables, compression metadata size stayed 
in the order of MB (say, from 10 to 100 MB). I agree that in most cases 4kb 
should be much better than 64kb.

> Lower default chunk_length_in_kb from 64kb to 4kb
> -
>
> Key: CASSANDRA-13241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Benjamin Roth
>
> Having a too low chunk size may result in some wasted disk space. A too high 
> chunk size may lead to massive overreads and may have a critical impact on 
> overall system performance.
> In my case, the default chunk size led to peak read IOs of up to 1GB/s and 
> avg reads of 200MB/s. After lowering chunksize (of course aligned with read 
> ahead), the avg read IO went below 20 MB/s, rather 10-15MB/s.
> The risk of (physical) overreads is increasing with lower (page cache size) / 
> (total data size) ratio.
> High chunk sizes are mostly appropriate for bigger payloads per request but 
> if the model consists rather of small rows or small resultsets, the read 
> overhead with 64kb chunk size is insanely high. This applies for example for 
> (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insights what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows, that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 4kb

2017-02-22 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878101#comment-15878101
 ] 

Romain Hardouin commented on CASSANDRA-13241:
-

Like you, I lowered the compression chunk length on some tables to 4 KB. As 
expected, read latency was better after the change.
But there is a price to pay: I observed an increase in compression metadata 
size. This can be non-negligible for big tables with high cardinality. There is 
a sweet spot to find depending on the use case. I agree that 64 KB is somewhat 
high but it's hard to find a one-size-fits-all value.
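
The trade-off can be made concrete with back-of-the-envelope arithmetic: compression metadata holds roughly one 8-byte chunk offset per chunk of uncompressed data, so dividing the chunk length by N multiplies the off-heap offset table by N. A sketch under that approximation (per-sstable constant overhead is ignored):

```python
# Rough off-heap compression metadata size: ~8 bytes of chunk offset per
# chunk of uncompressed data. The 8-bytes-per-entry figure is an
# approximation, not an exact accounting of Cassandra's on-disk format.

def metadata_bytes(data_bytes, chunk_kb, bytes_per_entry=8):
    chunks = data_bytes // (chunk_kb * 1024)
    return chunks * bytes_per_entry

TB = 1024 ** 4
for chunk_kb in (64, 16, 4):
    gb = metadata_bytes(2 * TB, chunk_kb) / 1024 ** 3
    print(f"2 TB table, {chunk_kb} KB chunks -> ~{gb:g} GB of offsets")
# 64 KB chunks -> ~0.25 GB, 16 KB -> ~1 GB, 4 KB -> ~4 GB
```

This matches the observation above: on a several-TB table, dropping from 64 KB to 4 KB chunks can turn hundreds of MB of metadata into several GB per node.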

> Lower default chunk_length_in_kb from 64kb to 4kb
> -
>
> Key: CASSANDRA-13241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Benjamin Roth
>
> Having a too low chunk size may result in some wasted disk space. A too high 
> chunk size may lead to massive overreads and may have a critical impact on 
> overall system performance.
> In my case, the default chunk size lead to peak read IOs of up to 1GB/s and 
> avg reads of 200MB/s. After lowering chunksize (of course aligned with read 
> ahead), the avg read IO went below 20 MB/s, rather 10-15MB/s.
> The risk of (physical) overreads is increasing with lower (page cache size) / 
> (total data size) ratio.
> High chunk sizes are mostly appropriate for bigger payloads pre request but 
> if the model consists rather of small rows or small resultsets, the read 
> overhead with 64kb chunk size is insanely high. This applies for example for 
> (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insights what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows, that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13240) FailureDetector.java

2017-02-22 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1587#comment-1587
 ] 

Romain Hardouin edited comment on CASSANDRA-13240 at 2/22/17 8:00 AM:
--

You have to reset logging levels with nodetool, just type the following command 
on all nodes:

{code}
nodetool setlogginglevel
{code}

FYI {{setlogginglevel}} command takes arguments when you want to put a class or 
a whole package on DEBUG or TRACE, e.g:

{code}
nodetool setlogginglevel org.apache.cassandra.gms.FailureDetector TRACE
{code}


Also check your logback.xml and compare it with the upstream file of your 
Cassandra version to see if someone changed something.


was (Author: rha):
You have to reset logging levels with nodetool, just type the following command 
on all nodes:

{code}
nodetool setlogginglevel
{code}

FYI {{setlogginglevel}} command takes arguments when you want to put a class or 
a whole package on DEBUG or TRACE, e.g:

{code}
nodetool setlogginglevel org.apache.cassandra.gms.FailureDetector TRACE
{code}


Also check you logback.xml and compare it with the upstream file of your 
Cassandra version to see if someone changed something.

> FailureDetector.java
> 
>
> Key: CASSANDRA-13240
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13240
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Production
>Reporter: Chetan Rawal
>Priority: Minor
>
> We are getting frequent  FailureDetector.java messages in Cassandra logs.
> TRACE [GossipStage:1] 2017-02-17 07:06:00,156 FailureDetector.java (line 164) 
> reporting /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,157 Gossiper.java (line 920) 
> /10.21.176.84local generation 1478408871, remote generation 1478408871
> TRACE [GossipStage:1] 2017-02-17 07:06:00,157 Gossiper.java (line 966) 
> Updating heartbeat state version to 9025996 from 9025995 for /10.21.176.84 ...
> TRACE [OptionalTasks:1] 2017-02-17 07:06:00,239 MeteredFlusher.java (line 
> 111) memtable memory usage is 10485760 bytes with 10485760 live
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 126) My 
> heartbeat is now 9040716
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 385) Gossip 
> Digests are : /10.21.181.60:1478392309:9040716 
> /10.21.176.84:1478408871:9025996 
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 570) 
> Sending a GossipDigestSynMessage to /10.21.176.84 ...
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_SYN to 27612718@/10.21.176.84
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 165) 
> Performing status check ...
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,452 FailureDetector.java (line 185) 
> PHI for /10.21.176.84 : 0.12728519330605453
> TRACE [Thread-50] 2017-02-17 07:06:00,453 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:00,453 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_ACK from 27177304@/10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 GossipDigestAckVerbHandler.java 
> (line 47) Received a GossipDigestAckMessage from /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 Gossiper.java (line 726) local 
> heartbeat version 9040716 greater than 9040715 for /10.21.181.60
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 GossipDigestAckVerbHandler.java 
> (line 84) Sending a GossipDigestAck2Message to /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_ACK2 to 27612719@/10.21.176.84
> TRACE [Thread-50] 2017-02-17 07:06:01,132 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:01,132 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_SYN from 27177305@/10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 46) Received a GossipDigestSynMessage from /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 76) Gossip syn digests are : /10.21.181.60:1478392309:9040716 
> /10.21.176.84:1478408871:9025997 
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 90) Sending a GossipDigestAckMessage to /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_ACK to 27612720@/10.21.176.84
> TRACE [Thread-50] 2017-02-17 07:06:01,134 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:01,134 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_ACK2 from 

[jira] [Comment Edited] (CASSANDRA-13240) FailureDetector.java

2017-02-21 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1587#comment-1587
 ] 

Romain Hardouin edited comment on CASSANDRA-13240 at 2/21/17 8:01 AM:
--

You have to reset logging levels with nodetool, just type the following command 
on all nodes:

{code}
nodetool setlogginglevel
{code}

FYI {{setlogginglevel}} command takes arguments when you want to put a class or 
a whole package on DEBUG or TRACE, e.g:

{code}
nodetool setlogginglevel org.apache.cassandra.gms.FailureDetector TRACE
{code}


Also check your logback.xml and compare it with the upstream file of your 
Cassandra version to see if someone changed something.


was (Author: rha):
You have to reset logging levels with nodetool, just type:

{code}
nodetool setlogginglevel
{code}

FYI {{setlogginglevel}} command takes arguments when you want to put a class or 
a whole package on DEBUG or TRACE, e.g:

{code}
nodetool setlogginglevel org.apache.cassandra.gms.FailureDetector TRACE
{code}


Also check you logback.xml and compare it with the upstream file of your 
Cassandra version to see if someone changed something.

> FailureDetector.java
> 
>
> Key: CASSANDRA-13240
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13240
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Production
>Reporter: Chetan Rawal
>Priority: Minor
>
> We are getting frequent  FailureDetector.java messages in Cassandra logs.
> TRACE [GossipStage:1] 2017-02-17 07:06:00,156 FailureDetector.java (line 164) 
> reporting /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,157 Gossiper.java (line 920) 
> /10.21.176.84local generation 1478408871, remote generation 1478408871
> TRACE [GossipStage:1] 2017-02-17 07:06:00,157 Gossiper.java (line 966) 
> Updating heartbeat state version to 9025996 from 9025995 for /10.21.176.84 ...
> TRACE [OptionalTasks:1] 2017-02-17 07:06:00,239 MeteredFlusher.java (line 
> 111) memtable memory usage is 10485760 bytes with 10485760 live
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 126) My 
> heartbeat is now 9040716
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 385) Gossip 
> Digests are : /10.21.181.60:1478392309:9040716 
> /10.21.176.84:1478408871:9025996 
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 570) 
> Sending a GossipDigestSynMessage to /10.21.176.84 ...
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_SYN to 27612718@/10.21.176.84
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 165) 
> Performing status check ...
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,452 FailureDetector.java (line 185) 
> PHI for /10.21.176.84 : 0.12728519330605453
> TRACE [Thread-50] 2017-02-17 07:06:00,453 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:00,453 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_ACK from 27177304@/10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 GossipDigestAckVerbHandler.java 
> (line 47) Received a GossipDigestAckMessage from /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 Gossiper.java (line 726) local 
> heartbeat version 9040716 greater than 9040715 for /10.21.181.60
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 GossipDigestAckVerbHandler.java 
> (line 84) Sending a GossipDigestAck2Message to /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_ACK2 to 27612719@/10.21.176.84
> TRACE [Thread-50] 2017-02-17 07:06:01,132 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:01,132 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_SYN from 27177305@/10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 46) Received a GossipDigestSynMessage from /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 76) Gossip syn digests are : /10.21.181.60:1478392309:9040716 
> /10.21.176.84:1478408871:9025997 
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 90) Sending a GossipDigestAckMessage to /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_ACK to 27612720@/10.21.176.84
> TRACE [Thread-50] 2017-02-17 07:06:01,134 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:01,134 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_ACK2 from 27177306@/10.21.176.84
> TRACE 

[jira] [Commented] (CASSANDRA-13240) FailureDetector.java

2017-02-21 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1587#comment-1587
 ] 

Romain Hardouin commented on CASSANDRA-13240:
-

You have to reset logging levels with nodetool, just type:

{code}
nodetool setlogginglevel
{code}

FYI {{setlogginglevel}} command takes arguments when you want to put a class or 
a whole package on DEBUG or TRACE, e.g:

{code}
nodetool setlogginglevel org.apache.cassandra.gms.FailureDetector TRACE
{code}


Also check your logback.xml and compare it with the upstream file of your 
Cassandra version to see if someone changed something.

> FailureDetector.java
> 
>
> Key: CASSANDRA-13240
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13240
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Production
>Reporter: Chetan Rawal
>Priority: Minor
>
> We are getting frequent  FailureDetector.java messages in Cassandra logs.
> TRACE [GossipStage:1] 2017-02-17 07:06:00,156 FailureDetector.java (line 164) 
> reporting /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,157 Gossiper.java (line 920) 
> /10.21.176.84local generation 1478408871, remote generation 1478408871
> TRACE [GossipStage:1] 2017-02-17 07:06:00,157 Gossiper.java (line 966) 
> Updating heartbeat state version to 9025996 from 9025995 for /10.21.176.84 ...
> TRACE [OptionalTasks:1] 2017-02-17 07:06:00,239 MeteredFlusher.java (line 
> 111) memtable memory usage is 10485760 bytes with 10485760 live
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 126) My 
> heartbeat is now 9040716
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 385) Gossip 
> Digests are : /10.21.181.60:1478392309:9040716 
> /10.21.176.84:1478408871:9025996 
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 570) 
> Sending a GossipDigestSynMessage to /10.21.176.84 ...
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_SYN to 27612718@/10.21.176.84
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,451 Gossiper.java (line 165) 
> Performing status check ...
> TRACE [GossipTasks:1] 2017-02-17 07:06:00,452 FailureDetector.java (line 185) 
> PHI for /10.21.176.84 : 0.12728519330605453
> TRACE [Thread-50] 2017-02-17 07:06:00,453 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:00,453 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_ACK from 27177304@/10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 GossipDigestAckVerbHandler.java 
> (line 47) Received a GossipDigestAckMessage from /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 Gossiper.java (line 726) local 
> heartbeat version 9040716 greater than 9040715 for /10.21.181.60
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 GossipDigestAckVerbHandler.java 
> (line 84) Sending a GossipDigestAck2Message to /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:00,453 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_ACK2 to 27612719@/10.21.176.84
> TRACE [Thread-50] 2017-02-17 07:06:01,132 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:01,132 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_SYN from 27177305@/10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 46) Received a GossipDigestSynMessage from /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 76) Gossip syn digests are : /10.21.181.60:1478392309:9040716 
> /10.21.176.84:1478408871:9025997 
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 GossipDigestSynVerbHandler.java 
> (line 90) Sending a GossipDigestAckMessage to /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,133 MessagingService.java (line 
> 450) /10.21.181.60 sending GOSSIP_DIGEST_ACK to 27612720@/10.21.176.84
> TRACE [Thread-50] 2017-02-17 07:06:01,134 IncomingTcpConnection.java (line 
> 112) Version is now 5
> TRACE [Thread-50] 2017-02-17 07:06:01,134 MessagingService.java (line 572) 
> /10.21.181.60 received GOSSIP_DIGEST_ACK2 from 27177306@/10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,135 
> GossipDigestAck2VerbHandler.java (line 45) Received a GossipDigestAck2Message 
> from /10.21.176.84
> TRACE [GossipStage:1] 2017-02-17 07:06:01,135 FailureDetector.java (line 164) 
> reporting /10.21.176.84



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13215) Cassandra nodes startup time 20x more after upgarding to 3.x

2017-02-14 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865479#comment-15865479
 ] 

Romain Hardouin commented on CASSANDRA-13215:
-

It's related to CASSANDRA-6696, i.e. it has been present since 3.2.

Regarding {{AbstractReplicationStrategy.getAddressRanges}}, it seems to be a 
known limitation. Maybe we can now consider that it's used on a critical path:
{code}
/*
 * NOTE: this is pretty inefficient. also the inverse (getRangeAddresses) below.
 * this is fine as long as we don't use this on any critical path.
 * (fixing this would probably require merging tokenmetadata into replicationstrategy,
 * so we could cache/invalidate cleanly.)
 */
public Multimap<InetAddress, Range<Token>> getAddressRanges(TokenMetadata metadata)
{code}
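
The caching approach in the attached simple-cache.patch can be sketched as memoization keyed on a ring version that is bumped whenever token metadata changes — recompute once per ring change instead of once per sstable. Class and method names below are illustrative, not Cassandra's actual API:

```python
# Memoization sketch for an expensive ring computation (e.g. the
# O(tokens^2) getAddressRanges): cache the result and invalidate only
# when the ring version changes, mirroring TokenMetadata's version counter.

class AddressRangeCache:
    def __init__(self, compute):
        self._compute = compute   # the slow computation to memoize
        self._version = None
        self._cached = None

    def get(self, ring_version):
        if ring_version != self._version:  # ring changed: recompute once
            self._cached = self._compute()
            self._version = ring_version
        return self._cached

calls = []
cache = AddressRangeCache(lambda: calls.append(1) or {"node1": ["range1"]})
cache.get(1); cache.get(1); cache.get(1)   # stable ring: one computation
cache.get(2)                               # ring changed: one more
print(len(calls))  # -> 2
```

As the code comment in getAddressRanges already notes, a clean fix would invalidate from TokenMetadata itself; the version-key trick approximates that without merging the two classes.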

> Cassandra nodes startup time 20x more after upgarding to 3.x
> 
>
> Key: CASSANDRA-13215
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13215
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Cluster setup: two datacenters (dc-main, dc-backup).
> dc-main - 9 servers, no vnodes
> dc-backup - 6 servers, vnodes
>Reporter: Viktor Kuzmin
> Attachments: simple-cache.patch
>
>
> CompactionStrategyManage.getCompactionStrategyIndex is called on each sstable 
> at startup. And this function calls StorageService.getDiskBoundaries. And 
> getDiskBoundaries calls AbstractReplicationStrategy.getAddressRanges.
> It appears that the last function can be really slow. In our environment we 
> have 1545 tokens, and with NetworkTopologyStrategy it can make 1545*1545 
> computations in the worst case (maybe I'm wrong, but it really takes lots of 
> CPU).
> Also this function can affect runtime later, because it is called not only 
> during startup.
> I've tried to implement a simple cache for getDiskBoundaries results and now 
> startup time is about one minute instead of 25m, but I'm not sure if it's a 
> good solution.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13176) DROP INDEX seemingly doesn't stop existing Index build

2017-02-02 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849669#comment-15849669
 ] 

Romain Hardouin commented on CASSANDRA-13176:
-

Did you try {{nodetool stop INDEX_BUILD}} prior to restarting the nodes?

> DROP INDEX seemingly doesn't stop existing Index build
> --
>
> Key: CASSANDRA-13176
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13176
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: CentOS Linux, JRE 1.8
>Reporter: Soumya Sanyal
>
> There appears to be an edge case with secondary indexes (non SASI). I 
> originally issued a CREATE INDEX on a column, and upon listening to advice 
> from folks in the #cassandra room, decided against it, and issued a DROP 
> INDEX. 
> I didn't check the cluster overnight, but this morning, I found out that our 
> cluster CPU usage was pegged around 80%. Looking at compaction stats, I saw 
> that the index build was still ongoing. We had to restart the entire cluster 
> for the changes to take effect.
> Version: 3.9



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

