[jira] [Created] (CASSANDRA-13540) QUORUM CL is used for new superuser

2017-05-19 Thread Dennis Noordzij (JIRA)
Dennis Noordzij created CASSANDRA-13540:
---

 Summary: QUORUM CL is used for new superuser
 Key: CASSANDRA-13540
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13540
 Project: Cassandra
  Issue Type: Bug
  Components: Distributed Metadata
Reporter: Dennis Noordzij


After bootstrapping Cassandra, we create a new superuser and set the RF of the 
system_auth keyspace to 2 with NetworkTopologyStrategy, then run a nodetool 
repair on the system_auth KS. 
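
For reference, a minimal sketch of that setup using the DataStax Java driver 3.x 
(the role name, password, datacenter name and contact point below are 
placeholders, not our real values); the nodetool repair of system_auth is then 
run from the shell:
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class SetupAuth
{
    public static void main(String[] args)
    {
        // Connect with the default superuser that exists right after bootstrap.
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint("10.0.0.1")
                                      .withCredentials("cassandra", "cassandra")
                                      .build();
             Session session = cluster.connect())
        {
            // Create the new superuser described above.
            session.execute("CREATE ROLE admin WITH PASSWORD = 'secret' "
                            + "AND SUPERUSER = true AND LOGIN = true");
            // Switch system_auth to NetworkTopologyStrategy with RF 2.
            session.execute("ALTER KEYSPACE system_auth WITH replication = "
                            + "{'class': 'NetworkTopologyStrategy', 'DC1': 2}");
        }
        // afterwards: nodetool repair system_auth
    }
}
{code}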

Documentation says 
{quote}
The system_auth keyspace uses a QUORUM consistency level when checking 
authentication for the default cassandra user. For all other users created, 
superuser or otherwise, a LOCAL_ONE consistency level is used for 
authenticating.
{quote}

But for my new superuser, new node members are rejected because a QUORUM CL 
can't be achieved (seen here through nodetool): 
{code}
May 19th 2017, 17:40:14.462 Connection error: ('Unable to connect to any 
servers', {'xx.xx.xx.xx': AuthenticationFailed('Failed to authenticate to 
xx.xx.xx.xx: Error from server: code=0100 [Bad credentials] 
message="org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM"',)})
{code}






[jira] [Commented] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable

2017-05-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018167#comment-16018167
 ] 

sankalp kohli commented on CASSANDRA-13508:
---

Your benchmark is on which version of C*? 

> Make system.paxos table compaction strategy configurable
> 
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test11.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for 
> performance reasons: CASSANDRA-7753. But for clusters with heavy CAS usage, the 
> system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table are TTL'ed, TWCS might be a better fit. In our 
> test, it significantly reduced the number of compactions without impacting the 
> latency too much:
> !test11.png!
> The time window for TWCS is set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow line is LCS, the purple line is TWCS. Average p99 shows about a 10% 
> increase.






[jira] [Commented] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable

2017-05-19 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018140#comment-16018140
 ] 

Blake Eggleston commented on CASSANDRA-13508:
-

I agree, LCS is probably the better choice, which is the default in trunk. I 
think being able to tune the paxos table might not be a bad idea, given how 
heavily it can be used in some systems, but it also has some risks. First, 
supporting it won't be straightforward. System table schemas are hardcoded (see 
{{SystemKeyspace}}), so just allowing alter table statements against them isn't 
enough: any changes you make will be lost after a node restart. Storing system 
table schemas as regular tables is also a non-starter. Any user-configurable 
system table properties would have to be configured in cassandra.yaml or 
something similar, which, for non-replicated tables, isn't terrible (and not the 
same thing as the schema.xml file used circa 0.6 that this will remind people 
of). [~iamaleksey], do you have any thoughts on this?
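
To make the cassandra.yaml idea concrete, a purely illustrative sketch (the 
option name {{paxos_compaction_strategy}} and the class below are hypothetical, 
not an existing Cassandra API): the hardcoded schema keeps its LCS default, and 
a startup option merely overrides the compaction class used when the table 
metadata is built.
{code}
import java.util.Collections;
import java.util.Map;

// Hypothetical helper: resolves which compaction strategy the hardcoded
// system.paxos definition should use, falling back to the current default (LCS).
public final class PaxosCompactionOption
{
    public static String resolve(Map<String, String> startupOptions)
    {
        return startupOptions.getOrDefault("paxos_compaction_strategy",
                                           "LeveledCompactionStrategy");
    }

    public static void main(String[] args)
    {
        // e.g. cassandra.yaml: paxos_compaction_strategy: TimeWindowCompactionStrategy
        System.out.println(resolve(Collections.singletonMap("paxos_compaction_strategy",
                                                            "TimeWindowCompactionStrategy")));
    }
}
{code}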

> Make system.paxos table compaction strategy configurable
> 
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test11.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for 
> performance reasons: CASSANDRA-7753. But for clusters with heavy CAS usage, the 
> system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table are TTL'ed, TWCS might be a better fit. In our 
> test, it significantly reduced the number of compactions without impacting the 
> latency too much:
> !test11.png!
> The time window for TWCS is set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow line is LCS, the purple line is TWCS. Average p99 shows about a 10% 
> increase.






[jira] [Commented] (CASSANDRA-13348) Duplicate tokens after bootstrap

2017-05-19 Thread Tom van der Woerdt (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018126#comment-16018126
 ] 

Tom van der Woerdt commented on CASSANDRA-13348:


Same DC.

> Duplicate tokens after bootstrap
> 
>
> Key: CASSANDRA-13348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13348
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom van der Woerdt
>Assignee: Dikang Gu
>Priority: Blocker
> Fix For: 3.0.x
>
>
> This one is a bit scary, and probably results in data loss. After a bootstrap 
> of a few new nodes into an existing cluster, two new nodes have chosen some 
> overlapping tokens.
> In fact, of the 256 tokens chosen, 51 tokens were already in use on the other 
> node.
> Node 1 log :
> {noformat}
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,461 
> StorageService.java:1160 - JOINING: waiting for ring information
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,461 
> StorageService.java:1160 - JOINING: waiting for schema information to complete
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,461 
> StorageService.java:1160 - JOINING: schema complete, ready to bootstrap
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,462 
> StorageService.java:1160 - JOINING: waiting for pending range calculation
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,462 
> StorageService.java:1160 - JOINING: calculation complete, ready to bootstrap
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,462 
> StorageService.java:1160 - JOINING: getting bootstrap token
> WARN  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,564 
> TokenAllocation.java:61 - Selected tokens [, 2959334889475814712, 
> 3727103702384420083, 7183119311535804926, 6013900799616279548, 
> -1222135324851761575, 1645259890258332163, -1213352346686661387, 
> 7604192574911909354]
> WARN  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,729 
> TokenAllocation.java:65 - Replicated node load in datacentre before 
> allocation max 1.00 min 1.00 stddev 0.
> WARN  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,729 
> TokenAllocation.java:66 - Replicated node load in datacentre after allocation 
> max 1.00 min 1.00 stddev 0.
> WARN  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,729 
> TokenAllocation.java:70 - Unexpected growth in standard deviation after 
> allocation.
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:44,150 
> StorageService.java:1160 - JOINING: sleeping 3 ms for pending range setup
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:43:14,151 
> StorageService.java:1160 - JOINING: Starting to bootstrap...
> {noformat}
> Node 2 log:
> {noformat}
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:51,937 
> StorageService.java:971 - Joining ring by operator request
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,513 
> StorageService.java:1160 - JOINING: waiting for ring information
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,513 
> StorageService.java:1160 - JOINING: waiting for schema information to complete
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,513 
> StorageService.java:1160 - JOINING: schema complete, ready to bootstrap
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,513 
> StorageService.java:1160 - JOINING: waiting for pending range calculation
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,514 
> StorageService.java:1160 - JOINING: calculation complete, ready to bootstrap
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,514 
> StorageService.java:1160 - JOINING: getting bootstrap token
> WARN  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,630 
> TokenAllocation.java:61 - Selected tokens [.., 2890709530010722764, 
> -2416006722819773829, -5820248611267569511, -5990139574852472056, 
> 1645259890258332163, 9135021011763659240, -5451286144622276797, 
> 7604192574911909354]
> WARN  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,794 
> TokenAllocation.java:65 - Replicated node load in datacentre before 
> allocation max 1.02 min 0.98 stddev 0.
> WARN  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,795 
> TokenAllocation.java:66 - Replicated node load in datacentre after allocation 
> max 1.00 min 1.00 stddev 0.
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:53,149 
> StorageService.java:1160 - JOINING: sleeping 3 ms for pending range setup
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:56:23,149 
> StorageService.java:1160 - JOINING: Starting to bootstrap...
> {noformat}
> eg. 7604192574911909354 has been chosen by both.
> The joins were eight days 

[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018124#comment-16018124
 ] 

Paulo Motta commented on CASSANDRA-10130:
-

Overall I like the new approach and the idea of keeping {{markIndex*}} usage 
restricted to {{SecondaryIndexManager}}, since it will keep things more 
self-contained and prevent bad usages.

While inspecting usages of {{buildAllIndexesBlocking}} with the 
{{preBuildTask}} parameter, I noticed that rebuilding indexes is a natural 
consequence of adding new SSTables to the tracker - I don't see a situation 
where we want to add SSTables to the tracker and NOT rebuild the indexes, so 
instead of requiring users of {{Tracker.addSSTables}} to figure out they need 
to rebuild indexes and create a dependency with the {{SecondaryIndexManager}} 
(such as {{OnCompletionRunnable}} or {{ColumnFamilyStore.loadNewSSTables}}), or 
even creating a dependency between {{Tracker.addSSTables}} and the secondary 
index manager, we could leverage the tracker notification support and make the 
secondary index automatically rebuild indexes when receiving an 
{{SSTableAddedNotification}} from the tracker.

However, this notification is only triggered *after* the SSTables are added to 
the tracker, and there is a possibility of a failure after some SSTables were 
already added, in which case we would still need to rebuild indexes. So we could 
maybe add a new {{SSTableBeforeAddedNotification}} (or a better name) that is 
triggered at the start of {{Tracker.addSSTables}}, mark the index as building 
when receiving that notification, and actually trigger 
{{buildAllIndexesBlocking}} when receiving the {{SSTableAddedNotification}}.

WDYT of this suggestion?
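
To make the flow a bit more concrete, here is a rough, self-contained model of 
the idea above. {{INotification}}, the two notification classes and the manager 
below are simplified stand-ins rather than the real Cassandra types, and 
{{SSTableBeforeAddedNotification}} in particular only exists as a proposal in 
this comment.
{code}
import java.util.Arrays;
import java.util.List;

interface INotification {}

// Proposed: fired at the start of Tracker.addSSTables, before anything is live.
class SSTableBeforeAddedNotification implements INotification
{
    final List<String> adding;
    SSTableBeforeAddedNotification(List<String> adding) { this.adding = adding; }
}

// Existing concept: fired once the sstables have been added to the tracker.
class SSTableAddedNotification implements INotification
{
    final List<String> added;
    SSTableAddedNotification(List<String> added) { this.added = added; }
}

public class IndexManagerSketch
{
    private boolean building;

    void handleNotification(INotification notification)
    {
        if (notification instanceof SSTableBeforeAddedNotification)
        {
            // Mark indexes as building before the sstables become live, so a failure
            // part-way through still leaves them flagged for rebuild on restart.
            building = true;
        }
        else if (notification instanceof SSTableAddedNotification)
        {
            // Only build once the tracker has actually added the sstables.
            buildIndexesFor(((SSTableAddedNotification) notification).added);
            building = false;
        }
    }

    private void buildIndexesFor(List<String> sstables)
    {
        System.out.println("building=" + building + ", rebuilding indexes for " + sstables);
    }

    public static void main(String[] args)
    {
        IndexManagerSketch manager = new IndexManagerSketch();
        List<String> sstables = Arrays.asList("mc-1-big", "mc-2-big");
        manager.handleNotification(new SSTableBeforeAddedNotification(sstables));
        manager.handleNotification(new SSTableAddedNotification(sstables));
    }
}
{code}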

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.






[jira] [Commented] (CASSANDRA-13348) Duplicate tokens after bootstrap

2017-05-19 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018086#comment-16018086
 ] 

Dikang Gu commented on CASSANDRA-13348:
---

[~tvdw], hmm, for the nodes with duplicated tokens, are they in the same DC or 
different DC?

> Duplicate tokens after bootstrap
> 
>
> Key: CASSANDRA-13348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13348
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom van der Woerdt
>Assignee: Dikang Gu
>Priority: Blocker
> Fix For: 3.0.x
>
>
> This one is a bit scary, and probably results in data loss. After a bootstrap 
> of a few new nodes into an existing cluster, two new nodes have chosen some 
> overlapping tokens.
> In fact, of the 256 tokens chosen, 51 tokens were already in use on the other 
> node.
> Node 1 log :
> {noformat}
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,461 
> StorageService.java:1160 - JOINING: waiting for ring information
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,461 
> StorageService.java:1160 - JOINING: waiting for schema information to complete
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,461 
> StorageService.java:1160 - JOINING: schema complete, ready to bootstrap
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,462 
> StorageService.java:1160 - JOINING: waiting for pending range calculation
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,462 
> StorageService.java:1160 - JOINING: calculation complete, ready to bootstrap
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,462 
> StorageService.java:1160 - JOINING: getting bootstrap token
> WARN  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,564 
> TokenAllocation.java:61 - Selected tokens [, 2959334889475814712, 
> 3727103702384420083, 7183119311535804926, 6013900799616279548, 
> -1222135324851761575, 1645259890258332163, -1213352346686661387, 
> 7604192574911909354]
> WARN  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,729 
> TokenAllocation.java:65 - Replicated node load in datacentre before 
> allocation max 1.00 min 1.00 stddev 0.
> WARN  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,729 
> TokenAllocation.java:66 - Replicated node load in datacentre after allocation 
> max 1.00 min 1.00 stddev 0.
> WARN  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:43,729 
> TokenAllocation.java:70 - Unexpected growth in standard deviation after 
> allocation.
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:42:44,150 
> StorageService.java:1160 - JOINING: sleeping 3 ms for pending range setup
> INFO  [RMI TCP Connection(107)-127.0.0.1] 2017-03-09 07:43:14,151 
> StorageService.java:1160 - JOINING: Starting to bootstrap...
> {noformat}
> Node 2 log:
> {noformat}
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:51,937 
> StorageService.java:971 - Joining ring by operator request
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,513 
> StorageService.java:1160 - JOINING: waiting for ring information
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,513 
> StorageService.java:1160 - JOINING: waiting for schema information to complete
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,513 
> StorageService.java:1160 - JOINING: schema complete, ready to bootstrap
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,513 
> StorageService.java:1160 - JOINING: waiting for pending range calculation
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,514 
> StorageService.java:1160 - JOINING: calculation complete, ready to bootstrap
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,514 
> StorageService.java:1160 - JOINING: getting bootstrap token
> WARN  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,630 
> TokenAllocation.java:61 - Selected tokens [.., 2890709530010722764, 
> -2416006722819773829, -5820248611267569511, -5990139574852472056, 
> 1645259890258332163, 9135021011763659240, -5451286144622276797, 
> 7604192574911909354]
> WARN  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,794 
> TokenAllocation.java:65 - Replicated node load in datacentre before 
> allocation max 1.02 min 0.98 stddev 0.
> WARN  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:52,795 
> TokenAllocation.java:66 - Replicated node load in datacentre after allocation 
> max 1.00 min 1.00 stddev 0.
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:55:53,149 
> StorageService.java:1160 - JOINING: sleeping 3 ms for pending range setup
> INFO  [RMI TCP Connection(380)-127.0.0.1] 2017-03-17 15:56:23,149 
> StorageService.java:1160 - JOINING: Starting to bootstrap...
> {noformat}
> eg. 

[jira] [Commented] (CASSANDRA-13539) The keyspace repairTime metric is not updated

2017-05-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018012#comment-16018012
 ] 

ASF GitHub Bot commented on CASSANDRA-13539:


Github user asfgit closed the pull request at:

https://github.com/apache/cassandra/pull/113


> The keyspace repairTime metric is not updated
> -
>
> Key: CASSANDRA-13539
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13539
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.0
>
>
> The repairTime metric at the keyspace level isn't updated when repairs 
> complete, so it's always zero.






[jira] [Updated] (CASSANDRA-13539) The keyspace repairTime metric is not updated

2017-05-19 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13539:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk as {{e1f2300a1ae7dab1660c16fc38bcb852fdcd44ef}}. Thanks!

> The keyspace repairTime metric is not updated
> -
>
> Key: CASSANDRA-13539
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13539
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.0
>
>
> The repairTime metric at the keyspace level isn't updated when repairs 
> complete, so it's always zero.






cassandra git commit: Update repairTime for keyspaces on completion

2017-05-19 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk 8028d4bdd -> e1f2300a1


Update repairTime for keyspaces on completion

Patch by Chris Lohfink, Reviewed by Blake Eggleston for CASSANDRA-13539
This closes #113


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e1f2300a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e1f2300a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e1f2300a

Branch: refs/heads/trunk
Commit: e1f2300a1ae7dab1660c16fc38bcb852fdcd44ef
Parents: 8028d4b
Author: Chris Lohfink 
Authored: Thu May 18 10:27:51 2017 -0500
Committer: Blake Eggleston 
Committed: Fri May 19 14:06:03 2017 -0700

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/repair/RepairRunnable.java | 5 +++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e1f2300a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a5afb86..c3e485d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Update repairTime for keyspaces on completion (CASSANDRA-13539)
 * Add configurable upper bound for validation executor threads (CASSANDRA-13521)
  * Bring back maxHintTTL propery (CASSANDRA-12982)
  * Add testing guidelines (CASSANDRA-13497)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e1f2300a/src/java/org/apache/cassandra/repair/RepairRunnable.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairRunnable.java b/src/java/org/apache/cassandra/repair/RepairRunnable.java
index f327757..a3b8f22 100644
--- a/src/java/org/apache/cassandra/repair/RepairRunnable.java
+++ b/src/java/org/apache/cassandra/repair/RepairRunnable.java
@@ -521,8 +521,8 @@ public class RepairRunnable extends WrappedRunnable implements ProgressEventNoti
     private void repairComplete()
     {
         ActiveRepairService.instance.removeParentRepairSession(parentSession);
-        String duration = DurationFormatUtils.formatDurationWords(System.currentTimeMillis() - startTime,
-                                                                   true, true);
+        long durationMillis = System.currentTimeMillis() - startTime;
+        String duration = DurationFormatUtils.formatDurationWords(durationMillis, true, true);
         String message = String.format("Repair command #%d finished in %s", cmd, duration);
         fireProgressEvent(tag, new ProgressEvent(ProgressEventType.COMPLETE, progress.get(), totalProgress, message));
         logger.info(message);
@@ -540,6 +540,7 @@ public class RepairRunnable extends WrappedRunnable implements ProgressEventNoti
             Tracing.instance.stopSession();
         }
         executor.shutdownNow();
+        Keyspace.open(keyspace).metric.repairTime.update(durationMillis, TimeUnit.MILLISECONDS);
     }
 }
 





[jira] [Comment Edited] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-19 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017744#comment-16017744
 ] 

Jeff Jirsa edited comment on CASSANDRA-13510 at 5/19/17 6:01 PM:
-

[build 
9|https://builds.apache.org/view/A-D/view/Cassandra/job/cassandra-devbranch-ppc64le-testall/9/console]
 scheduled.

I've also set up the [ppc64le 
dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-ppc64le-dtest], 
though I'm not going to schedule it until the unit tests are complete (I don't 
want to monopolize the hadoop ppc64le hardware).


was (Author: jjirsa):
[build 
9|https://builds.apache.org/view/A-D/view/Cassandra/job/cassandra-devbranch-ppc64le-testall/9/console]
 scheduled.


> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on Intel x86 architecture.
> I just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Would the community be willing to add ppc64le-based VMs/slaves to their 
> current CI above? Some externally hosted ppc64le VMs could be attached as 
> slaves to the above Jenkins server.
> 2) Use an externally hosted Jenkins CI for running the Cassandra build on Power 
> and link the results of the build to the above CI.
> This ticket is just a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit






[jira] [Commented] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-19 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017744#comment-16017744
 ] 

Jeff Jirsa commented on CASSANDRA-13510:


[build 
9|https://builds.apache.org/view/A-D/view/Cassandra/job/cassandra-devbranch-ppc64le-testall/9/console]
 scheduled.


> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on Intel x86 architecture.
> I just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Would the community be willing to add ppc64le-based VMs/slaves to their 
> current CI above? Some externally hosted ppc64le VMs could be attached as 
> slaves to the above Jenkins server.
> 2) Use an externally hosted Jenkins CI for running the Cassandra build on Power 
> and link the results of the build to the above CI.
> This ticket is just a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit






[jira] [Commented] (CASSANDRA-13510) CI for valdiating cassandra on power platform

2017-05-19 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017735#comment-16017735
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13510:
---

[~jjirsa] - can you please kick off more builds? I have added a fix to resolve 
the last issue in my branch. Let's see if there are any more failures.

> CI for valdiating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on Intel x86 architecture.
> I just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Would the community be willing to add ppc64le-based VMs/slaves to their 
> current CI above? Some externally hosted ppc64le VMs could be attached as 
> slaves to the above Jenkins server.
> 2) Use an externally hosted Jenkins CI for running the Cassandra build on Power 
> and link the results of the build to the above CI.
> This ticket is just a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit






[jira] [Updated] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-19 Thread Amitkumar Ghatwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amitkumar Ghatwal updated CASSANDRA-13510:
--
Summary: CI for validating cassandra on power platform  (was: CI for 
valdiating cassandra on power platform)

> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on Intel x86 architecture.
> I just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Would the community be willing to add ppc64le-based VMs/slaves to their 
> current CI above? Some externally hosted ppc64le VMs could be attached as 
> slaves to the above Jenkins server.
> 2) Use an externally hosted Jenkins CI for running the Cassandra build on Power 
> and link the results of the build to the above CI.
> This ticket is just a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit






[jira] [Commented] (CASSANDRA-13182) test failure in sstableutil_test.SSTableUtilTest.compaction_test

2017-05-19 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017635#comment-16017635
 ] 

Ariel Weisberg commented on CASSANDRA-13182:


Thanks for the fix. I'll keep an eye out for this test to see if it fails again.

One suggestion for this kind of flaky test bug, where you are checking a 
condition asynchronously and you don't know when it's going to happen: spin on 
the condition, checking it once a second, and set a longish timeout (longer 
than 5 seconds, say 30), only failing if the condition doesn't occur within the 
time limit. It's not unheard of to have crazy pauses on the infrastructure we 
run these tests on that last a few seconds. Trying to guess the magic number of 
seconds to wait is a source of flaky tests, and simply setting the magic number 
higher ends up increasing test runtime even when the condition has already 
become true.

You can factor out the "spin until the condition is true, and error if it 
doesn't happen within the time limit" logic into a dedicated method in a dtest 
utility class, so all you have to provide is a boolean function object that 
checks the condition.
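
A minimal sketch of that helper, written in Java for brevity (the dtest 
utilities themselves are Python, and the names here are made up for the 
example):
{code}
import java.util.function.BooleanSupplier;

public final class SpinAssert
{
    // Poll the condition once per second; fail only if it never becomes true within
    // the timeout, instead of sleeping a guessed magic number and checking once.
    public static void waitFor(BooleanSupplier condition, long timeoutSeconds) throws InterruptedException
    {
        long deadline = System.nanoTime() + timeoutSeconds * 1_000_000_000L;
        while (System.nanoTime() < deadline)
        {
            if (condition.getAsBoolean())
                return;
            Thread.sleep(1000);
        }
        throw new AssertionError("condition not met within " + timeoutSeconds + "s");
    }
}
{code}
For example, {{SpinAssert.waitFor(() -> expectedFiles.equals(listDataFiles()), 30)}} 
returns as soon as the files match but tolerates a slow CI machine 
({{listDataFiles}} standing in for whatever check the test already does).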

> test failure in sstableutil_test.SSTableUtilTest.compaction_test
> 
>
> Key: CASSANDRA-13182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Lerh Chuan Low
>  Labels: dtest, test-failure, test-failure-fresh
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/506/testReport/sstableutil_test/SSTableUtilTest/compaction_test
> {noformat}
> Error Message
> Lists differ: ['/tmp/dtest-Rk_3Cs/test/node1... != 
> ['/tmp/dtest-Rk_3Cs/test/node1...
> First differing element 8:
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db'
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db'
> First list contains 7 additional elements.
> First extra element 24:
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db'
>   
> ['/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-TOC.txt',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Digest.crc32',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Filter.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Index.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Statistics.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Summary.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Statistics.db',
>
> 

[jira] [Comment Edited] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017213#comment-16017213
 ] 

Benjamin Lerer edited comment on CASSANDRA-13120 at 5/19/17 10:35 AM:
--

Right now, what {{CFHistograms}} exposes is the number of SSTables on which we 
do a partition lookup.
The partition lookup can lead to skipping the SSTable (BF, min/max, partition 
index lookup or index entry not found) or not.
Nevertheless, partition lookups are not cheap, especially when an index lookup 
has to be done. Because of that, [~tjake] suggested to me, in an offline 
discussion, keeping the metric as it is and adding a new one, {{mergedSSTable}}, 
to track how many SSTables have actually been merged.

The number of actually merged SSTables should also be the one used for the 
{{Trace}} message and for determining if the {{SSTables}} must be compacted.


was (Author: blerer):
Right now, what {{CFHistograms}} expose is the number of SSTables on which we 
do a partition lookup.
The partition look up can lead to skipping the SSTable (BF, min max, partition 
index lookup or index entry not found) or not.
Nevertheless, partition lookups are not cheap. Especially when an index lookup 
has to be done. Due to that, Jake suggested to me, in an offline discussion, to 
keep the 
metric as it is and to add a new one {{mergedSSTable}} to track how many 
SSTables have been actually merged.

The number of actually merged SSTables should also be the one used for the 
{{Trace}} message and for determining if the {{SSTables}} must be compacted.

> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity 
>  | timestamp  | source
>  | source_elapsed
> ---+++
>   
>   Execute CQL3 query | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |  0
>  Parsing SELECT * FROM keyspace.table WHERE id = 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2]   | 
> 2017-01-09 13:36:40.419000 | 10.200.254.141 |104
>  
> Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |220
> Executing single-partition query on 
> table [SharedPool-Worker-1]| 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |450
> Acquiring 
> sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |477
>  Bloom filter allows skipping 
> sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |496
>  Bloom filter allows skipping 
> sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |503
> Key cache hit for 
> sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |513
>  Bloom filter allows skipping 
> sstable 648135 [SharedPool-Worker-1] | 2017-01-09 

[jira] [Commented] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017213#comment-16017213
 ] 

Benjamin Lerer commented on CASSANDRA-13120:


Right now, what {{CFHistograms}} exposes is the number of SSTables on which we 
do a partition lookup.
The partition lookup can lead to skipping the SSTable (BF, min/max, partition 
index lookup or index entry not found) or not.
Nevertheless, partition lookups are not cheap, especially when an index lookup 
has to be done. Because of that, Jake suggested to me, in an offline discussion, 
keeping the metric as it is and adding a new one, {{mergedSSTable}}, to track 
how many SSTables have actually been merged.

The number of actually merged SSTables should also be the one used for the 
{{Trace}} message and for determining if the {{SSTables}} must be compacted.
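
A hypothetical sketch of the distinction (the names below are illustrative, not 
the real Cassandra metrics API): one counter for sstables we had to do a 
partition lookup in, which is what {{CFHistograms}} effectively reports today, 
and one for sstables whose data actually got merged into the result.
{code}
class ReadMetricsSketch
{
    long sstablesConsulted; // what the existing histogram counts: every partition lookup
    long sstablesMerged;    // the proposed mergedSSTable metric: sstables that contributed rows

    void onSSTableLookup(boolean skippedByFilters)
    {
        sstablesConsulted++;      // counted even if BF/min-max/index lets us skip the sstable
        if (!skippedByFilters)
            sstablesMerged++;     // counted only when the sstable is actually merged
    }
}
{code}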

> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity 
>  | timestamp  | source
>  | source_elapsed
> ---+++
>   
>   Execute CQL3 query | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |  0
>  Parsing SELECT * FROM keyspace.table WHERE id = 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2]   | 
> 2017-01-09 13:36:40.419000 | 10.200.254.141 |104
>  
> Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |220
> Executing single-partition query on 
> table [SharedPool-Worker-1]| 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |450
> Acquiring 
> sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |477
>  Bloom filter allows skipping 
> sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |496
>  Bloom filter allows skipping 
> sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |503
> Key cache hit for 
> sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |513
>  Bloom filter allows skipping 
> sstable 648135 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |520
>  Bloom filter allows skipping 
> sstable 648130 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |526
>  Bloom filter allows skipping 
> sstable 648048 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |530
>  Bloom filter allows skipping 
> sstable 647749 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |535
>  Bloom filter allows skipping 
> sstable 647404 [SharedPool-Worker-1] | 2017-01-09 

[jira] [Comment Edited] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017213#comment-16017213
 ] 

Benjamin Lerer edited comment on CASSANDRA-13120 at 5/19/17 10:35 AM:
--

Right now, what {{CFHistograms}} exposes is the number of SSTables on which we 
do a partition lookup.
The partition lookup can lead to skipping the SSTable (BF, min/max, partition 
index lookup or index entry not found) or not.
Nevertheless, partition lookups are not cheap, especially when an index lookup 
has to be done. Because of that, [~tjake] suggested to me, in an offline 
discussion, keeping the metric as it is and adding a new one, {{mergedSSTable}}, 
to track how many SSTables have actually been merged.

The number of actually merged SSTables should also be the one used for the 
{{Trace}} message and for determining if the {{SSTables}} must be compacted.


was (Author: blerer):
Right now, what {{CFHistograms}} expose is the number of SSTables on which we 
do a partition lookup.
The partition look up can lead to skipping the SSTable (BF, min max, partition 
index lookup or index entry not found) or not.
Nevertheless, partition lookups are not cheap. Especially when an index lookup 
has to be done. Due to that, [~tjake] suggested to me, in an offline 
discussion, to keep the 
metric as it is and to add a new one {{mergedSSTable}} to track how many 
SSTables have been actually merged.

The number of actually merged SSTables should also be the one used for the 
{{Trace}} message and for determining if the {{SSTables}} must be compacted.

> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity 
>  | timestamp  | source
>  | source_elapsed
> ---+++
>   
>   Execute CQL3 query | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |  0
>  Parsing SELECT * FROM keyspace.table WHERE id = 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2]   | 
> 2017-01-09 13:36:40.419000 | 10.200.254.141 |104
>  
> Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |220
> Executing single-partition query on 
> table [SharedPool-Worker-1]| 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |450
> Acquiring 
> sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |477
>  Bloom filter allows skipping 
> sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |496
>  Bloom filter allows skipping 
> sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |503
> Key cache hit for 
> sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |513
>  Bloom filter allows skipping 
> sstable 648135 [SharedPool-Worker-1] | 2017-01-09 

[jira] [Commented] (CASSANDRA-13182) test failure in sstableutil_test.SSTableUtilTest.compaction_test

2017-05-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017194#comment-16017194
 ] 

Marcus Eriksson commented on CASSANDRA-13182:
-

[~Lerh Low] yeah, we should probably at least add {{@Deprecated}} on the 
methods, but could you open a new ticket for that?

There were two reasons I kept it: first, I wanted to avoid changing the public 
API for compaction strategies; second, there might be some external compaction 
strategies that need the disabled/enabled notification. So deprecating in 4.0 
and then eventually removing in 4.1/5.0 might be a way forward (I really should 
have deprecated the methods when I did the change..).

> test failure in sstableutil_test.SSTableUtilTest.compaction_test
> 
>
> Key: CASSANDRA-13182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Lerh Chuan Low
>  Labels: dtest, test-failure, test-failure-fresh
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/506/testReport/sstableutil_test/SSTableUtilTest/compaction_test
> {noformat}
> Error Message
> Lists differ: ['/tmp/dtest-Rk_3Cs/test/node1... != 
> ['/tmp/dtest-Rk_3Cs/test/node1...
> First differing element 8:
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db'
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db'
> First list contains 7 additional elements.
> First extra element 24:
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db'
>   
> ['/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-TOC.txt',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Digest.crc32',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Filter.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Index.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Statistics.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Summary.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db',
>
> 

[jira] [Commented] (CASSANDRA-13052) Repair process is violating the start/end token limits for small ranges

2017-05-19 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017179#comment-16017179
 ] 

Stefan Podkowinski commented on CASSANDRA-13052:


The mentioned test has been fixed and re-run, along with the aborted dtest. All 
tests have finished, and the reported errors look unrelated to the patch.

* trunk 
[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13052-trunk] 
[testall|https://circleci.com/gh/spodkowinski/cassandra/47] 
[dtest|https://builds.apache.org/user/spod/my-views/view/Cassandra%20List%20View/job/Cassandra-devbranch-dtest/53/]
* 3.11 
[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13052-3.11] 
[testall|https://circleci.com/gh/spodkowinski/cassandra/48] 
[dtest|https://builds.apache.org/user/spod/my-views/view/Cassandra%20List%20View/job/Cassandra-devbranch-dtest/57/]
* 3.0 
[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13052-3.0] 
[testall|https://circleci.com/gh/spodkowinski/cassandra/49] 
[dtest|https://builds.apache.org/user/spod/my-views/view/Cassandra%20List%20View/job/Cassandra-devbranch-dtest/51/]


> Repair process is violating the start/end token limits for small ranges
> ---
>
> Key: CASSANDRA-13052
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13052
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: We tried this in 2.0.14 and 3.9, same bug.
>Reporter: Cristian P
>Assignee: Stefan Podkowinski
> Attachments: 13052-2.1.patch, ccm_reproduce-13052.txt, 
> system-dev-debug-13052.log
>
>
> We tried to do a single token repair by providing 2 consecutive token values 
> for a large column family. We soon noticed heavy streaming, and according to 
> the logs the number of ranges streamed was in the thousands.
> After investigation we found a bug in the two partitioner classes we use 
> (RandomPartitioner and Murmur3Partitioner).
> The midpoint method, used by MerkleTree.differenceHelper to find ranges 
> with differences for streaming, returns abnormal values (way outside the 
> initial range requested for repair) if the requested repair range is small (I 
> expect smaller than 2^15).
> Here is the simple code to reproduce the bug for Murmur3Partitioner:
> Token left = new Murmur3Partitioner.LongToken(123456789L);
> Token right = new Murmur3Partitioner.LongToken(123456789L);
> IPartitioner partitioner = new Murmur3Partitioner();
> Token midpoint = partitioner.midpoint(left, right);
> System.out.println("Murmur3: [ " + left.getToken() + " : " + 
> midpoint.getToken() + " : " + right.getToken() + " ]");
> The output is:
> Murmur3: [ 123456789 : -9223372036731319019 : 123456789 ]
> Note that the midpoint token is nowhere near the suggested repair range. This 
> will happen if, during the parsing of the tree (in 
> MerkleTree.differenceHelper) in search of differences, there aren't enough 
> tokens for the split and the subrange becomes 0 (left.token=right.token), as 
> in the above test.






[jira] [Commented] (CASSANDRA-13510) CI for valdiating cassandra on power platform

2017-05-19 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016954#comment-16016954
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13510:
---

Firstly, thanks [~jjirsa] for creating a job using my github dev branch 
(ppc64le-capi), and thanks [~mshuler] for probing the build error; the build 
seems to be failing with eclipse warnings. I will look at the above errors and 
push the code fix to my github branch. Thereafter I will request Jeff to 
re-trigger another build to validate the branch again.

 

> CI for valdiating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on Intel x86 architecture.
> I just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Would the community be willing to add ppc64le-based VMs/slaves to their 
> current CI above? Some externally hosted ppc64le VMs could be attached as 
> slaves to the above Jenkins server.
> 2) Use an externally hosted Jenkins CI for running the Cassandra build on Power 
> and link the results of the build to the above CI.
> This ticket is just a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit


