[jira] [Updated] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-10-10 Thread Murukesh Mohanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Murukesh Mohanan updated CASSANDRA-13863:
-
Attachment: 0001-Use-read_repair_chance-when-starting-repairs-due-to-.patch

As a quick fix I tried using {{read_repair_chance}} in the exception handler 
for {{DigestMismatchException}}. After running benchmarks with YCSB 
({{workloada}} with default settings, so {{read_repair_chance}} is 0) on 
3.0.8, 3.0.9, and 3.0.12, and on 3.0.12 with the patch, the results averaged 
across ~50 runs are as follows:
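(To be clear about what the patch does, in rough Python pseudocode — the real 
change is Java, inside the {{DigestMismatchException}} handler, and the 
function name below is mine, not from the patch:)

```python
import random

def should_block_for_read_repair(read_repair_chance, rng=random.random):
    # Illustrative gate: on a digest mismatch, only fall through to the
    # foreground (blocking) read repair with probability read_repair_chance,
    # instead of unconditionally as before.
    return rng() < read_repair_chance
```

With {{read_repair_chance}} at 0, the gate never fires.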

3.0.8:
{code}
[OVERALL], RunTime(ms), 5287.62
[OVERALL], Throughput(ops/sec), 189.70
[TOTAL_GCS_PS_Scavenge], Count, 1
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 14.47
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.27
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0
[TOTAL_GCs], Count, 1
[TOTAL_GC_TIME], Time(ms), 14.47
[TOTAL_GC_TIME_%], Time(%), 0.27
[READ], Operations, 502.55
[READ], AverageLatency(us), 2701.96
[READ], MinLatency(us), 1144.75
[READ], MaxLatency(us), 21410.62
[READ], 95thPercentileLatency(us), 4606.09
[READ], 99thPercentileLatency(us), 8593.26
[READ], Return=OK, 502.55
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 2230368.60
[CLEANUP], MinLatency(us), 2229344.60
[CLEANUP], MaxLatency(us), 2231391.60
[CLEANUP], 95thPercentileLatency(us), 2231391.60
[CLEANUP], 99thPercentileLatency(us), 2231391.60
[UPDATE], Operations, 497.45
[UPDATE], AverageLatency(us), 2118.83
[UPDATE], MinLatency(us), 976.21
[UPDATE], MaxLatency(us), 21953.26
[UPDATE], 95thPercentileLatency(us), 3519.23
[UPDATE], 99thPercentileLatency(us), 7775.53
[UPDATE], Return=OK, 497.45
{code}
3.0.9:
{code}
[OVERALL], RunTime(ms), 5269.64
[OVERALL], Throughput(ops/sec), 190.36
[TOTAL_GCS_PS_Scavenge], Count, 1
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 14.26
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.27
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0
[TOTAL_GCs], Count, 1
[TOTAL_GC_TIME], Time(ms), 14.26
[TOTAL_GC_TIME_%], Time(%), 0.27
[READ], Operations, 499.26
[READ], AverageLatency(us), 2673.89
[READ], MinLatency(us), 1141.89
[READ], MaxLatency(us), 21053.04
[READ], 95thPercentileLatency(us), 4392.28
[READ], 99thPercentileLatency(us), 8742.70
[READ], Return=OK, 499.26
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 2230214.04
[CLEANUP], MinLatency(us), 2229190.04
[CLEANUP], MaxLatency(us), 2231237.04
[CLEANUP], 95thPercentileLatency(us), 2231237.04
[CLEANUP], 99thPercentileLatency(us), 2231237.04
[UPDATE], Operations, 500.74
[UPDATE], AverageLatency(us), 2106.96
[UPDATE], MinLatency(us), 967.11
[UPDATE], MaxLatency(us), 21862.40
[UPDATE], 95thPercentileLatency(us), 3477.83
[UPDATE], 99thPercentileLatency(us), 7677.11
[UPDATE], Return=OK, 500.74
{code}
3.0.12:
{code}
[OVERALL], RunTime(ms), 5425.13
[OVERALL], Throughput(ops/sec), 184.86
[TOTAL_GCS_PS_Scavenge], Count, 1
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 17.42
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.32
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0
[TOTAL_GCs], Count, 1
[TOTAL_GC_TIME], Time(ms), 17.42
[TOTAL_GC_TIME_%], Time(%), 0.32
[READ], Operations, 500.49
[READ], AverageLatency(us), 2805.40
[READ], MinLatency(us), 1158.47
[READ], MaxLatency(us), 24314.62
[READ], 95thPercentileLatency(us), 4903.83
[READ], 99thPercentileLatency(us), 9662.70
[READ], Return=OK, 500.49
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 2230716.38
[CLEANUP], MinLatency(us), 2229692.38
[CLEANUP], MaxLatency(us), 2231739.38
[CLEANUP], 95thPercentileLatency(us), 2231739.38
[CLEANUP], 99thPercentileLatency(us), 2231739.38
[UPDATE], Operations, 499.51
[UPDATE], AverageLatency(us), 2225.51
[UPDATE], MinLatency(us), 971.92
[UPDATE], MaxLatency(us), 23552.06
[UPDATE], 95thPercentileLatency(us), 3822.02
[UPDATE], 99thPercentileLatency(us), 9153.19
[UPDATE], Return=OK, 499.51
{code}
3.0.12 with patch:
{code}
[OVERALL], RunTime(ms), 5128.40
[OVERALL], Throughput(ops/sec), 195.93
[TOTAL_GCS_PS_Scavenge], Count, 1
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 12.13
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.24
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0
[TOTAL_GCs], Count, 1
[TOTAL_GC_TIME], Time(ms), 12.13
[TOTAL_GC_TIME_%], Time(%), 0.24
[READ], Operations, 500.79
[READ], AverageLatency(us), 2557.40
[READ], MinLatency(us), 1081.06
[READ], MaxLatency(us), 21607.91
[READ], 95thPercentileLatency(us), 4195.49
[READ], 99thPercentileLatency(us), 7990.74
[READ], Return=OK, 500.79
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 2229325.28
[CLEANUP], MinLatency(us), 2228301.28
[CLEANUP], MaxLatency(us), 2230348.28
[CLEANUP], 95thPercentileLatency(us), 2230348.28
[CLEANUP], 99thPercentileLatency(us), 2230348.28
[UPDATE], 
{code}

[jira] [Updated] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-10-10 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-12373:
-
Fix Version/s: (was: 3.11.x)
   (was: 3.0.x)
   3.0.15
   3.11.1
   4.0

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335, but the crux of the issue is that super column families 
> show up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking 
> backward compatibility.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD

2017-10-10 Thread Dan Kinder (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dan Kinder updated CASSANDRA-13943:
---
Attachment: debug.log-with-commit-d8f3f2780

Okay, stood up a node on this commit: 
https://github.com/krummas/cassandra/commit/d8f3f2780fbb9e789f90b095d1f109ebb16f46ff
and attached the log.

> Infinite compaction of L0 SSTables in JBOD
> --
>
> Key: CASSANDRA-13943
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13943
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.11.0 / Centos 6
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: debug.log, debug.log-with-commit-d8f3f2780
>
>
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely compacting the same data over and over. 
> Attaching logs.
> It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It 
> does create new SSTables but immediately recompacts again. Note that I am not 
> inserting anything at the moment, there is no flushing happening on this 
> table (Memtable switch count has not changed).
> My theory is that it somehow thinks those should be compaction candidates. 
> But they shouldn't be, they are on different disks and I ran nodetool 
> relocatesstables as well as nodetool compact. So, it tries to compact them 
> together, but the compaction results in the exact same 2 SSTables on the 2 
> disks, because the keys are split by data disk.
> This is pretty serious, because all our nodes right now are consuming CPU 
> doing this for multiple tables, it seems.






[jira] [Commented] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch

2017-10-10 Thread Daniel Bankhead (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199485#comment-16199485
 ] 

Daniel Bankhead commented on CASSANDRA-7839:


This would be a nice addition, especially for new Cassandra users. Do you guys 
think it would be possible to add this to Cassandra 4.0? Since it is a breaking 
change I would imagine it would be best for a major release.

> Support standard EC2 naming conventions in Ec2Snitch
> 
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Gregory Ramsperger
>Assignee: Gregory Ramsperger
>  Labels: docs-impacting
> Attachments: CASSANDRA-7839-aws-naming-conventions.patch
>
>
> The EC2 snitches use datacenter and rack naming conventions inconsistent with 
> those presented in Amazon EC2 APIs as region and availability zone. A 
> discussion of this is found in CASSANDRA-4026. This has not been changed for 
> valid backwards compatibility reasons. Using SnitchProperties, it is possible 
> to switch between the legacy naming and the full, AWS-style naming. 
> Proposal:
> * introduce a property (ec2_naming_scheme) to switch naming schemes.
> * default to current/legacy naming scheme
> * add support for a new scheme ("standard") which is consistent with AWS 
> conventions
> ** data centers will be the region name, including the number
> ** racks will be the availability zone name, including the region name
> Examples:
> * *legacy*: datacenter is the part of the availability zone name preceding 
> the last "\-" when the zone ends in \-1, and includes the number if not \-1. 
> Rack is the portion of the availability zone name following the last "\-".
> ** us-west-1a => dc: us-west, rack: 1a
> ** us-west-2b => dc: us-west-2, rack: 2b
> * *standard*: datacenter is the part of the availability zone name preceding 
> the zone letter. Rack is the entire availability zone name.
> ** us-west-1a => dc: us-west-1, rack: us-west-1a
> ** us-west-2b => dc: us-west-2, rack: us-west-2b
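To make the two schemes concrete, here is a rough Python sketch of the 
mappings described above (the snitch itself is Java; these helper names are 
mine, not from the patch):

```python
def legacy_dc_rack(az):
    # Legacy scheme: split at the last "-"; the dc keeps the zone number
    # only when that number is not 1 (e.g. "us-west-1a" -> dc "us-west",
    # "us-west-2b" -> dc "us-west-2"). Rack is the trailing "number+letter".
    region, _, zone = az.rpartition("-")   # "us-west-2b" -> ("us-west", "2b")
    number = zone[:-1]
    dc = region if number == "1" else f"{region}-{number}"
    return dc, zone

def standard_dc_rack(az):
    # Standard scheme: dc is the region name (the AZ minus the zone letter),
    # rack is the full availability zone name.
    return az[:-1], az
```

For example, legacy_dc_rack("us-west-2b") gives ("us-west-2", "2b"), matching 
the examples above.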






[6/8] cassandra git commit: Update version to 3.11.2

2017-10-10 Thread mshuler
Update version to 3.11.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aee02e48
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aee02e48
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aee02e48

Branch: refs/heads/trunk
Commit: aee02e4854de5a3e0de4d4e4a603e71e45edf8a4
Parents: e047b1d
Author: Michael Shuler 
Authored: Tue Oct 10 17:19:20 2017 -0500
Committer: Michael Shuler 
Committed: Tue Oct 10 17:19:20 2017 -0500

--
 NEWS.txt | 7 +++
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aee02e48/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index cb1143f..96285c7 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,13 @@ restore snapshots created with the previous major version using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.11.2
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see previous upgrading sections.
+
 3.11.1
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aee02e48/build.xml
--
diff --git a/build.xml b/build.xml
index 60a0101..38e8963 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-    <property name="base.version" value="3.11.1"/>
+    <property name="base.version" value="3.11.2"/>


     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aee02e48/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index d8be158..0d791cb 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.11.2) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Tue, 10 Oct 2017 17:18:26 -0500
+
 cassandra (3.11.1) unstable; urgency=medium
 
   * New release





[3/8] cassandra git commit: Update version to 3.0.16

2017-10-10 Thread mshuler
Update version to 3.0.16


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a04d6271
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a04d6271
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a04d6271

Branch: refs/heads/trunk
Commit: a04d627140ebd453a774e8f2577d429914a91439
Parents: 8a424ce
Author: Michael Shuler 
Authored: Tue Oct 10 17:14:47 2017 -0500
Committer: Michael Shuler 
Committed: Tue Oct 10 17:14:47 2017 -0500

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 7064c5d..944857b 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.16
+=
+
+Upgrading
+-
+   - Nothing specific to this release, but please see previous upgrading sections,
+ especially if you are upgrading from 2.2.
+
 3.0.15
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/build.xml
--
diff --git a/build.xml b/build.xml
index 61c1c22..386bb4dc 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-    <property name="base.version" value="3.0.15"/>
+    <property name="base.version" value="3.0.16"/>


     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index f3745e6..d9698c2 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.0.16) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Tue, 10 Oct 2017 17:13:31 -0500
+
 cassandra (3.0.15) unstable; urgency=medium
 
   * New release





[5/8] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-10-10 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e047b1d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e047b1d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e047b1d0

Branch: refs/heads/trunk
Commit: e047b1d059ffc251afc6a6f871b044871c827f92
Parents: f3cf1c0 a04d627
Author: Michael Shuler 
Authored: Tue Oct 10 17:15:17 2017 -0500
Committer: Michael Shuler 
Committed: Tue Oct 10 17:15:17 2017 -0500

--

--






[8/8] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-10-10 Thread mshuler
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df147cc0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df147cc0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df147cc0

Branch: refs/heads/trunk
Commit: df147cc09a9697992923505b6918fbcf50c0f94a
Parents: 7ef4ff3 aee02e4
Author: Michael Shuler 
Authored: Tue Oct 10 17:19:41 2017 -0500
Committer: Michael Shuler 
Committed: Tue Oct 10 17:19:41 2017 -0500

--

--






[1/8] cassandra git commit: Update version to 3.0.16

2017-10-10 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 8a424cef3 -> a04d62714
  refs/heads/cassandra-3.11 f3cf1c019 -> aee02e485
  refs/heads/trunk 7ef4ff30c -> df147cc09


Update version to 3.0.16


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a04d6271
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a04d6271
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a04d6271

Branch: refs/heads/cassandra-3.0
Commit: a04d627140ebd453a774e8f2577d429914a91439
Parents: 8a424ce
Author: Michael Shuler 
Authored: Tue Oct 10 17:14:47 2017 -0500
Committer: Michael Shuler 
Committed: Tue Oct 10 17:14:47 2017 -0500

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 7064c5d..944857b 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.16
+=
+
+Upgrading
+-
+   - Nothing specific to this release, but please see previous upgrading sections,
+ especially if you are upgrading from 2.2.
+
 3.0.15
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/build.xml
--
diff --git a/build.xml b/build.xml
index 61c1c22..386bb4dc 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-    <property name="base.version" value="3.0.15"/>
+    <property name="base.version" value="3.0.16"/>


     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index f3745e6..d9698c2 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.0.16) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Tue, 10 Oct 2017 17:13:31 -0500
+
 cassandra (3.0.15) unstable; urgency=medium
 
   * New release





[7/8] cassandra git commit: Update version to 3.11.2

2017-10-10 Thread mshuler
Update version to 3.11.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aee02e48
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aee02e48
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aee02e48

Branch: refs/heads/cassandra-3.11
Commit: aee02e4854de5a3e0de4d4e4a603e71e45edf8a4
Parents: e047b1d
Author: Michael Shuler 
Authored: Tue Oct 10 17:19:20 2017 -0500
Committer: Michael Shuler 
Committed: Tue Oct 10 17:19:20 2017 -0500

--
 NEWS.txt | 7 +++
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aee02e48/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index cb1143f..96285c7 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,13 @@ restore snapshots created with the previous major version using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.11.2
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see previous upgrading sections.
+
 3.11.1
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aee02e48/build.xml
--
diff --git a/build.xml b/build.xml
index 60a0101..38e8963 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-    <property name="base.version" value="3.11.1"/>
+    <property name="base.version" value="3.11.2"/>


     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aee02e48/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index d8be158..0d791cb 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.11.2) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Tue, 10 Oct 2017 17:18:26 -0500
+
 cassandra (3.11.1) unstable; urgency=medium
 
   * New release





[4/8] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-10-10 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e047b1d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e047b1d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e047b1d0

Branch: refs/heads/cassandra-3.11
Commit: e047b1d059ffc251afc6a6f871b044871c827f92
Parents: f3cf1c0 a04d627
Author: Michael Shuler 
Authored: Tue Oct 10 17:15:17 2017 -0500
Committer: Michael Shuler 
Committed: Tue Oct 10 17:15:17 2017 -0500

--

--






[2/8] cassandra git commit: Update version to 3.0.16

2017-10-10 Thread mshuler
Update version to 3.0.16


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a04d6271
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a04d6271
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a04d6271

Branch: refs/heads/cassandra-3.11
Commit: a04d627140ebd453a774e8f2577d429914a91439
Parents: 8a424ce
Author: Michael Shuler 
Authored: Tue Oct 10 17:14:47 2017 -0500
Committer: Michael Shuler 
Committed: Tue Oct 10 17:14:47 2017 -0500

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 7064c5d..944857b 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.16
+=
+
+Upgrading
+-
+   - Nothing specific to this release, but please see previous upgrading sections,
+ especially if you are upgrading from 2.2.
+
 3.0.15
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/build.xml
--
diff --git a/build.xml b/build.xml
index 61c1c22..386bb4dc 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-    <property name="base.version" value="3.0.15"/>
+    <property name="base.version" value="3.0.16"/>


     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a04d6271/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index f3745e6..d9698c2 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.0.16) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Tue, 10 Oct 2017 17:13:31 -0500
+
 cassandra (3.0.15) unstable; urgency=medium
 
   * New release





[jira] [Commented] (CASSANDRA-13947) Add some clarifying examples of nodetool usage

2017-10-10 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199381#comment-16199381
 ] 

Jeremy Hanna commented on CASSANDRA-13947:
--

I don't know if airlift/airline supports a separate section for example usage, 
or if you would need to build that into the existing structures. Ideally this 
would go into the nodetool command-line usage help; it would then show up in 
the nodetool docs automatically, since the output of the nodetool help 
commands is built into those docs.

> Add some clarifying examples of nodetool usage
> --
>
> Key: CASSANDRA-13947
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13947
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jeremy Hanna
>  Labels: lhf
>
> Certain nodetool commands would benefit from some examples of usage.  For 
> example, user-defined compactions require a comma-separated list of path 
> names from the data directory root to the table name.
> {code}
> nodetool compact --user-defined 
> system_schema/types-5a8b1ca866023f77a0459273d308917a/mc-5-big-Data.db,system_schema/types-5a8b1ca866023f77a0459273d308917a/mc-6-big-Data.db
> {code}






[jira] [Commented] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD

2017-10-10 Thread Dan Kinder (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199363#comment-16199363
 ] 

Dan Kinder commented on CASSANDRA-13943:


Hm, it looks like the new patch does not change any behavior, though, and 
without any changes I don't think my nodes will be able to do any flushing... 
I might try it on just one node and send those logs.

> Infinite compaction of L0 SSTables in JBOD
> --
>
> Key: CASSANDRA-13943
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13943
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.11.0 / Centos 6
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: debug.log
>
>
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely compacting the same data over and over. 
> Attaching logs.
> It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It 
> does create new SSTables but immediately recompacts again. Note that I am not 
> inserting anything at the moment, there is no flushing happening on this 
> table (Memtable switch count has not changed).
> My theory is that it somehow thinks those should be compaction candidates. 
> But they shouldn't be, they are on different disks and I ran nodetool 
> relocatesstables as well as nodetool compact. So, it tries to compact them 
> together, but the compaction results in the exact same 2 SSTables on the 2 
> disks, because the keys are split by data disk.
> This is pretty serious, because all our nodes right now are consuming CPU 
> doing this for multiple tables, it seems.






[jira] [Updated] (CASSANDRA-13123) Draining a node might fail to delete all inactive commitlogs

2017-10-10 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13123:
---
Fix Version/s: (was: 3.11.1)
   (was: 3.0.15)
   3.11.2
   3.0.16

> Draining a node might fail to delete all inactive commitlogs
> 
>
> Key: CASSANDRA-13123
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13123
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jan Urbański
>Assignee: Jan Urbański
> Fix For: 3.0.16, 3.11.2, 4.0
>
> Attachments: 13123-2.2.8.txt, 13123-3.0.10.txt, 13123-3.9.txt, 
> 13123-trunk.txt
>
>
> After issuing a drain command, it's possible that not all of the inactive 
> commitlogs are removed.
> The drain command shuts down the CommitLog instance, which in turn shuts down 
> the CommitLogSegmentManager. This has the effect of discarding any pending 
> management tasks it might have, like the removal of inactive commitlogs.
> This in turn leads to an excessive number of commitlogs being left behind 
> after a drain and a lengthy recovery after a restart. With a fleet of dozens 
> of nodes, each of them leaving several GB of commitlogs after a drain and 
> taking up to two minutes to recover them on restart, the additional time 
> required to restart the entire fleet becomes noticeable.
> This problem is not present in 3.x or trunk because of the CLSM rewrite done 
> in CASSANDRA-8844.






[jira] [Updated] (CASSANDRA-13006) Disable automatic heap dumps on OOM error

2017-10-10 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13006:
---
Fix Version/s: (was: 3.0.15)
   3.0.16

> Disable automatic heap dumps on OOM error
> -
>
> Key: CASSANDRA-13006
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13006
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: anmols
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0.16
>
> Attachments: 13006-3.0.9.txt
>
>
> With CASSANDRA-9861, a change was added to enable collecting heap dumps by 
> default if the process encountered an OOM error. These heap dumps are stored 
> in the Apache Cassandra home directory unless configured otherwise (see 
> [Cassandra Support 
> Document|https://support.datastax.com/hc/en-us/articles/204225959-Generating-and-Analyzing-Heap-Dumps]
>  for this feature).
>  
> The creation and storage of heap dumps aids debugging and investigative 
> workflows, but is not desirable for a production environment where these 
> heap dumps may occupy a large amount of disk space and require manual 
> intervention for cleanups. 
>  
> Managing heap dumps on out-of-memory errors and configuring the paths for 
> these heap dumps are standard JVM options. The current behavior conflicts 
> with the Boolean JVM flag HeapDumpOnOutOfMemoryError. 
>  
> The patch proposed here makes the heap dump on OOM error honor the 
> HeapDumpOnOutOfMemoryError flag. Users who still want to generate heap 
> dumps on OOM errors can set the -XX:+HeapDumpOnOutOfMemoryError JVM option.
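For reference, the relevant JVM options look like this (in jvm.options or 
cassandra-env.sh; the path below is only an example, not anything from this 
ticket):

```
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/lib/cassandra/heapdumps
```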






svn commit: r1811770 - in /cassandra/site: publish/download/index.html publish/index.html src/_data/releases.yaml

2017-10-10 Thread mshuler
Author: mshuler
Date: Tue Oct 10 20:50:54 2017
New Revision: 1811770

URL: http://svn.apache.org/viewvc?rev=1811770=rev
Log:
Update download page for 3.0.15 and 3.11.1 releases

Modified:
cassandra/site/publish/download/index.html
cassandra/site/publish/index.html
cassandra/site/src/_data/releases.yaml

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1811770=1811769=1811770=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Tue Oct 10 20:50:54 2017
@@ -99,14 +99,14 @@
 
 Latest version
 
-Download the latest Apache Cassandra 3.11 release: http://www.apache.org/dyn/closer.lua/cassandra/3.11.0/apache-cassandra-3.11.0-bin.tar.gz;>3.11.0
 (http://www.apache.org/dist/cassandra/3.11.0/apache-cassandra-3.11.0-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.11.0/apache-cassandra-3.11.0-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.11.0/apache-cassandra-3.11.0-bin.tar.gz.sha1;>sha1),
 released on 2017-06-23.
+Download the latest Apache Cassandra 3.11 release: http://www.apache.org/dyn/closer.lua/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz;>3.11.1
 (http://www.apache.org/dist/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz.sha1;>sha1),
 released on 2017-10-10.
 
 Older supported releases
 
 The following older Cassandra releases are still supported:
 
 
-  Apache Cassandra 3.0 is supported until 6 months after 4.0 
release (date TBD). The latest release is http://www.apache.org/dyn/closer.lua/cassandra/3.0.14/apache-cassandra-3.0.14-bin.tar.gz;>3.0.14
 (http://www.apache.org/dist/cassandra/3.0.14/apache-cassandra-3.0.14-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.0.14/apache-cassandra-3.0.14-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.0.14/apache-cassandra-3.0.14-bin.tar.gz.sha1;>sha1),
 released on 2017-06-23.
+  Apache Cassandra 3.0 is supported until 6 months after 4.0 
release (date TBD). The latest release is http://www.apache.org/dyn/closer.lua/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz;>3.0.15
 (http://www.apache.org/dist/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz.sha1;>sha1),
 released on 2017-10-10.
   Apache Cassandra 2.2 is supported until 4.0 release (date 
TBD). The latest release is http://www.apache.org/dyn/closer.lua/cassandra/2.2.11/apache-cassandra-2.2.11-bin.tar.gz;>2.2.11
 (http://www.apache.org/dist/cassandra/2.2.11/apache-cassandra-2.2.11-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.2.11/apache-cassandra-2.2.11-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.2.11/apache-cassandra-2.2.11-bin.tar.gz.sha1;>sha1),
 released on 2017-10-05.
   Apache Cassandra 2.1 is supported until 4.0 release (date 
TBD) with critical fixes only. The latest release is
 http://www.apache.org/dyn/closer.lua/cassandra/2.1.19/apache-cassandra-2.1.19-bin.tar.gz;>2.1.19
 (http://www.apache.org/dist/cassandra/2.1.19/apache-cassandra-2.1.19-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.1.19/apache-cassandra-2.1.19-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.1.19/apache-cassandra-2.1.19-bin.tar.gz.sha1;>sha1),
 released on 2017-10-05.

Modified: cassandra/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1811770&r1=1811769&r2=1811770&view=diff
==
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Tue Oct 10 20:50:54 2017
@@ -95,7 +95,7 @@
 
 
   http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.11.0;>Cassandra
 3.11.0 Changelog
+ 
href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.11.1;>Cassandra
 3.11.1 Changelog
   
 
   

Modified: cassandra/site/src/_data/releases.yaml
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/_data/releases.yaml?rev=1811770&r1=1811769&r2=1811770&view=diff
==
--- cassandra/site/src/_data/releases.yaml (original)
+++ cassandra/site/src/_data/releases.yaml Tue Oct 10 20:50:54 2017
@@ -1,10 +1,10 @@
 latest:
-  name: "3.11.0"
-  date: 2017-06-23
+  name: "3.11.1"
+  date: 2017-10-10
 
 "3.0":
-  name: "3.0.14"
-  date: 2017-06-23
+  name: "3.0.15"
+  date: 

svn commit: r22212 [2/2] - in /release/cassandra: 3.0.15/ 3.11.1/ debian/dists/30x/ debian/dists/30x/main/binary-amd64/ debian/dists/30x/main/binary-i386/ debian/dists/30x/main/source/ debian/dists/31

2017-10-10 Thread mshuler
Modified: release/cassandra/redhat/311x/repodata/repomd.xml
==
--- release/cassandra/redhat/311x/repodata/repomd.xml (original)
+++ release/cassandra/redhat/311x/repodata/repomd.xml Tue Oct 10 20:45:47 2017
@@ -1,55 +1,55 @@
 
 http://linux.duke.edu/metadata/repo; 
xmlns:rpm="http://linux.duke.edu/metadata/rpm;>
- 1498256234
+ 1507667399
 
-  17462f7d5509cf8821adee6dda16e5e0521b7044d2ca018299f5f3e2089761bd
-  83eff6a1e769944b3050a933b437c002fd84e5dccf910c8c8061456d5203f523
-  
-  1498256235
-  2047
-  13960
+  c1cc705a958336cc226d364c386373f96930486073525697b023dcc80d00e6b7
+  796d6fb07b20bcdd69d3c9461e98bccfed9735563edaceca822bae953e2dd6ab
+  
+  1507667400
+  2394
+  27795
 
 
-  ba1722305bfc1b45431b9ab48172afa159ff660d55f1e24794f392ba4ce3ce69
-  a63dfa82a63f854a984e2e4bf4ac986dd79cb8171f55b72a7b43a08e76bd143b
-  
-  1498256235
-  1618
-  7789
+  2c7bc42537d98b816bda50c70fe6821fea6a4f28ebc84d7ddc540516a36c19d4
+  164f5fc3cce5d0815cb57bdf536b739368fdf12f903a29012845cd86d4e855d8
+  
+  1507667400
+  1960
+  15411
 
 
-  29629735a8a11c7d080434ea330ace47466dfc80e81dce82d20e6e0e8349d368
-  126e27aa4132feb3c2b44773c1da139a5d486e0cc143457d247c069f9a081bb7
-  
-  1498256235
+  52bf89cd3d860765937698c4394de6970858e79d340c9d03ed414b19c07c023c
+  1bf399a92c46a8b23e5f87f94978a42eba2d8513df118fd886201b0663d7de0b
+  
+  1507667400
   10
-  4355
-  30720
+  6261
+  38912
 
 
-  4ea2274efcc02ea19f5d97856f04e5141a45791be4d71fc7caa684a71434562d
-  e533c6f23b2167156d93edd60701f533c687d3d0a816452503417f9d52029827
-  
-  1498256235
+  f36475867e4cd68b55db42900c7eaa407ba72535c049172636475287196cc5c2
+  9deb1b386ad74fa0cfe4fca396694afba9eb538fd8e0e5ad032473001ce00238
+  
+  1507667400
   10
-  987
+  1222
   6144
 
 
-  e5d9a776e9578d3b8ebbdd8ac589e666de0608fce710cedd9c6ddd0340560863
-  1563b9e554dbbd153794cc4298abdac2123f4a83f4b2ec0251b968e5405ff6e6
-  
-  1498256235
-  443
-  1096
+  5b989e501e8b50f6b9355948499c84c9cdb6cb32841a9f6143ed2fe28acac1a5
+  24ca74746c10ff8bf078dc5d424695ab86efb8b698f8f2ca474de21c4645f497
+  
+  1507667400
+  585
+  2071
 
 
-  aa0783771bbe5f7a23debe8e84251beb90778a783654c46135c36f5b2f5218e2
-  4fca4077d94fd13418f5c14d25d53db50622f1c3b8d27e33233d7061696a86ab
-  
-  1498256235
+  aa839740b4c4dc21f4bae4745af282d2745235118dd914571859e79ccaee2eeb
+  17e68a8212f3e9ca80360aafdeffcdaacdd46299195260c4b1814beae91bdf8c
+  
+  1507667400
   10
-  3163
-  13312
+  4218
+  20480
 
 

Modified: release/cassandra/redhat/311x/repodata/repomd.xml.asc
==
--- release/cassandra/redhat/311x/repodata/repomd.xml.asc (original)
+++ release/cassandra/redhat/311x/repodata/repomd.xml.asc Tue Oct 10 20:45:47 
2017
@@ -1,17 +1,17 @@
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1
 
-iQIcBAABCAAGBQJZTZNzAAoJEKJ4t4H+SyvaJkIP/iGRpwcmBH7vciSQcSMS/UkL
-s+u9T9En37xwDT+hb5M2z7KlcHDuS4bVgbp874ySdYNQalXdTqWAPttFXoWxPWYP
-qZRhIs7FMDE8NCofJd944etuHaJVo1Ihezg6Vv2WXUCQ4bqgRPo6PLq921VpAZ0D
-ns5bjYw7Lp+C4wsKnk10Llan7Zt+mnpPoved2jrpZd6MVzj4tvZt1S5vNBHu3HNd
-FJirxxyMSmaKAqXk8GQACijVr7utfSI0SJJo5JcBGtr9+O81qtXGGotHBx5soIMb
-ASac6L+2o+JHJKCYsBYDLzsLnS8M0q7+5HYxB+VcbhXRMNsNp5Uw2gJOgY/jiQGZ
-i2c6jq/Lw4PnqdY29jmaKPvQOkEezlb6VvvIzGoKxAbIPJZ8kAetSBLr1usypn+T
-h20qDXDUwnMXK7fBboQxNvxoVju72u2bB3q81BC7Q9RHkTxnGenrxZLixgHpG+Ol
-TraHYsIw/HyMVvvhnPkq5WIkhwaeXbjq+gD7lAsOWFvaXzd1sYbTwXTm5unQiIAv
-emddAE0vo6gXaS6/abEM4IQLDpz3d6RQWQIOCD6n1CWdcngrGZk8uCBTy9idCZIP
-pRKmQTE30hxqYZAAb3VC6QpUlGYcgJ86sC4QkePWblHyDpDu5yjYy6o8mm7x5IQI
-jnzNuJZgjbIiWZOesyQb
-=jTui
+iQIcBAABCAAGBQJZ3S3hAAoJEKJ4t4H+Syvap/UP/1wem5OdG5+W6X8zgjMn+vji
+tPkyVO3wQetwNB4eA6JTMwnEXFprSeQ/LIqzkRB6YDgOuJ17wQ+0E7lsVFE8Qk0C
+zQYnrcMIr5ghufiyEHTyMuWEhFKtf1f6kH2Z2vjkj8CFXagqnfSgdrOcgjkvbliu
+qg4DAHIz+sgV4vLbrZslbptDR+/idY7o4lmAnxlNkOsNRtzfKJHAjzXjaRD2t8SY
+cwdvmn9mX1CSIA/EjDsNB2/WSYxlD+72knlsOpwHIwqPUuTfU2Q/cQ6zSYRCpV5D
+DBuFVymzRAoeAOnGy0FFdFNo9V6E04Crk1I24MQ8T9gLeckUOggaWayhIASXNp4q
+yvc6KaPtIiZhnmEMF/SboTqthnL3fPLlzxJakzO3sVGaplfPXBV9WZr4BWvPmIhV
+TrsAaz5QLqw/nrMpG9IWzW6BQ5j8lGa6eZpAhDftHjsBSU59dOTWSdLb1mmCPHRQ
+M9mud8Cl6n2PrS/3jU8/8YjToP2SWxiYSYrTyfxY80nPgNAla/AsRE4bLT1OGq3e
+Ovt7OwJgheUta6fU9PL6vkkYm0Xf6SmUQk6uYRlwD2eyh2C+kiOztucLl6sAftMU
+aHuzA7phLVc7LsY9a1LveuX/hw1+YS9wHcPTnsOzM9EyxOFROickVAxATu4m691k
+fSJ9Y2gJarTi1wXviHfl
+=HZzi
 -END PGP SIGNATURE-






svn commit: r22212 [1/2] - in /release/cassandra: 3.0.15/ 3.11.1/ debian/dists/30x/ debian/dists/30x/main/binary-amd64/ debian/dists/30x/main/binary-i386/ debian/dists/30x/main/source/ debian/dists/31

2017-10-10 Thread mshuler
Author: mshuler
Date: Tue Oct 10 20:45:47 2017
New Revision: 22212

Log:
Apache Cassandra 3.0.15 and 3.11.1 Releases

Added:
release/cassandra/3.0.15/
release/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz   (with props)
release/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz.asc
release/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz.asc.md5
release/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz.asc.sha1
release/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz.md5
release/cassandra/3.0.15/apache-cassandra-3.0.15-bin.tar.gz.sha1
release/cassandra/3.0.15/apache-cassandra-3.0.15-src.tar.gz   (with props)
release/cassandra/3.0.15/apache-cassandra-3.0.15-src.tar.gz.asc
release/cassandra/3.0.15/apache-cassandra-3.0.15-src.tar.gz.asc.md5
release/cassandra/3.0.15/apache-cassandra-3.0.15-src.tar.gz.asc.sha1
release/cassandra/3.0.15/apache-cassandra-3.0.15-src.tar.gz.md5
release/cassandra/3.0.15/apache-cassandra-3.0.15-src.tar.gz.sha1
release/cassandra/3.11.1/
release/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz
release/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz.asc
release/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz.asc.md5
release/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz.asc.sha1
release/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz.md5
release/cassandra/3.11.1/apache-cassandra-3.11.1-bin.tar.gz.sha1
release/cassandra/3.11.1/apache-cassandra-3.11.1-src.tar.gz
release/cassandra/3.11.1/apache-cassandra-3.11.1-src.tar.gz.asc
release/cassandra/3.11.1/apache-cassandra-3.11.1-src.tar.gz.asc.md5
release/cassandra/3.11.1/apache-cassandra-3.11.1-src.tar.gz.asc.sha1
release/cassandra/3.11.1/apache-cassandra-3.11.1-src.tar.gz.md5
release/cassandra/3.11.1/apache-cassandra-3.11.1-src.tar.gz.sha1

release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_3.0.15_all.deb   
(with props)

release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_3.11.1_all.deb   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.15.diff.gz   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.15.dsc
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.15.orig.tar.gz 
  (with props)

release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.15.orig.tar.gz.asc
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.15_all.deb   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.11.1.diff.gz   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.11.1.dsc
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.11.1.orig.tar.gz 
  (with props)

release/cassandra/debian/pool/main/c/cassandra/cassandra_3.11.1.orig.tar.gz.asc
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.11.1_all.deb   
(with props)
release/cassandra/redhat/30x/cassandra-3.0.15-1.noarch.rpm   (with props)
release/cassandra/redhat/30x/cassandra-3.0.15-1.src.rpm   (with props)
release/cassandra/redhat/30x/cassandra-tools-3.0.15-1.noarch.rpm   (with 
props)

release/cassandra/redhat/30x/repodata/6774d3a3ee605052975462461778890cdfbad356a5174e6f9b9f01b9030159c7-primary.xml.gz
   (with props)

release/cassandra/redhat/30x/repodata/7ebb369bb224efa0e0858660db24d1b7d866e088821f9e93b06e357ed1d573ea-filelists.sqlite.bz2
   (with props)

release/cassandra/redhat/30x/repodata/7ebb369bb224efa0e0858660db24d1b7d866e088821f9e93b06e357ed1d573ea-filelists.sqlite.bz2.asc

release/cassandra/redhat/30x/repodata/8ebfe7b63efc41d01fd3c77e4e863098d20eec3f806850c24f4372ae2a7abea2-primary.sqlite.bz2
   (with props)

release/cassandra/redhat/30x/repodata/8ebfe7b63efc41d01fd3c77e4e863098d20eec3f806850c24f4372ae2a7abea2-primary.sqlite.bz2.asc

release/cassandra/redhat/30x/repodata/9eae9dd9267ee1a9524db2aefecb695fa32e1c755b414e739bff84b665ff8b23-filelists.xml.gz
   (with props)

release/cassandra/redhat/30x/repodata/cef049ccb5649d01e4e7db3e49db904ed16ee50a73f06a9e704f93be079ad2d5-other.xml.gz
   (with props)

release/cassandra/redhat/30x/repodata/e8e01d7e0ce9c2f4be2b90a5dbe138017b3bac51b5dc13b92902ba9e2365099e-other.sqlite.bz2
   (with props)

release/cassandra/redhat/30x/repodata/e8e01d7e0ce9c2f4be2b90a5dbe138017b3bac51b5dc13b92902ba9e2365099e-other.sqlite.bz2.asc
release/cassandra/redhat/311x/cassandra-3.11.1-1.noarch.rpm   (with props)
release/cassandra/redhat/311x/cassandra-3.11.1-1.src.rpm   (with props)
release/cassandra/redhat/311x/cassandra-tools-3.11.1-1.noarch.rpm   (with 
props)

release/cassandra/redhat/311x/repodata/2c7bc42537d98b816bda50c70fe6821fea6a4f28ebc84d7ddc540516a36c19d4-primary.xml.gz
   (with props)

release/cassandra/redhat/311x/repodata/52bf89cd3d860765937698c4394de6970858e79d340c9d03ed414b19c07c023c-primary.sqlite.bz2
   (with props)


[jira] [Updated] (CASSANDRA-13947) Add some clarifying examples of nodetool usage

2017-10-10 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13947:
-
Description: 
Certain nodetool commands would benefit from some usage examples. For 
example, user-defined compactions require a comma-separated list of path names 
from the data directory root to the table name.

{code}
nodetool compact --user-defined 
system_schema/types-5a8b1ca866023f77a0459273d308917a/mc-5-big-Data.db,system_schema/types-5a8b1ca866023f77a0459273d308917a/mc-6-big-Data.db
{code}

  was:Certain nodetool commands would benefit from some usage examples. For 
example, user-defined compactions require a comma-separated list of path names 
from the data directory root to the table name.


> Add some clarifying examples of nodetool usage
> --
>
> Key: CASSANDRA-13947
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13947
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jeremy Hanna
>  Labels: lhf
>
> Certain nodetool commands would benefit from some usage examples. For 
> example, user-defined compactions require a comma-separated list of path 
> names from the data directory root to the table name.
> {code}
> nodetool compact --user-defined 
> system_schema/types-5a8b1ca866023f77a0459273d308917a/mc-5-big-Data.db,system_schema/types-5a8b1ca866023f77a0459273d308917a/mc-6-big-Data.db
> {code}






[cassandra] Git Push Summary

2017-10-10 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/cassandra-3.11.1 [created] e6339325e




[cassandra] Git Push Summary

2017-10-10 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/3.11.1-tentative [deleted] 983c72a84




[cassandra] Git Push Summary

2017-10-10 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/3.0.15-tentative [deleted] b32a9e645




[cassandra] Git Push Summary

2017-10-10 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/cassandra-3.0.15 [created] 1773d6c8e




[jira] [Created] (CASSANDRA-13947) Add some clarifying examples of nodetool usage

2017-10-10 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-13947:


 Summary: Add some clarifying examples of nodetool usage
 Key: CASSANDRA-13947
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13947
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jeremy Hanna


Certain nodetool commands would benefit from some usage examples. For 
example, user-defined compactions require a comma-separated list of path names 
from the data directory root to the table name.






[jira] [Comment Edited] (CASSANDRA-13475) First version of pluggable storage engine API.

2017-10-10 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198094#comment-16198094
 ] 

Dikang Gu edited comment on CASSANDRA-13475 at 10/10/17 8:05 PM:
-

Here is the first version of the pluggable storage engine API, based on trunk. 
|[trunk|https://github.com/DikangGu/cassandra/commit/f1c69f688d05504f7409dd735e1473982c59fa52]|[unit
 test|https://circleci.com/gh/DikangGu/cassandra/2]|

It contains the API, and a little bit of refactoring of the streaming part. 

You can check https://github.com/Instagram/cassandra/tree/rocks_3.0 for the 
RocksDB based implementation.


was (Author: dikanggu):
Here is the first version of the pluggable storage engine API, based on trunk. 
https://github.com/DikangGu/cassandra/commit/f1c69f688d05504f7409dd735e1473982c59fa52

It contains the API, and a little bit of refactoring of the streaming part. 

You can check https://github.com/Instagram/cassandra/tree/rocks_3.0 for the 
RocksDB based implementation.

> First version of pluggable storage engine API.
> --
>
> Key: CASSANDRA-13475
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13475
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>
> In order to support a pluggable storage engine, we need to define a unified 
> interface/API that allows us to plug in different storage engines for 
> different requirements. 
> At a very high level, the storage engine interface should include APIs to:
> 1. Apply updates to the engine.
> 2. Query data from the engine.
> 3. Stream data in/out to/from the engine.
> 4. Table operations, like create/drop/truncate a table, etc.
> 5. Various stats about the engine.
> I created this ticket to start the discussion about the interface.
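The five areas listed above could be sketched as a toy Java interface. This is purely illustrative under the ticket's own high-level description: the names `StorageEngine` and `InMemoryEngine` are invented here and are not the API from the linked branch.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: one method group per area from the ticket's list,
// plus a trivial in-memory implementation to make the sketch concrete.
interface StorageEngine {
    void apply(String table, String key, String value);          // 1. apply updates
    String query(String table, String key);                      // 2. query data
    Iterable<Map.Entry<String, String>> streamOut(String table); // 3. stream data out
    void createTable(String table);                              // 4. table operations
    void dropTable(String table);
    Map<String, Long> stats();                                   // 5. engine stats
}

class InMemoryEngine implements StorageEngine {
    private final Map<String, Map<String, String>> tables = new HashMap<>();
    private long writes = 0;

    public void createTable(String table) { tables.putIfAbsent(table, new HashMap<>()); }
    public void dropTable(String table)   { tables.remove(table); }

    // Callers must createTable() before applying updates in this toy engine.
    public void apply(String table, String key, String value) {
        tables.get(table).put(key, value);
        writes++;
    }
    public String query(String table, String key) { return tables.get(table).get(key); }
    public Iterable<Map.Entry<String, String>> streamOut(String table) {
        return tables.get(table).entrySet();
    }
    public Map<String, Long> stats() {
        Map<String, Long> s = new HashMap<>();
        s.put("writes", writes);
        return s;
    }
}
```

A real engine would replace the `HashMap` with its own storage (e.g. RocksDB in the Instagram branch referenced above), but the method groups map one-to-one onto the five areas in the list.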






[jira] [Updated] (CASSANDRA-13431) Streaming error occurred org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe

2017-10-10 Thread Varun Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Gupta updated CASSANDRA-13431:

Attachment: Stream_Error_Broken_Pipe

> Streaming error occurred org.apache.cassandra.io.FSReadError: 
> java.io.IOException: Broken pipe
> --
>
> Key: CASSANDRA-13431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13431
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: ubuntu, cassandra 2.2.7, AWS EC2
>Reporter: krish
>  Labels: features, patch, performance
> Fix For: 2.2.7
>
> Attachments: Stream_Error_Broken_Pipe
>
>
> I am trying to add a node to the cluster. 
> Adding the new node to the cluster fails with a broken pipe; Cassandra fails 
> within 2 minutes of starting. 
> I removed the node from the ring, but adding it back fails as well. 
> OS info:  4.4.0-59-generic #80-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux.
> ERROR [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,410 
> StreamSession.java:532 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] 
> Streaming error occurred
> org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
> at 
> org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:144) 
> ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:91)
>  ~[apache-cassandra-2.2.7.jar:2.2.
>   7]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:88)
>  ~[apache-cassandra-2.2.7.jar:2.2.
>   7]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.applyToChannel(BufferedDataOutputStreamPlus.java:297)
>  ~[apache-cassandra-2.2.7  
> .jar:2.2.7]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:87)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage.serialize(OutgoingFileMessage.java:90)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:48)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:40)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:47)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:389)
>  ~[apache-cassandra-2.2.7  
> .jar:2.2.7]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:361)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> Caused by: java.io.IOException: Broken pipe
> at sun.nio.ch.FileChannelImpl.transferTo0(Native Method) 
> ~[na:1.8.0_101]
> at 
> sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
>  ~[na:1.8.0_101]
> at 
> sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493) 
> ~[na:1.8.0_101]
> at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608) 
> ~[na:1.8.0_101]
> at 
> org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:140) 
> ~[apache-cassandra-2.2.7.jar:2.2.7]
> ... 11 common frames omitted
> INFO  [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,424 
> StreamResultFuture.java:183 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] 
> Session with /  123.120.56.71 
> is complete
> WARN  [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,425 
> StreamResultFuture.java:210 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] 
> Stream failed






[jira] [Commented] (CASSANDRA-13431) Streaming error occurred org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe

2017-10-10 Thread Varun Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16199115#comment-16199115
 ] 

Varun Gupta commented on CASSANDRA-13431:
-

I am seeing a similar error on 3.0.14 as well.

> Streaming error occurred org.apache.cassandra.io.FSReadError: 
> java.io.IOException: Broken pipe
> --
>
> Key: CASSANDRA-13431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13431
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: ubuntu, cassandra 2.2.7, AWS EC2
>Reporter: krish
>  Labels: features, patch, performance
> Fix For: 2.2.7
>
>
> I am trying to add a node to the cluster. 
> Adding the new node to the cluster fails with a broken pipe; Cassandra fails 
> within 2 minutes of starting. 
> I removed the node from the ring, but adding it back fails as well. 
> OS info:  4.4.0-59-generic #80-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux.
> ERROR [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,410 
> StreamSession.java:532 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] 
> Streaming error occurred
> org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
> at 
> org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:144) 
> ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:91)
>  ~[apache-cassandra-2.2.7.jar:2.2.
>   7]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:88)
>  ~[apache-cassandra-2.2.7.jar:2.2.
>   7]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.applyToChannel(BufferedDataOutputStreamPlus.java:297)
>  ~[apache-cassandra-2.2.7  
> .jar:2.2.7]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:87)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage.serialize(OutgoingFileMessage.java:90)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:48)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:40)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:47)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:389)
>  ~[apache-cassandra-2.2.7  
> .jar:2.2.7]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:361)
>  ~[apache-cassandra-2.2.7.jar:2.2.7]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> Caused by: java.io.IOException: Broken pipe
> at sun.nio.ch.FileChannelImpl.transferTo0(Native Method) 
> ~[na:1.8.0_101]
> at 
> sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
>  ~[na:1.8.0_101]
> at 
> sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493) 
> ~[na:1.8.0_101]
> at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608) 
> ~[na:1.8.0_101]
> at 
> org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:140) 
> ~[apache-cassandra-2.2.7.jar:2.2.7]
> ... 11 common frames omitted
> INFO  [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,424 
> StreamResultFuture.java:183 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] 
> Session with /  123.120.56.71 
> is complete
> WARN  [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,425 
> StreamResultFuture.java:210 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] 
> Stream failed






[jira] [Commented] (CASSANDRA-13937) Cassandra node's startup time increased after increase count of big tables

2017-10-10 Thread Andrey Lataev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198850#comment-16198850
 ] 

Andrey Lataev commented on CASSANDRA-13937:
---


I tried replacing LCS with STCS, but without a significant result.

>  Cassandra node's startup time increased after increase count of big tables
> ---
>
> Key: CASSANDRA-13937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13937
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: RHEL 7.3
> JDK HotSpot 1.8.0_121-b13
> cassandra-3.11 cluster with 43 nodes in 9 datacenters
> 8vCPU, 100 GB RAM
>Reporter: Andrey Lataev
> Attachments: cassandra.zip, debug.zip
>
>
> At startup, Cassandra spends a long time reading some big column families.
> For example, in debug.log:
> {code:java}
> grep SSTableReader.java:506 /var/log/cassandra/debug.log
> <...> 
> DEBUG [SSTableBatchOpen:3] 2017-10-04 22:40:05,297 SSTableReader.java:506 - 
> Opening 
> /egov/data/cassandra/datafiles1/p00smevauditbody/messagelogbody20171003-b341cc709c7511e7b1cfed1e90eb03dc/mc-45242-big
>  (19.280MiB)
> DEBUG [SSTableBatchOpen:5] 2017-10-04 22:42:14,188 SSTableReader.java:506 - 
> Opening 
> /egov/data/cassandra/datafiles1/p00smevauditbody/messagelogbody20171004-f82225509d3e11e7b1cfed1e90eb03dc/mc-49002-big
>  (10.607MiB)
> <...>
> DEBUG [SSTableBatchOpen:4] 2017-10-04 22:42:19,792 SSTableReader.java:506 - 
> Opening 
> /egov/data/cassandra/datafiles1/p00smevauditbody/messagelogbody20171004-f82225509d3e11e7b1cfed1e90eb03dc/mc-47907-big
>  (128.172MiB)
> DEBUG [SSTableBatchOpen:1] 2017-10-04 22:44:23,560 SSTableReader.java:506 - 
> Opening 
> /egov/data/cassandra/datafiles1/pk4smevauditbody/messagelogbody20170324-f918bfa0107b11e7adfc2d0b45a372ac/mc-4-big
>  (96.310MiB)
> <..>
> {code}
> SSTableReader.java:506 spent ~2 minutes on every big table in the 
> p00smevauditbody keyspace.
> I planned to keep similar tables for the full month...
> So it seems Cassandra will need more than 1 hour on startup...
> Is it possible to speed up SSTableBatchOpen?






[jira] [Comment Edited] (CASSANDRA-13931) Cassandra JVM stop itself randomly

2017-10-10 Thread Andrey Lataev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198812#comment-16198812
 ] 

Andrey Lataev edited comment on CASSANDRA-13931 at 10/10/17 3:17 PM:
-

I downgraded Cassandra to 3.10, upgraded the JDK to 1.8.0_144, and set
{code:java}
MAX_HEAP_SIZE="9G"
{code}
without changing
{code:java}
JVM_OPTS="$JVM_OPTS -XX:MaxDirectMemorySize=24G"
{code}
But I still periodically have a similar problem with off-heap memory:

{code:java}
#egrep "Dumping|YamlConfigurationLoader.java|ERR" /var/log/cassandra/system.log 
| egrep "2017-10-10 15"
ERROR [NonPeriodicTasks:1] 2017-10-10 15:59:31,155 Ref.java:233 - Error when 
closing class 
org.apache.cassandra.io.sstable.format.SSTableReader$GlobalTidy@954667024:/egov/data/cassandra/datafiles1/p00smevaudit/messagelog20171010-a50f6b00a1f511e78dc897891b876cc2/mc-4357-big
ERROR [NonPeriodicTasks:1] 2017-10-10 15:59:32,103 Ref.java:233 - Error when 
closing class 
org.apache.cassandra.io.sstable.format.SSTableReader$GlobalTidy@1640091777:/egov/data/cassandra/datafiles1/p00smevaudit/messagelog20171010-a50f6b00a1f511e78dc897891b876cc2/mc-4355-big


# egrep "Dumping|YamlConfigurationLoader.java|ERR" 
/var/log/cassandra/system.log | egrep "2017-10-10 16"
ERROR [MessagingService-Incoming-/172.20.4.125] 2017-10-10 16:00:17,421 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.125,5,main]
INFO  [MutationStage-128] 2017-10-10 16:00:17,690 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-196] 2017-10-10 16:00:17,721 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-18] 2017-10-10 16:00:17,754 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-184] 2017-10-10 16:00:17,757 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-235] 2017-10-10 16:00:17,768 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-197] 2017-10-10 16:00:17,769 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-28] 2017-10-10 16:00:17,780 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-2] 2017-10-10 16:00:17,846 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-152] 2017-10-10 16:00:17,873 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-241] 2017-10-10 16:00:17,876 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-223] 2017-10-10 16:00:21,540 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-16] 2017-10-10 16:00:21,540 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-189] 2017-10-10 16:00:21,540 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
ERROR [MessagingService-Incoming-/172.20.4.139] 2017-10-10 16:00:21,540 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.139,5,main]
ERROR [MessagingService-Incoming-/172.20.4.145] 2017-10-10 16:00:21,540 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.145,5,main]
INFO  [MutationStage-224] 2017-10-10 16:00:21,543 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-222] 2017-10-10 16:00:21,545 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-101] 2017-10-10 16:00:21,574 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-40] 2017-10-10 16:00:25,095 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
ERROR [MessagingService-Incoming-/172.20.4.145] 2017-10-10 16:00:25,170 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.145,5,main]
ERROR [MessagingService-Incoming-/172.20.4.109] 2017-10-10 16:00:25,212 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.109,5,main]
ERROR [MessagingService-Incoming-/172.20.4.163] 2017-10-10 16:00:25,213 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.163,5,main]
ERROR [MessagingService-Incoming-/172.20.4.162] 2017-10-10 16:00:25,216 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.162,5,main]
ERROR [MutationStage-128] 2017-10-10 16:00:32,694 

[jira] [Commented] (CASSANDRA-13931) Cassandra JVM stop itself randomly

2017-10-10 Thread Andrey Lataev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198812#comment-16198812
 ] 

Andrey Lataev commented on CASSANDRA-13931:
---

I downgraded Cassandra to 3.10,
upgraded the JDK to 1.8.0_144,
and set
{code:java}
MAX_HEAP_SIZE="9G"
{code}
and did not change 
{code:java}
JVM_OPTS="$JVM_OPTS -XX:MaxDirectMemorySize=24G"
{code}
But I still periodically hit a similar off-heap problem:

{code:java}
# egrep "Dumping|YamlConfigurationLoader.java|ERR" /var/log/cassandra/system.log | egrep "2017-10-10 15"
ERROR [NonPeriodicTasks:1] 2017-10-10 15:59:31,155 Ref.java:233 - Error when 
closing class 
org.apache.cassandra.io.sstable.format.SSTableReader$GlobalTidy@954667024:/egov/data/cassandra/datafiles1/p00smevaudit/messagelog20171010-a50f6b00a1f511e78dc897891b876cc2/mc-4357-big
ERROR [NonPeriodicTasks:1] 2017-10-10 15:59:32,103 Ref.java:233 - Error when 
closing class 
org.apache.cassandra.io.sstable.format.SSTableReader$GlobalTidy@1640091777:/egov/data/cassandra/datafiles1/p00smevaudit/messagelog20171010-a50f6b00a1f511e78dc897891b876cc2/mc-4355-big

# egrep "Dumping|YamlConfigurationLoader.java|ERR" /var/log/cassandra/system.log | egrep "2017-10-10 16"
ERROR [MessagingService-Incoming-/172.20.4.125] 2017-10-10 16:00:17,421 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.125,5,main]
INFO  [MutationStage-128] 2017-10-10 16:00:17,690 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-196] 2017-10-10 16:00:17,721 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-18] 2017-10-10 16:00:17,754 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-184] 2017-10-10 16:00:17,757 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-235] 2017-10-10 16:00:17,768 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-197] 2017-10-10 16:00:17,769 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-28] 2017-10-10 16:00:17,780 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-2] 2017-10-10 16:00:17,846 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-152] 2017-10-10 16:00:17,873 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-241] 2017-10-10 16:00:17,876 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-223] 2017-10-10 16:00:21,540 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-16] 2017-10-10 16:00:21,540 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-189] 2017-10-10 16:00:21,540 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
ERROR [MessagingService-Incoming-/172.20.4.139] 2017-10-10 16:00:21,540 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.139,5,main]
ERROR [MessagingService-Incoming-/172.20.4.145] 2017-10-10 16:00:21,540 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.145,5,main]
INFO  [MutationStage-224] 2017-10-10 16:00:21,543 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-222] 2017-10-10 16:00:21,545 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-101] 2017-10-10 16:00:21,574 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
INFO  [MutationStage-40] 2017-10-10 16:00:25,095 HeapUtils.java:136 - Dumping 
heap to /egov/dumps/cassandra-1507584313-pid17345.hprof ...
ERROR [MessagingService-Incoming-/172.20.4.145] 2017-10-10 16:00:25,170 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.145,5,main]
ERROR [MessagingService-Incoming-/172.20.4.109] 2017-10-10 16:00:25,212 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.109,5,main]
ERROR [MessagingService-Incoming-/172.20.4.163] 2017-10-10 16:00:25,213 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.163,5,main]
ERROR [MessagingService-Incoming-/172.20.4.162] 2017-10-10 16:00:25,216 
CassandraDaemon.java:229 - Exception in thread 
Thread[MessagingService-Incoming-/172.20.4.162,5,main]
ERROR [MutationStage-128] 2017-10-10 16:00:32,694 
JVMStabilityInspector.java:142 - JVM state determined 

[jira] [Commented] (CASSANDRA-13945) How to change from Cassandra 1 to Cassandra 2

2017-10-10 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198799#comment-16198799
 ] 

Jeremy Hanna commented on CASSANDRA-13945:
--

It sounds like Cassandra is not running any longer - I would check the 
output.log or system.log to see why.  One item of note is that Java 8 is 
required for Cassandra 2 so that may be part of it.  Also for this kind of 
question, to get the best responses, I would ask on the Cassandra user list 
instead of using Jira.
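
One quick sanity check along those lines is to parse the major version out of a {{java -version}} string. The version string below is a sample matching the log in this ticket, not live output; in practice you would feed it {{java -version 2>&1 | head -n 1}}:

```shell
# Extract the Java major version from a `java -version`-style string
version_line='java version "1.7.0_55"'   # sample; replace with real `java -version` output
major=$(echo "$version_line" | sed -E 's/.*"1\.([0-9]+).*/\1/')
echo "major=$major"
```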

> How to change from Cassandra 1 to Cassandra 2
> -
>
> Key: CASSANDRA-13945
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13945
> Project: Cassandra
>  Issue Type: Wish
>  Components: Documentation and Website
> Environment: Windows 10 Operating System
>Reporter: nicole wells
>Priority: Minor
>
> I am trying to upgrade Cassandra 1 
> (https://mindmajix.com/apache-cassandra-training) to Cassandra 2. To do that 
> I upgraded Java (to Java 7), but whenever I execute {{cassandra}}, it 
> launches like this:
> {code:java}
> INFO 17:32:41,413 Logging initialized INFO 17:32:41,437 Loading
> settings from file:/etc/cassandra/cassandra.yaml INFO 17:32:41,642
> Data files directories: [/var/lib/cassandra/data] INFO 17:32:41,643
> Commit log directory: /var/lib/cassandra/commitlog INFO 17:32:41,643
> DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
> INFO 17:32:41,643 disk_failure_policy is stop INFO 17:32:41,643
> commit_failure_policy is stop INFO 17:32:41,647 Global memtable
> threshold is enabled at 986MB INFO 17:32:41,727 Not using
> multi-threaded compaction INFO 17:32:41,869 JVM vendor/version:
> OpenJDK 64-Bit Server VM/1.7.0_55 WARN 17:32:41,869 OpenJDK is not
> recommended. Please upgrade to the newest Oracle Java release INFO
> 17:32:41,869 Heap size: 4137680896/4137680896 INFO 17:32:41,870 Code
> Cache Non-heap memory: init = 2555904(2496K) used = 657664(642K)
> committed = 2555904(2496K) max = 50331648(49152K) INFO 17:32:41,870
> Par Eden Space Heap memory: init = 335544320(327680K) used =
> 80545080(78657K) committed = 335544320(327680K) max =
> 335544320(327680K) INFO 17:32:41,870 Par Survivor Space Heap memory:
> init = 41943040(40960K) used = 0(0K) committed = 41943040(40960K) max
> = 41943040(40960K) INFO 17:32:41,870 CMS Old Gen Heap memory: init = 
> 3760193536(3672064K) used = 0(0K) committed = 3760193536(3672064K) max
> = 3760193536(3672064K) INFO 17:32:41,872 CMS Perm Gen Non-heap memory: init = 
> 21757952(21248K) used = 14994304(14642K) committed =
> 21757952(21248K) max = 174063616(169984K) INFO 17:32:41,872 Classpath:
> /etc/cassandra:/usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/guava-15.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jline-1.0.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.1.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/lz4-1.2.0.jar:/usr/share/cassandra/lib/metrics-core-2.2.0.jar:/usr/share/cassandra/lib/netty-3.6.6.Final.jar:/usr/share/cassandra/lib/reporter-config-2.1.0.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.7.2.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.7.2.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.5.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-internal-only-0.3.3.jar:/usr/share/cassandra/apache-cassandra-2.0.8.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/apache-cassandra-thrift-2.0.8.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.2.5.jar
> INFO 17:32:41,873 JNA not found. Native methods will be disabled. INFO
> 17:32:41,884 Initializing key cache with capacity of 100 MBs. INFO
> 17:32:41,890 Scheduling key cache save to each 14400 seconds (going to
> save all keys). INFO 17:32:41,890 Initializing row cache with capacity
> of 0 MBs INFO 17:32:41,895 Scheduling row cache save to each 0 seconds
> (going to save all keys). INFO 17:32:41,968 Initializing
> system.schema_triggers INFO 17:32:41,985 Initializing
> system.compaction_history INFO 17:32:41,988 Initializing
> system.batchlog INFO 17:32:41,991 Initializing 

[jira] [Updated] (CASSANDRA-13931) Cassandra JVM stop itself randomly

2017-10-10 Thread Andrey Lataev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Lataev updated CASSANDRA-13931:
--
Since Version: 3.10  (was: 3.11.0)

> Cassandra JVM stop itself randomly
> --
>
> Key: CASSANDRA-13931
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13931
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: RHEL 7.3
> JDK HotSpot 1.8.0_121-b13
> cassandra-3.11 cluster with 43 nodes in 9 datacenters
> 8vCPU, 32 GB RAM
>Reporter: Andrey Lataev
> Attachments: cassandra-env.sh, cassandra.yaml, 
> system.log.2017-10-01.zip
>
>
> Before I set -XX:MaxDirectMemorySize, I received OOM kills at the OS level, 
> like:
> # grep "Out of" /var/log/messages-20170918
> Sep 16 06:54:07 p00skimnosql04 kernel: Out of memory: Kill process 26619 
> (java) score 287 or sacrifice child
> Sep 16 06:54:07 p00skimnosql04 kernel: Out of memory: Kill process 26640 
> (java) score 289 or sacrifice child
> If I set the -XX:MaxDirectMemorySize=5G limit, then I periodically begin to 
> receive:
> HeapUtils.java:136 - Dumping heap to 
> /egov/dumps/cassandra-1506868110-pid11155.hprof
> It seems like the JVM kills itself when off-heap memory leaks occur.
> Typical errors in system.log before the JVM begins dumping:
> ERROR [MessagingService-Incoming-/172.20.4.143] 2017-10-01 19:00:36,336 
> CassandraDaemon.java:228 - Exception in thread 
> Thread[MessagingService-Incoming-/172.20.4.143,5,main]
> ERROR [Native-Transport-Requests-139] 2017-10-01 19:04:02,675 
> Message.java:625 - Unexpected exception during request; channel = [id: 
> 0x3c0c1c26, L:/172.20.4.142:9042 - R:/172.20.4.139:44874]
> Full stack traces:
> ERROR [Native-Transport-Requests-139] 2017-10-01 19:04:02,675 
> Message.java:625 - Unexpected exception during request; channel = [id: 
> 0x3c0c1c26, L:/172.20.4.142:9042 -
> R:/172.20.4.139:44874]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.transport.ServerConnection.applyStateTransition(ServerConnection.java:97)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:521)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [apache-cassandra-3.11.0.jar:3.1
> 1.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.11.0.jar:3.11.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> INFO  [MutationStage-127] 2017-10-01 19:08:24,255 HeapUtils.java:136 - 
> Dumping heap to /egov/dumps/cassandra-1506868110-pid11155.hprof ...
> Heap dump file created
> ERROR [MessagingService-Incoming-/172.20.4.143] 2017-10-01 19:08:33,493 
> CassandraDaemon.java:228 - Exception in thread 
> Thread[MessagingService-Incoming-/172.20.4.143,5,main]
> java.io.IOError: java.io.EOFException: Stream ended prematurely
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:227)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:215)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:839)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:800)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:415)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>

[jira] [Commented] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD

2017-10-10 Thread Dan Kinder (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198777#comment-16198777
 ] 

Dan Kinder commented on CASSANDRA-13943:


[~krummas] it looks like this latest patch does not include the changes from 
the simple-cache patch; I'm assuming I should leave that one applied, i.e. use 
both patches?

> Infinite compaction of L0 SSTables in JBOD
> --
>
> Key: CASSANDRA-13943
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13943
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.11.0 / Centos 6
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: debug.log
>
>
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely compacting the same data over and over. 
> Attaching logs.
> It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It 
> does create new SSTables but immediately recompacts again. Note that I am not 
> inserting anything at the moment, there is no flushing happening on this 
> table (Memtable switch count has not changed).
> My theory is that it somehow thinks those should be compaction candidates. 
> But they shouldn't be, they are on different disks and I ran nodetool 
> relocatesstables as well as nodetool compact. So, it tries to compact them 
> together, but the compaction results in the exact same 2 SSTables on the 2 
> disks, because the keys are split by data disk.
> This is pretty serious, because all our nodes right now are consuming CPU 
> doing this for multiple tables, it seems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD

2017-10-10 Thread Dan Kinder (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198778#comment-16198778
 ] 

Dan Kinder commented on CASSANDRA-13943:


Oops, didn't see your message. Got it.

> Infinite compaction of L0 SSTables in JBOD
> --
>
> Key: CASSANDRA-13943
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13943
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.11.0 / Centos 6
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: debug.log
>
>
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely compacting the same data over and over. 
> Attaching logs.
> It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It 
> does create new SSTables but immediately recompacts again. Note that I am not 
> inserting anything at the moment, there is no flushing happening on this 
> table (Memtable switch count has not changed).
> My theory is that it somehow thinks those should be compaction candidates. 
> But they shouldn't be, they are on different disks and I ran nodetool 
> relocatesstables as well as nodetool compact. So, it tries to compact them 
> together, but the compaction results in the exact same 2 SSTables on the 2 
> disks, because the keys are split by data disk.
> This is pretty serious, because all our nodes right now are consuming CPU 
> doing this for multiple tables, it seems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD

2017-10-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198770#comment-16198770
 ] 

Marcus Eriksson commented on CASSANDRA-13943:
-

[~dkinder] you should probably revert that patch as it doesn't invalidate the 
cache.

I'll hopefully post my patch to 13215 today

> Infinite compaction of L0 SSTables in JBOD
> --
>
> Key: CASSANDRA-13943
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13943
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.11.0 / Centos 6
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: debug.log
>
>
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely compacting the same data over and over. 
> Attaching logs.
> It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It 
> does create new SSTables but immediately recompacts again. Note that I am not 
> inserting anything at the moment, there is no flushing happening on this 
> table (Memtable switch count has not changed).
> My theory is that it somehow thinks those should be compaction candidates. 
> But they shouldn't be, they are on different disks and I ran nodetool 
> relocatesstables as well as nodetool compact. So, it tries to compact them 
> together, but the compaction results in the exact same 2 SSTables on the 2 
> disks, because the keys are split by data disk.
> This is pretty serious, because all our nodes right now are consuming CPU 
> doing this for multiple tables, it seems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD

2017-10-10 Thread Dan Kinder (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198769#comment-16198769
 ] 

Dan Kinder commented on CASSANDRA-13943:


Yeah I am running with the patch from 
https://issues.apache.org/jira/browse/CASSANDRA-13215

I'll try that latest patch today. Thanks [~krummas]

> Infinite compaction of L0 SSTables in JBOD
> --
>
> Key: CASSANDRA-13943
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13943
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.11.0 / Centos 6
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: debug.log
>
>
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely compacting the same data over and over. 
> Attaching logs.
> It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It 
> does create new SSTables but immediately recompacts again. Note that I am not 
> inserting anything at the moment, there is no flushing happening on this 
> table (Memtable switch count has not changed).
> My theory is that it somehow thinks those should be compaction candidates. 
> But they shouldn't be, they are on different disks and I ran nodetool 
> relocatesstables as well as nodetool compact. So, it tries to compact them 
> together, but the compaction results in the exact same 2 SSTables on the 2 
> disks, because the keys are split by data disk.
> This is pretty serious, because all our nodes right now are consuming CPU 
> doing this for multiple tables, it seems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with tunable storage requirements

2017-10-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198763#comment-16198763
 ] 

Ariel Weisberg commented on CASSANDRA-13442:


bq. 1) It is not working with ONE or LOCAL_ONE. Of course transient replication 
is an opt-in feature, but it means users should be super-careful about issuing 
queries at ONE/LOCAL_ONE for keyspaces that have transient replication 
enabled. Considering that ONE/LOCAL_ONE is the default consistency level for 
drivers and the Spark connector, maybe we should throw an exception whenever a 
query at those consistency levels is issued against transiently replicated 
keyspaces?
With just transient replication, ONE and LOCAL_ONE continue to work correctly, 
although anything token-aware will need to be updated to get correct 
token-aware behavior. Coordinators will always route ONE and LOCAL_ONE to a 
full replica. Thanks for pointing this out; I had missed the impact on 
token-aware routing.

With cheap quorums, read at ONE and write at ALL works as you would expect. 
What won't work as you would expect is read at ONE and write at something 
less. We will need to recognize that caveat and do something about it: either 
documentation, errors, or a change in functionality.

bq. 2) Consistency level and repair have been 2 distinct and orthogonal notions 
so far. With transient replication they are strongly tied: indeed, transient 
replication relies heavily on incremental repair. Of course it is an 
implementation detail; Ariel Weisberg has mentioned replicated hints as another 
implementation alternative, but in that case we're making transient 
replication dependent on the hints implementation. Same story
Yes, you have to have some means of implementing transient replication with 
reasonable efficiency.

bq. Saying 10-20x is really misleading. No one is actually going to see a 10 - 
20x improvement in disk usage. Even a reduction of 1/3 would be optimistic I'm 
sure.
It's certainly use-case specific. It really depends on your outage lengths, 
host-replacement SLA, and the rate at which you rewrite your data set. If most 
of your data is at rest, it's easily 100x. If you overwrite your data set 
every 24 hours, have a node failure, a 24-hour host-replacement SLA, and 16 
vnodes, then in the worst case you will only have 1/48 additional data for 24 
hours at RF=3. Larger-scale failures like the loss of an entire rack might be 
worse; I need to think about it more.
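
One plausible reading of the 1/48 figure (my arithmetic, not spelled out in the comment): the failed node's ranges spread across its 16 vnodes, and at RF=3 each surviving node picks up roughly the overwritten fraction divided by vnodes times RF:

```shell
# Hedged back-of-the-envelope for the "1/48 additional data" claim
vnodes=16   # vnodes per node, from the comment
rf=3        # replication factor
denom=$((vnodes * rf))
echo "worst-case extra data ~ 1/${denom} of a node's data for the outage window"
```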

There is nothing magical about the results you will get from transient 
replication. If the transient replicas can't drop the data or spread it out 
across multiple nodes on failure then you won't benefit.

bq. Let's not pretend people running vnodes can actually run repairs.
Can you elaborate? I'm not an expert on the challenges of running repairs with 
vnodes other than the sheer number of them. Is this something that gets better 
with the new allocation algorithm and using fewer vnodes? In other words, if 
running 16 vnodes were practical, would repair still not be viable?

Issues with repair are a reason for having alternatives like hint-based 
transient replicas. The issue with those is that they don't work for 
heavy-overwrite workloads that can fill a disk in 24 hours.

> Support a means of strongly consistent highly available replication with 
> tunable storage requirements
> -
>
> Key: CASSANDRA-13442
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13442
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Coordination, Distributed Metadata, Local 
> Write-Read Paths
>Reporter: Ariel Weisberg
>
> Replication factors like RF=2 can't provide strong consistency and 
> availability because if a single node is lost it's impossible to reach a 
> quorum of replicas. Stepping up to RF=3 will allow you to lose a node and 
> still achieve quorum for reads and writes, but requires committing additional 
> storage.
> The requirement of a quorum for writes/reads doesn't seem to be something 
> that can be relaxed without additional constraints on queries, but it seems 
> like it should be possible to relax the requirement that 3 full copies of the 
> entire data set are kept. What is actually required is a covering data set 
> for the range and we should be able to achieve a covering data set and high 
> availability without having three full copies. 
> After a repair we know that some subset of the data set is fully replicated. 
> At that point we don't have to read from a quorum of nodes for the repaired 
> data. It is sufficient to read from a single node for the repaired data and a 
> quorum of nodes for the unrepaired data.
> One way to exploit this would be to have N replicas, say the last N replicas 
> (where N varies with RF) in the preference list, delete all 

[jira] [Resolved] (CASSANDRA-13923) Flushers blocked due to many SSTables

2017-10-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-13923.
-
Resolution: Duplicate

I think this is a dupe of CASSANDRA-13215

> Flushers blocked due to many SSTables
> -
>
> Key: CASSANDRA-13923
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13923
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
> Environment: Cassandra 3.11.0
> Centos 6 (downgraded JNA)
> 64GB RAM
> 12-disk JBOD
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: cassandra-jstack-readstage.txt, cassandra-jstack.txt
>
>
> This started on the mailing list and I'm not 100% sure of the root cause, 
> feel free to re-title if needed.
> I just upgraded Cassandra from 2.2.6 to 3.11.0. Within a few hours of serving 
> traffic, thread pools begin to back up and grow pending tasks indefinitely. 
> This happens to multiple different stages (Read, Mutation) and consistently 
> builds pending tasks for MemtablePostFlush and MemtableFlushWriter.
> Using jstack shows that there is blocking going on when trying to call 
> getCompactionCandidates, which seems to happen on flush. We have fairly large 
> nodes that have ~15,000 SSTables per node, all LCS.
> It seems like this can cause reads to get blocked because they try to acquire 
> a read lock when calling shouldDefragment.
> And writes, of course, block once we can't allocate any more memtables, 
> because flushes are backed up.
> We did not have this problem in 2.2.6, so it seems like there is some 
> regression causing it to be incredibly slow trying to do calls like 
> getCompactionCandidates that list out the SSTables.
> In our case this causes nodes to build up pending tasks and simply stop 
> responding to requests.
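
As a rough way to gauge SSTable counts like the ~15,000 mentioned above, counting {{*Data.db}} components under the data directories works. Sketched here against a throwaway directory rather than a real {{/var/lib/cassandra/data}}:

```shell
# Count SSTables by their Data.db component (demo on a temp dir)
dir=$(mktemp -d)
touch "$dir/mc-1-big-Data.db" "$dir/mc-2-big-Data.db" "$dir/mc-2-big-Index.db"
count=$(find "$dir" -name '*Data.db' | wc -l | tr -d ' ')
echo "sstables: $count"   # Index.db and other components are not counted
rm -r "$dir"
```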



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13813) Don't let user drop (or generally break) tables in system_distributed

2017-10-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198490#comment-16198490
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-13813 at 10/10/17 1:15 PM:
-

[~slebresne] I can/will extend the patch with a new {{reloadlocalschema}} JMX 
call and a nodetool cmd when/if you warm up to it sufficiently (:

EDIT: Actually, never mind. I'll do it either way, in a separate JIRA, for 
cleanliness' sake; it's independently useful anyway. Will poke you on it 
once ready.


was (Author: iamaleksey):
[~slebresne] I can/will extend the patch with a new {{reloadlocalschema}} JMX 
call and a nodetool cmd when/if you warm up to it sufficiently (:

> Don't let user drop (or generally break) tables in system_distributed
> -
>
> Key: CASSANDRA-13813
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13813
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Sylvain Lebresne
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.x, 3.11.x
>
>
> There are currently no particular restrictions on schema modifications to 
> tables of the {{system_distributed}} keyspace. This means you can drop 
> those tables, or even alter them in wrong ways like dropping or renaming 
> columns. All of which is guaranteed to break things (that is, repair if you 
> mess with one of its tables, or MVs if you mess with 
> {{view_build_status}}).
> I'm pretty sure this was never intended and is an oversight of the condition 
> on {{ALTERABLE_SYSTEM_KEYSPACES}} in 
> [ClientState|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/ClientState.java#L397].
>  That condition is such that any keyspace not listed in 
> {{ALTERABLE_SYSTEM_KEYSPACES}} (which happens to be the case for 
> {{system_distributed}}) has no specific restrictions whatsoever, while given 
> the naming it's fair to assume the intention was exactly the opposite.
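The intended semantics of the allow-list can be sketched as follows. This is a hypothetical simplification, not Cassandra's actual code: the class name, method name, and the contents of the set are assumptions; the real list and condition live in {{ClientState}}.

```java
import java.util.Set;

// Hypothetical sketch of the *intended* check: a system keyspace may only be
// altered if it is explicitly allow-listed. The reported bug is that the real
// condition instead left keyspaces absent from the list (such as
// system_distributed) completely unrestricted.
public class AlterCheck {
    // Assumed contents for illustration only.
    static final Set<String> ALTERABLE_SYSTEM_KEYSPACES = Set.of("system_auth");

    static boolean mayAlter(String keyspace) {
        return ALTERABLE_SYSTEM_KEYSPACES.contains(keyspace);
    }
}
```

Under these semantics, an {{ALTER}} against {{system_distributed}} would be rejected rather than silently allowed.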






[jira] [Created] (CASSANDRA-13946) Updating row TTL without updating values

2017-10-10 Thread Tomer (JIRA)
Tomer  created CASSANDRA-13946:
--

 Summary: Updating row TTL without updating values
 Key: CASSANDRA-13946
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13946
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, CQL
Reporter: Tomer 
Priority: Trivial









[jira] [Updated] (CASSANDRA-13930) Avoid grabbing the read lock when checking LCS fanout and if compaction strategy should do defragmentation

2017-10-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13930:

Summary: Avoid grabbing the read lock when checking LCS fanout and if 
compaction strategy should do defragmentation  (was: Avoid grabbing the read 
lock when checking if compaction strategy should do defragmentation)

> Avoid grabbing the read lock when checking LCS fanout and if compaction 
> strategy should do defragmentation
> --
>
> Key: CASSANDRA-13930
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13930
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.11.2, 4.0
>
>
> We grab the read lock when checking whether the compaction strategy benefits 
> from defragmentation, avoid that.
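The fix, sketched below in simplified form, caches the answers while holding the write lock during strategy (re)load, so the hot read path never takes a lock. Field and method names are borrowed from the patch, but the class itself and the volatile-based wiring are an illustrative assumption, not the committed code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified illustration of the pattern: compute values under the write
// lock when compaction strategies are (re)loaded, then serve them from
// fields on the hot path without any lock acquisition.
public class CachedStrategyInfo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private volatile boolean shouldDefragment;
    private volatile int fanout = 10; // assumed LCS default fanout

    // Rare path: called when compaction parameters change.
    public void reload(boolean defrag, int newFanout) {
        lock.writeLock().lock();
        try {
            this.shouldDefragment = defrag;
            this.fanout = newFanout;
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Hot path: just a volatile read, no lock contention with flushes.
    public boolean shouldDefragment() { return shouldDefragment; }
    public int getLevelFanoutSize() { return fanout; }
}
```

The point of the trade: readers can no longer block behind a writer holding the lock, at the cost of the cached values being refreshed only on reload.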






[1/3] cassandra git commit: Avoid locks when checking LCS fanout and if we should do read-time defragmentation

2017-10-10 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 3d09901b4 -> f3cf1c019
  refs/heads/trunk 2ecadc88e -> 7ef4ff30c


Avoid locks when checking LCS fanout and if we should do read-time 
defragmentation

Patch by marcuse; reviewed by Jeff Jirsa for CASSANDRA-13930


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f3cf1c01
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f3cf1c01
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f3cf1c01

Branch: refs/heads/cassandra-3.11
Commit: f3cf1c019e0298dd04f6a0d7396b5fe4a93e6f9a
Parents: 3d09901
Author: Marcus Eriksson 
Authored: Tue Oct 3 10:27:32 2017 +0200
Committer: Marcus Eriksson 
Committed: Tue Oct 10 12:49:01 2017 +0200

--
 CHANGES.txt |  1 +
 .../compaction/CompactionStrategyManager.java   | 30 +---
 2 files changed, 8 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3cf1c01/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 879397b..81444d2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.2
+ * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
 Merged from 3.0:
  * Mishandling of cells for removed/dropped columns when reading legacy files 
(CASSANDRA-13939)
  * Deserialise sstable metadata in nodetool verify (CASSANDRA-13922)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3cf1c01/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
index df89e53..94def2a 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
@@ -81,6 +81,8 @@ public class CompactionStrategyManager implements 
INotificationConsumer
  */
 private volatile CompactionParams schemaCompactionParams;
 private Directories.DataDirectory[] locations;
+private boolean shouldDefragment;
+private int fanout;
 
 public CompactionStrategyManager(ColumnFamilyStore cfs)
 {
@@ -92,6 +94,7 @@ public class CompactionStrategyManager implements 
INotificationConsumer
 params = cfs.metadata.params.compaction;
 locations = getDirectories().getWriteableLocations();
 enabled = params.isEnabled();
+
 }
 
 /**
@@ -182,6 +185,8 @@ public class CompactionStrategyManager implements 
INotificationConsumer
 }
 repaired.forEach(AbstractCompactionStrategy::startup);
 unrepaired.forEach(AbstractCompactionStrategy::startup);
+shouldDefragment = repaired.get(0).shouldDefragment();
+fanout = (repaired.get(0) instanceof LeveledCompactionStrategy) ? 
((LeveledCompactionStrategy) repaired.get(0)).getLevelFanoutSize() : 
LeveledCompactionStrategy.DEFAULT_LEVEL_FANOUT_SIZE;
 }
 finally
 {
@@ -343,19 +348,7 @@ public class CompactionStrategyManager implements 
INotificationConsumer
 
 public int getLevelFanoutSize()
 {
-readLock.lock();
-try
-{
-if (repaired.get(0) instanceof LeveledCompactionStrategy)
-{
-return ((LeveledCompactionStrategy) 
repaired.get(0)).getLevelFanoutSize();
-}
-}
-finally
-{
-readLock.unlock();
-}
-return LeveledCompactionStrategy.DEFAULT_LEVEL_FANOUT_SIZE;
+return fanout;
 }
 
 public int[] getSSTableCountPerLevel()
@@ -403,16 +396,7 @@ public class CompactionStrategyManager implements 
INotificationConsumer
 
 public boolean shouldDefragment()
 {
-readLock.lock();
-try
-{
-assert 
repaired.get(0).getClass().equals(unrepaired.get(0).getClass());
-return repaired.get(0).shouldDefragment();
-}
-finally
-{
-readLock.unlock();
-}
+return shouldDefragment;
 }
 
 public Directories getDirectories()





[2/3] cassandra git commit: Avoid locks when checking LCS fanout and if we should do read-time defragmentation

2017-10-10 Thread marcuse
Avoid locks when checking LCS fanout and if we should do read-time 
defragmentation

Patch by marcuse; reviewed by Jeff Jirsa for CASSANDRA-13930


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f3cf1c01
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f3cf1c01
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f3cf1c01

Branch: refs/heads/trunk
Commit: f3cf1c019e0298dd04f6a0d7396b5fe4a93e6f9a
Parents: 3d09901
Author: Marcus Eriksson 
Authored: Tue Oct 3 10:27:32 2017 +0200
Committer: Marcus Eriksson 
Committed: Tue Oct 10 12:49:01 2017 +0200




[jira] [Updated] (CASSANDRA-13930) Avoid grabbing the read lock when checking if compaction strategy should do defragmentation

2017-10-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13930:

   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   4.0
   3.11.2
   Status: Resolved  (was: Ready to Commit)

and committed as {{f3cf1c019e0298dd04f6a0d7396b5fe4a93e6f9a}}, thanks! 

> Avoid grabbing the read lock when checking if compaction strategy should do 
> defragmentation
> ---
>
> Key: CASSANDRA-13930
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13930
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.11.2, 4.0
>
>
> We grab the read lock when checking whether the compaction strategy benefits 
> from defragmentation, avoid that.






[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-10-10 Thread marcuse
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7ef4ff30
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7ef4ff30
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7ef4ff30

Branch: refs/heads/trunk
Commit: 7ef4ff30c66eb1554e856d89d3d35c33dcaaeed7
Parents: 2ecadc8 f3cf1c0
Author: Marcus Eriksson 
Authored: Tue Oct 10 12:56:12 2017 +0200
Committer: Marcus Eriksson 
Committed: Tue Oct 10 12:56:12 2017 +0200

--
 CHANGES.txt |  1 +
 .../compaction/CompactionStrategyManager.java   | 30 +---
 2 files changed, 8 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ef4ff30/CHANGES.txt
--
diff --cc CHANGES.txt
index b8d22cb,81444d2..2454c4f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,155 -1,5 +1,156 @@@
 +4.0
 + * Refactor GcCompactionTest to avoid boxing (CASSANDRA-13941)
 + * Checksum sstable metadata (CASSANDRA-13321)
 + * Expose recent histograms in JmxHistograms (CASSANDRA-13642)
 + * Fix buffer length comparison when decompressing in netty-based streaming 
(CASSANDRA-13899)
 + * Properly close StreamCompressionInputStream to release any ByteBuf 
(CASSANDRA-13906)
 + * Add SERIAL and LOCAL_SERIAL support for cassandra-stress (CASSANDRA-13925)
 + * LCS needlessly checks for L0 STCS candidates multiple times 
(CASSANDRA-12961)
 + * Correctly close netty channels when a stream session ends (CASSANDRA-13905)
 + * Update lz4 to 1.4.0 (CASSANDRA-13741)
 + * Optimize Paxos prepare and propose stage for local requests 
(CASSANDRA-13862)
 + * Throttle base partitions during MV repair streaming to prevent OOM 
(CASSANDRA-13299)
 + * Use compaction threshold for STCS in L0 (CASSANDRA-13861)
 + * Fix problem with min_compress_ratio: 1 and disallow ratio < 1 
(CASSANDRA-13703)
 + * Add extra information to SASI timeout exception (CASSANDRA-13677)
 + * Add incremental repair support for --hosts, --force, and subrange repair 
(CASSANDRA-13818)
 + * Rework CompactionStrategyManager.getScanners synchronization 
(CASSANDRA-13786)
 + * Add additional unit tests for batch behavior, TTLs, Timestamps 
(CASSANDRA-13846)
 + * Add keyspace and table name in schema validation exception 
(CASSANDRA-13845)
 + * Emit metrics whenever we hit tombstone failures and warn thresholds 
(CASSANDRA-13771)
 + * Make netty EventLoopGroups daemon threads (CASSANDRA-13837)
 + * Race condition when closing stream sessions (CASSANDRA-13852)
 + * NettyFactoryTest is failing in trunk on macOS (CASSANDRA-13831)
 + * Allow changing log levels via nodetool for related classes 
(CASSANDRA-12696)
 + * Add stress profile yaml with LWT (CASSANDRA-7960)
 + * Reduce memory copies and object creations when acting on ByteBufs 
(CASSANDRA-13789)
 + * Simplify mx4j configuration (Cassandra-13578)
 + * Fix trigger example on 4.0 (CASSANDRA-13796)
 + * Force minumum timeout value (CASSANDRA-9375)
 + * Use netty for streaming (CASSANDRA-12229)
 + * Use netty for internode messaging (CASSANDRA-8457)
 + * Add bytes repaired/unrepaired to nodetool tablestats (CASSANDRA-13774)
 + * Don't delete incremental repair sessions if they still have sstables 
(CASSANDRA-13758)
 + * Fix pending repair manager index out of bounds check (CASSANDRA-13769)
 + * Don't use RangeFetchMapCalculator when RF=1 (CASSANDRA-13576)
 + * Don't optimise trivial ranges in RangeFetchMapCalculator (CASSANDRA-13664)
 + * Use an ExecutorService for repair commands instead of new 
Thread(..).start() (CASSANDRA-13594)
 + * Fix race / ref leak in anticompaction (CASSANDRA-13688)
 + * Expose tasks queue length via JMX (CASSANDRA-12758)
 + * Fix race / ref leak in PendingRepairManager (CASSANDRA-13751)
 + * Enable ppc64le runtime as unsupported architecture (CASSANDRA-13615)
 + * Improve sstablemetadata output (CASSANDRA-11483)
 + * Support for migrating legacy users to roles has been dropped 
(CASSANDRA-13371)
 + * Introduce error metrics for repair (CASSANDRA-13387)
 + * Refactoring to primitive functional interfaces in AuthCache 
(CASSANDRA-13732)
 + * Update metrics to 3.1.5 (CASSANDRA-13648)
 + * batch_size_warn_threshold_in_kb can now be set at runtime (CASSANDRA-13699)
 + * Avoid always rebuilding secondary indexes at startup (CASSANDRA-13725)
 + * Upgrade JMH from 1.13 to 1.19 (CASSANDRA-13727)
 + * Upgrade SLF4J from 1.7.7 to 1.7.25 (CASSANDRA-12996)
 + * Default for start_native_transport now true if not set in config 
(CASSANDRA-13656)
 + * Don't add localhost to the graph when calculating where to stream from 
(CASSANDRA-13583)
 + * Make CDC availability more deterministic via hard-linking (CASSANDRA-12148)
 + * Allow skipping 

[jira] [Commented] (CASSANDRA-13813) Don't let user drop (or generally break) tables in system_distributed

2017-10-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198490#comment-16198490
 ] 

Aleksey Yeschenko commented on CASSANDRA-13813:
---

[~slebresne] I can/will extend the patch with a new {{reloadlocalschema}} JMX 
call and a nodetool cmd when/if you warm up to it sufficiently (:







[jira] [Commented] (CASSANDRA-13813) Don't let user drop (or generally break) tables in system_distributed

2017-10-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198482#comment-16198482
 ] 

Aleksey Yeschenko commented on CASSANDRA-13813:
---

bq. My only bother is that while I haven't actually tried it recently, last 
time I did try updating the schema tables manually, it was annoying because the 
changes were not automatically picked up and in fact tended to be overridden, 
so I had to force a reload in weird ways (altering some other unrelated table 
in the keyspace, which here would actually be an issue). So it would be nice if 
we added a JMX call to force reloading schema tables from disk to make this 
easier (should be easy). If we do, I'm warming up to the idea of considering 
this the only really safe work-around until we find a better way to deal with 
all this.

Yep. You either do an unrelated {{ALTER}} (usually {{WITH comment = ...}}) or 
bounce the node. Also wouldn't mind at all adding the new JMX call, as a 
companion to {{resetlocalschema}}.







[jira] [Commented] (CASSANDRA-13910) Consider deprecating (then removing) read_repair_chance/dclocal_read_repair_chance

2017-10-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198459#comment-16198459
 ] 

Sylvain Lebresne commented on CASSANDRA-13910:
--

[~iamaleksey]: I'll try to write a patch before eow. I'll ping you though if it 
looks like this is not going to happen.

> Consider deprecating (then removing) 
> read_repair_chance/dclocal_read_repair_chance
> --
>
> Key: CASSANDRA-13910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13910
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: CommunityFeedbackRequested
> Fix For: 4.0, 3.11.x
>
>
> First, let me clarify, so this is not misunderstood, that I'm not *at all* 
> suggesting we remove the read-repair mechanism of detecting and repairing 
> inconsistencies between read responses: that mechanism is imo fine and 
> useful. But {{read_repair_chance}} and {{dclocal_read_repair_chance}} 
> have never been about _enabling_ that mechanism; they are about querying all 
> replicas (even when this is not required by the consistency level) for the 
> sole purpose of maybe read-repairing some of the replicas that wouldn't have 
> been queried otherwise. Which, btw, brings me to reason 1 for considering 
> their removal: their naming/behavior is super confusing. Over the years, I've 
> seen countless users (and not only newbies) misunderstand what those options 
> do, and as a consequence misunderstand when read repair itself was happening.
> But my 2nd reason for suggesting this is that I suspect 
> {{read_repair_chance}}/{{dclocal_read_repair_chance}} are, especially 
> nowadays, more harmful than anything else when enabled. When those options 
> kick in, what you trade off is additional resource consumption (all nodes 
> have to execute the read) for a _fairly remote chance_ of having some 
> inconsistencies repaired on _some_ replicas _a bit faster_ than they would 
> otherwise be. To justify that last part, let's recall that:
> # most inconsistencies are actually fixed by hints in practice; and in the 
> case where a node stays dead for so long that hints end up timing out, 
> you really should repair the node when it comes back (if not simply 
> re-bootstrap it). Read repair probably doesn't fix _that_ much stuff in 
> the first place.
> # again, read repair does happen without those options kicking in. If you do 
> reads at {{QUORUM}}, inconsistencies will eventually get read-repaired all 
> the same, just a tiny bit less quickly.
> # I suspect almost everyone uses a low "chance" for those options at best 
> (because the extra resource consumption is real), so at the end of the day, 
> it's up to chance how much faster this fixes inconsistencies.
> Overall, I'm having a hard time imagining real cases where that trade-off 
> really makes sense. Don't get me wrong, those options had their place a long 
> time ago when hints weren't working all that well, but I think they bring 
> more confusion than benefit now.
> And I think it's sane to reconsider things every once in a while, and to 
> clean up anything that may not make all that much sense anymore, which I 
> think is the case here.
> Tl;dr, I feel the benefits brought by those options are very slim at best, 
> well overshadowed by the confusion they bring, and not worth maintaining the 
> code that supports them (which, to be fair, isn't huge, but getting rid of 
> {{ReadCallback.AsyncRepairRunner}} wouldn't hurt, for instance).
> Lastly, if the consensus here ends up being that they can have their uses in 
> weird cases and that we feel supporting those cases is worth confusing 
> everyone else and maintaining that code, I would still suggest disabling them 
> entirely by default.
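What the chance-based options actually do can be reduced to a coin flip on the read path. The sketch below is illustrative only, not Cassandra's code: with probability {{read_repair_chance}}, all replicas are contacted even when the consistency level doesn't require it.

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch: the *_chance options only decide whether to contact
// extra replicas on a given read, for a chance of repairing them sooner.
// Repairing a detected digest mismatch among the replicas that *were*
// contacted happens regardless of these options.
public class SpeculativeReadRepair {
    public static boolean queryAllReplicas(double readRepairChance) {
        return ThreadLocalRandom.current().nextDouble() < readRepairChance;
    }
}
```

At the default chance of 0.0 the extra queries never happen, which is why removing the options mostly changes nothing for typical deployments.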






[jira] [Assigned] (CASSANDRA-13910) Consider deprecating (then removing) read_repair_chance/dclocal_read_repair_chance

2017-10-10 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-13910:


Assignee: Sylvain Lebresne







[jira] [Commented] (CASSANDRA-13813) Don't let user drop (or generally break) tables in system_distributed

2017-10-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198456#comment-16198456
 ] 

Sylvain Lebresne commented on CASSANDRA-13813:
--

bq. an experience person can still get around this restriction by doing inserts 
into the schema tables

I also just realized that doing so actually avoids the issue we currently have 
with {{ALTER}}, namely that it rewrites all columns, so it makes for a somewhat 
better work-around (of course, still a work-around, and one that doesn't 
dispense us from fixing all this more cleanly). My only concern is that, while 
I haven't tried it recently, last time I did try updating the schema tables 
manually it was annoying: the changes were not automatically picked up and in 
fact tended to be overridden, so I had to force a reload in roundabout ways 
(altering some other unrelated table in the keyspace, which here would actually 
be an issue). So it would be nice if we added a JMX call to force a reload of 
the schema tables from disk to make this easier (should be easy). If we do, I'm 
warming up to the idea of considering this the only really safe work-around 
until we find a better way to deal with all this.
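The JMX call floated above could be as small as a single MBean operation. A minimal sketch of the idea, where the {{SchemaReloadMBean}} interface, the {{reloadSchemaFromDisk}} name, and the ObjectName are all hypothetical (the real hook would call into Cassandra's schema code, not print a message):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class SchemaReloadDemo {
    // Hypothetical management interface; not an existing Cassandra API.
    public interface SchemaReloadMBean {
        void reloadSchemaFromDisk();
    }

    public static class SchemaReload implements SchemaReloadMBean {
        @Override
        public void reloadSchemaFromDisk() {
            // In Cassandra this would re-read the schema tables and
            // announce the refreshed schema version to the cluster.
            System.out.println("schema reloaded");
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("org.apache.cassandra.db:type=SchemaReload");
        server.registerMBean(new SchemaReload(), name);
        // An operator would trigger this through nodetool or any JMX client.
        server.invoke(name, "reloadSchemaFromDisk", null, null);
    }
}
```

The point is only that the operator-facing surface is one idempotent operation, so the work-around stops depending on altering unrelated tables.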

bq. if we can't provide a data model for our tables that works for all 
scenarios then we need to allow operators to make changes.

I'm not sure what you mean by that. If it's a reference to 
CASSANDRA-12701, then what makes that change problematic is the very same 
reason why leaving {{ALTER}} working here is problematic. So feel free to 
suggest a concrete solution to those problems if you have one, but otherwise 
I'm not sure how this statement helps make a decision on this issue.

bq. I've had quite a few occasions where modifying "system" tables was 
necessary, and I'm sure more tables will be introduced that don't work in all 
scenarios in the future.

First, it would be nice if you could be a bit more concrete about those times 
when it was "necessary": which tables, what modifications, and why were they 
necessary? We're trying to find the best course of action for a very concrete 
problem and we are all experienced C* developers: let's be specific.

Second, I'm not sure how to reconcile that sentence as a whole with the 
concrete problem at hand. Let's keep in mind that we _already_ refuse {{ALTER}} on 
most system tables, and this ticket is not about discussing whether we should 
allow {{ALTER}} on system tables _in general_. If you want to discuss that, I'm 
fine with it (outside of the fact that we will have to solve the technical 
gotcha mentioned above), and your arguments really seem to apply to such a 
change, but please open a separate ticket and let's not derail this one.

> Don't let user drop (or generally break) tables in system_distributed
> -
>
> Key: CASSANDRA-13813
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13813
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Sylvain Lebresne
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.x, 3.11.x
>
>
> There are currently no particular restrictions on schema modifications to 
> tables of the {{system_distributed}} keyspace. This means you can drop 
> those tables, or even alter them in wrong ways like dropping or renaming 
> columns. All of which is guaranteed to break things (that is, repair if you 
> mess with one of its tables, or MVs if you mess with 
> {{view_build_status}}).
> I'm pretty sure this was never intended and is an oversight of the condition 
> on {{ALTERABLE_SYSTEM_KEYSPACES}} in 
> [ClientState|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/ClientState.java#L397].
>  That condition is such that any keyspace not listed in 
> {{ALTERABLE_SYSTEM_KEYSPACES}} (which happens to be the case for 
> {{system_distributed}}) has no specific restrictions whatsoever, while given 
> the naming it's fair to assume the intention was exactly the opposite.






[jira] [Updated] (CASSANDRA-10857) Allow dropping COMPACT STORAGE flag from tables in 3.X

2017-10-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-10857:

Reviewer: Sylvain Lebresne
  Status: Patch Available  (was: Open)

I've created a patch for the problem.

In order to facilitate uninterrupted upgrades, a new client option, 
{{NO_COMPACT}}, was added. When this option is supplied, {{SELECT}}, {{UPDATE}}, 
{{DELETE}} and {{BATCH}} statements function in "compatibility" mode, 
which allows the tables to be seen as if they were "regular" CQL tables. {{ALTER}} 
and other DDL statements are not allowed in this compatibility mode, as that 
would create a potential conflict with the flags, and such schema changes do not 
really feel safe. We could invest a lot of time into testing them and making them 
possible, but since this feature is just for a transition period, it makes 
sense to allow only the "safe" operations; after the compact flags are 
dropped, any CQL operation will be possible on the table anyway. Since 
technically we have only added a CQL option to the startup message, this sounds 
like a change that does not require a protocol version bump (also considering 
that this feature is 3.0/3.11 only).

For the problem of empty names in supercolumns, an empty identifier is now 
allowed where applicable, and an explicit check for it was added in all DDL 
statements.

|[3.0 
patch|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:10857-3.0]|[Python
 Driver 
Patch|https://github.com/datastax/python-driver/compare/master...ifesdjeen:10857]|[dtests|https://github.com/apache/cassandra-dtest/compare/master...ifesdjeen:10857]|
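As context for the "CQL option to the startup message" point above: the native protocol's STARTUP message carries a string map of options, so client-side the change amounts to one extra map entry. A hedged sketch (the {{CQL_VERSION}} value shown is just an example, and the exact value the server expects for {{NO_COMPACT}} may differ in the actual patch):

```java
import java.util.HashMap;
import java.util.Map;

public class StartupOptions {
    // Builds the string map carried in a native-protocol STARTUP message.
    // CQL_VERSION is the one mandatory key; NO_COMPACT is the opt-in flag
    // described above (value assumed here to be "true").
    static Map<String, String> startupOptions(boolean noCompact) {
        Map<String, String> opts = new HashMap<>();
        opts.put("CQL_VERSION", "3.4.4");
        if (noCompact)
            opts.put("NO_COMPACT", "true");
        return opts;
    }

    public static void main(String[] args) {
        System.out.println(startupOptions(true));
    }
}
```

Because absent keys are simply ignored by older servers, adding an entry like this is what makes a protocol version bump unnecessary.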

> Allow dropping COMPACT STORAGE flag from tables in 3.X
> --
>
> Key: CASSANDRA-10857
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10857
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Alex Petrov
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x
>
>
> Thrift allows users to define flexible mixed column families - where certain 
> columns would have explicitly pre-defined names, potentially non-default 
> validation types, and be indexed.
> Example:
> {code}
> create column family foo
> and default_validation_class = UTF8Type
> and column_metadata = [
> {column_name: bar, validation_class: Int32Type, index_type: KEYS},
> {column_name: baz, validation_class: UUIDType, index_type: KEYS}
> ];
> {code}
> Columns named {{bar}} and {{baz}} will be validated as {{Int32Type}} and 
> {{UUIDType}}, respectively, and be indexed. Columns with any other name will 
> be validated by {{UTF8Type}} and will not be indexed.
> With CASSANDRA-8099, {{bar}} and {{baz}} would be mapped to static columns 
> internally. However, being {{WITH COMPACT STORAGE}}, the table will only 
> expose {{bar}} and {{baz}} columns. Accessing any dynamic columns (any column 
> not named {{bar}} and {{baz}}) right now requires going through Thrift.
> This is blocking Thrift -> CQL migration for users who have mixed 
> dynamic/static column families. That said, it *shouldn't* be hard to allow 
> users to drop the {{compact}} flag to expose the table as it is internally 
> now, and be able to access all columns.






[jira] [Updated] (CASSANDRA-13643) converting expired ttl cells to tombstones causing unnecessary digest mismatches

2017-10-10 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13643:
---
Fix Version/s: 3.11.1
   3.0.15

> converting expired ttl cells to tombstones causing unnecessary digest 
> mismatches
> 
>
> Key: CASSANDRA-13643
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13643
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> In 
> [{{AbstractCell#purge}}|https://github.com/apache/cassandra/blob/26e025804c6777a0d124dbc257747cba85b18f37/src/java/org/apache/cassandra/db/rows/AbstractCell.java#L77]
>   , we convert expired ttl'd cells to tombstones, and set the local 
> deletion time to the cell's expiration time, less the ttl time. Depending on 
> the timing of the purge, this can cause purge to generate tombstones that are 
> otherwise purgeable. If compaction for a row with ttls isn't at the same 
> state between replicas, this will then cause digest mismatches between 
> logically identical rows, leading to unnecessary repair streaming and read 
> repairs.
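The mismatch described above can be sketched with a toy model of a cell and a digest (purely illustrative; real cells and read-path digests cover much more than these fields):

```java
import java.util.Objects;

public class TtlDigestDemo {
    // Toy model of a cell; real cells also carry values, column paths, etc.
    record Cell(long timestamp, int ttl, int localDeletionTime, boolean isTombstone) {}

    // An expired TTL'd cell whose local deletion time is its expiration time.
    static Cell expired(long timestamp, int ttl, int expirationTime) {
        return new Cell(timestamp, ttl, expirationTime, false);
    }

    // Mirrors the behaviour described above: purge turns the cell into a
    // tombstone whose local deletion time is the expiration time less the ttl.
    static Cell purge(Cell c) {
        return new Cell(c.timestamp(), 0, c.localDeletionTime() - c.ttl(), true);
    }

    // Stand-in for the read digest; anything covering these fields diverges.
    static int digest(Cell c) {
        return Objects.hash(c.timestamp(), c.ttl(), c.localDeletionTime(), c.isTombstone());
    }

    public static void main(String[] args) {
        Cell onReplicaA = expired(1000L, 60, 1060); // compaction has not purged yet
        Cell onReplicaB = purge(onReplicaA);        // compaction already purged here
        // Logically the same dead cell, but the digests no longer match:
        System.out.println(digest(onReplicaA) == digest(onReplicaB)); // prints false
    }
}
```

Since purge runs at compaction time, replicas whose compaction state differs hold the two forms simultaneously, which is exactly what triggers the spurious digest mismatch.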






[jira] [Assigned] (CASSANDRA-6936) Make all byte representations of types comparable by their unsigned byte representation only

2017-10-10 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov reassigned CASSANDRA-6936:
--

Assignee: (was: Branimir Lambov)

> Make all byte representations of types comparable by their unsigned byte 
> representation only
> 
>
> Key: CASSANDRA-6936
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6936
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>  Labels: compaction, performance
> Fix For: 4.x
>
>
> This could be a painful change, but is necessary for implementing a 
> trie-based index, and settling for less would be suboptimal; it also should 
> make comparisons cheaper all-round, and since comparison operations are 
> pretty much the majority of C*'s business, this should be easily felt (see 
> CASSANDRA-6553 and CASSANDRA-6934 for an example of some minor changes with 
> major performance impacts). No copying/special casing/slicing should mean 
> fewer opportunities to introduce performance regressions as well.
> Since I have slated for 3.0 a lot of non-backwards-compatible sstable 
> changes, hopefully this shouldn't be too much more of a burden.






[jira] [Commented] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD

2017-10-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198388#comment-16198388
 ] 

Marcus Eriksson commented on CASSANDRA-13943:
-

It would be really helpful if you could start one of the nodes with 
[this|https://github.com/krummas/cassandra/commits/marcuse/log_compactionindex] 
patch and post the logs.

> Infinite compaction of L0 SSTables in JBOD
> --
>
> Key: CASSANDRA-13943
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13943
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.11.0 / Centos 6
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: debug.log
>
>
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely compacting the same data over and over. 
> Attaching logs.
> It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It 
> does create new SSTables but immediately recompacts again. Note that I am not 
> inserting anything at the moment, there is no flushing happening on this 
> table (Memtable switch count has not changed).
> My theory is that it somehow thinks those should be compaction candidates. 
> But they shouldn't be, they are on different disks and I ran nodetool 
> relocatesstables as well as nodetool compact. So, it tries to compact them 
> together, but the compaction results in the exact same 2 SSTables on the 2 
> disks, because the keys are split by data disk.
> This is pretty serious, because all our nodes right now are consuming CPU 
> doing this for multiple tables, it seems.






[jira] [Commented] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD

2017-10-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198356#comment-16198356
 ] 

Marcus Eriksson commented on CASSANDRA-13943:
-

There is clearly a bug in the startsWith code, patch for that 
[here|https://github.com/krummas/cassandra/commits/marcuse/13943]

But since you have another subdirectory after the similar prefix, I don't think 
that is the problem here. Reading back some other issues - it seems you were 
running the (somewhat broken) patch from CASSANDRA-13215 for a while - is that 
still true on these nodes?
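The kind of prefix pitfall a naive startsWith check runs into here can be shown in isolation (a hypothetical illustration, not the actual patched code): {{/srv/disk1}} is a string prefix of {{/srv/disk10}}, so a plain {{String.startsWith}} attributes disk10's files to disk1, while a component-wise path comparison does not:

```java
import java.nio.file.Path;

public class DiskPrefixDemo {
    public static void main(String[] args) {
        String sstable = "/srv/disk10/ks/table/mc-1-big-Data.db";
        String dataDir = "/srv/disk1";

        // Naive string check: wrongly concludes the file lives on /srv/disk1.
        System.out.println(sstable.startsWith(dataDir)); // true

        // Path.startsWith compares whole path elements, so "disk10" != "disk1".
        System.out.println(Path.of(sstable).startsWith(Path.of(dataDir))); // false
    }
}
```

Misattributing an sstable to the wrong data directory is precisely what would make the relocating compaction re-select the same files forever.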

Could you start these nodes with a patch that logs a bit more?

> Infinite compaction of L0 SSTables in JBOD
> --
>
> Key: CASSANDRA-13943
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13943
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.11.0 / Centos 6
>Reporter: Dan Kinder
>Assignee: Marcus Eriksson
> Attachments: debug.log
>
>
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely compacting the same data over and over. 
> Attaching logs.
> It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It 
> does create new SSTables but immediately recompacts again. Note that I am not 
> inserting anything at the moment, there is no flushing happening on this 
> table (Memtable switch count has not changed).
> My theory is that it somehow thinks those should be compaction candidates. 
> But they shouldn't be, they are on different disks and I ran nodetool 
> relocatesstables as well as nodetool compact. So, it tries to compact them 
> together, but the compaction results in the exact same 2 SSTables on the 2 
> disks, because the keys are split by data disk.
> This is pretty serious, because all our nodes right now are consuming CPU 
> doing this for multiple tables, it seems.






[jira] [Commented] (CASSANDRA-6936) Make all byte representations of types comparable by their unsigned byte representation only

2017-10-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198353#comment-16198353
 ] 

Alex Petrov commented on CASSANDRA-6936:


Nick Dimiduk also worked on byte-ordered types in HBase: 
[HBASE-8201|https://issues.apache.org/jira/browse/HBASE-8201] and 
[OrderedBytes|https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java].
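The core trick such byte-ordered encodings use can be shown for a single int: flip the sign bit and write big-endian, after which plain unsigned lexicographic byte comparison matches signed numeric order. This is a sketch of the general technique, not code from HBase or any Cassandra patch:

```java
public class ByteOrderedInt {
    // Encode a signed int so that unsigned lexicographic byte comparison
    // matches signed numeric order: flip the sign bit, write big-endian.
    static byte[] encode(int v) {
        int u = v ^ 0x80000000; // maps MIN_VALUE..MAX_VALUE onto 0..0xFFFFFFFF in order
        return new byte[] {
            (byte) (u >>> 24), (byte) (u >>> 16), (byte) (u >>> 8), (byte) u
        };
    }

    // Unsigned lexicographic comparison: the only comparator an index needs.
    static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        System.out.println(compareUnsigned(encode(-5), encode(3)) < 0);                 // true
        System.out.println(compareUnsigned(encode(Integer.MIN_VALUE), encode(0)) < 0);  // true
    }
}
```

With every type encoded this way, one memcmp-style comparator serves all of them, which is what makes a trie-based index feasible.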

> Make all byte representations of types comparable by their unsigned byte 
> representation only
> 
>
> Key: CASSANDRA-6936
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6936
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Branimir Lambov
>  Labels: compaction, performance
> Fix For: 4.x
>
>
> This could be a painful change, but is necessary for implementing a 
> trie-based index, and settling for less would be suboptimal; it also should 
> make comparisons cheaper all-round, and since comparison operations are 
> pretty much the majority of C*'s business, this should be easily felt (see 
> CASSANDRA-6553 and CASSANDRA-6934 for an example of some minor changes with 
> major performance impacts). No copying/special casing/slicing should mean 
> fewer opportunities to introduce performance regressions as well.
> Since I have slated for 3.0 a lot of non-backwards-compatible sstable 
> changes, hopefully this shouldn't be too much more of a burden.






[jira] [Created] (CASSANDRA-13945) How to change from Cassandra 1 to Cassandra 2

2017-10-10 Thread nicole wells (JIRA)
nicole wells created CASSANDRA-13945:


 Summary: How to change from Cassandra 1 to Cassandra 2
 Key: CASSANDRA-13945
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13945
 Project: Cassandra
  Issue Type: Wish
  Components: Documentation and Website
 Environment: Windows 10 Operating System
Reporter: nicole wells
Priority: Minor


I am trying to upgrade 
Cassandra 1 (https://mindmajix.com/apache-cassandra-training) to Cassandra 2, 
and to do that I upgraded Java (to Java 7), but whenever I execute 
{{cassandra}}, it launches like this:


{code:java}
INFO 17:32:41,413 Logging initialized
INFO 17:32:41,437 Loading settings from file:/etc/cassandra/cassandra.yaml
INFO 17:32:41,642 Data files directories: [/var/lib/cassandra/data]
INFO 17:32:41,643 Commit log directory: /var/lib/cassandra/commitlog
INFO 17:32:41,643 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 17:32:41,643 disk_failure_policy is stop
INFO 17:32:41,643 commit_failure_policy is stop
INFO 17:32:41,647 Global memtable threshold is enabled at 986MB
INFO 17:32:41,727 Not using multi-threaded compaction
INFO 17:32:41,869 JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_55
WARN 17:32:41,869 OpenJDK is not recommended. Please upgrade to the newest Oracle Java release
INFO 17:32:41,869 Heap size: 4137680896/4137680896
INFO 17:32:41,870 Code Cache Non-heap memory: init = 2555904(2496K) used = 657664(642K) committed = 2555904(2496K) max = 50331648(49152K)
INFO 17:32:41,870 Par Eden Space Heap memory: init = 335544320(327680K) used = 80545080(78657K) committed = 335544320(327680K) max = 335544320(327680K)
INFO 17:32:41,870 Par Survivor Space Heap memory: init = 41943040(40960K) used = 0(0K) committed = 41943040(40960K) max = 41943040(40960K)
INFO 17:32:41,870 CMS Old Gen Heap memory: init = 3760193536(3672064K) used = 0(0K) committed = 3760193536(3672064K) max = 3760193536(3672064K)
INFO 17:32:41,872 CMS Perm Gen Non-heap memory: init = 21757952(21248K) used = 14994304(14642K) committed = 21757952(21248K) max = 174063616(169984K)
INFO 17:32:41,872 Classpath: /etc/cassandra:/usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/guava-15.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jline-1.0.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.1.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/lz4-1.2.0.jar:/usr/share/cassandra/lib/metrics-core-2.2.0.jar:/usr/share/cassandra/lib/netty-3.6.6.Final.jar:/usr/share/cassandra/lib/reporter-config-2.1.0.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.7.2.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.7.2.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.5.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-internal-only-0.3.3.jar:/usr/share/cassandra/apache-cassandra-2.0.8.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/apache-cassandra-thrift-2.0.8.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.2.5.jar
INFO 17:32:41,873 JNA not found. Native methods will be disabled.
INFO 17:32:41,884 Initializing key cache with capacity of 100 MBs.
INFO 17:32:41,890 Scheduling key cache save to each 14400 seconds (going to save all keys).
INFO 17:32:41,890 Initializing row cache with capacity of 0 MBs
INFO 17:32:41,895 Scheduling row cache save to each 0 seconds (going to save all keys).
INFO 17:32:41,968 Initializing system.schema_triggers
INFO 17:32:41,985 Initializing system.compaction_history
INFO 17:32:41,988 Initializing system.batchlog
INFO 17:32:41,991 Initializing system.sstable_activity
INFO 17:32:41,994 Initializing system.peer_events
INFO 17:32:41,997 Initializing system.compactions_in_progress
INFO 17:32:42,000 Initializing system.hints
ERROR 17:32:42,001 Exception encountered during startup
java.lang.RuntimeException: Incompatible SSTable found. Current version jb is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-hf-2. Please run upgradesstables.
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:415)
at
{code}

[jira] [Commented] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with tunable storage requirements

2017-10-10 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198253#comment-16198253
 ] 

DOAN DuyHai commented on CASSANDRA-13442:
-

So I've wrapped my head around the design of transient replicas. So far I can 
spot 2 concerns

1) It does not work with ONE or LOCAL_ONE. Of course transient replication is 
an opt-in feature, but it means users should be super-careful about issuing 
queries at ONE/LOCAL_ONE against keyspaces that have transient replication 
enabled. Considering that ONE/LOCAL_ONE is the *default consistency level* for 
the drivers and the Spark connector, maybe we should throw an exception 
whenever a query at one of those consistency levels is issued against a 
transiently replicated keyspace?

2) *Consistency level* and *repair* have been 2 distinct and orthogonal notions 
so far. With transient replication they are strongly tied; indeed, transient 
replication relies heavily on incremental repair. Of course that is an 
implementation detail, and [~aweisberg] has mentioned replicated hints as 
another implementation alternative, but in that case we're making transient 
replication dependent on the hints implementation. Same story.

The consequence of point 2) is that any bug in incremental repair/replicated 
hints will terribly impact the correctness/assumptions of transient 
replication. This point worries me much more than point 1).

> Support a means of strongly consistent highly available replication with 
> tunable storage requirements
> -
>
> Key: CASSANDRA-13442
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13442
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Coordination, Distributed Metadata, Local 
> Write-Read Paths
>Reporter: Ariel Weisberg
>
> Replication factors like RF=2 can't provide strong consistency and 
> availability because if a single node is lost it's impossible to reach a 
> quorum of replicas. Stepping up to RF=3 will allow you to lose a node and 
> still achieve quorum for reads and writes, but requires committing additional 
> storage.
> The requirement of a quorum for writes/reads doesn't seem to be something 
> that can be relaxed without additional constraints on queries, but it seems 
> like it should be possible to relax the requirement that 3 full copies of the 
> entire data set are kept. What is actually required is a covering data set 
> for the range and we should be able to achieve a covering data set and high 
> availability without having three full copies. 
> After a repair we know that some subset of the data set is fully replicated. 
> At that point we don't have to read from a quorum of nodes for the repaired 
> data. It is sufficient to read from a single node for the repaired data and a 
> quorum of nodes for the unrepaired data.
> One way to exploit this would be to have N replicas, say the last N replicas 
> (where N varies with RF) in the preference list, delete all repaired data 
> after a repair completes. Subsequent quorum reads will be able to retrieve 
> the repaired data from any of the two full replicas and the unrepaired data 
> from a quorum read of any replica including the "transient" replicas.
> Configuration for something like this in NTS might be something similar to { 
> DC1="3-1", DC2="3-2" } where the first value is the replication factor used 
> for consistency and the second value is the number of transient replicas. If 
> you specify { DC1=3, DC2=3 } then the number of transient replicas defaults 
> to 0 and you get the same behavior you have today.
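The strawman {{"3-1"}} syntax from the description could be parsed along these lines (purely illustrative; the names and defaults are assumptions, not code from any patch):

```java
public class TransientRf {
    // "3-1" = 3 total replicas for consistency, 1 of which is transient,
    // leaving 2 full replicas; a bare "3" keeps today's behaviour.
    record Rf(int total, int transientReplicas) {
        int fullReplicas() { return total - transientReplicas; }
    }

    static Rf parse(String s) {
        int dash = s.indexOf('-');
        if (dash < 0)
            return new Rf(Integer.parseInt(s.trim()), 0); // transient count defaults to 0
        return new Rf(Integer.parseInt(s.substring(0, dash).trim()),
                      Integer.parseInt(s.substring(dash + 1).trim()));
    }

    public static void main(String[] args) {
        Rf rf = parse("3-1");
        System.out.println(rf.total() + " " + rf.transientReplicas() + " "
                           + rf.fullReplicas()); // prints "3 1 2"
    }
}
```

Quorum math would keep using the total, while the full-replica count determines how many nodes retain repaired data.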


