[jira] [Updated] (CASSANDRA-6168) nodetool status should issue a warning when no keyspace is specified
[ https://issues.apache.org/jira/browse/CASSANDRA-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vijay updated CASSANDRA-6168:
-----------------------------
    Attachment: 0001-CASSANDRA-6168.patch

One line change.

> nodetool status should issue a warning when no keyspace is specified
> --------------------------------------------------------------------
>
>                 Key: CASSANDRA-6168
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6168
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>            Reporter: Patricio Echague
>            Assignee: Vijay
>            Priority: Minor
>              Labels: lhf
>         Attachments: 0001-CASSANDRA-6168.patch
>
>
> Seen in 1.2.10. Apologies if this is expected behavior. nodetool status
> reports 0% ownership unless I add a keyspace name.
> The nodetool help docs say:
> ..." status - Print cluster information (state, load, IDs, ...)"...
> Output without a keyspace name:
> {code}
> Datacenter: DC1
> ===============
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address     Load      Tokens  Owns   Host ID                               Rack
> UN  10.x.x.146  81.96 GB  256     0.0%   a70c59b3-a667-4d76-ba5b-ba849ad672da  r1
> UN  10.x.x.63   95.32 GB  256     0.0%   f8cb7b10-4ebe-484a-a1c0-6cb2d053901b  r1
> UN  10.x.x.184  89.54 GB  256     0.1%   cd86c420-55e2-4d99-8ed9-d9ee8d6a9d9c  r1
> UN  10.x.x.190  79.68 GB  256     0.0%   544c3906-bc02-400d-9fd2-1e39ecadd6ff  r1
> UN  10.x.x.168  93.44 GB  256     0.7%   33be316f-1276-475d-90cf-2667950d3a2c  r1
> UN  10.x.x.132  84.4 GB   256     0.0%   b327d9f1-cab0-4583-8e5e-95c50b4074fd  r1
> Datacenter: DCOFFLINE
> =====================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address     Load      Tokens  Owns   Host ID                               Rack
> UN  10.x.x.62   56.09 GB  256     32.4%  c8994d27-767b-431f-bdc2-9196eeeb6f44  r1
> UN  10.x.x.131  60.11 GB  256     32.8%  0b9d3314-039e-4f88-8ba6-d0f2885d9a30  r1
> UN  10.x.x.167  56.45 GB  256     34.0%  ba76f4fe-4250-4839-a37d-c1a7c24e585d  r1
> {code}
> And with a keyspace. Example: nodetool status MYKSPS
> {code}
> Datacenter: DC1
> ===============
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address     Load      Tokens  Owns (effective)  Host ID                               Rack
> UN  10.x.x.184  89.51 GB  256     50.0%             cd86c420-55e2-4d99-8ed9-d9ee8d6a9d9c  r1
> UN  10.x.x.146  81.96 GB  256     50.0%             a70c59b3-a667-4d76-ba5b-ba849ad672da  r1
> UN  10.x.x.168  93.44 GB  256     50.0%             33be316f-1276-475d-90cf-2667950d3a2c  r1
> UN  10.x.x.63   95.32 GB  256     50.0%             f8cb7b10-4ebe-484a-a1c0-6cb2d053901b  r1
> UN  10.x.x.190  79.68 GB  256     50.0%             544c3906-bc02-400d-9fd2-1e39ecadd6ff  r1
> UN  10.x.x.132  84.4 GB   256     50.0%             b327d9f1-cab0-4583-8e5e-95c50b4074fd  r1
> Datacenter: DCOFFLINE
> =====================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address     Load      Tokens  Owns (effective)  Host ID                               Rack
> UN  10.x.x.131  60.11 GB  256     32.8%             0b9d3314-039e-4f88-8ba6-d0f2885d9a30  r1
> UN  10.x.x.167  56.45 GB  256     34.7%             ba76f4fe-4250-4839-a37d-c1a7c24e585d  r1
> UN  10.x.x.62   56.09 GB  256     32.5%             c8994d27-767b-431f-bdc2-9196eeeb6f44  r1
> {code}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (CASSANDRA-6904) commitlog segments may not be archived after restart
[ https://issues.apache.org/jira/browse/CASSANDRA-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943887#comment-13943887 ]

Vijay commented on CASSANDRA-6904:
----------------------------------

Hi Jonathan, we can also encode it in the header, but either way SGTM.

> commitlog segments may not be archived after restart
> ----------------------------------------------------
>
>                 Key: CASSANDRA-6904
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6904
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>             Fix For: 2.0.7
>
>
> commitlog segments are archived when they are full, so the current active
> segment will not be archived on restart (and its contents will not be
> available for PITR).

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Resolved] (CASSANDRA-6764) Using Batch commitlog_sync is slow and doesn't actually batch writes
[ https://issues.apache.org/jira/browse/CASSANDRA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-6764.
---------------------------------------
    Resolution: Fixed

Fair enough; committed to 2.1.

> Using Batch commitlog_sync is slow and doesn't actually batch writes
> --------------------------------------------------------------------
>
>                 Key: CASSANDRA-6764
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6764
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: John Carrino
>            Assignee: John Carrino
>             Fix For: 2.1 beta2
>
>         Attachments: cassandra_6764_v2.patch, cassandra_6764_v3.patch
>
>
> The assumption behind batch commit mode is that the client does its own
> batching and wants to wait until the write is durable before returning. The
> problem is that the queue Cassandra uses under the covers only allows a
> single row (RowMutation) per thread (concurrent_writes). This means that
> commitlog_sync_batch_window_in_ms should really be called
> sleep_between_each_concurrent_writes_rows_in_ms.
> I assume the reason this slipped by for so long is that no one uses batch
> mode, probably because people say "it's slow". We need durability, so that
> isn't an option for us. However, it doesn't need to be this slow.
> Also, if you write a row that is larger than the commit log size, it
> silently fails (with only a warning) to put it in the commit log. This is
> not ideal for batch mode.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
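The group-commit behavior the ticket asks for — many concurrent writers sharing a single fsync per batch — can be sketched as below. This is an illustrative model only, not Cassandra's actual CommitLog code; the class and method names (`GroupCommitSketch`, `syncOnce`) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

// Illustrative group-commit loop: writers enqueue mutations and block on a
// latch; a single sync thread drains everything queued so far, performs one
// (simulated) fsync, then releases all waiting writers at once.
public class GroupCommitSketch {
    static class Write {
        final byte[] payload;
        final CountDownLatch durable = new CountDownLatch(1);
        Write(byte[] payload) { this.payload = payload; }
    }

    private final BlockingQueue<Write> queue = new ArrayBlockingQueue<>(1024);

    // Writer side: enqueue and wait until the batch containing us is synced.
    public void append(byte[] payload) throws InterruptedException {
        Write w = new Write(payload);
        queue.put(w);
        w.durable.await();
    }

    // Sync side: one iteration of the batch loop (normally run forever).
    public int syncOnce() throws InterruptedException {
        List<Write> batch = new ArrayList<>();
        batch.add(queue.take());   // block for at least one write
        queue.drainTo(batch);      // grab everything else already queued
        // ... write payloads to the segment and fsync exactly once here ...
        for (Write w : batch)
            w.durable.countDown(); // ack the whole batch together
        return batch.size();
    }
}
```

The point of the design is that the cost of the fsync is amortized over however many writers arrived during the window, instead of one sleep per row.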
[jira] [Resolved] (CASSANDRA-6796) StorageProxy may submit hint to itself (with an assert) for CL.Any
[ https://issues.apache.org/jira/browse/CASSANDRA-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-6796.
---------------------------------------
       Resolution: Not A Problem
    Fix Version/s: (was: 2.0.7)

Not a problem post CASSANDRA-6510

> StorageProxy may submit hint to itself (with an assert) for CL.Any
> ------------------------------------------------------------------
>
>                 Key: CASSANDRA-6796
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6796
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Viktor Kuzmin
>            Priority: Minor
>
> StorageProxy.mutate may produce a WriteTimeoutException, and with
> ConsistencyLevel.ANY it tries to submitHint. But the submitHint function
> has an assertion that we may not send hints to ourselves. That may lead to
> an exception (in case we're among the natural endpoints), and the hint will
> not be submitted to the other endpoints.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (CASSANDRA-6796) StorageProxy may submit hint to itself (with an assert) for CL.Any
[ https://issues.apache.org/jira/browse/CASSANDRA-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943870#comment-13943870 ]

Jonathan Ellis commented on CASSANDRA-6796:
-------------------------------------------

Do you have a stacktrace? I thought this was fixed in CASSANDRA-6132.

> StorageProxy may submit hint to itself (with an assert) for CL.Any
> ------------------------------------------------------------------
>
>                 Key: CASSANDRA-6796
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6796
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Viktor Kuzmin
>            Priority: Minor
>             Fix For: 2.0.7
>
>
> StorageProxy.mutate may produce a WriteTimeoutException, and with
> ConsistencyLevel.ANY it tries to submitHint. But the submitHint function
> has an assertion that we may not send hints to ourselves. That may lead to
> an exception (in case we're among the natural endpoints), and the hint will
> not be submitted to the other endpoints.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (CASSANDRA-6867) MeteredFlusher should ignore memtables not affected by it
[ https://issues.apache.org/jira/browse/CASSANDRA-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko updated CASSANDRA-6867:
-----------------------------------------
    Attachment: 6867-v3-incomplete.txt

Started with some cosmetic-ish changes, then realised that the patch is not
complete enough. Counting flushing bytes, for one, should not include
unaffected memtables in the total.

> MeteredFlusher should ignore memtables not affected by it
> ---------------------------------------------------------
>
>                 Key: CASSANDRA-6867
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6867
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Anthony Cozzie
>            Priority: Minor
>             Fix For: 2.0.7
>
>         Attachments: 2.0.5-6867-2.txt, 2.0.5-6867.txt, 6867-v3-incomplete.txt
>
>
> Before MeteredFlusher runs, count up the number of bytes used by memtables
> unaffected by it and subtract that from the maximum allowed bytes.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
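The accounting being discussed — excluding memory held by memtables the flusher does not manage from the budget it enforces — is simple arithmetic. A hedged sketch (hypothetical names, not the actual MeteredFlusher code):

```java
// Hypothetical accounting sketch: subtract memory held by memtables the
// metered flusher does not manage, and memory already being flushed, from
// the global budget before deciding whether the flusher-managed memtables
// exceed their share.
public class FlushBudgetSketch {
    public static long meteredBudget(long totalAllowedBytes,
                                     long unaffectedMemtableBytes,
                                     long flushingBytes) {
        // Bytes the flusher may let its own memtables occupy right now.
        long budget = totalAllowedBytes - unaffectedMemtableBytes - flushingBytes;
        return Math.max(budget, 0); // never report a negative budget
    }
}
```

With a 100-byte budget, 30 bytes of unaffected memtables, and 20 bytes mid-flush, the flusher-managed memtables would get 50 bytes before a flush is forced.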
[jira] [Resolved] (CASSANDRA-6820) NPE in MeteredFlusher.run
[ https://issues.apache.org/jira/browse/CASSANDRA-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-6820.
---------------------------------------
    Resolution: Fixed
      Reviewer: Jonathan Ellis
      Assignee: Nicolas Favre-Felix

I bet you are right. Fixed in 535c56fb217c1a12d2fb9a217203c03d26642444

> NPE in MeteredFlusher.run
> -------------------------
>
>                 Key: CASSANDRA-6820
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6820
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Nicolas Favre-Felix
>            Assignee: Nicolas Favre-Felix
>            Priority: Minor
>             Fix For: 2.0.7
>
>
> Hello,
> I've been seeing this exception with Cassandra 2.0.5:
> {code}
> ERROR 15:41:46,754 Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.NullPointerException
>         at org.apache.cassandra.db.MeteredFlusher.run(MeteredFlusher.java:40)
>         at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> {code}
> Could it be that {{Memtable.activelyMeasuring}} becomes null right after the
> test?

--
This message was sent by Atlassian JIRA
(v6.2#6252)
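The suspected bug is a classic check-then-act race on a shared field: another thread can null the field between the null check and the dereference. Reading the field once into a local makes the check-then-use pair operate on a stable value. A minimal illustration of the racy pattern and the fix (hypothetical class, not the MeteredFlusher source):

```java
// Check-then-act race sketch: 'shared' stands in for a field like
// Memtable.activelyMeasuring that another thread may set to null at any time.
public class LocalCopySketch {
    static volatile String shared = "hello";

    // Racy: 'shared' is read twice; it may become null between the test
    // and the use, producing an NPE.
    static int racyLength() {
        return shared == null ? 0 : shared.length();
    }

    // Safe: one volatile read, then operate on the stable local copy.
    static int safeLength() {
        String local = shared;
        return local == null ? 0 : local.length();
    }
}
```

This is exactly the shape of the committed fix: copy the field to a local (`measuredCfs`) before testing and dereferencing it.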
[9/9] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/699189f1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/699189f1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/699189f1

Branch: refs/heads/trunk
Commit: 699189f197ec93606492d03d93917884471bc216
Parents: 0493da9 2574f05
Author: Jonathan Ellis
Authored: Fri Mar 21 22:00:26 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 22:00:26 2014 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/db/DataTracker.java | 6 ++++++
 1 file changed, 6 insertions(+)
----------------------------------------------------------------------
[8/9] git commit: merge from 2.0
merge from 2.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2574f05b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2574f05b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2574f05b

Branch: refs/heads/trunk
Commit: 2574f05bb4b8384339aa1fec255585db535d4fd8
Parents: fdae99d 535c56f
Author: Jonathan Ellis
Authored: Fri Mar 21 21:59:54 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:59:58 2014 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/db/DataTracker.java | 6 ++++++
 1 file changed, 6 insertions(+)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2574f05b/src/java/org/apache/cassandra/db/DataTracker.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/db/DataTracker.java
index 30bd360,a1de8e5..c8fc699
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@@ -556,10 -513,16 +556,16 @@@ public class DataTracker
          public final Set<SSTableReader> sstables;
          public final SSTableIntervalTree intervalTree;
  
 -        View(Memtable memtable, Set<Memtable> pendingFlush, Set<SSTableReader> sstables, Set<SSTableReader> compacting, SSTableIntervalTree intervalTree)
 +        View(List<Memtable> liveMemtables, List<Memtable> flushingMemtables, Set<SSTableReader> sstables, Set<SSTableReader> compacting, SSTableIntervalTree intervalTree)
          {
-             assert memtable != null;
-             assert pendingFlush != null;
++            assert liveMemtables != null;
++            assert flushingMemtables != null;
+             assert sstables != null;
+             assert compacting != null;
+             assert intervalTree != null;
+ 
 -            this.memtable = memtable;
 -            this.memtablesPendingFlush = pendingFlush;
 +            this.liveMemtables = liveMemtables;
 +            this.flushingMemtables = flushingMemtables;
              this.sstables = sstables;
              this.compacting = compacting;
              this.intervalTree = intervalTree;
[4/9] git commit: Fix NPE in MeteredFlusher patch by Nicolas Favre-Felix; reviewed by jbellis for CASSANDRA-6820
Fix NPE in MeteredFlusher
patch by Nicolas Favre-Felix; reviewed by jbellis for CASSANDRA-6820

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/535c56fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/535c56fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/535c56fb

Branch: refs/heads/cassandra-2.0
Commit: 535c56fb217c1a12d2fb9a217203c03d26642444
Parents: d1cc701
Author: Jonathan Ellis
Authored: Fri Mar 21 21:58:26 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:58:26 2014 -0500

----------------------------------------------------------------------
 CHANGES.txt                                          | 1 +
 src/java/org/apache/cassandra/db/MeteredFlusher.java | 5 ++---
 2 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/535c56fb/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 64dc248..ed202cc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.7
+ * Fix NPE in MeteredFlusher (CASSANDRA-6820)
  * Fix race processing range scan responses (CASSANDRA-6820)
  * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
  * Add uuid() function (CASSANDRA-6473)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/535c56fb/src/java/org/apache/cassandra/db/MeteredFlusher.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/MeteredFlusher.java b/src/java/org/apache/cassandra/db/MeteredFlusher.java
index f1a3ac9..5c71fc6 100644
--- a/src/java/org/apache/cassandra/db/MeteredFlusher.java
+++ b/src/java/org/apache/cassandra/db/MeteredFlusher.java
@@ -37,9 +37,8 @@ public class MeteredFlusher implements Runnable
         long totalMemtableBytesAllowed = DatabaseDescriptor.getTotalMemtableSpaceInMB() * 1048576L;
 
         // first, find how much memory non-active memtables are using
-        long flushingBytes = Memtable.activelyMeasuring == null
-                             ? 0
-                             : Memtable.activelyMeasuring.getMemtableThreadSafe().getLiveSize();
+        ColumnFamilyStore measuredCfs = Memtable.activelyMeasuring;
+        long flushingBytes = measuredCfs == null ? 0 : measuredCfs.getMemtableThreadSafe().getLiveSize();
         flushingBytes += countFlushingBytes();
         if (flushingBytes > 0)
             logger.debug("Currently flushing {} bytes of {} max", flushingBytes, totalMemtableBytesAllowed);
[5/9] git commit: Fix NPE in MeteredFlusher patch by Nicolas Favre-Felix; reviewed by jbellis for CASSANDRA-6820
Fix NPE in MeteredFlusher
patch by Nicolas Favre-Felix; reviewed by jbellis for CASSANDRA-6820

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/535c56fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/535c56fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/535c56fb

Branch: refs/heads/cassandra-2.1
Commit: 535c56fb217c1a12d2fb9a217203c03d26642444
Parents: d1cc701
Author: Jonathan Ellis
Authored: Fri Mar 21 21:58:26 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:58:26 2014 -0500

----------------------------------------------------------------------
 CHANGES.txt                                          | 1 +
 src/java/org/apache/cassandra/db/MeteredFlusher.java | 5 ++---
 2 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/535c56fb/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 64dc248..ed202cc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.7
+ * Fix NPE in MeteredFlusher (CASSANDRA-6820)
  * Fix race processing range scan responses (CASSANDRA-6820)
  * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
  * Add uuid() function (CASSANDRA-6473)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/535c56fb/src/java/org/apache/cassandra/db/MeteredFlusher.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/MeteredFlusher.java b/src/java/org/apache/cassandra/db/MeteredFlusher.java
index f1a3ac9..5c71fc6 100644
--- a/src/java/org/apache/cassandra/db/MeteredFlusher.java
+++ b/src/java/org/apache/cassandra/db/MeteredFlusher.java
@@ -37,9 +37,8 @@ public class MeteredFlusher implements Runnable
         long totalMemtableBytesAllowed = DatabaseDescriptor.getTotalMemtableSpaceInMB() * 1048576L;
 
         // first, find how much memory non-active memtables are using
-        long flushingBytes = Memtable.activelyMeasuring == null
-                             ? 0
-                             : Memtable.activelyMeasuring.getMemtableThreadSafe().getLiveSize();
+        ColumnFamilyStore measuredCfs = Memtable.activelyMeasuring;
+        long flushingBytes = measuredCfs == null ? 0 : measuredCfs.getMemtableThreadSafe().getLiveSize();
         flushingBytes += countFlushingBytes();
         if (flushingBytes > 0)
             logger.debug("Currently flushing {} bytes of {} max", flushingBytes, totalMemtableBytesAllowed);
[1/9] git commit: add asserts
Repository: cassandra

Updated Branches:
  refs/heads/cassandra-2.0 434798297 -> 535c56fb2
  refs/heads/cassandra-2.1 fdae99d76 -> 2574f05bb
  refs/heads/trunk 0493da933 -> 699189f19

add asserts

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d1cc7013
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d1cc7013
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d1cc7013

Branch: refs/heads/cassandra-2.0
Commit: d1cc70138ca9088e6d390af767b357bc40d147fc
Parents: 4347982
Author: Jonathan Ellis
Authored: Fri Mar 21 21:56:14 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:56:14 2014 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/db/DataTracker.java | 6 ++++++
 1 file changed, 6 insertions(+)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d1cc7013/src/java/org/apache/cassandra/db/DataTracker.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java
index c1ae00f..a1de8e5 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -515,6 +515,12 @@ public class DataTracker
         View(Memtable memtable, Set<Memtable> pendingFlush, Set<SSTableReader> sstables, Set<SSTableReader> compacting, SSTableIntervalTree intervalTree)
         {
+            assert memtable != null;
+            assert pendingFlush != null;
+            assert sstables != null;
+            assert compacting != null;
+            assert intervalTree != null;
+
             this.memtable = memtable;
             this.memtablesPendingFlush = pendingFlush;
             this.sstables = sstables;
[6/9] git commit: Fix NPE in MeteredFlusher patch by Nicolas Favre-Felix; reviewed by jbellis for CASSANDRA-6820
Fix NPE in MeteredFlusher
patch by Nicolas Favre-Felix; reviewed by jbellis for CASSANDRA-6820

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/535c56fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/535c56fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/535c56fb

Branch: refs/heads/trunk
Commit: 535c56fb217c1a12d2fb9a217203c03d26642444
Parents: d1cc701
Author: Jonathan Ellis
Authored: Fri Mar 21 21:58:26 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:58:26 2014 -0500

----------------------------------------------------------------------
 CHANGES.txt                                          | 1 +
 src/java/org/apache/cassandra/db/MeteredFlusher.java | 5 ++---
 2 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/535c56fb/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 64dc248..ed202cc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.7
+ * Fix NPE in MeteredFlusher (CASSANDRA-6820)
  * Fix race processing range scan responses (CASSANDRA-6820)
  * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
  * Add uuid() function (CASSANDRA-6473)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/535c56fb/src/java/org/apache/cassandra/db/MeteredFlusher.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/MeteredFlusher.java b/src/java/org/apache/cassandra/db/MeteredFlusher.java
index f1a3ac9..5c71fc6 100644
--- a/src/java/org/apache/cassandra/db/MeteredFlusher.java
+++ b/src/java/org/apache/cassandra/db/MeteredFlusher.java
@@ -37,9 +37,8 @@ public class MeteredFlusher implements Runnable
         long totalMemtableBytesAllowed = DatabaseDescriptor.getTotalMemtableSpaceInMB() * 1048576L;
 
         // first, find how much memory non-active memtables are using
-        long flushingBytes = Memtable.activelyMeasuring == null
-                             ? 0
-                             : Memtable.activelyMeasuring.getMemtableThreadSafe().getLiveSize();
+        ColumnFamilyStore measuredCfs = Memtable.activelyMeasuring;
+        long flushingBytes = measuredCfs == null ? 0 : measuredCfs.getMemtableThreadSafe().getLiveSize();
         flushingBytes += countFlushingBytes();
         if (flushingBytes > 0)
             logger.debug("Currently flushing {} bytes of {} max", flushingBytes, totalMemtableBytesAllowed);
[2/9] git commit: add asserts
add asserts

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d1cc7013
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d1cc7013
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d1cc7013

Branch: refs/heads/cassandra-2.1
Commit: d1cc70138ca9088e6d390af767b357bc40d147fc
Parents: 4347982
Author: Jonathan Ellis
Authored: Fri Mar 21 21:56:14 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:56:14 2014 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/db/DataTracker.java | 6 ++++++
 1 file changed, 6 insertions(+)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d1cc7013/src/java/org/apache/cassandra/db/DataTracker.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java
index c1ae00f..a1de8e5 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -515,6 +515,12 @@ public class DataTracker
         View(Memtable memtable, Set<Memtable> pendingFlush, Set<SSTableReader> sstables, Set<SSTableReader> compacting, SSTableIntervalTree intervalTree)
         {
+            assert memtable != null;
+            assert pendingFlush != null;
+            assert sstables != null;
+            assert compacting != null;
+            assert intervalTree != null;
+
             this.memtable = memtable;
             this.memtablesPendingFlush = pendingFlush;
             this.sstables = sstables;
[7/9] git commit: merge from 2.0
merge from 2.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2574f05b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2574f05b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2574f05b

Branch: refs/heads/cassandra-2.1
Commit: 2574f05bb4b8384339aa1fec255585db535d4fd8
Parents: fdae99d 535c56f
Author: Jonathan Ellis
Authored: Fri Mar 21 21:59:54 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:59:58 2014 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/db/DataTracker.java | 6 ++++++
 1 file changed, 6 insertions(+)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2574f05b/src/java/org/apache/cassandra/db/DataTracker.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/db/DataTracker.java
index 30bd360,a1de8e5..c8fc699
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@@ -556,10 -513,16 +556,16 @@@ public class DataTracker
          public final Set<SSTableReader> sstables;
          public final SSTableIntervalTree intervalTree;
  
 -        View(Memtable memtable, Set<Memtable> pendingFlush, Set<SSTableReader> sstables, Set<SSTableReader> compacting, SSTableIntervalTree intervalTree)
 +        View(List<Memtable> liveMemtables, List<Memtable> flushingMemtables, Set<SSTableReader> sstables, Set<SSTableReader> compacting, SSTableIntervalTree intervalTree)
          {
-             assert memtable != null;
-             assert pendingFlush != null;
++            assert liveMemtables != null;
++            assert flushingMemtables != null;
+             assert sstables != null;
+             assert compacting != null;
+             assert intervalTree != null;
+ 
 -            this.memtable = memtable;
 -            this.memtablesPendingFlush = pendingFlush;
 +            this.liveMemtables = liveMemtables;
 +            this.flushingMemtables = flushingMemtables;
              this.sstables = sstables;
              this.compacting = compacting;
              this.intervalTree = intervalTree;
[3/9] git commit: add asserts
add asserts

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d1cc7013
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d1cc7013
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d1cc7013

Branch: refs/heads/trunk
Commit: d1cc70138ca9088e6d390af767b357bc40d147fc
Parents: 4347982
Author: Jonathan Ellis
Authored: Fri Mar 21 21:56:14 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:56:14 2014 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/db/DataTracker.java | 6 ++++++
 1 file changed, 6 insertions(+)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d1cc7013/src/java/org/apache/cassandra/db/DataTracker.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java
index c1ae00f..a1de8e5 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -515,6 +515,12 @@ public class DataTracker
         View(Memtable memtable, Set<Memtable> pendingFlush, Set<SSTableReader> sstables, Set<SSTableReader> compacting, SSTableIntervalTree intervalTree)
         {
+            assert memtable != null;
+            assert pendingFlush != null;
+            assert sstables != null;
+            assert compacting != null;
+            assert intervalTree != null;
+
             this.memtable = memtable;
             this.memtablesPendingFlush = pendingFlush;
             this.sstables = sstables;
[jira] [Commented] (CASSANDRA-6879) ConcurrentModificationException while doing range slice query.
[ https://issues.apache.org/jira/browse/CASSANDRA-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943863#comment-13943863 ]

Jonathan Ellis commented on CASSANDRA-6879:
-------------------------------------------

done and committed

> ConcurrentModificationException while doing range slice query.
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-6879
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6879
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: 2.0.4
>            Reporter: Shao-Chuan Wang
>            Assignee: Jonathan Ellis
>             Fix For: 2.0.7, 2.1 beta2
>
>         Attachments: 6879.txt
>
>
> A paging read request (from either Thrift or the native protocol) would
> sporadically fail due to a race between read repair and the requesting
> thread waiting on the read repair results list. The READ_REPAIR task is
> queued in ReadCallback.maybeResolveForRepair(), and there does not seem to
> be a guarantee that its resolve() method (which internally creates a
> RangeSliceResponseResolver.Reducer and does repairResults.addAll inside the
> Reducer) is invoked before the requesting thread starts waiting on
> resolver.repairResults. So there is a small window in which the list is
> only partially populated while the requesting thread starts waiting on
> repairResults. I believe most of the time the requesting thread either
> waits for the entire repair results or does not wait for repair results at
> all. The original intent here seems to be to always wait for the repair
> results (if the repair was triggered by read repair chance).
> {code}
> ERROR [Native-Transport-Requests:70827] 2014-03-18 05:00:12,774 ErrorMessage.java (line 222) Unexpected exception during request
> java.util.ConcurrentModificationException
>         at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859)
>         at java.util.ArrayList$Itr.next(ArrayList.java:831)
>         at org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:423)
>         at org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1583)
>         at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:188)
>         at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:163)
>         at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:58)
>         at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188)
>         at org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:358)
>         at org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:131)
>         at org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
>         at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>         at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>         at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>         at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
>         at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> {code}
> {code}
> ERROR [Thrift:1] 2014-03-18 07:18:02,434 CustomTThreadPoolServer.java (line 212)
> Error occurred during processing of message.
> java.util.ConcurrentModificationException
>         at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859)
>         at java.util.ArrayList$Itr.next(ArrayList.java:831)
>         at org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:423)
>         at org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1583)
>         at org.apache.cassandra.service.pager.RangeSliceQueryPager.queryNextPage(RangeSliceQueryPager.java:85)
>         at org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:71)
>         at org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:36)
>         at org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:202)
>         at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:169)
>         at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.jav
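The failure mode in the traces above — one thread appending to a plain `ArrayList` while another iterates it — is easy to reproduce in isolation, because `ArrayList` iterators fail fast on structural modification. A minimal sketch, collapsed into a single thread for determinism (in Cassandra the mutation comes from the READ_REPAIR stage while `FBUtilities.waitOnFutures` iterates `repairResults`; the fail-fast iterator throws the same exception either way):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

// Minimal reproduction of the fail-fast iterator behavior: appending to an
// ArrayList while a for-each loop is iterating it makes the iterator's
// next() throw ConcurrentModificationException.
public class CmeSketch {
    public static boolean triggersCme() {
        List<Integer> results = new ArrayList<>();
        results.add(1);
        results.add(2);
        try {
            for (Integer i : results)
                results.add(i); // stands in for the repair thread's addAll
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // the iterator detected the structural modification
        }
    }
}
```

The usual fixes are to synchronize the producer and the iterating consumer on the list, or to make the hand-off happen-before the iteration starts, which is the direction the committed patch takes by restructuring when repair is scheduled.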
[9/9] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0493da93
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0493da93
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0493da93

Branch: refs/heads/trunk
Commit: 0493da933d272ece0c46a9f646ea096c48706ead
Parents: e2bef98 fdae99d
Author: Jonathan Ellis
Authored: Fri Mar 21 21:52:07 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:52:07 2014 -0500

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 .../apache/cassandra/service/ReadCallback.java  | 20 ++--
 .../apache/cassandra/service/StorageProxy.java  |  3 ++-
 3 files changed, 9 insertions(+), 15 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0493da93/CHANGES.txt
----------------------------------------------------------------------
[3/9] git commit: inline
inline

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e682c037
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e682c037
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e682c037

Branch: refs/heads/trunk
Commit: e682c0370e68af75779bf2b0cd1622fce6e03eab
Parents: 37b9410
Author: Jonathan Ellis
Authored: Fri Mar 21 16:55:36 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:49:27 2014 -0500

----------------------------------------------------------------------
 .../apache/cassandra/service/ReadCallback.java | 18 ++----
 1 file changed, 4 insertions(+), 14 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e682c037/src/java/org/apache/cassandra/service/ReadCallback.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/service/ReadCallback.java b/src/java/org/apache/cassandra/service/ReadCallback.java
index 777ef90..085787c 100644
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@ -114,7 +114,10 @@ public class ReadCallback implements IAsyncCallback
         if (n >= blockfor && resolver.isDataPresent())
         {
             condition.signalAll();
-            maybeResolveForRepair(n);
+            // kick off a background digest comparison if this is a result that (may have) arrived after
+            // the original resolve that get() kicks off as soon as the condition is signaled
+            if (blockfor < endpoints.size() && n == endpoints.size())
+                StageManager.getStage(Stage.READ_REPAIR).execute(new AsyncRepairRunner());
         }
     }
@@ -146,19 +149,6 @@ public class ReadCallback implements IAsyncCallback
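The inlined condition — signal any waiting reader once `blockfor` responses are in, but only schedule the asynchronous repair comparison when the last of the extra replica responses arrives — can be modeled with a plain counter. This is a hypothetical sketch of the control flow, not the real ReadCallback (which uses `StageManager` and an `AsyncRepairRunner`):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the inlined control flow: a read waits for 'blockfor' responses;
// if more replicas than that were contacted, the arrival of the *final*
// response triggers a one-time background digest comparison (read repair).
public class LastResponseSketch {
    final int blockfor;                 // responses the reader blocks for
    final int endpoints;                // replicas actually contacted
    final AtomicInteger received = new AtomicInteger();
    volatile boolean repairScheduled = false;

    LastResponseSketch(int blockfor, int endpoints) {
        this.blockfor = blockfor;
        this.endpoints = endpoints;
    }

    void onResponse() {
        int n = received.incrementAndGet();
        if (n >= blockfor) {
            // ... signal the waiting reader here ...
            // Only the very last extra response kicks off the async repair,
            // so it runs exactly once and sees every replica's answer.
            if (blockfor < endpoints && n == endpoints)
                repairScheduled = true; // stands in for the READ_REPAIR stage submit
        }
    }
}
```

With `blockfor = 2` and `endpoints = 3`, the reader unblocks on the second response, and the third (last) response is what schedules the repair comparison.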
[1/9] git commit: inline
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 37b941067 -> 434798297
  refs/heads/cassandra-2.1 21a1d525b -> fdae99d76
  refs/heads/trunk e2bef98c0 -> 0493da933

inline

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e682c037
Branch: refs/heads/cassandra-2.0
Commit: e682c0370e68af75779bf2b0cd1622fce6e03eab
Parents: 37b9410
Author: Jonathan Ellis
Authored: Fri Mar 21 16:55:36 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:49:27 2014 -0500

 .../apache/cassandra/service/ReadCallback.java | 18 --
 1 file changed, 4 insertions(+), 14 deletions(-)
[2/9] git commit: inline
inline

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e682c037
Branch: refs/heads/cassandra-2.1
Commit: e682c0370e68af75779bf2b0cd1622fce6e03eab
Parents: 37b9410
Author: Jonathan Ellis
Authored: Fri Mar 21 16:55:36 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:49:27 2014 -0500

 .../apache/cassandra/service/ReadCallback.java | 18 --
 1 file changed, 4 insertions(+), 14 deletions(-)
[6/9] git commit: Fix race processing range scan responses patch by jbellis; reviewed by mstepura and ayeschenko for CASSANDRA-6820
Fix race processing range scan responses

patch by jbellis; reviewed by mstepura and ayeschenko for CASSANDRA-6820

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/43479829
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/43479829
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/43479829
Branch: refs/heads/trunk
Commit: 4347982974176bd20bfa69d72a72f895e1b05d25
Parents: e682c03
Author: Jonathan Ellis
Authored: Fri Mar 21 21:45:09 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:49:28 2014 -0500

 CHANGES.txt                                             | 1 +
 src/java/org/apache/cassandra/service/ReadCallback.java | 2 ++
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 ++-
 3 files changed, 5 insertions(+), 1 deletion(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/43479829/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index c89ae51..64dc248 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.7
+ * Fix race processing range scan responses (CASSANDRA-6820)
  * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
  * Add uuid() function (CASSANDRA-6473)
  * Omit tombstones from schema digests (CASSANDRA-6862)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/43479829/src/java/org/apache/cassandra/service/ReadCallback.java

diff --git a/src/java/org/apache/cassandra/service/ReadCallback.java b/src/java/org/apache/cassandra/service/ReadCallback.java
index 085787c..150fabe 100644
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@ -76,6 +76,8 @@ public class ReadCallback implements IAsyncCallback= endpoints.size();
     }

     public boolean await(long timePastStart, TimeUnit unit)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/43479829/src/java/org/apache/cassandra/service/StorageProxy.java

diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index a6912c2..12f9ece 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -1473,7 +1473,8 @@
             // collect replies and resolve according to consistency level
             RangeSliceResponseResolver resolver = new RangeSliceResponseResolver(nodeCmd.keyspace, command.timestamp);
-            ReadCallback<RangeSliceReply, Iterable<Row>> handler = new ReadCallback(resolver, consistency_level, nodeCmd, filteredEndpoints);
+            List<InetAddress> minimalEndpoints = filteredEndpoints.subList(0, Math.min(filteredEndpoints.size(), consistency_level.blockFor(keyspace)));
+            ReadCallback<RangeSliceReply, Iterable<Row>> handler = new ReadCallback<>(resolver, consistency_level, nodeCmd, minimalEndpoints);
             handler.assureSufficientLiveNodes();
             resolver.setSources(filteredEndpoints);
             if (filteredEndpoints.size() == 1
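The essence of the StorageProxy change above is the endpoint-trimming step: hand ReadCallback only the first `blockFor` endpoints, so that its "all responses received" condition (`n == endpoints.size()`) lines up with the replicas it actually waits on. A minimal generic sketch of that one line, where `blockFor` stands in for `consistency_level.blockFor(keyspace)`:

```java
import java.util.Arrays;
import java.util.List;

public class EndpointTrim {
    // Trim the filtered endpoint list down to the number of responses the
    // consistency level will block for. subList returns a view, matching
    // the patch's behavior of not copying the list.
    public static <T> List<T> minimalEndpoints(List<T> filteredEndpoints, int blockFor) {
        return filteredEndpoints.subList(0, Math.min(filteredEndpoints.size(), blockFor));
    }
}
```

Note that the resolver still receives the full `filteredEndpoints` via `resolver.setSources(...)`; only the callback's wait set is trimmed.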
[5/9] git commit: Fix race processing range scan responses patch by jbellis; reviewed by mstepura and ayeschenko for CASSANDRA-6820
Fix race processing range scan responses

patch by jbellis; reviewed by mstepura and ayeschenko for CASSANDRA-6820

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/43479829
Branch: refs/heads/cassandra-2.1
Commit: 4347982974176bd20bfa69d72a72f895e1b05d25
Parents: e682c03
Author: Jonathan Ellis
Authored: Fri Mar 21 21:45:09 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:49:28 2014 -0500

 CHANGES.txt                                             | 1 +
 src/java/org/apache/cassandra/service/ReadCallback.java | 2 ++
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 ++-
 3 files changed, 5 insertions(+), 1 deletion(-)
[7/9] git commit: merge from 2.0
merge from 2.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fdae99d7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fdae99d7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fdae99d7
Branch: refs/heads/cassandra-2.1
Commit: fdae99d76745263af6432b75627f46daf33f4f8f
Parents: 21a1d52 4347982
Author: Jonathan Ellis
Authored: Fri Mar 21 21:51:59 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:51:59 2014 -0500

 CHANGES.txt                                    |  1 +
 .../apache/cassandra/service/ReadCallback.java | 20 ++--
 .../apache/cassandra/service/StorageProxy.java |  3 ++-
 3 files changed, 9 insertions(+), 15 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fdae99d7/CHANGES.txt

diff --cc CHANGES.txt
index 1b39e30,64dc248..535f894
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,35 -1,5 +1,36 @@@
-2.0.7
+2.1.0-beta2
+ * Eliminate possibility of CL segment appearing twice in active list (CASSANDRA-6557)
+ * Apply DONTNEED fadvise to commitlog segments (CASSANDRA-6759)
+ * Switch CRC component to Adler and include it for compressed sstables (CASSANDRA-4165)
+ * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
+ * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
+ * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
+ * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
+ * Fix ABTC NPE and apply update function correctly (CASSANDRA-6692)
+ * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
+ * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
+ * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
+ * Scrub should not always clear out repaired status (CASSANDRA-5351)
+ * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
+ * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
+ * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
+ * Change caching option syntax (CASSANDRA-6745)
+ * Fix stress to do proper counter reads (CASSANDRA-6835)
+ * Fix help message for stress counter_write (CASSANDRA-6824)
+ * Fix stress smart Thrift client to pick servers correctly (CASSANDRA-6848)
+ * Add logging levels (minimal, normal or verbose) to stress tool (CASSANDRA-6849)
+ * Fix race condition in Batch CLE (CASSANDRA-6860)
+ * Improve cleanup/scrub/upgradesstables failure handling (CASSANDRA-6774)
+ * ByteBuffer write() methods for serializing sstables (CASSANDRA-6781)
+ * Proper compare function for CollectionType (CASSANDRA-6783)
+ * Update native server to Netty 4 (CASSANDRA-6236)
+ * Fix off-by-one error in stress (CASSANDRA-6883)
+ * Make OpOrder AutoCloseable (CASSANDRA-6901)
+ * Remove sync repair JMX interface (CASSANDRA-6900)
+Merged from 2.0:
+ * Fix race processing range scan responses (CASSANDRA-6820)
  * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
  * Add uuid() function (CASSANDRA-6473)
  * Omit tombstones from schema digests (CASSANDRA-6862)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fdae99d7/src/java/org/apache/cassandra/service/ReadCallback.java

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fdae99d7/src/java/org/apache/cassandra/service/StorageProxy.java

diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index c8a161c,12f9ece..f31e092
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -1460,137 -1420,109 +1460,138 @@@ public class StorageProxy implements St
         List<InetAddress> nextFilteredEndpoints = null;
         while (i < ranges.size())
         {
-            AbstractBounds<RowPosition> range = nextRange == null
-                                              ? ranges.get(i)
-                                              : nextRange;
-            List<InetAddress> liveEndpoints = nextEndpoints == null
-                                            ? getLiveSortedEndpoints(keyspace, range.right)
-                                            : nextEndpoints;
-            List<InetAddress> filteredEndpoints = nextFilteredEndpoints == null
-                                                ? consistency_level.filterForQuery(keyspace, liveEndpoints)
-                                                : nextFilteredEndpoints;
-            ++i;
-
-            // getRestrictedRange has broken the queried range into per-[vnode] token ranges, but this doe
[4/9] git commit: Fix race processing range scan responses patch by jbellis; reviewed by mstepura and ayeschenko for CASSANDRA-6820
Fix race processing range scan responses

patch by jbellis; reviewed by mstepura and ayeschenko for CASSANDRA-6820

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/43479829
Branch: refs/heads/cassandra-2.0
Commit: 4347982974176bd20bfa69d72a72f895e1b05d25
Parents: e682c03
Author: Jonathan Ellis
Authored: Fri Mar 21 21:45:09 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:49:28 2014 -0500

 CHANGES.txt                                             | 1 +
 src/java/org/apache/cassandra/service/ReadCallback.java | 2 ++
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 ++-
 3 files changed, 5 insertions(+), 1 deletion(-)
[8/9] git commit: merge from 2.0
merge from 2.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fdae99d7
Branch: refs/heads/trunk
Commit: fdae99d76745263af6432b75627f46daf33f4f8f
Parents: 21a1d52 4347982
Author: Jonathan Ellis
Authored: Fri Mar 21 21:51:59 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 21:51:59 2014 -0500

 CHANGES.txt                                    |  1 +
 .../apache/cassandra/service/ReadCallback.java | 20 ++--
 .../apache/cassandra/service/StorageProxy.java |  3 ++-
 3 files changed, 9 insertions(+), 15 deletions(-)
[jira] [Commented] (CASSANDRA-6879) ConcurrentModificationException while doing range slice query.
[ https://issues.apache.org/jira/browse/CASSANDRA-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943830#comment-13943830 ]

Aleksey Yeschenko commented on CASSANDRA-6879:
----------------------------------------------

LGTM as well. A tiny nit - while touching those lines anyway, fix the ReadCallback constructor call to new ReadCallback<>(..

> ConcurrentModificationException while doing range slice query.
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-6879
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6879
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: 2.0.4
>            Reporter: Shao-Chuan Wang
>            Assignee: Jonathan Ellis
>             Fix For: 2.0.7
>
>         Attachments: 6879.txt
>
>
> Paging read requests (from either Thrift or the native protocol) sporadically fail due to a race between the read-repair task and the requesting thread waiting on the read-repair results list. The READ_REPAIR task is queued in ReadCallback.maybeResolveForRepair(), and there is no guarantee that its resolve() method (which internally creates a RangeSliceResponseResolver.Reducer and calls repairResults.addAll inside it) is invoked before the requesting thread starts waiting on resolver.repairResults. So there is a small window in which the list is only partially populated while the requesting thread has already started waiting on repairResults. Most of the time the requesting thread either waits for the entire set of repair results or does not wait for repair results at all; the original intent here seems to be to always wait for repair results (when the repair is triggered by read-repair chance).
> {code} > ERROR [Native-Transport-Requests:70827] 2014-03-18 05:00:12,774 > ErrorMessage.java (line 222) Unexpected exception during request > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) > at java.util.ArrayList$Itr.next(ArrayList.java:831) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:423) > at > org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1583) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:188) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:163) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:58) > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188) > at > org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:358) > at > org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:131) > at > org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304) > at > org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) > at > org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) > at > org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) > at > org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43) > at > org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > {code} > {code} > ERROR [Thrift:1] 2014-03-18 07:18:02,434 CustomTThreadPoolServer.java (line > 212) 
Error occurred during processing of message. > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) > at java.util.ArrayList$Itr.next(ArrayList.java:831) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:423) > at > org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1583) > at > org.apache.cassandra.service.pager.RangeSliceQueryPager.queryNextPage(RangeSliceQueryPager.java:85) > at > org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:71) > at > org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:36) > at > org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:202) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.
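The `ConcurrentModificationException` in the traces above is the standard fail-fast behavior of `ArrayList`'s iterator. The race can be shown deterministically with a toy stand-in for `resolver.repairResults` (names here are illustrative, not Cassandra's actual fields): structurally modifying the list after an iterator over it exists, as the read-repair stage does while `FBUtilities.waitOnFutures` is iterating, makes the next `it.next()` throw.

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class RepairResultsRace {
    // Deterministic, single-threaded illustration of the failure mode:
    // the list is modified between iterator creation and it.next(),
    // which is what the concurrent append amounts to in the real race.
    public static boolean triggersCme() {
        List<Integer> repairResults = new ArrayList<>();
        repairResults.add(1);
        Iterator<Integer> it = repairResults.iterator(); // requesting thread starts iterating
        repairResults.add(2);                            // "read-repair thread" appends concurrently
        try {
            it.next();                                   // fail-fast check fires here
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }
}
```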
[jira] [Commented] (CASSANDRA-6907) ignore snapshot repair flag on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943814#comment-13943814 ] Jonathan Ellis commented on CASSANDRA-6907: --- (I would prefer to leave the default of snapshot the same on both windows and linux, which means we should log a warning and ignore the flag instead of failing the operation.) > ignore snapshot repair flag on Windows > -- > > Key: CASSANDRA-6907 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6907 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Jonathan Ellis >Assignee: Joshua McKenzie > Fix For: 2.0.7 > > > Per discussion in CASSANDRA-4050, we should ignore the snapshot repair flag > on windows, and log a warning while proceeding to do non-snapshot repair. -- This message was sent by Atlassian JIRA (v6.2#6252)
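The behavior proposed in the comment above can be sketched as a tiny policy function. This is a hypothetical helper, not Cassandra's actual API: the default stays the same on both platforms, but a requested snapshot repair on Windows logs a warning and falls back to non-snapshot repair rather than failing the operation.

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotFlagPolicy {
    // Returns the snapshot flag that should actually be used.
    // On Windows a requested snapshot is ignored with a warning
    // appended to the supplied log, so the repair still proceeds.
    public static boolean effectiveSnapshotFlag(boolean requested, boolean isWindows, List<String> log) {
        if (requested && isWindows) {
            log.add("WARN: snapshot repair is not supported on Windows; proceeding with non-snapshot repair");
            return false;
        }
        return requested;
    }
}
```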
[jira] [Commented] (CASSANDRA-4050) Unable to remove snapshot files on Windows while original sstables are live
[ https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943815#comment-13943815 ] Jonathan Ellis commented on CASSANDRA-4050: --- Created CASSANDRA-6907 for that. > Unable to remove snapshot files on Windows while original sstables are live > --- > > Key: CASSANDRA-4050 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4050 > Project: Cassandra > Issue Type: Bug > Environment: Windows 7 >Reporter: Jim Newsham >Assignee: Joshua McKenzie >Priority: Minor > > I'm using Cassandra 1.0.8, on Windows 7. When I take a snapshot of the > database, I find that I am unable to delete the snapshot directory (i.e., dir > named "{datadir}\{keyspacename}\snapshots\{snapshottag}") while Cassandra is > running: "The action can't be completed because the folder or a file in it > is open in another program. Close the folder or file and try again" [in > Windows Explorer]. If I terminate Cassandra, then I can delete the directory > with no problem. > I expect to be able to move or delete the snapshotted files while Cassandra > is running, as this should not affect the runtime operation of Cassandra. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-6907) ignore snapshot repair flag on Windows
Jonathan Ellis created CASSANDRA-6907: - Summary: ignore snapshot repair flag on Windows Key: CASSANDRA-6907 URL: https://issues.apache.org/jira/browse/CASSANDRA-6907 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Jonathan Ellis Assignee: Joshua McKenzie Fix For: 2.0.7 Per discussion in CASSANDRA-4050, we should ignore the snapshot repair flag on windows, and log a warning while proceeding to do non-snapshot repair. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6879) ConcurrentModificationException while doing range slice query.
[ https://issues.apache.org/jira/browse/CASSANDRA-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943796#comment-13943796 ]

Mikhail Stepura commented on CASSANDRA-6879:
--------------------------------------------

+1, LGTM

> ConcurrentModificationException while doing range slice query.
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-6879
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6879
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: 2.0.4
>            Reporter: Shao-Chuan Wang
>            Assignee: Jonathan Ellis
>             Fix For: 2.0.7
>
>         Attachments: 6879.txt
[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables
[ https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943787#comment-13943787 ] Pavel Yaskevich commented on CASSANDRA-6689: One more thing I forgot to mention which is for consideration, we might take advantage of SlabAllocator (on-heap) to accommodate all of the temporary buffers created by copying from Memtable so when that allocation spills to oldgen it would create less fragmentation in there. > Partially Off Heap Memtables > > > Key: CASSANDRA-6689 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6689 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Benedict >Assignee: Benedict > Labels: performance > Fix For: 2.1 beta2 > > Attachments: CASSANDRA-6689-final-changes.patch, > CASSANDRA-6689-small-changes.patch > > > Move the contents of ByteBuffers off-heap for records written to a memtable. > (See comments for details) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-6818) SSTable references not released if stream session fails before it starts
[ https://issues.apache.org/jira/browse/CASSANDRA-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-6818: -- Attachment: 6818-2.0-v2.txt Thanks, [~rlow]. I attached v2, which schedules a timeout to release the reference when no ack is received after a file is sent. I hard-coded the timeout at 30 minutes, but maybe we can set it longer. > SSTable references not released if stream session fails before it starts > > > Key: CASSANDRA-6818 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6818 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Richard Low >Assignee: Yuki Morishita > Fix For: 1.2.16, 2.0.7, 2.1 beta2 > > Attachments: 6818-1.2.txt, 6818-2.0-v2.txt, 6818-2.0.txt > > > I observed a large number of 'orphan' SSTables - SSTables that are in the > data directory but not loaded by Cassandra - on a 1.1.12 node that had a > large stream fail before it started. These orphan files are particularly > dangerous because if the node is restarted and picks up these SSTables it > could bring data back to life if tombstones have been GCed. To confirm the > SSTables are orphan, I created a snapshot and it didn't contain these files. > I can see in the logs that they have been compacted so should have been > deleted. > The log entries for the stream are: > {{INFO [StreamStage:1] 2014-02-21 19:41:48,742 StreamOut.java (line 115) > Beginning transfer to /10.0.0.1}} > {{INFO [StreamStage:1] 2014-02-21 19:41:48,743 StreamOut.java (line 96) > Flushing memtables for [CFS(Keyspace='ks', ColumnFamily='cf1'), > CFS(Keyspace='ks', ColumnFamily='cf2')]...}} > {{ERROR [GossipTasks:1] 2014-02-21 19:41:49,239 AbstractStreamSession.java > (line 113) Stream failed because /10.0.0.1 died or was restarted/removed > (streams may still be active in background, but further streams won't be > started)}} > {{INFO [StreamStage:1] 2014-02-21 19:41:51,783 StreamOut.java (line 161) > Stream context metadata [...] 
2267 sstables.}} > {{INFO [StreamStage:1] 2014-02-21 19:41:51,789 StreamOutSession.java (line > 182) Streaming to /10.0.0.1}} > {{INFO [Streaming to /10.0.0.1:1] 2014-02-21 19:42:02,218 FileStreamTask.java > (line 99) Found no stream out session at end of file stream task - this is > expected if the receiver went down}} > After digging in the code, here's what I think the issue is: > 1. StreamOutSession.transferRanges() creates a streaming session, which is > registered with the failure detector in AbstractStreamSession's constructor. > 2. Memtables are flushed, potentially taking a long time. > 3. The remote node fails, convict() is called and the StreamOutSession is > closed. However, at this time StreamOutSession.files is empty because it's > still waiting for the memtables to flush. > 4. Memtables finish flusing, references are obtained to SSTables to be > streamed and the PendingFiles are added to StreamOutSession.files. > 5. The first stream fails but the StreamOutSession isn't found so is never > closed and the references are never released. > This code is more or less the same on 1.2 so I would expect it to reproduce > there. I looked at 2.0 and can't even see where SSTable references are > released when the stream fails. > Some possible fixes for 1.1/1.2: > 1. Don't register with the failure detector until after the PendingFiles are > set up. I think this is the behaviour in 2.0 but I don't know if it was done > like this to avoid this issue. > 2. Detect the above case in (e.g.) StreamOutSession.begin() by noticing the > session has been closed with care to avoid double frees. > 3. Add some synchronization so closeInternal() doesn't race with setting up > the session. -- This message was sent by Atlassian JIRA (v6.2#6252)
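The timeout approach described in v2 can be sketched roughly as follows. This is an illustrative sketch, not the attached patch: the class and method names (`RefReleaseTimeout`, `onFileSent`, `onAck`) are hypothetical, and the SSTable reference release is modeled as a plain `Runnable`. The idea is simply that an ack cancels the pending timer, and whichever side wins releases exactly once.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class RefReleaseTimeout {
    // Daemon thread so a forgotten timer cannot keep the JVM alive.
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "ref-release-timeout");
                t.setDaemon(true);
                return t;
            });
    // Guards against double release: only the first caller wins.
    private final AtomicBoolean released = new AtomicBoolean(false);
    private ScheduledFuture<?> pending;

    // Called after the file is streamed: if no ack arrives within the
    // timeout, release the reference so the SSTable can be deleted.
    public synchronized void onFileSent(Runnable releaseRef, long timeout, TimeUnit unit) {
        pending = timer.schedule(() -> {
            if (released.compareAndSet(false, true))
                releaseRef.run();
        }, timeout, unit);
    }

    // Called when the receiver acks: release now and cancel the timer.
    public synchronized void onAck(Runnable releaseRef) {
        if (pending != null)
            pending.cancel(false);
        if (released.compareAndSet(false, true))
            releaseRef.run();
    }
}
```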
[jira] [Comment Edited] (CASSANDRA-6689) Partially Off Heap Memtables
[ https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943737#comment-13943737 ] Pavel Yaskevich edited comment on CASSANDRA-6689 at 3/21/14 11:52 PM: -- Thanks, [~benedict], I reviewed the branch and this is exactly what I expected to see, I'm attaching a patch with some minor changes e.g. add missing option description to yaml plus option name change and values rename and more explicit naming in SlabAllocator for offheap region storage etc. I think with those changes we are good to go and merge this if nobody else has any concerns... was (Author: xedin): Thanks, [~benedict], I reviewed the branch and this is exactly what I expected to see, I'm attaching a patch with some minor changes e.g. yaml heap + option name change and values rename and more explicit naming in SlabAllocator for offheap region storage etc. I think with those changes we are good to go and merge this if nobody else has any concerns... > Partially Off Heap Memtables > > > Key: CASSANDRA-6689 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6689 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Benedict >Assignee: Benedict > Labels: performance > Fix For: 2.1 beta2 > > Attachments: CASSANDRA-6689-final-changes.patch, > CASSANDRA-6689-small-changes.patch > > > Move the contents of ByteBuffers off-heap for records written to a memtable. > (See comments for details) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-6689) Partially Off Heap Memtables
[ https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Yaskevich updated CASSANDRA-6689: --- Attachment: CASSANDRA-6689-final-changes.patch Thanks, [~benedict], I reviewed the branch and this is exactly what I expected to see, I'm attaching a patch with some minor changes e.g. yaml heap + option name change and values rename and more explicit naming in SlabAllocator for offheap region storage etc. I think with those changes we are good to go and merge this if nobody else has any concerns... > Partially Off Heap Memtables > > > Key: CASSANDRA-6689 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6689 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Benedict >Assignee: Benedict > Labels: performance > Fix For: 2.1 beta2 > > Attachments: CASSANDRA-6689-final-changes.patch, > CASSANDRA-6689-small-changes.patch > > > Move the contents of ByteBuffers off-heap for records written to a memtable. > (See comments for details) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943723#comment-13943723 ] Christian Spriegel commented on CASSANDRA-6892: --- IMHO, using the Thrift column name to access column_metadata must be wrong. I think ThriftValidation.validateColumnData() should work not on CFMetaData.column_metadata, but on regularColumns or regularAndStaticColumns() instead. [~jbellis], [~thobbs]: Do you guys think I am on the right track here? > Cassandra 2.0.x validates Thrift columns incorrectly and causes > InvalidRequestException > --- > > Key: CASSANDRA-6892 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6892 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Christian Spriegel >Assignee: Tyler Hobbs >Priority: Minor > Fix For: 2.0.7 > > Attachments: CASSANDRA-6892_V1.patch > > > I just upgrade my local dev machine to Cassandra 2.0, which causes one of my > automated tests to fail now. With the latest 1.2.x it was working fine. 
> The Exception I get on my client (using Hector) is: > {code} > me.prettyprint.hector.api.exceptions.HInvalidRequestException: > InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265) > at > me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113) > at > me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163) > at > me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69) > at > com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482) > > Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833) > at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) > at > org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964) > at > org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1) > at > me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258) > ... 
46 more > {code} > The schema of my column family is: > {code} > create column family MasterdataIndex with > compression_options = {sstable_compression:SnappyCompressor, > chunk_length_kb:64} and > comparator = UTF8Type and > key_validation_class = 'CompositeType(UTF8Type,LongType)' and > default_validation_class = BytesType; > {code} > From the error message it looks like Cassandra is trying to validate the > value with the key-validator! (My value in this case it 21 bytes long) > I studied the Cassandra 2.0 code and found something wrong. It seems in > CFMetaData.addDefaultKeyAliases it passes the KeyValidator into > ColumnDefinition.partitionKeyDef. Inside ColumnDefinition the validator is > expected to be the value validator! > In CFMetaData: > {code} > private List > addDefaultKeyAliases(List pkCols) > { > for (int i = 0; i < pkCols.size(); i++) > { > if (pkCols.get(i) == null) > { > Integer idx = null; > AbstractType type = keyValidator; > if (keyValidator instanceof CompositeType) > { > idx = i; > type = ((CompositeType)keyValidator).types.get(i); > } > // For compatibility sake, we call the first alias 'key' > rather than 'key1'. This > // is inconsistent with column alias, but it's probably not > worth risking breaking compatibility now. > ByteBuffer name = ByteBufferUtil.bytes(i == 0 ? > DEFAULT_KEY_ALIAS : DEFA
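A minimal model of the suspected bug (toy validators, not Cassandra's AbstractType hierarchy; all names here are for illustration only): if the partition-key aliases added by addDefaultKeyAliases land in the same column_metadata map that Thrift cell validation consults, then a cell whose name collides with a key alias ("key2") gets checked against the key component's LongType instead of the BytesType default validator, producing exactly the "Expected 8 or 0 byte long (21)" rejection reported above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class ThriftValidationModel {
    // Toy validators: LongType accepts exactly 8 (or 0) bytes,
    // BytesType accepts anything.
    static final Predicate<byte[]> LONG_TYPE  = v -> v.length == 8 || v.length == 0;
    static final Predicate<byte[]> BYTES_TYPE = v -> true;

    // column_metadata with a partition-key alias mixed in (the suspected
    // 2.0 behavior): "key2" maps to the key component's type, LongType.
    static final Map<String, Predicate<byte[]>> COLUMN_METADATA = new HashMap<>();
    static { COLUMN_METADATA.put("key2", LONG_TYPE); }

    // Validating a cell by looking its Thrift name up in column_metadata
    // finds the key alias and applies the KEY validator to the cell VALUE.
    static boolean validate(String columnName, byte[] value) {
        return COLUMN_METADATA.getOrDefault(columnName, BYTES_TYPE).test(value);
    }
}
```

With this model, a 21-byte value under the name "key2" fails validation while the same value under any other name passes, matching the reported symptom.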
[jira] [Created] (CASSANDRA-6906) Skip Replica Calculation for Range Slice on LocalStrategy Keyspace
Tyler Hobbs created CASSANDRA-6906: -- Summary: Skip Replica Calculation for Range Slice on LocalStrategy Keyspace Key: CASSANDRA-6906 URL: https://issues.apache.org/jira/browse/CASSANDRA-6906 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Tyler Hobbs Assignee: Tyler Hobbs Priority: Minor For vnode-enabled clusters, the "Determining replicas to query" portion of range slice commands can be expensive. When querying LocalStrategy keyspaces, we can safely skip this step. On a 15 node cluster with vnodes, skipping this saves about 100ms. This makes a big difference for the drivers, which frequently execute queries like "select * from system.peers" and "select * from system.local". -- This message was sent by Atlassian JIRA (v6.2#6252)
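The proposed improvement can be sketched as a simple fast path; this is an assumption-laden illustration, not the actual patch, and the interface and method names here are stand-ins for Cassandra's replication-strategy classes. For a LocalStrategy keyspace the only replica is the local node, so the per-range replica calculation can be skipped entirely.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ReplicaLookup {
    // Stand-in for the replication strategy hierarchy.
    interface ReplicationStrategy {
        List<String> calculateNaturalEndpoints(String token);
    }

    static final class LocalStrategy implements ReplicationStrategy {
        public List<String> calculateNaturalEndpoints(String token) {
            return Collections.singletonList("localhost");
        }
    }

    // Counts how often the expensive per-range calculation actually runs.
    static final AtomicInteger calculations = new AtomicInteger();

    static List<String> replicasForRange(ReplicationStrategy strategy, String token) {
        if (strategy instanceof LocalStrategy)       // fast path: system keyspaces
            return Collections.singletonList("localhost");
        calculations.incrementAndGet();              // full token-ring walk otherwise
        return strategy.calculateNaturalEndpoints(token);
    }
}
```

With 256 vnodes per node, skipping this walk for every range in a system-keyspace scan is what yields the ~100 ms savings the ticket mentions.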
[Cassandra Wiki] Update of "ClientOptions" by cassandracomm
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "ClientOptions" page has been changed by cassandracomm: https://wiki.apache.org/cassandra/ClientOptions?action=diff&rev1=185&rev2=186 * [[https://github.com/mkjellman/perlcassa|perlcassa]] * [[https://metacpan.org/module/Net::Async::CassandraCQL|CassandraCQL]] * Go - * [[https://github.com/tux21b/gocql|gocql]] + * [[https://github.com/gocql/gocql|gocql]] * Haskell * [[http://hackage.haskell.org/package/cassandra-cql|cassandra-cql]] * C++
[Cassandra Wiki] Update of "ContributorsGroup" by JonathanEllis
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "ContributorsGroup" page has been changed by JonathanEllis: https://wiki.apache.org/cassandra/ContributorsGroup?action=diff&rev1=26&rev2=27 Comment: add cassandracomm * Alexis Wilke * Ben McCann * BenHood + * cassandracomm * ChrisBroome * EricEvans * ErnieHershey
[jira] [Updated] (CASSANDRA-6879) ConcurrentModificationException while doing range slice query.
[ https://issues.apache.org/jira/browse/CASSANDRA-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Stepura updated CASSANDRA-6879: --- Assignee: Jonathan Ellis (was: Mikhail Stepura) > ConcurrentModificationException while doing range slice query. > -- > > Key: CASSANDRA-6879 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6879 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: 2.0.4 >Reporter: Shao-Chuan Wang >Assignee: Jonathan Ellis > Fix For: 2.0.7 > > Attachments: 6879.txt > > > The paging read request (either from thrift or native) would sporadically > fail due to a race condition between read repair and requesting thread > waiting for read repair results list. The READ_REPAIR is queued in > ReadCallback.maybeResolveForRepair(), and it does not seem to have guarantee > that its resolve() method (which internally create > RangeSliceResponseResolver.Reducer and doing repairResults.addAll inside > RangeSliceResponseResolver.Reducer) would be invoked before the requesting > thread starts waiting on resolver.repairResults. So, there is a small window > that the list is partially populated, while requesting thread starts waiting > on repairResults. I believe for the most of the time, the requesting thread > is either wait for the entire repair results or not waiting for repair > results at all. The original intent here seems to be waiting for repair > results always (if the repair is triggered by repair chance). 
> {code} > ERROR [Native-Transport-Requests:70827] 2014-03-18 05:00:12,774 > ErrorMessage.java (line 222) Unexpected exception during request > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) > at java.util.ArrayList$Itr.next(ArrayList.java:831) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:423) > at > org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1583) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:188) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:163) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:58) > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188) > at > org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:358) > at > org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:131) > at > org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304) > at > org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) > at > org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) > at > org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) > at > org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43) > at > org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > {code} > {code} > ERROR [Thrift:1] 2014-03-18 07:18:02,434 CustomTThreadPoolServer.java (line > 212) 
Error occurred during processing of message. > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) > at java.util.ArrayList$Itr.next(ArrayList.java:831) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:423) > at > org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1583) > at > org.apache.cassandra.service.pager.RangeSliceQueryPager.queryNextPage(RangeSliceQueryPager.java:85) > at > org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:71) > at > org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:36) > at > org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:202) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:169) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:58) > at > org.apache
[jira] [Updated] (CASSANDRA-6879) ConcurrentModificationException while doing range slice query.
[ https://issues.apache.org/jira/browse/CASSANDRA-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6879: -- Attachment: 6879.txt I'm actually confused that this doesn't happen more often, because we invoke maybeResolveForRepair every time we receive more responses than we need to satisfy consistencylevel -- and for range slices it looks like we always send the request to all replicas (possibly restricted to local DC). So at CL.ONE I would expect to see this frequently. Since we don't attempt read repair on range scans yet (CASSANDRA-967) it looks to me like we can fix by simply restricting the replicas contacted to what is required for the consistencylevel. Then resolve() will only be called by the StorageProxy thread as intended, and there is no race. > ConcurrentModificationException while doing range slice query. > -- > > Key: CASSANDRA-6879 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6879 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: 2.0.4 >Reporter: Shao-Chuan Wang >Assignee: Mikhail Stepura > Fix For: 2.0.7 > > Attachments: 6879.txt > > > The paging read request (either from thrift or native) would sporadically > fail due to a race condition between read repair and requesting thread > waiting for read repair results list. The READ_REPAIR is queued in > ReadCallback.maybeResolveForRepair(), and it does not seem to have guarantee > that its resolve() method (which internally create > RangeSliceResponseResolver.Reducer and doing repairResults.addAll inside > RangeSliceResponseResolver.Reducer) would be invoked before the requesting > thread starts waiting on resolver.repairResults. So, there is a small window > that the list is partially populated, while requesting thread starts waiting > on repairResults. I believe for the most of the time, the requesting thread > is either wait for the entire repair results or not waiting for repair > results at all. 
The original intent here seems to be waiting for repair > results always (if the repair is triggered by repair chance). > {code} > ERROR [Native-Transport-Requests:70827] 2014-03-18 05:00:12,774 > ErrorMessage.java (line 222) Unexpected exception during request > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) > at java.util.ArrayList$Itr.next(ArrayList.java:831) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:423) > at > org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1583) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:188) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:163) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:58) > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188) > at > org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:358) > at > org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:131) > at > org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304) > at > org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) > at > org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) > at > org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) > at > org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43) > at > org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at 
java.lang.Thread.run(Thread.java:744) > {code} > {code} > ERROR [Thrift:1] 2014-03-18 07:18:02,434 CustomTThreadPoolServer.java (line > 212) Error occurred during processing of message. > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) > at java.util.ArrayList$Itr.next(ArrayList.java:831) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:423) > at > org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1583) > at > org.apache.cassandra.service.pager.RangeSlic
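The failure mode in the traces above, ArrayList$Itr.checkForComodification throwing while FBUtilities.waitOnFutures iterates repairResults, can be reproduced in miniature. This is an illustrative sketch, not Cassandra code: in the real race the writer is the read-repair callback thread, but here both roles run on one thread purely to make the exception deterministic.

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class RepairResultsRace {
    // Returns true if iterating a plain ArrayList while it is being
    // structurally modified raises ConcurrentModificationException.
    public static boolean raceDetected() {
        List<Integer> repairResults = new ArrayList<>();
        repairResults.add(1);
        repairResults.add(2);
        try {
            for (Integer ignored : repairResults) {
                // Models the read-repair callback calling repairResults.addAll
                // while the request thread is still iterating the list.
                repairResults.add(3);
            }
        } catch (ConcurrentModificationException e) {
            return true; // fail-fast iterator detected the modification
        }
        return false;
    }
}
```

This is why either the reads and writes to repairResults must be synchronized (or the list made concurrent-safe), or, as the comment above proposes, the contacted replicas restricted so that only the StorageProxy thread ever calls resolve().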
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943642#comment-13943642 ] Christian Spriegel commented on CASSANDRA-6892: --- With CQLSH the insert works fine: {code} cqlsh:MDS> select * from "MasterdataIndex"; key | key2 | column1 | value -+--+-+-- K |1 |key1 | 0x1122112211221122112211221122aaff11aaff cqlsh:MDS> insert into "MasterdataIndex" (key, key2, column1, value) VALUES ('K',1,'key2',0x1122112211221122112211221122aaff11aaff); cqlsh:MDS> select * from "MasterdataIndex"; key | key2 | column1 | value -+--+-+-- K |1 |key1 | 0x1122112211221122112211221122aaff11aaff K |1 |key2 | 0x1122112211221122112211221122aaff11aaff cqlsh:MDS> {code} I can even list the value afterwards using CLI: {code} [default@MDS] list MasterdataIndex; Using default limit of 100 Using default cell limit of 100 --- RowKey: K:1 => (name=key1, value=1122112211221122112211221122aaff11aaff, timestamp=139543981152) => (name=key2, value=1122112211221122112211221122aaff11aaff, timestamp=1395439922582000) 1 Row Returned. Elapsed time: 2.02 msec(s). [default@MDS] {code} > Cassandra 2.0.x validates Thrift columns incorrectly and causes > InvalidRequestException > --- > > Key: CASSANDRA-6892 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6892 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Christian Spriegel >Assignee: Tyler Hobbs >Priority: Minor > Fix For: 2.0.7 > > Attachments: CASSANDRA-6892_V1.patch > > > I just upgrade my local dev machine to Cassandra 2.0, which causes one of my > automated tests to fail now. With the latest 1.2.x it was working fine. 
> The Exception I get on my client (using Hector) is: > {code} > me.prettyprint.hector.api.exceptions.HInvalidRequestException: > InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265) > at > me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113) > at > me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163) > at > me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69) > at > com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482) > > Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833) > at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) > at > org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964) > at > org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1) > at > me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258) > ... 
46 more > {code} > The schema of my column family is: > {code} > create column family MasterdataIndex with > compression_options = {sstable_compression:SnappyCompressor, > chunk_length_kb:64} and > comparator = UTF8Type and > key_validation_class = 'CompositeType(UTF8Type,LongType)' and > default_validation_class = BytesType; > {code} > From the error message it looks like Cassandra is trying to validate the > value with the key-validator! (My value in this case it 21 bytes long) > I studied the Cassandra 2.0 code and found something wrong. It seems in > CFMetaData.addDefaultKeyAliases it passes the KeyValidator into > ColumnDefinition.partitionKeyDef. Inside ColumnDefinition the validator is > expected to be the value validator! > In CFMetaData: > {code}
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943630#comment-13943630 ] Tyler Hobbs commented on CASSANDRA-6892: No, I'm referring to the CFMetadata.column_metadata, which affects column validation. Specifically, CFMetadata.addDefaultKeyAliases() is what adds the key aliases to column_metadata. > Cassandra 2.0.x validates Thrift columns incorrectly and causes > InvalidRequestException > --- > > Key: CASSANDRA-6892 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6892 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Christian Spriegel >Assignee: Tyler Hobbs >Priority: Minor > Fix For: 2.0.7 > > Attachments: CASSANDRA-6892_V1.patch > > > I just upgrade my local dev machine to Cassandra 2.0, which causes one of my > automated tests to fail now. With the latest 1.2.x it was working fine. > The Exception I get on my client (using Hector) is: > {code} > me.prettyprint.hector.api.exceptions.HInvalidRequestException: > InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265) > at > me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113) > at > me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163) > at > me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69) > at > 
com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482) > > Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833) > at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) > at > org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964) > at > org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1) > at > me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258) > ... 46 more > {code} > The schema of my column family is: > {code} > create column family MasterdataIndex with > compression_options = {sstable_compression:SnappyCompressor, > chunk_length_kb:64} and > comparator = UTF8Type and > key_validation_class = 'CompositeType(UTF8Type,LongType)' and > default_validation_class = BytesType; > {code} > From the error message it looks like Cassandra is trying to validate the > value with the key-validator! (My value in this case it 21 bytes long) > I studied the Cassandra 2.0 code and found something wrong. It seems in > CFMetaData.addDefaultKeyAliases it passes the KeyValidator into > ColumnDefinition.partitionKeyDef. Inside ColumnDefinition the validator is > expected to be the value validator! 
> In CFMetaData: > {code} > private List > addDefaultKeyAliases(List pkCols) > { > for (int i = 0; i < pkCols.size(); i++) > { > if (pkCols.get(i) == null) > { > Integer idx = null; > AbstractType type = keyValidator; > if (keyValidator instanceof CompositeType) > { > idx = i; > type = ((CompositeType)keyValidator).types.get(i); > } > // For compatibility sake, we call the first alias 'key' > rather than 'key1'. This > // is inconsistent with column alias, but it's probably not > worth risking breaking compatibility now. > ByteBuffer name = ByteBufferUtil.bytes(i == 0 ? > DEFAULT_KEY_ALIAS : DEFAULT_KEY_ALIAS + (i + 1)); > ColumnDefinition newDef = > ColumnDefinition.partitionKeyDef(name, type, idx); // type i
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943624#comment-13943624 ] Jonathan Ellis commented on CASSANDRA-6892: --- bq. The key aliases don't show up in the normal column metadata Now I'm confused, I thought we're talking about {code} struct CfDef { ... 28: optional binary key_alias, {code}
> Cassandra 2.0.x validates Thrift columns incorrectly and causes
> InvalidRequestException
> ---
>
> Key: CASSANDRA-6892
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6892
> Project: Cassandra
> Issue Type: Bug
> Components: API
> Reporter: Christian Spriegel
> Assignee: Tyler Hobbs
> Priority: Minor
> Fix For: 2.0.7
>
> Attachments: CASSANDRA-6892_V1.patch
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943614#comment-13943614 ] Tyler Hobbs commented on CASSANDRA-6892: Hmm, I'm not convinced this was entirely intentional. This behavior didn't exist in 1.2; you could insert a column named "key" with no validation problems. It looks like CASSANDRA-5125 started this. That ticket (and the code changes) don't directly acknowledge this regression, which makes me think it was accidental. In any case, it effectively creates reserved column names for Thrift clients, which is problematic.
> Cassandra 2.0.x validates Thrift columns incorrectly and causes
> InvalidRequestException
> ---
>
> Key: CASSANDRA-6892
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6892
> Project: Cassandra
> Issue Type: Bug
> Components: API
> Reporter: Christian Spriegel
> Assignee: Tyler Hobbs
> Priority: Minor
> Fix For: 2.0.7
>
> Attachments: CASSANDRA-6892_V1.patch
[jira] [Commented] (CASSANDRA-6840) 2.0.{5,6} node throws EOFExceptions on the Row mutation forwarding path during rolling upgrade from 1.2.15.
[ https://issues.apache.org/jira/browse/CASSANDRA-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943603#comment-13943603 ] Jeremiah Jordan commented on CASSANDRA-6840: Adding some follow up here for anyone hitting this issue. Cross DC forwarding from C* 2.0.0-2.0.6 to C* 1.2.X is broken. If you have multiple DC's and you already pushed through an upgrade, you will want to run repair to make sure everything is in sync. Hinted Handoff should take care of the messed up forwards, but better safe than sorry, so I would run a repair. > 2.0.{5,6} node throws EOFExceptions on the Row mutation forwarding path > during rolling upgrade from 1.2.15. > --- > > Key: CASSANDRA-6840 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6840 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Federico Piccinini >Assignee: Marcus Eriksson > Fix For: 2.0.7 > > Attachments: 0001-Read-id-properly-from-older-versions.patch > > > During a rolling upgrade from 1.2.15 to 2.0.5 nodes running on 2.0.5 throw an > EOFException: > {noformat} > ERROR [MutationStage:12] 2014-03-12 09:46:35,706 RowMutationVerbHandler.java > (line 63) Error in row mutation > java.io.EOFException > at java.io.DataInputStream.readFully(DataInputStream.java:197) > at > org.apache.cassandra.net.CompactEndpointSerializationHelper.deserialize(CompactEndpointSerializationHelper.java:37) > at > org.apache.cassandra.db.RowMutationVerbHandler.forwardToLocalNodes(RowMutationVerbHandler.java:81) > at > org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:49) > at > org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > {noformat} > In this specific context we have a setup with 3 datacenters, 3 
nodes in each > datacenter, NetworkTopologyStrategy as placement_strategy with 3 replicas in > each DC. We noticed the issue on the only 2.0.5 node in the ring. All nodes > run on Java7. We tried to upgrade the 2.0.5 node to 2.0.6, but that > didn't solve the issue. > At first glance it seems that the size of the list of forward > addresses in > org.apache.cassandra.db.RowMutationVerbHandler.forwardToLocalNodes() is > inconsistent with the length of the InputStream, which causes the > deserializer to try to read past the end of the InputStream. -- This message was sent by Atlassian JIRA (v6.2#6252)
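The inconsistency described in that last paragraph, a claimed element count that the remaining stream cannot satisfy, is exactly what surfaces as java.io.EOFException from readFully. A toy Python sketch of the failure mode, using an invented 4-byte-address wire format rather than Cassandra's real serialization:

```python
import io

def read_exact(stream, n):
    """readFully-style read: raise EOFError if the stream ends early."""
    data = stream.read(n)
    if len(data) < n:
        raise EOFError("needed %d bytes, got %d" % (n, len(data)))
    return data

def read_forward_addresses(stream, claimed_count):
    # Each forward address is 4 bytes in this toy format.
    return [read_exact(stream, 4) for _ in range(claimed_count)]

payload = b"\x0a\x00\x00\x01" * 2    # the stream really holds 2 addresses

# Reader whose computed count matches the stream: fine.
print(len(read_forward_addresses(io.BytesIO(payload), 2)))   # 2

# A writer/reader version mismatch makes the reader expect 3 addresses,
# so the third read runs off the end of the stream:
try:
    read_forward_addresses(io.BytesIO(payload), 3)
except EOFError as exc:
    print("EOFError:", exc)
```

Nothing is corrupted on disk in this scenario; the mutation simply fails to deserialize on the forwarding path, which is why repair (or hinted handoff) can bring replicas back in sync afterwards.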
[jira] [Commented] (CASSANDRA-6825) COUNT(*) with WHERE not finding all the matching rows
[ https://issues.apache.org/jira/browse/CASSANDRA-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943586#comment-13943586 ] Bill Mitchell commented on CASSANDRA-6825: -- As it happens, I have that info handy as my JUnit testcase includes it in the log4j output: CREATE TABLE testdb_1395374703023.sr ( siteid text, listid bigint, partition int, createdate timestamp, emailcrypt text, emailaddr text, properties text, removedate timestamp, PRIMARY KEY ((siteid, listid, partition), createdate, emailcrypt) ) WITH CLUSTERING ORDER BY (createdate DESC, emailcrypt ASC) AND read_repair_chance = 0.1 AND dclocal_read_repair_chance = 0.0 AND replicate_on_write = true AND gc_grace_seconds = 864000 AND bloom_filter_fp_chance = 0.01 AND caching = 'KEYS_ONLY' AND comment = '' AND compaction = { 'class' : 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' } AND compression = { 'sstable_compression' : 'org.apache.cassandra.io.compress.SnappyCompressor' }; (siteID was a BIGINT until recently when the schema was changed to TEXT to match the use of siteID elsewhere in the product. I had not thought to represent our Java String as a Cassandra UUID.) > COUNT(*) with WHERE not finding all the matching rows > - > > Key: CASSANDRA-6825 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6825 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: quad core Windows7 x64, single node cluster > Cassandra 2.0.5 >Reporter: Bill Mitchell >Assignee: Tyler Hobbs > Attachments: cassandra.log, selectpartitions.zip, > selectrowcounts.txt, testdb_1395372407904.zip, testdb_1395372407904.zip > > > Investigating another problem, I needed to do COUNT(*) on the several > partitions of a table immediately after a test case ran, and I discovered > that count(*) on the full table and on each of the partitions returned > different counts. 
> In this particular case, SELECT COUNT(*) FROM sr LIMIT 100; returned the > expected count from the test 9 rows. The composite primary key splits > the logical row into six distinct partitions, and when I issue a query asking > for the total across all six partitions, the returned result is only 83999. > Drilling down, I find that SELECT * FROM sr WHERE s = 5 AND l = 11 AND > partition = 0; returns 30,000 rows, but a SELECT COUNT(*) with the identical > WHERE predicate reports only 14,000. > This is failing immediately after running a single small test, such that > there are only two SSTables, sr-jb-1 and sr-jb-2. Compaction never needed to > run. > selectrowcounts.txt contains a copy of the cqlsh output showing the incorrect > count(*) results. -- This message was sent by Atlassian JIRA (v6.2#6252)
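A COUNT(*) that disagrees with the rows a plain SELECT returns under the same predicate characteristically points at the paging path. The sketch below is purely illustrative (it is not the actual 2.0.5 code, and the numbers are arbitrary); it shows how an off-by-one at page boundaries can silently drop rows from a paged count while a full SELECT still returns everything:

```python
def count_rows(rows, page_size, buggy=False):
    """Count rows by fetching fixed-size pages, as a paged COUNT(*) would."""
    total, start = 0, 0
    while True:
        page = rows[start:start + page_size]
        if not page:
            return total
        if buggy and len(page) == page_size:
            # Hypothetical bug: the last row of a full page is used only as
            # the paging anchor for the next fetch and never counted.
            total += len(page) - 1
        else:
            total += len(page)
        start += len(page)

rows = list(range(30000))
print(count_rows(rows, 10000))               # 30000: correct pager
print(count_rows(rows, 10000, buggy=True))   # 29997: one row lost per full page
```

The undercount grows with the number of page boundaries crossed, which is consistent with the discrepancy only showing up on partitions large enough to span multiple pages.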
[jira] [Comment Edited] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943580#comment-13943580 ] Christian Spriegel edited comment on CASSANDRA-6892 at 3/21/14 9:30 PM: Ok, its easily reproducable with CLI: {code} [default@MDS_0] set MasterdataIndex['K:1'][key0] = 1122112211221122112211221122AAFF11AAFF; Value inserted. Elapsed time: 1.08 msec(s). [default@MDS_0] set MasterdataIndex['K:1'][key1] = 1122112211221122112211221122AAFF11AAFF; Value inserted. Elapsed time: 1.08 msec(s). [default@MDS_0] set MasterdataIndex['K:1'][key] = 1122112211221122112211221122AAFF11AAFF; (String didn't validate.) [MDS_0][MasterdataIndex][key] failed validation InvalidRequestException(why:(String didn't validate.) [MDS_0][MasterdataIndex][key] failed validation) at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16640) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:848) at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:832) at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:982) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:225) at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:213) at org.apache.cassandra.cli.CliMain.main(CliMain.java:343) [default@MDS_0] set MasterdataIndex['K:1'][key2] = 1122112211221122112211221122AAFF11AAFF; (Expected 8 or 0 byte long (19)) [MDS_0][MasterdataIndex][key2] failed validation InvalidRequestException(why:(Expected 8 or 0 byte long (19)) [MDS_0][MasterdataIndex][key2] failed validation) at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16640) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:848) at 
org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:832) at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:982) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:225) at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:213) at org.apache.cassandra.cli.CliMain.main(CliMain.java:343) [default@MDS_0] list MasterdataIndex ; Using default limit of 100 Using default cell limit of 100 --- RowKey: K:1 => (name=key0, value=1122112211221122112211221122aaff11aaff, timestamp=1395437337904000) => (name=key1, value=1122112211221122112211221122aaff11aaff, timestamp=1395437341326000) 2 Rows Returned. Elapsed time: 2.35 msec(s). [default@MDS_0] {code} was (Author: christianmovi): Ok, its easily reproducable with CLI: {code} [default@MDS_0] set MasterdataIndex['K:1'][key0] = 1122112211221122112211221122AAFF11AAFF; Value inserted. Elapsed time: 1.08 msec(s). [default@MDS_0] set MasterdataIndex['K:1'][key1] = 1122112211221122112211221122AAFF11AAFF; Value inserted. Elapsed time: 1.08 msec(s). [default@MDS_0] set MasterdataIndex['K:1'][key] = 1122112211221122112211221122AAFF11AAFF; (String didn't validate.) [MDS_0][MasterdataIndex][key] failed validation InvalidRequestException(why:(String didn't validate.) 
[MDS_0][MasterdataIndex][key] failed validation) at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16640) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:848) at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:832) at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:982) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:225) at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:213) at org.apache.cassandra.cli.CliMain.main(CliMain.java:343) [default@MDS_0] set MasterdataIndex['K:1'][key2] = 1122112211221122112211221122AAFF11AAFF; (Expected 8 or 0 byte long (19)) [MDS_0][MasterdataIndex][key2] failed validation InvalidRequestException(why:(Expected 8 or 0 byte long (19)) [MDS_0][MasterdataIndex][key2] failed validation) at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16640) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:848) at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:832) at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:982) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java
[jira] [Comment Edited] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943580#comment-13943580 ] Christian Spriegel edited comment on CASSANDRA-6892 at 3/21/14 9:30 PM: Ok, its easily reproducable with CLI: {code} [default@MDS_0] set MasterdataIndex['K:1'][key0] = 1122112211221122112211221122AAFF11AAFF; Value inserted. Elapsed time: 1.08 msec(s). [default@MDS_0] set MasterdataIndex['K:1'][key1] = 1122112211221122112211221122AAFF11AAFF; Value inserted. Elapsed time: 1.08 msec(s). [default@MDS_0] set MasterdataIndex['K:1'][key] = 1122112211221122112211221122AAFF11AAFF; (String didn't validate.) [MDS_0][MasterdataIndex][key] failed validation InvalidRequestException(why:(String didn't validate.) [MDS_0][MasterdataIndex][key] failed validation) at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16640) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:848) at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:832) at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:982) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:225) at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:213) at org.apache.cassandra.cli.CliMain.main(CliMain.java:343) [default@MDS_0] set MasterdataIndex['K:1'][key2] = 1122112211221122112211221122AAFF11AAFF; (Expected 8 or 0 byte long (19)) [MDS_0][MasterdataIndex][key2] failed validation InvalidRequestException(why:(Expected 8 or 0 byte long (19)) [MDS_0][MasterdataIndex][key2] failed validation) at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16640) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:848) at 
org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:832) at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:982) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:225) at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:213) at org.apache.cassandra.cli.CliMain.main(CliMain.java:343) [default@MDS_0] list MasterdataIndex ; Using default limit of 100 Using default cell limit of 100 --- RowKey: G:1 => (name=GOOD, value=474f4f44, timestamp=1395434320342000) --- RowKey: K:1 => (name=key0, value=1122112211221122112211221122aaff11aaff, timestamp=1395437337904000) => (name=key1, value=1122112211221122112211221122aaff11aaff, timestamp=1395437341326000) 2 Rows Returned. Elapsed time: 2.35 msec(s). [default@MDS_0] {code} was (Author: christianmovi): Ok, its easily reproducable with CLI: {code} [default@MDS_0] list MasterdataIndex ; Using default limit of 100 Using default cell limit of 100 --- RowKey: G:1 => (name=GOOD, value=474f4f44, timestamp=1395434320342000) --- RowKey: K:1 => (name=key0, value=160218046b6579301804474f4f4416c29a0c16, timestamp=1395434320347001) => (name=key1, value=160218046b6579311804474f4f4416c49a0c16, timestamp=1395434320351001) 2 Rows Returned. Elapsed time: 2.6 msec(s). 
[default@MDS_0] set MasterdataIndex['K:1'][key2] = 1122112211221122112211221122AAFF11AAFF; (Expected 8 or 0 byte long (19)) [MDS_0][MasterdataIndex][key2] failed validation InvalidRequestException(why:(Expected 8 or 0 byte long (19)) [MDS_0][MasterdataIndex][key2] failed validation) at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16640) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:848) at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:832) at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:982) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:225) at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:213) at org.apache.cassandra.cli.CliMain.main(CliMain.java:343) [default@MDS_0] {code} > Cassandra 2.0.x validates Thrift columns incorrectly and causes > InvalidRequestException > --- > > Key: CASSANDRA-6892 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6892 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Christian Spriegel >Assignee: Tyler Hobbs >
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943561#comment-13943561 ] Tyler Hobbs commented on CASSANDRA-6892: Besides doing {{ALTER TABLE RENAME key2 TO }}, is there another workaround? (The key aliases don't show up in the normal column metadata, so I don't think Thrift clients could drop that.) > Cassandra 2.0.x validates Thrift columns incorrectly and causes > InvalidRequestException > --- > > Key: CASSANDRA-6892 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6892 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Christian Spriegel >Assignee: Tyler Hobbs >Priority: Minor > Fix For: 2.0.7 > > Attachments: CASSANDRA-6892_V1.patch > > > I just upgrade my local dev machine to Cassandra 2.0, which causes one of my > automated tests to fail now. With the latest 1.2.x it was working fine. > The Exception I get on my client (using Hector) is: > {code} > me.prettyprint.hector.api.exceptions.HInvalidRequestException: > InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265) > at > me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113) > at > me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163) > at > me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69) > at > 
com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482) > > Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833) > at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) > at > org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964) > at > org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1) > at > me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258) > ... 46 more > {code} > The schema of my column family is: > {code} > create column family MasterdataIndex with > compression_options = {sstable_compression:SnappyCompressor, > chunk_length_kb:64} and > comparator = UTF8Type and > key_validation_class = 'CompositeType(UTF8Type,LongType)' and > default_validation_class = BytesType; > {code} > From the error message it looks like Cassandra is trying to validate the > value with the key-validator! (My value in this case it 21 bytes long) > I studied the Cassandra 2.0 code and found something wrong. It seems in > CFMetaData.addDefaultKeyAliases it passes the KeyValidator into > ColumnDefinition.partitionKeyDef. Inside ColumnDefinition the validator is > expected to be the value validator! 
> In CFMetaData: > {code} > private List > addDefaultKeyAliases(List pkCols) > { > for (int i = 0; i < pkCols.size(); i++) > { > if (pkCols.get(i) == null) > { > Integer idx = null; > AbstractType type = keyValidator; > if (keyValidator instanceof CompositeType) > { > idx = i; > type = ((CompositeType)keyValidator).types.get(i); > } > // For compatibility sake, we call the first alias 'key' > rather than 'key1'. This > // is inconsistent with column alias, but it's probably not > worth risking breaking compatibility now. > ByteBuffer name = ByteBufferUtil.bytes(i == 0 ? > DEFAULT_KEY_ALIAS : DEFAULT_KEY_ALIAS + (i + 1)); > ColumnDefinition newDef = > ColumnDefinition.partitionKeyDef(name, type, idx); //
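The mix-up described above is easiest to see in isolation. Below is a minimal, self-contained sketch, not Cassandra source: the `Validator` interface, `validateColumn`, and the length checks are illustrative stand-ins. It shows why a Thrift column that happens to share its name with the key alias "key2" gets checked against the key component type (LongType, which requires an 8-or-0-byte value) instead of the table's default_validation_class (BytesType):

```java
// Hypothetical sketch of the validator lookup that goes wrong in
// CASSANDRA-6892. A validator here is reduced to a length check.
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

public class ValidatorLookupSketch {
    // Stand-in for AbstractType: validates a column value.
    interface Validator { boolean validate(ByteBuffer value); }

    static final Validator LONG_TYPE  = v -> v.remaining() == 8 || v.remaining() == 0;
    static final Validator BYTES_TYPE = v -> true; // BytesType accepts anything

    // column_metadata as built by addDefaultKeyAliases: the key alias "key2"
    // carries the key *component* type (LongType), not a value validator.
    static final Map<String, Validator> columnMetadata = new HashMap<>();
    static {
        columnMetadata.put("key2", LONG_TYPE);
    }

    // Mirrors the validation path: prefer per-column metadata, else default.
    static boolean validateColumn(String name, ByteBuffer value) {
        return columnMetadata.getOrDefault(name, BYTES_TYPE).validate(value);
    }

    public static void main(String[] args) {
        ByteBuffer twentyOneBytes = ByteBuffer.wrap(new byte[21]);
        // A 21-byte value in a column named "key2" fails the 8-or-0-byte
        // LongType check, producing the InvalidRequestException above.
        System.out.println(validateColumn("key2", twentyOneBytes));
        // The same value in any other column passes BytesType validation.
        System.out.println(validateColumn("other", twentyOneBytes));
    }
}
```

Under this reading, the workaround of renaming the alias simply moves the key component type out of the way of the Thrift column name.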
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943557#comment-13943557 ] Jonathan Ellis commented on CASSANDRA-6892: --- bq. I didn't think that key aliases were supposed to affect validation of Thrift columns, but after looking at some tickets and code, it seems like that may be intentional. Can anybody confirm that? Confirmed
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943555#comment-13943555 ] Christian Spriegel commented on CASSANDRA-6892: --- Tyler can reproduce the issue now, but I am posting this anyway :-) The thrift-schema: {code} create column family MasterdataIndex with compression_options = {sstable_compression:SnappyCompressor, chunk_length_kb:64} and comparator = UTF8Type and key_validation_class = 'CompositeType(UTF8Type,LongType)' and default_validation_class = BytesType; {code} With the following data: {code} [default@MDS_0] list MasterdataIndex ; Using default limit of 100 Using default cell limit of 100 --- RowKey: G:1 => (name=GOOD, value=474f4f44, timestamp=1395434320342000) --- RowKey: K:1 => (name=key0, value=160218046b6579301804474f4f4416c29a0c16, timestamp=1395434320347001) => (name=key1, value=160218046b6579311804474f4f4416c49a0c16, timestamp=1395434320351001) 2 Rows Returned. Elapsed time: 30 msec(s). 
[default@MDS_0] {code} (and a "key2" which failed to insert) results in the following CFMetaData.toString(): {code} org.apache.cassandra.config.CFMetaData@54196399[ cfId=1d46d5a5-726e-3610-b08e-ebeca28b6325,ksName=MDS_0, cfName=MasterdataIndex, cfType=Standard, comparator=org.apache.cassandra.db.marshal.UTF8Type, comment=,readRepairChance=0.1,dclocalReadRepairChance=0.0,replicateOnWrite=true, gcGraceSeconds=864000, defaultValidator=org.apache.cassandra.db.marshal.BytesType, keyValidator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.LongType),minCompactionThreshold=4,maxCompactionThreshold=32, column_metadata= { java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]=ColumnDefinition{name=6b657932, validator=org.apache.cassandra.db.marshal.LongType, type=PARTITION_KEY, componentIndex=1, indexName=null, indexType=null}, java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=ColumnDefinition{name=6b6579, validator=org.apache.cassandra.db.marshal.UTF8Type, type=PARTITION_KEY, componentIndex=0, indexName=null, indexType=null} }, compactionStrategyClass=class org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,compactionStrategyOptions={},compressionOptions={sstable_compression=org.apache.cassandra.io.compress.SnappyCompressor, chunk_length_kb=64},bloomFilterFpChance=,memtable_flush_period_in_ms=0,caching=KEYS_ONLY,defaultTimeToLive=0,speculative_retry=NONE,indexInterval=128,populateIoCacheOnFlush=false,droppedColumns={},triggers={}] {code} > Cassandra 2.0.x validates Thrift columns incorrectly and causes > InvalidRequestException > --- > > Key: CASSANDRA-6892 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6892 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Christian Spriegel >Assignee: Tyler Hobbs >Priority: Minor > Fix For: 2.0.7 > > Attachments: CASSANDRA-6892_V1.patch > > > I just upgrade my local dev machine to Cassandra 2.0, which causes one of my > automated tests to fail now. 
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943544#comment-13943544 ] Tyler Hobbs commented on CASSANDRA-6892: After investigating this a bit with Christian, it turns out the column that was failing the insert was named "key2", which was one of the key aliases. I didn't think that key aliases were supposed to affect validation of Thrift columns, but after looking at some tickets and code, it seems like that may be intentional. Can anybody confirm that?
[jira] [Comment Edited] (CASSANDRA-6825) COUNT(*) with WHERE not finding all the matching rows
[ https://issues.apache.org/jira/browse/CASSANDRA-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943492#comment-13943492 ] Tyler Hobbs edited comment on CASSANDRA-6825 at 3/21/14 8:21 PM: - [~wtmitchell3] what type is the siteid column supposed to be? So far I've tried varint, uuid, and text and had problems with each. Just pasting "DESCRIBE KEYSPACE testdb_" from cqlsh would also work. was (Author: thobbs): [~wtmitchell3] what type is the siteid column supposed to be? So far I've tried varint, uuid, and text and had problems with each. Just pasting "DESCRIBE KEYSPACE testdb_" would also work. > COUNT(*) with WHERE not finding all the matching rows > - > > Key: CASSANDRA-6825 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6825 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: quad core Windows7 x64, single node cluster > Cassandra 2.0.5 >Reporter: Bill Mitchell >Assignee: Tyler Hobbs > Attachments: cassandra.log, selectpartitions.zip, > selectrowcounts.txt, testdb_1395372407904.zip, testdb_1395372407904.zip > > > Investigating another problem, I needed to do COUNT(*) on the several > partitions of a table immediately after a test case ran, and I discovered > that count(*) on the full table and on each of the partitions returned > different counts. > In particular case, SELECT COUNT(*) FROM sr LIMIT 100; returned the > expected count from the test 9 rows. The composite primary key splits > the logical row into six distinct partitions, and when I issue a query asking > for the total across all six partitions, the returned result is only 83999. > Drilling down, I find that SELECT * from sr WHERE s = 5 AND l = 11 AND > partition = 0; returns 30,000 rows, but a SELECT COUNT(*) with the identical > WHERE predicate reports only 14,000. > This is failing immediately after running a single small test, such that > there are only two SSTables, sr-jb-1 and sr-jb-2. Compaction never needed to > run. 
> In selectrowcounts.txt is a copy of the cqlsh output showing the incorrect > count(*) results. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6825) COUNT(*) with WHERE not finding all the matching rows
[ https://issues.apache.org/jira/browse/CASSANDRA-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943492#comment-13943492 ] Tyler Hobbs commented on CASSANDRA-6825: [~wtmitchell3] what type is the siteid column supposed to be? So far I've tried varint, uuid, and text and had problems with each. Just pasting "DESCRIBE KEYSPACE testdb_" would also work. -- This message was sent by Atlassian JIRA (v6.2#6252)
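The cross-check the reporter performed rests on a simple invariant: COUNT(*) over the whole table must equal the sum of COUNT(*) over each of its partitions, and a SELECT with a given WHERE predicate must return as many rows as COUNT(*) with the identical predicate reports. The following self-contained sketch (an in-memory stand-in, not Cassandra code; all names are illustrative) states that invariant; it is the check that fails on the node exhibiting this bug:

```java
// Illustrative invariant check for CASSANDRA-6825: total count must equal
// the sum of per-partition counts. The table is simulated with a map.
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CountInvariant {
    // In-memory stand-in for the sr table: partition key -> rows.
    static final Map<Integer, List<String>> table = new HashMap<>();

    static long countAll() {
        long total = 0;
        for (List<String> rows : table.values())
            total += rows.size();
        return total;
    }

    static long countPartition(int partition) {
        return table.getOrDefault(partition, Collections.<String>emptyList()).size();
    }

    public static void main(String[] args) {
        // Six partitions, as in the report; row counts are arbitrary here.
        for (int p = 0; p < 6; p++) {
            List<String> rows = new ArrayList<>();
            for (int r = 0; r < 15000; r++) rows.add("row-" + p + "-" + r);
            table.put(p, rows);
        }
        long summed = 0;
        for (int p = 0; p < 6; p++) summed += countPartition(p);
        // The invariant that the buggy node violates: these must match.
        System.out.println(countAll() == summed);
    }
}
```

In CQL terms the same comparison is SELECT COUNT(*) over the table versus the sum of SELECT COUNT(*) per partition; in the report those disagree even though SELECT * returns the full row set.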
[3/3] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e2bef98c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e2bef98c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e2bef98c Branch: refs/heads/trunk Commit: e2bef98c0d5765857d6b0284e84df8b504bebc76 Parents: 2e48e0c 21a1d52 Author: Yuki Morishita Authored: Fri Mar 21 15:15:00 2014 -0500 Committer: Yuki Morishita Committed: Fri Mar 21 15:15:00 2014 -0500 -- CHANGES.txt | 1 + .../cassandra/service/StorageService.java | 34 .../cassandra/service/StorageServiceMBean.java | 24 -- .../org/apache/cassandra/tools/NodeProbe.java | 15 - 4 files changed, 1 insertion(+), 73 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2bef98c/CHANGES.txt --
[1/3] git commit: Remove sync repair JMX interface
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 35b7da4d0 -> 21a1d525b refs/heads/trunk 2e48e0c43 -> e2bef98c0 Remove sync repair JMX interface patch by yukim; reviewed by jbellis for CASSANDRA-6900 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21a1d525 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21a1d525 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21a1d525 Branch: refs/heads/cassandra-2.1 Commit: 21a1d525bd97de6f5ae0e5c54d779fbcfb733e96 Parents: 35b7da4 Author: Yuki Morishita Authored: Fri Mar 21 15:14:26 2014 -0500 Committer: Yuki Morishita Committed: Fri Mar 21 15:14:26 2014 -0500 -- CHANGES.txt | 1 + .../cassandra/service/StorageService.java | 34 .../cassandra/service/StorageServiceMBean.java | 24 -- .../org/apache/cassandra/tools/NodeProbe.java | 15 - 4 files changed, 1 insertion(+), 73 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/21a1d525/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 21d4d5a..1b39e30 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -28,6 +28,7 @@ * Update native server to Netty 4 (CASSANDRA-6236) * Fix off-by-one error in stress (CASSANDRA-6883) * Make OpOrder AutoCloseable (CASSANDRA-6901) + * Remove sync repair JMX interface (CASSANDRA-6900) Merged from 2.0: * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821) * Add uuid() function (CASSANDRA-6473) http://git-wip-us.apache.org/repos/asf/cassandra/blob/21a1d525/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index 16d8628..57fbaf0 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -2531,40 +2531,6 @@ public class StorageService extends NotificationBroadcasterSupport implements IE return 
forceRepairAsync(keyspaceName, isSequential, isLocal, Collections.singleton(new Range(parsedBeginToken, parsedEndToken)), fullRepair, columnFamilies); } - -/** - * Trigger proactive repair for a keyspace and column families. - * @param keyspaceName - * @param columnFamilies - * @throws IOException - */ -public void forceKeyspaceRepair(final String keyspaceName, boolean isSequential, boolean isLocal, boolean fullRepair, final String... columnFamilies) throws IOException -{ -forceKeyspaceRepairRange(keyspaceName, getLocalRanges(keyspaceName), isSequential, isLocal, fullRepair, columnFamilies); -} - -public void forceKeyspaceRepairPrimaryRange(final String keyspaceName, boolean isSequential, boolean isLocal, boolean fullRepair, final String... columnFamilies) throws IOException -{ -forceKeyspaceRepairRange(keyspaceName, getLocalPrimaryRanges(keyspaceName), isSequential, isLocal, fullRepair, columnFamilies); -} - -public void forceKeyspaceRepairRange(String beginToken, String endToken, final String keyspaceName, boolean isSequential, boolean isLocal, boolean fullRepair, final String... columnFamilies) throws IOException -{ -Token parsedBeginToken = getPartitioner().getTokenFactory().fromString(beginToken); -Token parsedEndToken = getPartitioner().getTokenFactory().fromString(endToken); - -logger.info("starting user-requested repair of range ({}, {}] for keyspace {} and column families {}", -parsedBeginToken, parsedEndToken, keyspaceName, columnFamilies); -forceKeyspaceRepairRange(keyspaceName, Collections.singleton(new Range(parsedBeginToken, parsedEndToken)), isSequential, isLocal, fullRepair, columnFamilies); -} - -public void forceKeyspaceRepairRange(final String keyspaceName, final Collection> ranges, boolean isSequential, boolean isLocal, boolean fullRepair, final String... 
columnFamilies) throws IOException -{ -if (Keyspace.SYSTEM_KS.equalsIgnoreCase(keyspaceName)) -return; -createRepairTask(nextRepairCommand.incrementAndGet(), keyspaceName, ranges, isSequential, isLocal, fullRepair, columnFamilies).run(); -} - private FutureTask createRepairTask(final int cmd, final String keyspace, final Collection> ranges, http://git-wip-us.apache.org/repos/asf/cassandra/blob/21a1d525/src/java/org/apache/cassandra/service/StorageServiceMBean.java
[2/3] git commit: Remove sync repair JMX interface
Remove sync repair JMX interface patch by yukim; reviewed by jbellis for CASSANDRA-6900 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21a1d525 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21a1d525 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21a1d525 Branch: refs/heads/trunk Commit: 21a1d525bd97de6f5ae0e5c54d779fbcfb733e96 Parents: 35b7da4 Author: Yuki Morishita Authored: Fri Mar 21 15:14:26 2014 -0500 Committer: Yuki Morishita Committed: Fri Mar 21 15:14:26 2014 -0500 -- CHANGES.txt | 1 + .../cassandra/service/StorageService.java | 34 .../cassandra/service/StorageServiceMBean.java | 24 -- .../org/apache/cassandra/tools/NodeProbe.java | 15 - 4 files changed, 1 insertion(+), 73 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/21a1d525/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 21d4d5a..1b39e30 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -28,6 +28,7 @@ * Update native server to Netty 4 (CASSANDRA-6236) * Fix off-by-one error in stress (CASSANDRA-6883) * Make OpOrder AutoCloseable (CASSANDRA-6901) + * Remove sync repair JMX interface (CASSANDRA-6900) Merged from 2.0: * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821) * Add uuid() function (CASSANDRA-6473) http://git-wip-us.apache.org/repos/asf/cassandra/blob/21a1d525/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index 16d8628..57fbaf0 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -2531,40 +2531,6 @@ public class StorageService extends NotificationBroadcasterSupport implements IE return forceRepairAsync(keyspaceName, isSequential, isLocal, Collections.singleton(new Range(parsedBeginToken, parsedEndToken)), fullRepair, 
columnFamilies); } - -/** - * Trigger proactive repair for a keyspace and column families. - * @param keyspaceName - * @param columnFamilies - * @throws IOException - */ -public void forceKeyspaceRepair(final String keyspaceName, boolean isSequential, boolean isLocal, boolean fullRepair, final String... columnFamilies) throws IOException -{ -forceKeyspaceRepairRange(keyspaceName, getLocalRanges(keyspaceName), isSequential, isLocal, fullRepair, columnFamilies); -} - -public void forceKeyspaceRepairPrimaryRange(final String keyspaceName, boolean isSequential, boolean isLocal, boolean fullRepair, final String... columnFamilies) throws IOException -{ -forceKeyspaceRepairRange(keyspaceName, getLocalPrimaryRanges(keyspaceName), isSequential, isLocal, fullRepair, columnFamilies); -} - -public void forceKeyspaceRepairRange(String beginToken, String endToken, final String keyspaceName, boolean isSequential, boolean isLocal, boolean fullRepair, final String... columnFamilies) throws IOException -{ -Token parsedBeginToken = getPartitioner().getTokenFactory().fromString(beginToken); -Token parsedEndToken = getPartitioner().getTokenFactory().fromString(endToken); - -logger.info("starting user-requested repair of range ({}, {}] for keyspace {} and column families {}", -parsedBeginToken, parsedEndToken, keyspaceName, columnFamilies); -forceKeyspaceRepairRange(keyspaceName, Collections.singleton(new Range(parsedBeginToken, parsedEndToken)), isSequential, isLocal, fullRepair, columnFamilies); -} - -public void forceKeyspaceRepairRange(final String keyspaceName, final Collection> ranges, boolean isSequential, boolean isLocal, boolean fullRepair, final String... 
columnFamilies) throws IOException -{ -if (Keyspace.SYSTEM_KS.equalsIgnoreCase(keyspaceName)) -return; -createRepairTask(nextRepairCommand.incrementAndGet(), keyspaceName, ranges, isSequential, isLocal, fullRepair, columnFamilies).run(); -} - private FutureTask createRepairTask(final int cmd, final String keyspace, final Collection> ranges, http://git-wip-us.apache.org/repos/asf/cassandra/blob/21a1d525/src/java/org/apache/cassandra/service/StorageServiceMBean.java -- diff --git a/src/java/org/apache/cassandra/service/StorageServiceMBean.java b/src/java/org/apa
[jira] [Commented] (CASSANDRA-6897) Add checksum to the Summary File and Bloom Filter file of SSTables
[ https://issues.apache.org/jira/browse/CASSANDRA-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943458#comment-13943458 ] John Carrino commented on CASSANDRA-6897: - I understand that the mmap path means that the actual sstable cannot contain checksums unless it is compressed. We compress all tables for this reason. We need to detect failures fast and early as we cannot afford any data loss and need to repair right away. I would also like to add that the Index file should have check-summing on each entry because a corruption in that file may mean that bogus data is read and returned. Maybe not on each entry, but every "block" (512B - 1KB) starting at the entry points from the summary file. I think this will go a long way towards peace of mind that Cassandra is returning the right results even on hardware that may have issues. > Add checksum to the Summary File and Bloom Filter file of SSTables > -- > > Key: CASSANDRA-6897 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6897 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Adam Hattrell > > Could we add a checksum to the Summary file and filter file of the SSTable. > Since Cassandra reads the whole bloom filter before actually reading data, it seems > like it would make sense to checksum the bloom filter to make sure there is > no corruption there. Same is true with the summary file. The core of our > question is, can you add checksumming to all elements of the SSTable so if we > read anything corrupt we immediately see a failure? -- This message was sent by Atlassian JIRA (v6.2#6252)
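Block-level checksumming of the kind proposed in this comment can be sketched as follows. This is an illustrative stand-alone example, not Cassandra's on-disk format: the 512-byte block size, the trailing 8-byte CRC32 per block, and the class and method names are all assumptions for the sketch.

```java
// Sketch of per-block checksumming: each fixed-size block of a file is
// followed by a CRC32 of its contents, so corruption is caught at read time.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class ChecksummedBlocks {
    static final int BLOCK_SIZE = 512; // within the 512B - 1KB range suggested above

    // Lay data out as (block, crc32) pairs.
    static byte[] write(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int off = 0; off < data.length; off += BLOCK_SIZE) {
            int len = Math.min(BLOCK_SIZE, data.length - off);
            CRC32 crc = new CRC32();
            crc.update(data, off, len);
            out.write(data, off, len);
            out.write(ByteBuffer.allocate(8).putLong(crc.getValue()).array());
        }
        return out.toByteArray();
    }

    // Recompute every block's checksum; true only if the file is intact.
    static boolean verify(byte[] stored, int dataLength) {
        ByteBuffer buf = ByteBuffer.wrap(stored);
        for (int off = 0; off < dataLength; off += BLOCK_SIZE) {
            byte[] block = new byte[Math.min(BLOCK_SIZE, dataLength - off)];
            buf.get(block);
            CRC32 crc = new CRC32();
            crc.update(block, 0, block.length);
            if (buf.getLong() != crc.getValue())
                return false;
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1300];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        byte[] stored = write(data);
        System.out.println(verify(stored, data.length)); // intact file verifies
        stored[100] ^= 0x40;                             // simulated bit rot
        System.out.println(verify(stored, data.length)); // corruption detected
    }
}
```

The trade-off mirrored here is the one the comment makes: checksumming whole entries would be cheapest to implement, but fixed blocks aligned to the summary's entry points keep the verification cost of a single read bounded.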
[jira] [Comment Edited] (CASSANDRA-6897) Add checksum to the Summary File and Bloom Filter file of SSTables
[ https://issues.apache.org/jira/browse/CASSANDRA-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943458#comment-13943458 ] John Carrino edited comment on CASSANDRA-6897 at 3/21/14 7:46 PM: -- I understand that the mmap path means that the actual sstable cannot contain checksums unless it is compressed. We (my clusters) compress all tables for this reason. We need to detect failures fast and early as we cannot afford any data loss and need to repair right away. I would also like to add that the Index file should have check-summing on each entry because a corruption in that file may mean that bogus data is read and returned. Maybe not on each entry, but every "block" (512B - 1KB) starting at the entry points from the summary file. I think this will go a long way towards piece of mind that cassandra is returning the right results even on hardware that may have issues. was (Author: johnyoh): I understand that the mmap path means that the actual sstable cannot contain checksums unless it is compressed. We compress all tables for this reason. We need to detect failures fast and early as we cannot afford any data loss and need to repair right away. I would also like to add that the Index file should have check-summing on each entry because a corruption in that file may mean that bogus data is read and returned. Maybe not on each entry, but every "block" (512B - 1KB) starting at the entry points from the summary file. I think this will go a long way towards piece of mind that cassandra is returning the right results even on hardware that may have issues. > Add checksum to the Summary File and Bloom Filter file of SSTables > -- > > Key: CASSANDRA-6897 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6897 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Adam Hattrell > > Could we add a checksum to the Summary file and filter file of the SSTable. 
> Since Cassandra reads the whole bloom filter before actually reading data, it seems > like it would make sense to checksum the bloom filter to make sure there is > no corruption there. Same is true with the summary file. The core of our > question is, can you add checksumming to all elements of the SSTable so if we > read anything corrupt we immediately see a failure? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CASSANDRA-6897) Add checksum to the Summary File and Bloom Filter file of SSTables
[ https://issues.apache.org/jira/browse/CASSANDRA-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943458#comment-13943458 ] John Carrino edited comment on CASSANDRA-6897 at 3/21/14 7:46 PM: -- I understand that the mmap path means that the actual sstable cannot contain checksums unless it is compressed. We compress all tables for this reason. We need to detect failures fast and early as we cannot afford any data loss and need to repair right away. I would also like to add that the Index file should have check-summing on each entry because a corruption in that file may mean that bogus data is read and returned. Maybe not on each entry, but every "block" (512B - 1KB) starting at the entry points from the summary file. I think this will go a long way towards peace of mind that Cassandra is returning the right results even on hardware that may have issues. was (Author: johnyoh): I understand that the mmap path means that the actually sstable cannot contain checksums unless it is compressed. We compress all tables for this reason. We need to detect failures fast and early as we cannot afford any data loss and need to repair right away. I would also like to add that the Index file should have check-summing on each entry because a corruption in that file may mean that bogus data is read and returned. Maybe not on each entry, but every "block" (512B - 1KB) starting at the entry points from the summary file. I think this will go a long way towards piece of mind that cassandra is returning the right results even on hardware that may have issues. > Add checksum to the Summary File and Bloom Filter file of SSTables > -- > > Key: CASSANDRA-6897 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6897 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Adam Hattrell > > Could we add a checksum to the Summary file and filter file of the SSTable. 
> Since Cassandra reads the whole bloom filter before actually reading data, it seems > like it would make sense to checksum the bloom filter to make sure there is > no corruption there. Same is true with the summary file. The core of our > question is, can you add checksumming to all elements of the SSTable so if we > read anything corrupt we immediately see a failure? -- This message was sent by Atlassian JIRA (v6.2#6252)
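The block-level checksumming proposed above can be sketched as a writer/reader pair that appends a CRC32 after every fixed-size block, so a corrupt Index or Summary read fails immediately instead of returning bogus data. This is a minimal illustration of the idea only; the block size, framing, and function names are assumptions, not Cassandra's actual on-disk format:

```python
import zlib

BLOCK_SIZE = 512  # granularity suggested in the comment (512B - 1KB)

def write_with_checksums(payload: bytes) -> bytes:
    """Append a CRC32 after every block of the payload."""
    out = bytearray()
    for i in range(0, len(payload), BLOCK_SIZE):
        block = payload[i:i + BLOCK_SIZE]
        out += block
        out += zlib.crc32(block).to_bytes(4, "big")
    return bytes(out)

def read_with_checksums(data: bytes) -> bytes:
    """Verify every block's CRC32; fail fast on the first corrupt block."""
    out = bytearray()
    i = 0
    while i < len(data):
        chunk = data[i:i + BLOCK_SIZE + 4]          # block plus its 4-byte CRC
        block, stored = chunk[:-4], chunk[-4:]
        if zlib.crc32(block) != int.from_bytes(stored, "big"):
            raise IOError("corrupt block at offset %d" % i)
        out += block
        i += len(chunk)
    return bytes(out)
```

A single flipped bit anywhere in a block makes the read raise instead of silently returning wrong results, which is the "detect failures fast and early" property the comment asks for.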
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943457#comment-13943457 ] Christian Spriegel commented on CASSANDRA-6892: --- I think this must be some special case I am stumbling upon. This error only happens for a single testcase (out of >1100), and there are actually some values in that column family. My schema creation is a bit more complicated: First I create the schema by calling cassandra-cli from inside my unit test. Right after that I modify the schema through Hector/thrift and disable gc-grace for my test-schema: {code} final Cluster cluster = keyspace.getCluster(); final String keyspaceName = keyspace.getKeyspace().getKeyspaceName(); final KeyspaceDefinition keyspaceDefinition = cluster.describeKeyspace(keyspaceName); final List<ColumnFamilyDefinition> cfDefs = keyspaceDefinition.getCfDefs(); for (final ColumnFamilyDefinition cfDef : cfDefs) { cfDef.setGcGraceSeconds(0); cfDef.setMemtableFlushAfterMins(Integer.MAX_VALUE); cfDef.setReadRepairChance(0.0); cfDef.setKeyCacheSavePeriodInSeconds(Integer.MAX_VALUE); cluster.updateColumnFamily(cfDef); } {code} I could imagine that modifying the schema through thrift breaks/broke the schema for 2.0. > Cassandra 2.0.x validates Thrift columns incorrectly and causes > InvalidRequestException > --- > > Key: CASSANDRA-6892 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6892 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Christian Spriegel >Assignee: Tyler Hobbs >Priority: Minor > Fix For: 2.0.7 > > Attachments: CASSANDRA-6892_V1.patch > > > I just upgraded my local dev machine to Cassandra 2.0, which causes one of my > automated tests to fail now. With the latest 1.2.x it was working fine. 
> The Exception I get on my client (using Hector) is: > {code} > me.prettyprint.hector.api.exceptions.HInvalidRequestException: > InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265) > at > me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113) > at > me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115) > at > me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163) > at > me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69) > at > com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482) > > Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) > [MDS_0][MasterdataIndex][key2] failed validation) > at > org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833) > at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) > at > org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964) > at > org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246) > at > me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1) > at > me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104) > at > me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258) > ... 
46 more > {code} > The schema of my column family is: > {code} > create column family MasterdataIndex with > compression_options = {sstable_compression:SnappyCompressor, > chunk_length_kb:64} and > comparator = UTF8Type and > key_validation_class = 'CompositeType(UTF8Type,LongType)' and > default_validation_class = BytesType; > {code} > From the error message it looks like Cassandra is trying to validate the > value with the key-validator! (My value in this case is 21 bytes long) > I studied the Cassandra 2.0 code and found something wrong. It seems in > CFMetaData.addDefaultKeyAliases it passes the KeyValidator into > ColumnDefinition.partitionKeyDef. Inside Colum
[jira] [Updated] (CASSANDRA-6905) commitlog archive replay should attempt to replay all mutations
[ https://issues.apache.org/jira/browse/CASSANDRA-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-6905: Description: Currently when you do a point-in-time recovery using archived commitlogs, the replay stops when the time is encountered, but since timestamps are supplied by the client we can't guarantee the segment is ordered by timestamp, so some mutations can be lost. Instead we could continue past the given timestamp, and just filter out any mutations greater than it. (was: Currently when you do a point-in-time recovery using archived commitlogs, the replay stops when the time is encountered, but since timestamp are supplied by the client we can't guarantee the segment is ordered by timestamp, so some mutations can be lost. Instead we could continue past the given timestamp, and just filter out any mutations greater than it.) > commitlog archive replay should attempt to replay all mutations > --- > > Key: CASSANDRA-6905 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6905 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Brandon Williams >Priority: Minor > Fix For: 2.0.7 > > > Currently when you do a point-in-time recovery using archived commitlogs, the > replay stops when the time is encountered, but since timestamps are supplied > by the client we can't guarantee the segment is ordered by timestamp, so some > mutations can be lost. Instead we could continue past the given timestamp, > and just filter out any mutations greater than it. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-6905) commitlog archive replay should attempt to replay all mutations
Brandon Williams created CASSANDRA-6905: --- Summary: commitlog archive replay should attempt to replay all mutations Key: CASSANDRA-6905 URL: https://issues.apache.org/jira/browse/CASSANDRA-6905 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Brandon Williams Priority: Minor Fix For: 2.0.7 Currently when you do a point-in-time recovery using archived commitlogs, the replay stops when the time is encountered, but since timestamps are supplied by the client we can't guarantee the segment is ordered by timestamp, so some mutations can be lost. Instead we could continue past the given timestamp, and just filter out any mutations greater than it. -- This message was sent by Atlassian JIRA (v6.2#6252)
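The difference between stopping at the target time and filtering by it can be sketched with a hypothetical in-memory segment. `Mutation` and the list-based segment are illustrative stand-ins, not Cassandra's actual replay code, which works on serialized commitlog entries:

```python
from collections import namedtuple

# Illustrative stand-in for a commitlog mutation with a client-supplied timestamp.
Mutation = namedtuple("Mutation", ["timestamp", "payload"])

def replay_stop_at(segment, target_ts):
    """Current behaviour: stop as soon as a timestamp past the target is seen."""
    out = []
    for m in segment:
        if m.timestamp > target_ts:
            break  # later-in-file mutations are dropped, even older-timestamped ones
        out.append(m)
    return out

def replay_filtered(segment, target_ts):
    """Proposed behaviour: scan the whole segment and filter by timestamp."""
    return [m for m in segment if m.timestamp <= target_ts]
```

With a segment whose client timestamps arrive out of order, say 1, 5, 2, 3, and a recovery target of 3, stopping at the first late timestamp replays only the mutation stamped 1, while filtering correctly replays those stamped 1, 2, and 3.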
[jira] [Comment Edited] (CASSANDRA-6825) COUNT(*) with WHERE not finding all the matching rows
[ https://issues.apache.org/jira/browse/CASSANDRA-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13942881#comment-13942881 ] Bill Mitchell edited comment on CASSANDRA-6825 at 3/21/14 7:09 PM: --- Tyler, you used an interesting word, "flush". After running a test with a different database name, I went back and looked at the first keyspace, as I did not drain the node before zipping the file the first time. A third SSTable had now been written. See the larger .zip file I have attached. When I try the same statements through cqlsh, a SELECT * FROM sr WHERE ... AND partition = 2 now shows 2 rows, but SELECT COUNT(*) FROM sr WHERE ... AND partition=2 still returns a count of 1. So the count is still incorrect. was (Author: wtmitchell3): Tyler, you use an interesting word, "flush". After running a test with a different database name, I went back and looked at the first keyspace, as I did not drain the node before zipping the file the first time. A third SSTable had now been written. See the larger .zip file I have attached. When I try the same statements through cqlsh, a SELECT * FROM sr WHERE ... AND partition = 2 now shows 2 rows, but SELECT COUNT(*) FROM sr WHERE ... AND partition=2 still returns a count of 1. So the count is still incorrect. 
> COUNT(*) with WHERE not finding all the matching rows > - > > Key: CASSANDRA-6825 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6825 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: quad core Windows7 x64, single node cluster > Cassandra 2.0.5 >Reporter: Bill Mitchell >Assignee: Tyler Hobbs > Attachments: cassandra.log, selectpartitions.zip, > selectrowcounts.txt, testdb_1395372407904.zip, testdb_1395372407904.zip > > > Investigating another problem, I needed to do COUNT(*) on the several > partitions of a table immediately after a test case ran, and I discovered > that count(*) on the full table and on each of the partitions returned > different counts. > In this particular case, SELECT COUNT(*) FROM sr LIMIT 100; returned the > expected count from the test 9 rows. The composite primary key splits > the logical row into six distinct partitions, and when I issue a query asking > for the total across all six partitions, the returned result is only 83999. > Drilling down, I find that SELECT * from sr WHERE s = 5 AND l = 11 AND > partition = 0; returns 30,000 rows, but a SELECT COUNT(*) with the identical > WHERE predicate reports only 14,000. > This is failing immediately after running a single small test, such that > there are only two SSTables, sr-jb-1 and sr-jb-2. Compaction never needed to > run. > In selectrowcounts.txt is a copy of the cqlsh output showing the incorrect > count(*) results. -- This message was sent by Atlassian JIRA (v6.2#6252)
[6/6] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e48e0c4 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e48e0c4 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e48e0c4 Branch: refs/heads/trunk Commit: 2e48e0c43a3eb2c733a498a5bde49ec5d2953578 Parents: a161f08 35b7da4 Author: Mikhail Stepura Authored: Fri Mar 21 11:45:42 2014 -0700 Committer: Mikhail Stepura Committed: Fri Mar 21 11:45:42 2014 -0700 -- bin/cqlsh | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) --
[2/6] git commit: Make cqlsh prompt for a password if the user doesn't enter one.
Make cqlsh prompt for a password if the user doesn't enter one. patch by J.B. Langston; reviewed by Mikhail Stepura for CASSANDRA-6902 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37b94106 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37b94106 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37b94106 Branch: refs/heads/cassandra-2.1 Commit: 37b9410675b93843bc8223e57621a10dd841c291 Parents: 6b6139c Author: J.B. Langston Authored: Fri Mar 21 11:42:06 2014 -0700 Committer: Mikhail Stepura Committed: Fri Mar 21 11:42:06 2014 -0700 -- bin/cqlsh | 5 + 1 file changed, 5 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/37b94106/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 4669cda..6fa3afe 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -51,6 +51,7 @@ import locale import platform import warnings import csv +import getpass readline = None @@ -467,6 +468,10 @@ class Shell(cmd.Cmd): self.hostname = hostname self.port = port self.transport_factory = transport_factory + +if username and not password: +password = getpass.getpass() + self.username = username self.password = password self.keyspace = keyspace
[5/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Conflicts: bin/cqlsh Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35b7da4d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35b7da4d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35b7da4d Branch: refs/heads/cassandra-2.1 Commit: 35b7da4d0d2a3bd538f6c3394d5a60d553bda458 Parents: 25af168 37b9410 Author: Mikhail Stepura Authored: Fri Mar 21 11:45:19 2014 -0700 Committer: Mikhail Stepura Committed: Fri Mar 21 11:45:19 2014 -0700 -- bin/cqlsh | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/35b7da4d/bin/cqlsh -- diff --cc bin/cqlsh index f704f0e,6fa3afe..aa19c4d --- a/bin/cqlsh +++ b/bin/cqlsh @@@ -464,9 -467,13 +465,11 @@@ class Shell(cmd.Cmd) cmd.Cmd.__init__(self, completekey=completekey) self.hostname = hostname self.port = port -self.transport_factory = transport_factory - -if username and not password: -password = getpass.getpass() - -self.username = username -self.password = password +self.auth_provider = None - if username and password: ++if username: ++if not password: ++password = getpass.getpass() +self.auth_provider = lambda host: dict(username=username, password=password) self.keyspace = keyspace self.tracing_enabled = tracing_enabled self.expand_enabled = expand_enabled
[4/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Conflicts: bin/cqlsh Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35b7da4d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35b7da4d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35b7da4d Branch: refs/heads/trunk Commit: 35b7da4d0d2a3bd538f6c3394d5a60d553bda458 Parents: 25af168 37b9410 Author: Mikhail Stepura Authored: Fri Mar 21 11:45:19 2014 -0700 Committer: Mikhail Stepura Committed: Fri Mar 21 11:45:19 2014 -0700 -- bin/cqlsh | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/35b7da4d/bin/cqlsh -- diff --cc bin/cqlsh index f704f0e,6fa3afe..aa19c4d --- a/bin/cqlsh +++ b/bin/cqlsh @@@ -464,9 -467,13 +465,11 @@@ class Shell(cmd.Cmd) cmd.Cmd.__init__(self, completekey=completekey) self.hostname = hostname self.port = port -self.transport_factory = transport_factory - -if username and not password: -password = getpass.getpass() - -self.username = username -self.password = password +self.auth_provider = None - if username and password: ++if username: ++if not password: ++password = getpass.getpass() +self.auth_provider = lambda host: dict(username=username, password=password) self.keyspace = keyspace self.tracing_enabled = tracing_enabled self.expand_enabled = expand_enabled
[3/6] git commit: Make cqlsh prompt for a password if the user doesn't enter one.
Make cqlsh prompt for a password if the user doesn't enter one. patch by J.B. Langston; reviewed by Mikhail Stepura for CASSANDRA-6902 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37b94106 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37b94106 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37b94106 Branch: refs/heads/trunk Commit: 37b9410675b93843bc8223e57621a10dd841c291 Parents: 6b6139c Author: J.B. Langston Authored: Fri Mar 21 11:42:06 2014 -0700 Committer: Mikhail Stepura Committed: Fri Mar 21 11:42:06 2014 -0700 -- bin/cqlsh | 5 + 1 file changed, 5 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/37b94106/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 4669cda..6fa3afe 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -51,6 +51,7 @@ import locale import platform import warnings import csv +import getpass readline = None @@ -467,6 +468,10 @@ class Shell(cmd.Cmd): self.hostname = hostname self.port = port self.transport_factory = transport_factory + +if username and not password: +password = getpass.getpass() + self.username = username self.password = password self.keyspace = keyspace
[1/6] git commit: Make cqlsh prompt for a password if the user doesn't enter one.
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 6b6139cc8 -> 37b941067 refs/heads/cassandra-2.1 25af168c0 -> 35b7da4d0 refs/heads/trunk a161f088c -> 2e48e0c43 Make cqlsh prompt for a password if the user doesn't enter one. patch by J.B. Langston; reviewed by Mikhail Stepura for CASSANDRA-6902 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37b94106 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37b94106 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37b94106 Branch: refs/heads/cassandra-2.0 Commit: 37b9410675b93843bc8223e57621a10dd841c291 Parents: 6b6139c Author: J.B. Langston Authored: Fri Mar 21 11:42:06 2014 -0700 Committer: Mikhail Stepura Committed: Fri Mar 21 11:42:06 2014 -0700 -- bin/cqlsh | 5 + 1 file changed, 5 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/37b94106/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 4669cda..6fa3afe 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -51,6 +51,7 @@ import locale import platform import warnings import csv +import getpass readline = None @@ -467,6 +468,10 @@ class Shell(cmd.Cmd): self.hostname = hostname self.port = port self.transport_factory = transport_factory + +if username and not password: +password = getpass.getpass() + self.username = username self.password = password self.keyspace = keyspace
[jira] [Commented] (CASSANDRA-6902) Make cqlsh prompt for a password if the user doesn't enter one
[ https://issues.apache.org/jira/browse/CASSANDRA-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943359#comment-13943359 ] Jonathan Ellis commented on CASSANDRA-6902: --- FWIW if there are no conflicts I would be fine with this going into 2.0 as well as 2.1. > Make cqlsh prompt for a password if the user doesn't enter one > -- > > Key: CASSANDRA-6902 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6902 > Project: Cassandra > Issue Type: New Feature > Components: Tools >Reporter: J.B. Langston >Assignee: J.B. Langston >Priority: Minor > Fix For: 2.0.7, 2.1 beta2 > > Attachments: trunk-6902.txt > > > If the user specifies -u username and leaves off -p password, cqlsh should > prompt for a password without echoing it to the screen instead of throwing an > exception, which it currently does. I know that you can put a username and > password in the .cqlshrc file but if a user wants to log in with multiple > accounts and not have the password visible on the screen, there's no way to > currently do that. -- This message was sent by Atlassian JIRA (v6.2#6252)
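The fix committed above amounts to the standard `getpass` idiom; a minimal standalone sketch of the same logic (the helper function and its name are hypothetical, not part of cqlsh):

```python
import getpass

def resolve_password(username, password):
    """Prompt without echoing only when a username was given but no password,
    mirroring the cqlsh change for CASSANDRA-6902."""
    if username and not password:
        password = getpass.getpass()
    return password
```

`getpass.getpass()` reads from the terminal with echo disabled, so the password never appears on screen or in shell history, which is exactly what the ticket asks for.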
[jira] [Issue Comment Deleted] (CASSANDRA-6902) Make cqlsh prompt for a password if the user doesn't enter one
[ https://issues.apache.org/jira/browse/CASSANDRA-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6902: -- Comment: was deleted (was: FWIW if there are no conflicts I would be fine with this going into 2.0 as well as 2.1.) > Make cqlsh prompt for a password if the user doesn't enter one > -- > > Key: CASSANDRA-6902 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6902 > Project: Cassandra > Issue Type: New Feature > Components: Tools >Reporter: J.B. Langston >Assignee: J.B. Langston >Priority: Minor > Fix For: 2.0.7, 2.1 beta2 > > Attachments: trunk-6902.txt > > > If the user specifies -u username and leaves off -p password, cqlsh should > prompt for a password without echoing it to the screen instead of throwing an > exception, which it currently does. I know that you can put a username and > password in the .cqlshrc file but if a user wants to log in with multiple > accounts and not have the password visible on the screen, there's no way to > currently do that. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-6902) Make cqlsh prompt for a password if the user doesn't enter one
[ https://issues.apache.org/jira/browse/CASSANDRA-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Stepura updated CASSANDRA-6902: --- Fix Version/s: 2.1 beta2 > Make cqlsh prompt for a password if the user doesn't enter one > -- > > Key: CASSANDRA-6902 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6902 > Project: Cassandra > Issue Type: New Feature > Components: Tools >Reporter: J.B. Langston >Assignee: J.B. Langston >Priority: Minor > Fix For: 2.0.7, 2.1 beta2 > > Attachments: trunk-6902.txt > > > If the user specifies -u username and leaves off -p password, cqlsh should > prompt for a password without echoing it to the screen instead of throwing an > exception, which it currently does. I know that you can put a username and > password in the .cqlshrc file but if a user wants to log in with multiple > accounts and not have the password visible on the screen, there's no way to > currently do that. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6904) commitlog segments may not be archived after restart
[ https://issues.apache.org/jira/browse/CASSANDRA-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943343#comment-13943343 ] Benedict commented on CASSANDRA-6904: - To clarify, I think the absolute safest thing to do is to hard-link as soon as a CL is swapped into active, and then only recycle after an archive operation is successfully run on the hard-linked version. This should make sure we take care of not-yet-finished segments (and hence not-yet-archived) on startup as well, if we push all recycles through this path. > commitlog segments may not be archived after restart > > > Key: CASSANDRA-6904 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6904 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jonathan Ellis > Fix For: 2.0.7 > > > commitlog segments are archived when they are full, so the current active > segment will not be archived on restart (and its contents will not be > available for pitr). -- This message was sent by Atlassian JIRA (v6.2#6252)
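The hard-link-on-activation idea in the comment above can be sketched with plain file operations. The helper names, staging-directory layout, and `archive_cmd` callback are assumptions for illustration, not the actual commitlog code:

```python
import os
import shutil

def activate_segment(segment_path, staging_dir):
    """Hard-link a segment into a staging directory the moment it becomes
    active, so its bytes survive a restart even if archiving never ran."""
    os.makedirs(staging_dir, exist_ok=True)
    link = os.path.join(staging_dir, os.path.basename(segment_path))
    if not os.path.exists(link):
        os.link(segment_path, link)  # same inode: no copy, cheap and crash-safe
    return link

def recycle_segment(segment_path, staging_dir, archive_cmd):
    """Recycle only after the hard-linked copy has been archived successfully."""
    link = os.path.join(staging_dir, os.path.basename(segment_path))
    archive_cmd(link)  # e.g. copy to backup storage; raises on failure
    os.unlink(link)    # only now is the segment safe to reuse
```

Because the hard link shares the segment's inode, the not-yet-archived data persists across a restart at no I/O cost, and pushing every recycle through `recycle_segment` guarantees nothing is reused before its archive step succeeded.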
[jira] [Commented] (CASSANDRA-6821) Cassandra can't delete snapshots for keyspaces that no longer exist.
[ https://issues.apache.org/jira/browse/CASSANDRA-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943340#comment-13943340 ] Jonathan Ellis commented on CASSANDRA-6821: --- Sounds good, and patch applies to both. Committed > Cassandra can't delete snapshots for keyspaces that no longer exist. > > > Key: CASSANDRA-6821 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6821 > Project: Cassandra > Issue Type: Improvement >Reporter: Nick Bailey >Assignee: Lyuben Todorov > Labels: nodetool > Fix For: 2.0.7, 2.1 beta2 > > Attachments: trunk-6821_v2.patch > > > If you drop a keyspace you can no longer clean up the snapshots for that > keyspace without resorting to the command line. It would be nice to be able > clean up those via jmx, especially for external tools. -- This message was sent by Atlassian JIRA (v6.2#6252)
[2/6] git commit: Allow deleting snapshots from dropped keyspaces patch by Lyuben Todorov; reviewed by Nick Bailey for CASSANDRA-6281
Allow deleting snapshots from dropped keyspaces patch by Lyuben Todorov; reviewed by Nick Bailey for CASSANDRA-6281 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b6139cc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b6139cc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b6139cc Branch: refs/heads/cassandra-2.1 Commit: 6b6139cc8bf903f76dfc422d1f45a800a84b9000 Parents: c843b6b Author: Jonathan Ellis Authored: Fri Mar 21 13:17:33 2014 -0500 Committer: Jonathan Ellis Committed: Fri Mar 21 13:17:33 2014 -0500 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/Keyspace.java | 4 ++-- .../cassandra/service/SnapshotVerbHandler.java | 4 +++- .../cassandra/service/StorageService.java | 23 ++-- 4 files changed, 17 insertions(+), 15 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6139cc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c5f2666..c89ae51 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.7 + * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821) * Add uuid() function (CASSANDRA-6473) * Omit tombstones from schema digests (CASSANDRA-6862) * Include correct consistencyLevel in LWT timeout (CASSANDRA-6884) http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6139cc/src/java/org/apache/cassandra/db/Keyspace.java -- diff --git a/src/java/org/apache/cassandra/db/Keyspace.java b/src/java/org/apache/cassandra/db/Keyspace.java index f5369f9..714956a 100644 --- a/src/java/org/apache/cassandra/db/Keyspace.java +++ b/src/java/org/apache/cassandra/db/Keyspace.java @@ -237,9 +237,9 @@ public class Keyspace * @param snapshotName the user supplied snapshot name. 
It empty or null, * all the snapshots will be cleaned */ -public void clearSnapshot(String snapshotName) +public static void clearSnapshot(String snapshotName, String keyspace) { -List snapshotDirs = Directories.getKSChildDirectories(getName()); +List snapshotDirs = Directories.getKSChildDirectories(keyspace); Directories.clearSnapshot(snapshotName, snapshotDirs); } http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6139cc/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java -- diff --git a/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java b/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java index 2d393cf..a997533 100644 --- a/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java +++ b/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java @@ -35,7 +35,9 @@ public class SnapshotVerbHandler implements IVerbHandler { SnapshotCommand command = message.payload; if (command.clear_snapshot) - Keyspace.open(command.keyspace).clearSnapshot(command.snapshot_name); +{ +Keyspace.clearSnapshot(command.snapshot_name, command.keyspace); +} else Keyspace.open(command.keyspace).getColumnFamilyStore(command.column_family).snapshot(command.snapshot_name); logger.debug("Enqueuing response to snapshot request {} to {}", command.snapshot_name, message.from); http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6139cc/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index b66873c..42a58b0 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -2294,21 +2294,20 @@ public class StorageService extends NotificationBroadcasterSupport implements IE if(tag == null) tag = ""; -Iterable keyspaces; -if (keyspaceNames.length == 0) -{ -keyspaces = Keyspace.all(); -} -else +Set keyspaces = new HashSet<>(); +for (String 
dataDir : DatabaseDescriptor.getAllDataFileLocations()) { -ArrayList tempKeyspaces = new ArrayList(keyspaceNames.length); -for(String keyspaceName : keyspaceNames) -tempKeyspaces.add(getValidKeyspace(keyspaceName)); -keyspaces = tempKeyspaces; +for(String keyspaceDir : new File(dataDir).list()) +{ +// Only add a ks if it has been specified as a param, assuming params were a
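The patch's approach of walking the data directories on disk, rather than iterating only live keyspaces, can be sketched as follows. The function signature and the `<data_dir>/<keyspace>/<table>/snapshots/<name>` layout are simplifying assumptions for illustration:

```python
import os
import shutil

def clear_snapshot(snapshot_name, data_dirs, keyspace=None):
    """Delete a named snapshot by scanning the data directories on disk,
    so it also works after the owning keyspace was dropped."""
    for data_dir in data_dirs:
        for ks in os.listdir(data_dir):
            if keyspace and ks != keyspace:
                continue  # caller asked for one specific keyspace
            ks_path = os.path.join(data_dir, ks)
            if not os.path.isdir(ks_path):
                continue
            for table in os.listdir(ks_path):
                snap = os.path.join(ks_path, table, "snapshots", snapshot_name)
                if os.path.isdir(snap):
                    shutil.rmtree(snap)
```

Because the lookup never goes through the live schema, a snapshot taken before `DROP KEYSPACE` remains reachable and deletable, which is the gap CASSANDRA-6821 closes.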
[5/6] git commit: merge from 2.0
merge from 2.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25af168c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25af168c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25af168c

Branch: refs/heads/cassandra-2.1
Commit: 25af168c06b81c632ddd1521160f128d5bf4ae13
Parents: 7c4a889 6b6139c
Author: Jonathan Ellis
Authored: Fri Mar 21 13:18:00 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 13:18:00 2014 -0500

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  |  4 ++--
 .../cassandra/service/SnapshotVerbHandler.java  |  4 +++-
 .../cassandra/service/StorageService.java       | 23 ++--
 4 files changed, 17 insertions(+), 15 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/25af168c/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index 1fbf50c,c89ae51..21d4d5a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,34 -1,5 +1,35 @@@
-2.0.7
+2.1.0-beta2
+ * Eliminate possibility of CL segment appearing twice in active list (CASSANDRA-6557)
+ * Apply DONTNEED fadvise to commitlog segments (CASSANDRA-6759)
+ * Switch CRC component to Adler and include it for compressed sstables (CASSANDRA-4165)
+ * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
+ * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
+ * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
+ * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
+ * Fix ABTC NPE and apply update function correctly (CASSANDRA-6692)
+ * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
+ * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
+ * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
+ * Scrub should not always clear out repaired status (CASSANDRA-5351)
+ * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
+ * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
+ * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
+ * Change caching option syntax (CASSANDRA-6745)
+ * Fix stress to do proper counter reads (CASSANDRA-6835)
+ * Fix help message for stress counter_write (CASSANDRA-6824)
+ * Fix stress smart Thrift client to pick servers correctly (CASSANDRA-6848)
+ * Add logging levels (minimal, normal or verbose) to stress tool (CASSANDRA-6849)
+ * Fix race condition in Batch CLE (CASSANDRA-6860)
+ * Improve cleanup/scrub/upgradesstables failure handling (CASSANDRA-6774)
+ * ByteBuffer write() methods for serializing sstables (CASSANDRA-6781)
+ * Proper compare function for CollectionType (CASSANDRA-6783)
+ * Update native server to Netty 4 (CASSANDRA-6236)
+ * Fix off-by-one error in stress (CASSANDRA-6883)
+ * Make OpOrder AutoCloseable (CASSANDRA-6901)
+Merged from 2.0:
+ * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
  * Add uuid() function (CASSANDRA-6473)
  * Omit tombstones from schema digests (CASSANDRA-6862)
  * Include correct consistencyLevel in LWT timeout (CASSANDRA-6884)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/25af168c/src/java/org/apache/cassandra/db/Keyspace.java
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/cassandra/blob/25af168c/src/java/org/apache/cassandra/service/StorageService.java
----------------------------------------------------------------------
[6/6] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a161f088
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a161f088
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a161f088

Branch: refs/heads/trunk
Commit: a161f088cb9d2cabb635ee891e1c2082f9a45eb2
Parents: 02aad29 25af168
Author: Jonathan Ellis
Authored: Fri Mar 21 13:18:29 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 13:18:29 2014 -0500

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  |  4 ++--
 .../cassandra/service/SnapshotVerbHandler.java  |  4 +++-
 .../cassandra/service/StorageService.java       | 23 ++--
 4 files changed, 17 insertions(+), 15 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a161f088/CHANGES.txt
----------------------------------------------------------------------
[1/6] git commit: Allow deleting snapshots from dropped keyspaces patch by Lyuben Todorov; reviewed by Nick Bailey for CASSANDRA-6821
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 c843b6b85 -> 6b6139cc8
  refs/heads/cassandra-2.1 7c4a88949 -> 25af168c0
  refs/heads/trunk 02aad29e8 -> a161f088c

Allow deleting snapshots from dropped keyspaces

patch by Lyuben Todorov; reviewed by Nick Bailey for CASSANDRA-6821

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b6139cc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b6139cc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b6139cc

Branch: refs/heads/cassandra-2.0
Commit: 6b6139cc8bf903f76dfc422d1f45a800a84b9000
Parents: c843b6b
Author: Jonathan Ellis
Authored: Fri Mar 21 13:17:33 2014 -0500
Committer: Jonathan Ellis
Committed: Fri Mar 21 13:17:33 2014 -0500

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  |  4 ++--
 .../cassandra/service/SnapshotVerbHandler.java  |  4 +++-
 .../cassandra/service/StorageService.java       | 23 ++--
 4 files changed, 17 insertions(+), 15 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6139cc/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index c5f2666..c89ae51 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.7
+ * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
  * Add uuid() function (CASSANDRA-6473)
  * Omit tombstones from schema digests (CASSANDRA-6862)
  * Include correct consistencyLevel in LWT timeout (CASSANDRA-6884)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6139cc/src/java/org/apache/cassandra/db/Keyspace.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/Keyspace.java b/src/java/org/apache/cassandra/db/Keyspace.java
index f5369f9..714956a 100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@ -237,9 +237,9 @@ public class Keyspace
      * @param snapshotName the user supplied snapshot name. If empty or null,
      * all the snapshots will be cleaned
      */
-    public void clearSnapshot(String snapshotName)
+    public static void clearSnapshot(String snapshotName, String keyspace)
     {
-        List<File> snapshotDirs = Directories.getKSChildDirectories(getName());
+        List<File> snapshotDirs = Directories.getKSChildDirectories(keyspace);
         Directories.clearSnapshot(snapshotName, snapshotDirs);
     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6139cc/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java b/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java
index 2d393cf..a997533 100644
--- a/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java
+++ b/src/java/org/apache/cassandra/service/SnapshotVerbHandler.java
@@ -35,7 +35,9 @@ public class SnapshotVerbHandler implements IVerbHandler<SnapshotCommand>
         SnapshotCommand command = message.payload;
         if (command.clear_snapshot)
-            Keyspace.open(command.keyspace).clearSnapshot(command.snapshot_name);
+        {
+            Keyspace.clearSnapshot(command.snapshot_name, command.keyspace);
+        }
         else
             Keyspace.open(command.keyspace).getColumnFamilyStore(command.column_family).snapshot(command.snapshot_name);
         logger.debug("Enqueuing response to snapshot request {} to {}", command.snapshot_name, message.from);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6139cc/src/java/org/apache/cassandra/service/StorageService.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index b66873c..42a58b0 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -2294,21 +2294,20 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
         if(tag == null)
             tag = "";

-        Iterable<Keyspace> keyspaces;
-        if (keyspaceNames.length == 0)
-        {
-            keyspaces = Keyspace.all();
-        }
-        else
+        Set<String> keyspaces = new HashSet<>();
+        for (String dataDir : DatabaseDescriptor.getAllDataFileLocations())
         {
-            ArrayList<Keyspace> tempKeyspaces = new ArrayList<Keyspace>(keyspaceNames.length);
-            for(String keyspaceName : keyspaceNames)
-                tempKeyspaces.add(getValidKeyspace(keyspaceName));
-            keyspaces = tempKeyspaces;
+            for(String keyspaceDir : new File(dataDir).list())
+            {
+                // Only add a ks if it has been specified as a param, assuming params were actually
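The core idea of the patch above is to make snapshot clearing operate on the keyspace *name* and the on-disk layout, rather than on a live `Keyspace` object, so snapshots of already-dropped keyspaces can still be cleared. A minimal, self-contained sketch of that idea (class and method names here are illustrative, not Cassandra's actual API):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: resolve snapshot directories purely from the data
// directory layout (<dataDir>/<keyspace>/<table>/snapshots/<name>), so the
// keyspace does not need to be open -- it may already have been dropped.
public class SnapshotCleaner {
    private final File[] dataDirs; // analogous to DatabaseDescriptor.getAllDataFileLocations()

    public SnapshotCleaner(File... dataDirs) {
        this.dataDirs = dataDirs;
    }

    // Collect every <dataDir>/<keyspace>/<table>/snapshots/<snapshotName>
    // directory that exists on disk for the given keyspace name.
    public List<File> snapshotDirs(String keyspace, String snapshotName) {
        List<File> result = new ArrayList<>();
        for (File dataDir : dataDirs) {
            File[] tables = new File(dataDir, keyspace).listFiles(File::isDirectory);
            if (tables == null)
                continue; // keyspace directory absent on this volume
            for (File table : tables) {
                File snap = new File(new File(table, "snapshots"), snapshotName);
                if (snap.isDirectory())
                    result.add(snap);
            }
        }
        return result;
    }
}
```

Because the lookup is driven by directory names alone, it works equally well for keyspaces that no longer exist in the schema, which is exactly what CASSANDRA-6821 asks for.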
[jira] [Commented] (CASSANDRA-6904) commitlog segments may not be archived after restart
[ https://issues.apache.org/jira/browse/CASSANDRA-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943336#comment-13943336 ]

Jonathan Ellis commented on CASSANDRA-6904:
-------------------------------------------

[~benedict] suggests that we could use hard links to track segments-pending-archive. Since we don't recycle segments until archive is complete, this should be fine.

> commitlog segments may not be archived after restart
> ----------------------------------------------------
>
>                 Key: CASSANDRA-6904
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6904
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>             Fix For: 2.0.7
>
> commitlog segments are archived when they are full, so the current active
> segment will not be archived on restart (and its contents will not be
> available for pitr).

--
This message was sent by Atlassian JIRA
(v6.2#6252)
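The hard-link suggestion works because a hard link to a segment file survives a process restart and costs no extra disk space: anything still linked into a "pending" directory at startup is a segment whose archiving never completed. A minimal sketch of that bookkeeping (all names here are illustrative; this is not Cassandra's segment-manager API):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the hard-link idea from the comment above: link each segment into
// a "pending" directory when it becomes eligible for archiving, and remove the
// link only once the archive command has succeeded. After a restart, whatever
// remains in "pending" still needs (re)archiving -- including a partially full
// active segment from before the crash.
public class PendingArchiveTracker {
    private final Path pendingDir;

    public PendingArchiveTracker(Path pendingDir) throws IOException {
        this.pendingDir = Files.createDirectories(pendingDir);
    }

    // Segment is eligible for archiving: record that fact durably via a hard link.
    public void markPending(Path segment) throws IOException {
        Path link = pendingDir.resolve(segment.getFileName());
        if (!Files.exists(link))
            Files.createLink(link, segment); // same inode, survives restart
    }

    // Archive command finished successfully: drop the marker.
    public void markArchived(Path segment) throws IOException {
        Files.deleteIfExists(pendingDir.resolve(segment.getFileName()));
    }

    // On startup: every segment whose archiving did not complete before shutdown.
    public DirectoryStream<Path> pending() throws IOException {
        return Files.newDirectoryStream(pendingDir);
    }
}
```

Note the prerequisite the comment calls out: segments must not be recycled (overwritten) while a pending link still exists, otherwise the link would point at reused content.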
[jira] [Updated] (CASSANDRA-6904) commitlog segments may not be archived after restart
[ https://issues.apache.org/jira/browse/CASSANDRA-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-6904:
--------------------------------------
    Assignee: (was: Jonathan Ellis)
     Summary: commitlog segments may not be archived after restart
              (was: partially full commitlog segments are not archived after restart)

Actually, it's not just the current active segment; it could include any other segments for which archiving was not complete at restart time. No state is kept by the segment manager to record which segments have been archived and which have not.

Keeping this state in a system table would have the obvious problem that you need to replay the CL before you know which segments have not been archived yet. (Because of how pitr works, it would be much better to be able to archive before replaying.) So it's a tough problem to solve.

/cc [~vijay2...@yahoo.com]

> commitlog segments may not be archived after restart
> ----------------------------------------------------
>
>                 Key: CASSANDRA-6904
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6904
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>             Fix For: 2.0.7
>
> commitlog segments are archived when they are full, so the current active
> segment will not be archived on restart (and its contents will not be
> available for pitr).
[jira] [Created] (CASSANDRA-6904) partially full commitlog segments are not archived after restart
Jonathan Ellis created CASSANDRA-6904:
-----------------------------------------

             Summary: partially full commitlog segments are not archived after restart
                 Key: CASSANDRA-6904
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6904
             Project: Cassandra
          Issue Type: Bug
          Components: Core
            Reporter: Jonathan Ellis
            Assignee: Jonathan Ellis
             Fix For: 2.0.7

commitlog segments are archived when they are full, so the current active segment will not be archived on restart (and its contents will not be available for pitr).
[3/3] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/02aad29e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/02aad29e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/02aad29e

Branch: refs/heads/trunk
Commit: 02aad29e876d78263bc85e9e2ce3fc619b7b7f8d
Parents: 66b304e 7c4a889
Author: Sylvain Lebresne
Authored: Fri Mar 21 19:09:48 2014 +0100
Committer: Sylvain Lebresne
Committed: Fri Mar 21 19:09:48 2014 +0100

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  3 ++-
 .../apache/cassandra/db/PagedRangeCommand.java  | 24 +++-
 .../cassandra/service/pager/QueryPagers.java    | 10 ++--
 .../service/pager/RangeSliceQueryPager.java     |  3 ++-
 .../cassandra/db/ColumnFamilyStoreTest.java     |  4 ++--
 6 files changed, 33 insertions(+), 12 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/02aad29e/CHANGES.txt
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/cassandra/blob/02aad29e/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
----------------------------------------------------------------------
[1/2] git commit: Fix paging with SELECT DISTINCT
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 a7c4659db -> 7c4a88949

Fix paging with SELECT DISTINCT

patch by slebresne; reviewed by thobbs for CASSANDRA-6857

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c843b6b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c843b6b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c843b6b8

Branch: refs/heads/cassandra-2.1
Commit: c843b6b85cf828fde16d8d8a04411cba515f715e
Parents: 3b708f9
Author: Sylvain Lebresne
Authored: Fri Mar 21 19:03:12 2014 +0100
Committer: Sylvain Lebresne
Committed: Fri Mar 21 19:03:12 2014 +0100

----------------------------------------------------------------------
 CHANGES.txt                                              |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java  |  3 ++-
 src/java/org/apache/cassandra/db/PagedRangeCommand.java  | 12 ++--
 .../org/apache/cassandra/service/pager/QueryPagers.java  | 10 --
 .../org/apache/cassandra/db/ColumnFamilyStoreTest.java   |  4 ++--
 5 files changed, 23 insertions(+), 7 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c843b6b8/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 31fd319..c5f2666 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -25,6 +25,7 @@
  * Extend triggers to support CAS updates (CASSANDRA-6882)
  * Static columns with IF NOT EXISTS don't always work as expected (CASSANDRA-6873)
  * Add CqlRecordReader to take advantage of native CQL pagination (CASSANDRA-6311)
+ * Fix paging with SELECT DISTINCT (CASSANDRA-6857)
 Merged from 1.2:
  * Add UNLOGGED, COUNTER options to BATCH documentation (CASSANDRA-6816)
  * add extra SSL cipher suites (CASSANDRA-6613)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c843b6b8/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 5c3eb19..b58329e 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1671,10 +1671,11 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
                                              ByteBuffer columnStop,
                                              List<IndexExpression> rowFilter,
                                              int maxResults,
+                                             boolean countCQL3Rows,
                                              long now)
     {
         DataRange dataRange = new DataRange.Paging(keyRange, columnRange, columnStart, columnStop, metadata.comparator);
-        return ExtendedFilter.create(this, dataRange, rowFilter, maxResults, true, now);
+        return ExtendedFilter.create(this, dataRange, rowFilter, maxResults, countCQL3Rows, now);
     }

     public List<Row> getRangeSlice(AbstractBounds<RowPosition> range,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c843b6b8/src/java/org/apache/cassandra/db/PagedRangeCommand.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/PagedRangeCommand.java b/src/java/org/apache/cassandra/db/PagedRangeCommand.java
index e152f43..d6f3ca1 100644
--- a/src/java/org/apache/cassandra/db/PagedRangeCommand.java
+++ b/src/java/org/apache/cassandra/db/PagedRangeCommand.java
@@ -97,14 +97,22 @@ public class PagedRangeCommand extends AbstractRangeCommand
     public boolean countCQL3Rows()
     {
-        return true;
+        // We only use PagedRangeCommand for CQL3. However, for SELECT DISTINCT, we want to return false here, because
+        // we just want to pick the first cell of each partition and returning true here would throw off the logic in
+        // ColumnFamilyStore.filter().
+        // What we do know is that for a SELECT DISTINCT the underlying SliceQueryFilter will have a compositesToGroup==-1
+        // and a count==1. And while it would be possible for a normal SELECT on a COMPACT table to also have such
+        // parameters, it's fine returning false since if we do count one cell for each partition, then each partition
+        // will coincide with exactly one CQL3 row.
+        SliceQueryFilter filter = (SliceQueryFilter)predicate;
+        return filter.compositesToGroup >= 0 || filter.count != 1;
     }

     public List<Row> executeLocally()
     {
         ColumnFamilyStore cfs = Keyspace.open(keyspace).getColumnFamilyStore(columnFamily);
-        ExtendedFilter exFilter = cfs.makeExtendedFilter(keyRange, (SliceQueryFilter)predicate, start, stop, rowFilter, limit, timestamp);
+        ExtendedFilter exFilter = cfs.makeExtendedFilter(keyRange,
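The heart of the fix is the heuristic in `countCQL3Rows()`: a slice filter produced by `SELECT DISTINCT` has `compositesToGroup == -1` and `count == 1`, and only that combination should stop counting CQL3 rows. The predicate can be modelled in isolation (the nested class below is a stand-in, not Cassandra's `SliceQueryFilter`; only the two field names mirror the diff):

```java
// Minimal model of the SELECT DISTINCT detection in PagedRangeCommand.countCQL3Rows():
// compositesToGroup == -1 together with count == 1 means "first cell of each
// partition only", so CQL3 rows must NOT be counted; every other combination
// counts rows as usual.
public class DistinctHeuristic {
    static final class SliceFilter {
        final int compositesToGroup; // -1 means cells are not grouped into CQL3 rows
        final int count;             // per-partition cell limit

        SliceFilter(int compositesToGroup, int count) {
            this.compositesToGroup = compositesToGroup;
            this.count = count;
        }
    }

    // Mirrors the patched return expression.
    static boolean countCQL3Rows(SliceFilter f) {
        return f.compositesToGroup >= 0 || f.count != 1;
    }
}
```

As the commit comment notes, a `SELECT` on a COMPACT table can produce the same parameter combination, but returning false is still correct there because each partition then contributes exactly one CQL3 row.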
[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
	test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7c4a8894
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7c4a8894
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7c4a8894

Branch: refs/heads/trunk
Commit: 7c4a889491ee4d3774f2e95c24ed7dbcbb62dcb1
Parents: a7c4659 c843b6b
Author: Sylvain Lebresne
Authored: Fri Mar 21 19:09:21 2014 +0100
Committer: Sylvain Lebresne
Committed: Fri Mar 21 19:09:21 2014 +0100

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  3 ++-
 .../apache/cassandra/db/PagedRangeCommand.java  | 24 +++-
 .../cassandra/service/pager/QueryPagers.java    | 10 ++--
 .../service/pager/RangeSliceQueryPager.java     |  3 ++-
 .../cassandra/db/ColumnFamilyStoreTest.java     |  4 ++--
 6 files changed, 33 insertions(+), 12 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c4a8894/CHANGES.txt
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c4a8894/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 845352d,b58329e..1041812
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -1961,10 -1667,11 +1961,11 @@@ public class ColumnFamilyStore implemen
       */
      public ExtendedFilter makeExtendedFilter(AbstractBounds<RowPosition> keyRange,
                                               SliceQueryFilter columnRange,
-                                              ByteBuffer columnStart,
-                                              ByteBuffer columnStop,
+                                              Composite columnStart,
+                                              Composite columnStop,
                                               List<IndexExpression> rowFilter,
                                               int maxResults,
+                                              boolean countCQL3Rows,
                                               long now)
      {
          DataRange dataRange = new DataRange.Paging(keyRange, columnRange, columnStart, columnStop, metadata.comparator);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c4a8894/src/java/org/apache/cassandra/db/PagedRangeCommand.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/db/PagedRangeCommand.java
index 5c8f3ba,d6f3ca1..f2d81b9
--- a/src/java/org/apache/cassandra/db/PagedRangeCommand.java
+++ b/src/java/org/apache/cassandra/db/PagedRangeCommand.java
@@@ -37,24 -37,24 +37,27 @@@ public class PagedRangeCommand extends
  {
      public static final IVersionedSerializer<PagedRangeCommand> serializer = new Serializer();

 -    public final ByteBuffer start;
 -    public final ByteBuffer stop;
 +    public final Composite start;
 +    public final Composite stop;
      public final int limit;
++    private final boolean countCQL3Rows;

      public PagedRangeCommand(String keyspace,
                               String columnFamily,
                               long timestamp,
                               AbstractBounds<RowPosition> keyRange,
                               SliceQueryFilter predicate,
 -                             ByteBuffer start,
 -                             ByteBuffer stop,
 +                             Composite start,
 +                             Composite stop,
                               List<IndexExpression> rowFilter,
--                             int limit)
++                             int limit,
++                             boolean countCQL3Rows)
      {
          super(keyspace, columnFamily, timestamp, keyRange, predicate, rowFilter);
          this.start = start;
          this.stop = stop;
          this.limit = limit;
++        this.countCQL3Rows = countCQL3Rows;
      }

      public MessageOut<PagedRangeCommand> createMessage()
@@@ -74,7 -74,7 +77,8 @@@
                                           newStart,
                                           newStop,
                                           rowFilter,
--                                         limit);
++                                         limit,
++                                         countCQL3Rows);
      }

      public AbstractRangeCommand withUpdatedLimit(int newLimit)
@@@ -87,7 -87,7 +91,8 @@@
                                           start,
                                           stop,
                                           rowFilter,
--                                         newLimit);
++
[jira] [Commented] (CASSANDRA-6821) Cassandra can't delete snapshots for keyspaces that no longer exist.
[ https://issues.apache.org/jira/browse/CASSANDRA-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943321#comment-13943321 ]

Nick Bailey commented on CASSANDRA-6821:
----------------------------------------

We committed the cf one to 2.0, I believe. It would be a nice-to-have in 2.0.

> Cassandra can't delete snapshots for keyspaces that no longer exist.
> --------------------------------------------------------------------
>
>                 Key: CASSANDRA-6821
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6821
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Nick Bailey
>            Assignee: Lyuben Todorov
>              Labels: nodetool
>             Fix For: 2.1 beta2
>
>         Attachments: trunk-6821_v2.patch
>
>
> If you drop a keyspace you can no longer clean up the snapshots for that
> keyspace without resorting to the command line. It would be nice to be able
> to clean those up via jmx, especially for external tools.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
git commit: Fix paging with SELECT DISTINCT
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 3b708f998 -> c843b6b85


Fix paging with SELECT DISTINCT

patch by slebresne; reviewed by thobbs for CASSANDRA-6857


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c843b6b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c843b6b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c843b6b8

Branch: refs/heads/cassandra-2.0
Commit: c843b6b85cf828fde16d8d8a04411cba515f715e
Parents: 3b708f9
Author: Sylvain Lebresne
Authored: Fri Mar 21 19:03:12 2014 +0100
Committer: Sylvain Lebresne
Committed: Fri Mar 21 19:03:12 2014 +0100

----------------------------------------------------------------------
 CHANGES.txt                                              |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java  |  3 ++-
 src/java/org/apache/cassandra/db/PagedRangeCommand.java  | 12 ++--
 .../org/apache/cassandra/service/pager/QueryPagers.java  | 10 --
 .../org/apache/cassandra/db/ColumnFamilyStoreTest.java   |  4 ++--
 5 files changed, 23 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c843b6b8/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 31fd319..c5f2666 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -25,6 +25,7 @@
  * Extend triggers to support CAS updates (CASSANDRA-6882)
  * Static columns with IF NOT EXISTS don't always work as expected (CASSANDRA-6873)
  * Add CqlRecordReader to take advantage of native CQL pagination (CASSANDRA-6311)
+ * Fix paging with SELECT DISTINCT (CASSANDRA-6857)
 Merged from 1.2:
  * Add UNLOGGED, COUNTER options to BATCH documentation (CASSANDRA-6816)
  * add extra SSL cipher suites (CASSANDRA-6613)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c843b6b8/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 5c3eb19..b58329e 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1671,10 +1671,11 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
                                              ByteBuffer columnStop,
                                              List rowFilter,
                                              int maxResults,
+                                             boolean countCQL3Rows,
                                              long now)
     {
         DataRange dataRange = new DataRange.Paging(keyRange, columnRange, columnStart, columnStop, metadata.comparator);
-        return ExtendedFilter.create(this, dataRange, rowFilter, maxResults, true, now);
+        return ExtendedFilter.create(this, dataRange, rowFilter, maxResults, countCQL3Rows, now);
     }

     public List getRangeSlice(AbstractBounds range,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c843b6b8/src/java/org/apache/cassandra/db/PagedRangeCommand.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/PagedRangeCommand.java b/src/java/org/apache/cassandra/db/PagedRangeCommand.java
index e152f43..d6f3ca1 100644
--- a/src/java/org/apache/cassandra/db/PagedRangeCommand.java
+++ b/src/java/org/apache/cassandra/db/PagedRangeCommand.java
@@ -97,14 +97,22 @@ public class PagedRangeCommand extends AbstractRangeCommand
     public boolean countCQL3Rows()
     {
-        return true;
+        // We only use PagedRangeCommand for CQL3. However, for SELECT DISTINCT, we want to return false here, because
+        // we just want to pick the first cell of each partition and returning true here would throw off the logic in
+        // ColumnFamilyStore.filter().
+        // What we do know is that for a SELECT DISTINCT the underlying SliceQueryFilter will have a compositesToGroup==-1
+        // and a count==1. And while it would be possible for a normal SELECT on a COMPACT table to also have such
+        // parameters, it's fine returning false since if we do count one cell for each partition, then each partition
+        // will coincide with exactly one CQL3 row.
+        SliceQueryFilter filter = (SliceQueryFilter)predicate;
+        return filter.compositesToGroup >= 0 || filter.count != 1;
     }

     public List executeLocally()
     {
         ColumnFamilyStore cfs = Keyspace.open(keyspace).getColumnFamilyStore(columnFamily);
-        ExtendedFilter exFilter = cfs.makeExtendedFilter(keyRange, (SliceQueryFilter)predicate, start, stop, rowFilter, limit, timestamp);
+        ExtendedFilter exFilter = cfs.makeExtendedFilter(keyRange,
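The detection heuristic in the hunk above can be sketched standalone in plain Java. `SliceFilterParams` is an invented stand-in for the two `SliceQueryFilter` fields the real code inspects; only the boolean expression itself is taken from the patch:

```java
// Standalone sketch of the SELECT DISTINCT detection heuristic from the patch
// above. SliceFilterParams is a hypothetical stand-in for SliceQueryFilter.
public class CountCql3RowsSketch {
    static final class SliceFilterParams {
        final int compositesToGroup; // -1 means "don't group cells into CQL3 rows"
        final int count;             // per-partition cell count requested

        SliceFilterParams(int compositesToGroup, int count) {
            this.compositesToGroup = compositesToGroup;
            this.count = count;
        }
    }

    // SELECT DISTINCT is the compositesToGroup == -1 && count == 1 case;
    // for it we must count partitions (false), otherwise CQL3 rows (true).
    static boolean countCQL3Rows(SliceFilterParams filter) {
        return filter.compositesToGroup >= 0 || filter.count != 1;
    }

    public static void main(String[] args) {
        System.out.println(countCQL3Rows(new SliceFilterParams(-1, 1)));   // false: SELECT DISTINCT
        System.out.println(countCQL3Rows(new SliceFilterParams(1, 5000))); // true: regular paged SELECT
    }
}
```

As the patch comment notes, a normal SELECT on a COMPACT table could in principle carry the same parameters, but then one cell per partition coincides with one CQL3 row, so returning false is still correct.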
[jira] [Commented] (CASSANDRA-6821) Cassandra can't delete snapshots for keyspaces that no longer exist.
[ https://issues.apache.org/jira/browse/CASSANDRA-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943319#comment-13943319 ]

Jonathan Ellis commented on CASSANDRA-6821:
-------------------------------------------

This has a fixver of 2.1 but do we need it in 2.0 as well?

> Cassandra can't delete snapshots for keyspaces that no longer exist.
> --------------------------------------------------------------------
>
>                 Key: CASSANDRA-6821
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6821
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Nick Bailey
>            Assignee: Lyuben Todorov
>              Labels: nodetool
>             Fix For: 2.1 beta2
>
>         Attachments: trunk-6821_v2.patch
>
>
> If you drop a keyspace you can no longer clean up the snapshots for that
> keyspace without resorting to the command line. It would be nice to be able
> to clean those up via jmx, especially for external tools.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (CASSANDRA-6900) Remove unnecessary repair JMX interface
[ https://issues.apache.org/jira/browse/CASSANDRA-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943317#comment-13943317 ]

Jonathan Ellis commented on CASSANDRA-6900:
-------------------------------------------

+1

> Remove unnecessary repair JMX interface
> ---------------------------------------
>
>                 Key: CASSANDRA-6900
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6900
>             Project: Cassandra
>          Issue Type: Task
>            Reporter: Yuki Morishita
>            Assignee: Yuki Morishita
>            Priority: Trivial
>             Fix For: 2.1 beta2
>
>         Attachments: 6900-2.1.txt
>
>
> Since 1.1.9, 'nodetool repair' has been using the async repair interface, but
> the sync API remains. Also, CASSANDRA-6218 added a new interface and the old
> one is no longer used. I think 2.1 is a good time to clean up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException
[ https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943315#comment-13943315 ]

Tyler Hobbs commented on CASSANDRA-6892:
----------------------------------------

[~christianmovi] thanks, that would be useful. So far I haven't had any luck
reproducing. Just to check, was the schema created with cassandra-cli/thrift
or with CQL?

> Cassandra 2.0.x validates Thrift columns incorrectly and causes
> InvalidRequestException
> ---------------------------------------------------------------
>
>                 Key: CASSANDRA-6892
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6892
>             Project: Cassandra
>          Issue Type: Bug
>          Components: API
>            Reporter: Christian Spriegel
>            Assignee: Tyler Hobbs
>            Priority: Minor
>             Fix For: 2.0.7
>
>         Attachments: CASSANDRA-6892_V1.patch
>
>
> I just upgraded my local dev machine to Cassandra 2.0, which causes one of my
> automated tests to fail now. With the latest 1.2.x it was working fine.
> The exception I get on my client (using Hector) is:
> {code}
> me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) [MDS_0][MasterdataIndex][key2] failed validation)
> 	at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52)
> 	at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
> 	at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
> 	at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
> 	at me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115)
> 	at me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163)
> 	at me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69)
> 	at com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482)
>
> Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) [MDS_0][MasterdataIndex][key2] failed validation)
> 	at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833)
> 	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
> 	at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
> 	at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
> 	at me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246)
> 	at me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1)
> 	at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104)
> 	at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)
> 	... 46 more
> {code}
> The schema of my column family is:
> {code}
> create column family MasterdataIndex with
>   compression_options = {sstable_compression:SnappyCompressor, chunk_length_kb:64} and
>   comparator = UTF8Type and
>   key_validation_class = 'CompositeType(UTF8Type,LongType)' and
>   default_validation_class = BytesType;
> {code}
> From the error message it looks like Cassandra is trying to validate the
> value with the key validator! (My value in this case is 21 bytes long.)
> I studied the Cassandra 2.0 code and found something wrong. It seems that
> CFMetaData.addDefaultKeyAliases passes the key validator into
> ColumnDefinition.partitionKeyDef. Inside ColumnDefinition the validator is
> expected to be the value validator!
> In CFMetaData:
> {code}
> private List addDefaultKeyAliases(List pkCols)
> {
>     for (int i = 0; i < pkCols.size(); i++)
>     {
>         if (pkCols.get(i) == null)
>         {
>             Integer idx = null;
>             AbstractType type = keyValidator;
>             if (keyValidator instanceof CompositeType)
>             {
>                 idx = i;
>                 type = ((CompositeType)keyValidator).types.get(i);
>             }
>             // For compatibility sake, we call the first alias 'key' rather than 'key1'. This
>             // is inconsistent with column alias, but it's probably not worth risking breaking compatibility now.
>             ByteBuffer name = ByteBufferUtil.bytes(i == 0 ? DEFAULT_KEY_ALIAS : DEFAULT_KEY_ALIAS + (i + 1));
>             ColumnDefinition newDef = ColumnDefinition.partitionKeyDef(name, type, idx); // type is LongType in m
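The failure mode the reporter describes can be reproduced in miniature. This is a hypothetical sketch, not Cassandra code: `longTypeAccepts` and `bytesTypeAccepts` are invented stand-ins for the length checks that `LongType.validate()` and `BytesType.validate()` perform, showing why a 21-byte value fails when the key-component type is applied to it:

```java
// Hypothetical reproduction sketch of the bug analysis above: a cell value
// mistakenly checked with a key-component validator. A LongType-style check
// accepts only 8-byte (or empty) buffers, so the reporter's 21-byte value
// fails even though default_validation_class is BytesType.
public class ValidatorMixupSketch {
    // invented stand-in for LongType.validate(): 8 or 0 bytes only
    static boolean longTypeAccepts(byte[] value) {
        return value.length == 8 || value.length == 0;
    }

    // invented stand-in for BytesType.validate(): accepts any buffer
    static boolean bytesTypeAccepts(byte[] value) {
        return true;
    }

    public static void main(String[] args) {
        byte[] value = new byte[21]; // the 21-byte value from the stack trace
        System.out.println(longTypeAccepts(value));  // false: bug path, key-component type applied to the value
        System.out.println(bytesTypeAccepts(value)); // true: correct path, value validator applied
    }
}
```

This matches the error text "(Expected 8 or 0 byte long (21))": the second component of `CompositeType(UTF8Type,LongType)` leaked into the value-validation path.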
[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables
[ https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943299#comment-13943299 ]

Jonathan Ellis commented on CASSANDRA-6689:
-------------------------------------------

Thanks! That looks fairly straightforward now.

> Partially Off Heap Memtables
> ----------------------------
>
>                 Key: CASSANDRA-6689
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Benedict
>            Assignee: Benedict
>              Labels: performance
>             Fix For: 2.1 beta2
>
>         Attachments: CASSANDRA-6689-small-changes.patch
>
>
> Move the contents of ByteBuffers off-heap for records written to a memtable.
> (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (CASSANDRA-6857) SELECT DISTINCT with a LIMIT is broken by paging
[ https://issues.apache.org/jira/browse/CASSANDRA-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943295#comment-13943295 ]

Tyler Hobbs commented on CASSANDRA-6857:
----------------------------------------

Thanks. I thought something might be missing; that makes sense.

It looks like a ColumnFamilyStoreTest method needs to be updated for the 2.0
patch to compile (you just need to add the countCQL3Rows argument to two
makeExtendedFilter() calls). Other than that, +1.

> SELECT DISTINCT with a LIMIT is broken by paging
> ------------------------------------------------
>
>                 Key: CASSANDRA-6857
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6857
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Sylvain Lebresne
>            Assignee: Sylvain Lebresne
>             Fix For: 2.0.7
>
>         Attachments: 6857-2.0-v2.txt, 6857-2.0.txt, 6857-2.1.txt
>
>
> The paging for RangeSliceCommand only supports the case where we count CQL3
> rows. However, in the case of SELECT DISTINCT, we actually want to use the
> "count partitions, not CQL3 rows" path, and that's currently broken when the
> paging commands are used (this was first reported on the [Java driver
> JIRA|https://datastax-oss.atlassian.net/browse/JAVA-288] and there is a
> reproduction script there).



--
This message was sent by Atlassian JIRA
(v6.2#6252)
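The "count partitions, not CQL3 rows" semantics the ticket describes can be illustrated with a plain-Java sketch (invented data model, not Cassandra's actual pager code): SELECT DISTINCT with a LIMIT must charge one unit per partition, not per clustering row, or a wide partition exhausts the limit by itself:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of SELECT DISTINCT ... LIMIT semantics: pick the first
// "cell" of each partition and count partitions against the limit.
public class DistinctLimitSketch {
    static List<String> distinctWithLimit(Map<String, List<String>> partitions, int limit) {
        List<String> distinct = new ArrayList<>();
        for (String pk : partitions.keySet()) {
            if (distinct.size() >= limit)
                break;
            distinct.add(pk); // one unit per partition, regardless of row count
        }
        return distinct;
    }

    public static void main(String[] args) {
        // partition key -> clustering rows (hypothetical data)
        Map<String, List<String>> partitions = new LinkedHashMap<>();
        partitions.put("p1", Arrays.asList("r1", "r2", "r3"));
        partitions.put("p2", Arrays.asList("r1"));
        partitions.put("p3", Arrays.asList("r1", "r2"));

        System.out.println(distinctWithLimit(partitions, 2)); // [p1, p2]
        // A pager that counted CQL3 rows instead would let p1's three rows
        // exhaust the limit inside a single partition, which is the bug here.
    }
}
```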
[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables
[ https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943285#comment-13943285 ]

Benedict commented on CASSANDRA-6689:
-------------------------------------

Patch for this ticket available [here|https://github.com/belliottsmith/cassandra/tree/iss-6689-final]

I have made absolutely no unnecessary changes or refactors. There is one
commit only, as there was no good way to make any of the changes for this
specific commit whilst leaving it in a compilable state. For CASSANDRA-6694 I
will split the patch up into multiple commits, but this is likely to take
quite some time.

> Partially Off Heap Memtables
> ----------------------------
>
>                 Key: CASSANDRA-6689
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Benedict
>            Assignee: Benedict
>              Labels: performance
>             Fix For: 2.1 beta2
>
>         Attachments: CASSANDRA-6689-small-changes.patch
>
>
> Move the contents of ByteBuffers off-heap for records written to a memtable.
> (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (CASSANDRA-6902) Make cqlsh prompt for a password if the user doesn't enter one
[ https://issues.apache.org/jira/browse/CASSANDRA-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

J.B. Langston updated CASSANDRA-6902:
-------------------------------------
    Attachment:     (was: trunk-6902.txt)

> Make cqlsh prompt for a password if the user doesn't enter one
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-6902
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6902
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Tools
>            Reporter: J.B. Langston
>            Assignee: J.B. Langston
>            Priority: Minor
>             Fix For: 2.0.7
>
>         Attachments: trunk-6902.txt
>
>
> If the user specifies -u username and leaves off -p password, cqlsh should
> prompt for a password without echoing it to the screen instead of throwing an
> exception, which it currently does. I know that you can put a username and
> password in the .cqlshrc file, but if a user wants to log in with multiple
> accounts and not have the password visible on the screen, there's currently
> no way to do that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)