[jira] [Updated] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions
[ https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-4321: Attachment: 0003-Create-standalone-scrub-v4.txt My bad. Forgot to exclude temporary and compacted files from the scrubbed files. Attaching a v4 of the last patch to fix that. Hopefully this fixes the offline scrub. > stackoverflow building interval tree & possible sstable corruptions > --- > > Key: CASSANDRA-4321 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4321 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.1.1 >Reporter: Anton Winter >Assignee: Sylvain Lebresne > Fix For: 1.1.2 > > Attachments: > 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, > 0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt, > 0003-Create-standalone-scrub-v3.txt, 0003-Create-standalone-scrub-v4.txt, > ooyala-hastur-stacktrace.txt > > > After upgrading to 1.1.1 (from 1.1.0) I have started experiencing > StackOverflowErrors resulting in compaction backlog and failure to restart. > The ring currently consists of 6 DCs and 22 nodes using LCS & compression. > This issue was first noted on 2 nodes in one DC and then appears to have > spread to various other nodes in the other DCs. > When the first occurrence of this was found I restarted the instance but it > failed to start, so I cleared its data and treated it as a replacement node > for the token it was previously responsible for. This node successfully > streamed all the relevant data back but failed again a number of hours later > with the same StackOverflowError and again was unable to restart. 
> The initial stack overflow error on a running instance looks like this: > ERROR [CompactionExecutor:314] 2012-06-07 09:59:43,017 > AbstractCassandraDaemon.java (line 134) Exception in thread > Thread[CompactionExecutor:314,1,main] > java.lang.StackOverflowError > at java.util.Arrays.mergeSort(Arrays.java:1157) > at java.util.Arrays.sort(Arrays.java:1092) > at java.util.Collections.sort(Collections.java:134) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.findMinMedianMax(IntervalNode.java:114) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:49) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > [snip - this repeats until stack overflow. Compactions stop from this point > onwards] > I restarted this failing instance with DEBUG logging enabled and it throws > the following exception part way through startup: > ERROR 11:37:51,046 Exception in thread Thread[OptionalTasks:1,5,main] > java.lang.StackOverflowError > at > org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:307) > at > org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276) > at > org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230) > at > org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:124) > at > org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:228) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:45) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > [snip - this repeats until stack overflow] > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > 
org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.util
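The v4 patch above excludes temporary and compacted sstables from the offline scrub. A minimal sketch of that filtering idea, assuming 1.1-era file naming conventions (a "-tmp-" marker in temporary data file names and a companion "-Compacted" marker file for compacted sstables); the names and layout are illustrative, not the patch itself:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Sketch of the file-filtering idea behind the v4 standalone-scrub patch:
// skip temporary sstables and sstables whose "Compacted" marker exists.
public class ScrubFileFilter
{
    static boolean isTemporary(String dataFileName)
    {
        // temporary sstables carry a "tmp" marker in the file name
        return dataFileName.contains("-tmp-");
    }

    static boolean isCompacted(File dataFile)
    {
        // e.g. KS-CF-hd-42-Data.db -> KS-CF-hd-42-Compacted marker file
        String marker = dataFile.getName().replace("-Data.db", "-Compacted");
        return new File(dataFile.getParentFile(), marker).exists();
    }

    public static List<File> scrubCandidates(File dir)
    {
        List<File> result = new ArrayList<File>();
        File[] files = dir.listFiles();
        if (files == null)
            return result;
        for (File f : files)
            if (f.getName().endsWith("-Data.db") && !isTemporary(f.getName()) && !isCompacted(f))
                result.add(f);
        return result;
    }
}
```

Scrubbing a live or already-compacted file is what produced the spurious errors the v4 patch fixes, so the filter runs before any sstable is opened.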
[jira] [Commented] (CASSANDRA-4310) Multiple independent Level Compactions in Parallel(Useful for SSD).
[ https://issues.apache.org/jira/browse/CASSANDRA-4310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396529#comment-13396529 ] sankalp kohli commented on CASSANDRA-4310: -- What you are saying is true, but the improvement I am proposing does more than that. It also runs compactions in parallel between different levels, and multiple compactions per level, so it will definitely speed things up. It is quite frustrating to see the disk not being fully utilized when you are using SSDs. Also, as you said, L0 -> L1 is the biggest bottleneck, and this will help it in a way. Today, when L0 (32 sstables) gets merged with L1, L1 then merges with L2 and so on, so while, say, L3 -> L4 is running, an L0 -> L1 compaction won't happen even when it could. With this change you will be doing L0 -> L1 compactions almost every cycle, unless an L1 -> L2 compaction is happening. So this solution cannot parallelize L0 -> L1 itself, but it will help because L0 -> L1 runs almost every time and does not get blocked by compactions in higher levels. > Multiple independent Level Compactions in Parallel(Useful for SSD). > > > Key: CASSANDRA-4310 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4310 > Project: Cassandra > Issue Type: New Feature > Components: Core >Affects Versions: 1.1.1, 1.1.2 >Reporter: sankalp kohli > Labels: compaction, features, leveled, performance, ssd > > Problem: If you are inserting data into Cassandra and leveled compaction cannot > catch up, you will accumulate a lot of files in L0. > Here is a solution that will help here and also increase the performance of > leveled compaction: we can run many compactions in parallel for unrelated data. > 1) For non-overlapping levels. Ex: while an L0 sstable is compacting with L1, we > can run compactions in other levels like L2 and L3 if they are eligible. > 2) We can also run compactions with files in L1 that are not participating in > the L0 compaction. > This is especially useful if you are using SSDs and are not bottlenecked by I/O. 
> I am seeing this issue in my cluster. The compactions pending are more than > 50k, and the disk usage is not that high (I am using SSDs). > I have set multithreaded compaction to true and I am not throttling I/O (the throttle value is set to 0). > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
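The scheduling rule being proposed here, several leveled compactions running concurrently as long as their input sstables are disjoint, can be sketched as follows. This is an illustrative model (sstables reduced to plain ids), not Cassandra's actual CompactionManager:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the proposal: multiple compactions may run in
// parallel as long as their input sstables are disjoint. A candidate task
// claims its inputs atomically; tasks sharing an input are serialized.
public class ParallelLcsScheduler
{
    private final Set<String> busy = new HashSet<String>();

    // Try to claim a candidate compaction's inputs; returns true if the
    // task may run concurrently with everything already claimed.
    public synchronized boolean tryStart(Collection<String> inputs)
    {
        for (String sstable : inputs)
            if (busy.contains(sstable))
                return false;
        busy.addAll(inputs);
        return true;
    }

    // release the inputs once the compaction finishes
    public synchronized void finish(Collection<String> inputs)
    {
        busy.removeAll(inputs);
    }
}
```

Under this rule an L2 -> L3 task and an L0 -> L1 task can proceed together, while a second task touching an already-claimed L1 sstable must wait.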
[jira] [Commented] (CASSANDRA-4310) Multiple independent Level Compactions in Parallel(Useful for SSD).
[ https://issues.apache.org/jira/browse/CASSANDRA-4310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396517#comment-13396517 ] Jonathan Ellis commented on CASSANDRA-4310: --- The problem is that for common workloads we expect most L0 sstables to overlap with all L1 sstables. So there's very limited parallelism you can introduce in the L0 -> L1 stage, which is the biggest bottleneck.
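The overlap argument can be made concrete with a token-range check: a freshly flushed L0 sstable typically spans nearly the whole token range, so its range overlaps every L1 sstable, leaving no two disjoint L0 -> L1 merges. A minimal interval-overlap predicate (illustrative only, with tokens reduced to longs):

```java
// Why L0 -> L1 parallelism is limited: an sstable covering [min, max]
// overlaps another iff neither interval lies entirely to one side of the
// other. An L0 table spanning almost the full token range therefore
// overlaps every L1 table, whose ranges partition that same space.
public class RangeOverlap
{
    static boolean overlaps(long aMin, long aMax, long bMin, long bMax)
    {
        return aMin <= bMax && bMin <= aMax;
    }
}
```

For example, an L0 sstable covering [0, 100] overlaps L1 sstables at [0, 10], [11, 20], ... [91, 100] alike, so every L0 -> L1 merge contends for the same L1 inputs.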
[jira] [Commented] (CASSANDRA-3647) Support set and map value types in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396516#comment-13396516 ] Jonathan Ellis commented on CASSANDRA-3647: --- The read-before-write operations on lists concern me, since we've avoided these operations thus far (e.g., {{UPDATE foo SET x=y WHERE w=z}}). I've been a fan of the status quo since forcing the client to do the read explicitly makes it clear that you're performing a race-prone sequence. Yes, it's less efficient, but in most cases the cost of doing random reads dwarfs the round-trip overhead. I'd also note that to my knowledge no other implementation of documents or containers allows efficient updates of individual items. If we force the user to fetch the list, then overwrite the entire list with the desired items removed, we're no worse than the competition. :) Am I off base? Is it time to embrace the race and add this kind of server-side sugar? > Support set and map value types in CQL > -- > > Key: CASSANDRA-3647 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3647 > Project: Cassandra > Issue Type: New Feature > Components: API, Core >Reporter: Jonathan Ellis >Assignee: Sylvain Lebresne > Labels: cql > Fix For: 1.2 > > > Composite columns introduce the ability to have arbitrarily nested data in a > Cassandra row. We should expose this through CQL.
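The race-prone sequence described in the comment, read the list, mutate it locally, write the whole list back, can be sketched with an in-memory stand-in for the store (all names hypothetical; a real client would issue a read and a write over the wire):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Client-side read-modify-write on a stored list: because the read and the
// write are two separate operations, anything another client writes in
// between is silently clobbered -- the race that server-side sugar would
// hide from the user. The "store" here is a plain map standing in for the
// database.
public class ReadModifyWrite
{
    final Map<String, List<String>> store = new HashMap<String, List<String>>();

    List<String> read(String key)
    {
        List<String> v = store.get(key);
        return v == null ? new ArrayList<String>() : new ArrayList<String>(v);
    }

    void write(String key, List<String> value)
    {
        store.put(key, new ArrayList<String>(value));
    }

    // remove-by-value implemented as read / local mutate / full overwrite
    void removeItem(String key, String item)
    {
        List<String> copy = read(key);
        copy.remove(item);
        write(key, copy);   // clobbers anything written since read()
    }
}
```

Interleaving a concurrent append between the read and the write-back loses the appended element, which is exactly why forcing the client to do the read makes the hazard visible.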
[jira] [Comment Edited] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions
[ https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396433#comment-13396433 ] Anton Winter edited comment on CASSANDRA-4321 at 6/19/12 2:01 AM: -- I can confirm I also experienced the "Unexpected empty index file" errors on some of the nodes that I have run sstablescrub on. Other nodes had this error when running sstablescrub: {code} Scrub of SSTableReader(path='/var/lib//data/cassandra/KS/CF/KS-CF-hd-259648-Data.db') complete: 1592 rows in new sstable and 0 empty (tombstoned) rows dropped EOF after 6 bytes out of 8 {code} Compactions stop with the "java.lang.RuntimeException: Last written key DecoratedKey" error on the nodes affected by either of the above 2 errors. Nodes that seem to have been repaired by the sstablescrub still continue to have "java.lang.RuntimeException: Last written key DecoratedKey" errors scattered through the logs but are still compacting. Is there any further information we can supply to help debug? was (Author: awinter): I can confirm I also experienced the "Unexpected empty index file" errors on some of the nodes that I have run sstablescrub on. On some other nodes the sstablescrub command appears to complete successfully but compactions still stops at the "java.lang.RuntimeException: Last written key DecoratedKey" error. Is there any further information we can supply to help debug? 
git commit: add #3762 to CHANGES
Updated Branches: refs/heads/trunk a89c8b4d3 -> bbcbfd865 add #3762 to CHANGES Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bbcbfd86 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bbcbfd86 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bbcbfd86 Branch: refs/heads/trunk Commit: bbcbfd865828c430e536ceaa6142b098432a3ace Parents: a89c8b4 Author: Jonathan Ellis Authored: Mon Jun 18 20:11:45 2012 -0500 Committer: Jonathan Ellis Committed: Mon Jun 18 20:11:45 2012 -0500 -- CHANGES.txt |1 + 1 files changed, 1 insertions(+), 0 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/bbcbfd86/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c85fd92..2bf2586 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 1.2-dev + * rewrite key cache save/load to use only sequential i/o (CASSANDRA-3762) * update MS protocol with a version handshake + broadcast address id (CASSANDRA-4311) * multithreaded hint replay (CASSANDRA-4189)
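The CHANGES line refers to CASSANDRA-3762, which rewrites key cache save/load to use only sequential i/o. A sketch of the general idea, a length-prefixed stream written and read front to back instead of seeking per key; the on-disk format here is illustrative, not Cassandra's actual one:

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

// Saving cached keys as one length-prefixed stream makes both the save and
// the startup load purely sequential: no per-key seeks, just one pass.
public class SequentialKeyCacheIO
{
    public static void save(List<byte[]> keys, OutputStream raw) throws IOException
    {
        DataOutputStream out = new DataOutputStream(new BufferedOutputStream(raw));
        out.writeInt(keys.size());
        for (byte[] key : keys)
        {
            out.writeInt(key.length);   // length prefix, then the key bytes
            out.write(key);
        }
        out.flush();
    }

    public static List<byte[]> load(InputStream raw) throws IOException
    {
        DataInputStream in = new DataInputStream(new BufferedInputStream(raw));
        int n = in.readInt();
        List<byte[]> keys = new ArrayList<byte[]>(n);
        for (int i = 0; i < n; i++)
        {
            byte[] key = new byte[in.readInt()];
            in.readFully(key);
            keys.add(key);
        }
        return keys;
    }
}
```

On a large cache this trades many random reads during pre-population for a single streaming read, which is the win the CHANGES entry records.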
[jira] [Resolved] (CASSANDRA-1625) make the row cache continuously durable
[ https://issues.apache.org/jira/browse/CASSANDRA-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-1625. --- Resolution: Duplicate Storing the full cache tuple (and evicting entries that turn out to be obsolete lazily) was done in CASSANDRA-1625 > make the row cache continuously durable > --- > > Key: CASSANDRA-1625 > URL: https://issues.apache.org/jira/browse/CASSANDRA-1625 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Peter Schuller >Priority: Minor > > I was looking into how the row cache worked today and realized only row keys > were saved and later pre-populated on start-up. > On the premise that row caches are typically used for small rows of which > there may be many, this is highly likely to be seek bound on large data sets > during pre-population. > The pre-population could be made faster by increasing I/O queue depth (by > concurrency or by libaio as in 1576), but especially on large data sets the > performance would be nowhere near what could be achieved if a reasonably > sized file containing the actual rows were to be read in a sequential fashion > on start. > On the one hand, Cassandra's design means that this should be possible to do > efficiently much easier than in some other cases, but on the other hand it is > still not entirely trivial. > The key problem with maintaining a continuously durable cache is that one > must never read stale data on start-up. Stale could mean either data that was > later deleted, or an old version of data that was updated. > In the case of Cassandra, this means that any cache restored on start-up must > be up-to-date with whatever position in the commit log that commit log > recovery will start at. (Because the row cache is for an entire row, we can't > couple updating of an on-disk row cache with memtable flushes.) 
> I can see two main approaches: > (a) Periodically dump the entire row cache, deferring commit log eviction in > synchronization with said dumping. > (b) Keep a change log of sorts, similar to the commit log but filtered to > only contain data written to the commit log that affects keys that were in > the row cache at the time. Eviction of commit logs or updating positional > markers that affect the point of commit log recovery start, would imply > fsync():ing this change log. An incremental traversal, or alternatively a > periodic full dump, would have to be used to ensure that old row change log > segments can be evicted without loss of cache warmness. > I like (b), but it is also the introduction of significant complexity (and > potential write path overhead) for the purpose of the row cache. In the worst > case where hotly read data is also hotly written, the overhead could be > particularly significant. > I am not convinced whether this is a good idea for Cassandra, but I have a > use-case where a similar cache might have to be written in the application to > achieve the desired effect (pre-population being too slow for a sufficiently > large row cache). But there are reasons why, in an ideal world, having such a > continuously durable cache in Cassandra would be much better than something > at the application level. The primary reason is that it does not interact > poorly with consistency in the cluster, since the cache is node-local and > appropriate measures would be taken to make it consistent locally on each > node. I.e., it would be entirely transparent to the application. > Thoughts? Like/dislike/too complex/not worth it?
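Approach (a) above, dumping the whole cache periodically and gating commit log eviction on the last completed dump, might be sketched like this. All names are illustrative and the "dump" is an in-memory snapshot standing in for a file; the key invariant is that a restored cache is never newer than the commit log position replay starts from:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the periodic-full-dump approach: each dump records the commit
// log position it is consistent with, and commit log segments may only be
// evicted up to that position. A cache restored at startup therefore never
// contains state from after the replay starting point.
public class PeriodicCacheDump
{
    private final Map<String, String> rowCache = new HashMap<String, String>();
    private final AtomicLong currentPosition = new AtomicLong(); // commit log position
    private volatile long lastDumpedPosition = 0;
    private volatile Map<String, String> lastDump = new HashMap<String, String>();

    public synchronized void write(String key, String row)
    {
        currentPosition.incrementAndGet();
        rowCache.put(key, row);
    }

    // take a consistent snapshot of cache + position ("dump to disk")
    public synchronized void dump()
    {
        lastDump = new HashMap<String, String>(rowCache);
        lastDumpedPosition = currentPosition.get();
    }

    // commit log segments up to this position are safe to evict
    public long safeEvictionPosition()
    {
        return lastDumpedPosition;
    }

    public Map<String, String> restore()
    {
        return Collections.unmodifiableMap(lastDump);
    }
}
```

Writes landing after the last dump are not durable in the snapshot, so the commit log segments covering them must be retained and replayed, exactly the synchronization the proposal calls for.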
[jira] [Commented] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions
[ https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396433#comment-13396433 ] Anton Winter commented on CASSANDRA-4321: - I can confirm I also experienced the "Unexpected empty index file" errors on some of the nodes that I have run sstablescrub on. On some other nodes the sstablescrub command appears to complete successfully but compactions still stop at the "java.lang.RuntimeException: Last written key DecoratedKey" error. Is there any further information we can supply to help debug?
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396410#comment-13396410 ] David Alves commented on CASSANDRA-3564: the patch does pretty much what is suggested: SIGINT and SIGKILL behave the same as before, and in addition there is a way to flush everything and exit (by calling System.exit()). Any suggestions on the best approach to test? Also, the patch is still missing the changes to the scripts. > flush before shutdown so restart is faster > -- > > Key: CASSANDRA-3564 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3564 > Project: Cassandra > Issue Type: New Feature > Components: Packaging >Reporter: Jonathan Ellis >Assignee: David Alves >Priority: Minor > Fix For: 1.2 > > Attachments: 3564.patch > > > Cassandra handles flush in its shutdown hook for durable_writes=false CFs > (otherwise we're *guaranteed* to lose data) but leaves it up to the operator > otherwise. I'd rather leave it that way to offer these semantics: > - cassandra stop = shutdown nicely [explicit flush, then kill -int] > - kill -INT = shutdown faster but don't lose any updates [current behavior] > - kill -KILL = lose most recent writes unless durable_writes=true and batch > commits are on [also current behavior] > But if it's not reasonable to use nodetool from the init script then I guess > we can just make the shutdown hook flush everything.
[jira] [Updated] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Alves updated CASSANDRA-3564: --- Attachment: 3564.patch - extracts flushing all tables into a method that takes a boolean flag: when the flag is false, only tables with durable_writes=off are flushed; when true, all tables are flushed - added the method to the mbean and nodeprobe - added a flushandexit command to nodetool
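The semantics discussed in this ticket hinge on the JVM running registered shutdown hooks on SIGINT/SIGTERM but not on SIGKILL. A sketch of wiring a flush into a shutdown hook, with the boolean flag the patch summary describes; the Flushable target is a stand-in, not Cassandra's actual table API:

```java
// On "kill -INT" the JVM runs shutdown hooks, so a flush placed there
// covers graceful stops; "kill -KILL" bypasses hooks entirely, which is
// why that case can still lose the most recent non-durable writes.
public class FlushOnShutdown
{
    public interface Flushable
    {
        // mirrors the boolean flag in the patch: false = flush only
        // durable_writes=off tables, true = flush everything
        void flushAll(boolean includeDurable);
    }

    // the hook body, factored out so it can be exercised directly
    public static void onShutdown(Flushable tables)
    {
        tables.flushAll(false);
    }

    public static void install(final Flushable tables)
    {
        Runtime.getRuntime().addShutdownHook(new Thread("flush-on-shutdown")
        {
            public void run()
            {
                onShutdown(tables);
            }
        });
    }
}
```

A separate "flush and exit" entry point (the nodetool flushandexit command in the patch) would call flushAll(true) explicitly before System.exit().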
git commit: CFMetaData.fromThrift to throw ConfigurationException upon error patch by Sam Overton; reviewed by Pavel Yaskevich for CASSANDRA-4353
Updated Branches: refs/heads/cassandra-1.1 45c8f53a2 -> 170e14ab9 CFMetaData.fromThrift to throw ConfigurationException upon error patch by Sam Overton; reviewed by Pavel Yaskevich for CASSANDRA-4353 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/170e14ab Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/170e14ab Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/170e14ab Branch: refs/heads/cassandra-1.1 Commit: 170e14ab97c5bbdaa3f75d85316e290a72e38959 Parents: 45c8f53 Author: Pavel Yaskevich Authored: Tue Jun 19 02:56:43 2012 +0300 Committer: Pavel Yaskevich Committed: Tue Jun 19 02:58:51 2012 +0300 -- CHANGES.txt|1 + .../org/apache/cassandra/config/CFMetaData.java| 19 ++ 2 files changed, 14 insertions(+), 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/170e14ab/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index b6702cb..9e3dd67 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -13,6 +13,7 @@ composite primary keys (CASSANDRA-4328) * Set JVM stack size to 160k for java 7 (CASSANDRA-4275) * cqlsh: add COPY command to load data from CSV flat files (CASSANDRA-4012) + * CFMetaData.fromThrift to throw ConfigurationException upon error (CASSANDRA-4353) Merged from 1.0: * Set gc_grace on index CF to 0 (CASSANDRA-4314) http://git-wip-us.apache.org/repos/asf/cassandra/blob/170e14ab/src/java/org/apache/cassandra/config/CFMetaData.java -- diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java b/src/java/org/apache/cassandra/config/CFMetaData.java index c38841a..c6411af 100644 --- a/src/java/org/apache/cassandra/config/CFMetaData.java +++ b/src/java/org/apache/cassandra/config/CFMetaData.java @@ -645,12 +645,19 @@ public final class CFMetaData CompressionParameters cp = CompressionParameters.create(cf_def.compression_options); -return newCFMD.comment(cf_def.comment) - .replicateOnWrite(cf_def.replicate_on_write) - 
.defaultValidator(TypeParser.parse(cf_def.default_validation_class)) - .keyValidator(TypeParser.parse(cf_def.key_validation_class)) - .columnMetadata(ColumnDefinition.fromThrift(cf_def.column_metadata)) - .compressionParameters(cp); +try +{ +return newCFMD.comment(cf_def.comment) + .replicateOnWrite(cf_def.replicate_on_write) + .defaultValidator(TypeParser.parse(cf_def.default_validation_class)) + .keyValidator(TypeParser.parse(cf_def.key_validation_class)) + .columnMetadata(ColumnDefinition.fromThrift(cf_def.column_metadata)) + .compressionParameters(cp); +} +catch (MarshalException e) +{ +throw new ConfigurationException(e.getMessage()); +} } public void reload() throws IOException
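The diff above follows a common boundary pattern: a low-level MarshalException raised while parsing validator classes is caught and rethrown as the ConfigurationException the caller expects. A minimal sketch of that pattern, using stand-in exception and method names rather than Cassandra's real classes:

```java
public class ThriftValidation {
    // Stand-ins for Cassandra's MarshalException / ConfigurationException.
    static class MarshalLikeException extends RuntimeException {
        MarshalLikeException(String msg) { super(msg); }
    }
    static class ConfigLikeException extends Exception {
        ConfigLikeException(String msg) { super(msg); }
    }

    // Stand-in for TypeParser.parse: fails on an unknown validator class.
    static String parseValidator(String className) {
        if (!className.contains("Type"))
            throw new MarshalLikeException("unable to parse type " + className);
        return className;
    }

    // Boundary method: translate the unchecked parse failure into the
    // checked configuration error, as the patch does in CFMetaData.fromThrift.
    static String fromThrift(String validatorClass) throws ConfigLikeException {
        try {
            return parseValidator(validatorClass);
        } catch (MarshalLikeException e) {
            throw new ConfigLikeException(e.getMessage());
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fromThrift("org.apache.cassandra.db.marshal.UTF8Type"));
        try {
            fromThrift("NoSuchValidator");
        } catch (ConfigLikeException e) {
            System.out.println("config error: " + e.getMessage());
        }
    }
}
```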
[jira] [Commented] (CASSANDRA-3632) using an ant builder in Eclipse is painful
[ https://issues.apache.org/jira/browse/CASSANDRA-3632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396378#comment-13396378 ] ben commented on CASSANDRA-3632: For more general usage of ant, here is a tutorial for review: http://i-proving.com/2005/10/31/ant-tutorial/ > using an ant builder in Eclipse is painful > -- > > Key: CASSANDRA-3632 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3632 > Project: Cassandra > Issue Type: Bug > Components: Packaging, Tools >Affects Versions: 1.0.6 >Reporter: Eric Evans >Assignee: Eric Evans >Priority: Minor > Attachments: > v1-0001-CASSANDRA-3632-remove-ant-builder-restore-java-builder.txt > > > The {{generate-eclipse-files}} target creates project files that use an Ant > builder. Besides being painfully slow (I've had the runs stack up behind > frequent saves), many of Eclipses errors and warnings do not show unless an > internal builder is used. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions
[ https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396319#comment-13396319 ] Al Tobey edited comment on CASSANDRA-4321 at 6/18/12 11:26 PM: --- Offline scrub ran fine for me. I downgraded to 1.1.0 and ran a compaction and it looks fine. (edit) finished offline scrub on both affected nodes and they're back to normal. was (Author: a...@ooyala.com): Offline scrub ran fine for me. I downgraded to 1.1.0 and ran a compaction and it looks fine. > stackoverflow building interval tree & possible sstable corruptions > --- > > Key: CASSANDRA-4321 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4321 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.1.1 >Reporter: Anton Winter >Assignee: Sylvain Lebresne > Fix For: 1.1.2 > > Attachments: > 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, > 0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt, > 0003-Create-standalone-scrub-v3.txt, ooyala-hastur-stacktrace.txt > > > After upgrading to 1.1.1 (from 1.1.0) I have started experiencing > StackOverflowError's resulting in compaction backlog and failure to restart. > The ring currently consists of 6 DC's and 22 nodes using LCS & compression. > This issue was first noted on 2 nodes in one DC and then appears to have > spread to various other nodes in the other DC's. > When the first occurrence of this was found I restarted the instance but it > failed to start so I cleared its data and treated it as a replacement node > for the token it was previously responsible for. This node successfully > streamed all the relevant data back but failed again a number of hours later > with the same StackOverflowError and again was unable to restart. 
> The initial stack overflow error on a running instance looks like this: > ERROR [CompactionExecutor:314] 2012-06-07 09:59:43,017 > AbstractCassandraDaemon.java (line 134) Exception in thread > Thread[CompactionExecutor:314,1,main] > java.lang.StackOverflowError > at java.util.Arrays.mergeSort(Arrays.java:1157) > at java.util.Arrays.sort(Arrays.java:1092) > at java.util.Collections.sort(Collections.java:134) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.findMinMedianMax(IntervalNode.java:114) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:49) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > [snip - this repeats until stack overflow. Compactions stop from this point > onwards] > I restarted this failing instance with DEBUG logging enabled and it throws > the following exception part way through startup: > ERROR 11:37:51,046 Exception in thread Thread[OptionalTasks:1,5,main] > java.lang.StackOverflowError > at > org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:307) > at > org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276) > at > org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230) > at > org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:124) > at > org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:228) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:45) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > [snip - this repeats until stack overflow] > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > 
org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.In
[jira] [Commented] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions
[ https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396319#comment-13396319 ] Al Tobey commented on CASSANDRA-4321: - Offline scrub ran fine for me. I downgraded to 1.1.0 and ran a compaction and it looks fine. > stackoverflow building interval tree & possible sstable corruptions > --- > > Key: CASSANDRA-4321 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4321 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.1.1 >Reporter: Anton Winter >Assignee: Sylvain Lebresne > Fix For: 1.1.2 > > Attachments: > 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, > 0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt, > 0003-Create-standalone-scrub-v3.txt, ooyala-hastur-stacktrace.txt > > > After upgrading to 1.1.1 (from 1.1.0) I have started experiencing > StackOverflowError's resulting in compaction backlog and failure to restart. > The ring currently consists of 6 DC's and 22 nodes using LCS & compression. > This issue was first noted on 2 nodes in one DC and then appears to have > spread to various other nodes in the other DC's. > When the first occurrence of this was found I restarted the instance but it > failed to start so I cleared its data and treated it as a replacement node > for the token it was previously responsible for. This node successfully > streamed all the relevant data back but failed again a number of hours later > with the same StackOverflowError and again was unable to restart. 
> The initial stack overflow error on a running instance looks like this: > ERROR [CompactionExecutor:314] 2012-06-07 09:59:43,017 > AbstractCassandraDaemon.java (line 134) Exception in thread > Thread[CompactionExecutor:314,1,main] > java.lang.StackOverflowError > at java.util.Arrays.mergeSort(Arrays.java:1157) > at java.util.Arrays.sort(Arrays.java:1092) > at java.util.Collections.sort(Collections.java:134) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.findMinMedianMax(IntervalNode.java:114) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:49) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > [snip - this repeats until stack overflow. Compactions stop from this point > onwards] > I restarted this failing instance with DEBUG logging enabled and it throws > the following exception part way through startup: > ERROR 11:37:51,046 Exception in thread Thread[OptionalTasks:1,5,main] > java.lang.StackOverflowError > at > org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:307) > at > org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276) > at > org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230) > at > org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:124) > at > org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:228) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:45) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > [snip - this repeats until stack overflow] > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > 
org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62) > at > org.apache.cassandra.utils.IntervalTree.IntervalTree.<init>(IntervalTree.java:39) > at > org.apache.cassandra.db.DataTracker.buildIntervalTree(D
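The repeating constructor frame in the traces above is characteristic of recursive tree construction: stack depth tracks tree depth. A median split keeps depth near log2(n), but if a partition step fails to shrink one side (the Range/Bounds mismatch addressed by the 0001 patch), depth grows linearly with the number of intervals and exhausts the thread stack. This depth-only sketch illustrates the two growth rates; it is not Cassandra's IntervalTree.

```java
public class IntervalDepth {
    // Balanced build: each level splits the n intervals around a median,
    // so recursion depth is about log2(n).
    static int balancedDepth(int n) {
        if (n <= 1) return 1;
        return 1 + balancedDepth(n / 2);
    }

    // Degenerate build: one side shrinks by only a single element per call,
    // the shape that produces a StackOverflowError for large inputs.
    static int degenerateDepth(int n) {
        if (n <= 1) return 1;
        return 1 + degenerateDepth(n - 1);
    }

    public static void main(String[] args) {
        System.out.println("balanced depth, 100000 intervals: " + balancedDepth(100_000));
        // degenerateDepth(100_000) would risk a StackOverflowError here;
        // a small input is enough to show the linear growth.
        System.out.println("degenerate depth, 1000 intervals: " + degenerateDepth(1000));
    }
}
```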
[jira] [Comment Edited] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions
[ https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396246#comment-13396246 ] Omid Aladini edited comment on CASSANDRA-4321 at 6/18/12 9:52 PM: -- Thanks for the patch. Offline scrub is indeed very useful. Tried the v3 patches and the scrub didn't complete, possibly because of a different issue, with the following failed assertion: {code} Exception in thread "main" java.lang.AssertionError: Unexpected empty index file: RandomAccessReader(filePath='/var/lib/cassandra/abcd/data/SOMEKSP/CF3/SOMEKSP-CF3-tmp-hd-33827-Index.db', skipIOCache=true) at org.apache.cassandra.io.sstable.SSTable.estimateRowsFromIndex(SSTable.java:221) at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:376) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:203) at org.apache.cassandra.io.sstable.SSTableReader.openNoValidation(SSTableReader.java:143) at org.apache.cassandra.tools.StandaloneScrubber.main(StandaloneScrubber.java:79) {code} which consequently, encountered corrupt SSTables during start-up: {code} 2012-06-18_20:36:19.89543 INFO 20:36:19,895 Opening /var/lib/cassandra/abcd/data/SOMEKSP/CF3/SOMEKSP-CF3-hd-24984 (1941993 bytes) 2012-06-18_20:36:19.90217 ERROR 20:36:19,900 Exception in thread Thread[SSTableBatchOpen:9,5,main] 2012-06-18_20:36:19.90222 java.lang.IllegalStateException: SSTable first key DecoratedKey(41255474878128469814942789647212295629, 31303132393937357c3337313730333536) > last key DecoratedKey(41219536226656199861610796307350537953, 31303234323538397c3331383436373338) 2012-06-18_20:36:19.90261 at org.apache.cassandra.io.sstable.SSTableReader.validate(SSTableReader.java:441) 2012-06-18_20:36:19.90275 at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:208) 2012-06-18_20:36:19.90291 at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:153) 2012-06-18_20:36:19.90309 at 
org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:245) 2012-06-18_20:36:19.90324 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 2012-06-18_20:36:19.90389 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 2012-06-18_20:36:19.90391 at java.util.concurrent.FutureTask.run(Unknown Source) 2012-06-18_20:36:19.90391 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) 2012-06-18_20:36:19.90392 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 2012-06-18_20:36:19.90392 at java.lang.Thread.run(Unknown Source) {code} although didn't prevent Cassandra from starting up, but compaction failed subsequently: {code} 2012-06-18_20:51:41.79122 ERROR 20:51:41,790 Exception in thread Thread[CompactionExecutor:81,1,main] 2012-06-18_20:51:41.79131 java.lang.RuntimeException: Last written key DecoratedKey(12341204629749023303706929560940823070, 33363037353338) >= current key DecoratedKey(12167298275958419273792070792442127650, 31363431343537) writing into /var/lib/cassandra/abcd/data/SOMEKSP/CF3/SOMEKSP-CF3-tmp-hd-40992-Data.db 2012-06-18_20:51:41.79161 at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:134) 2012-06-18_20:51:41.79169 at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:153) 2012-06-18_20:51:41.79180 at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:159) 2012-06-18_20:51:41.79189 at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50) 2012-06-18_20:51:41.79199 at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:150) 2012-06-18_20:51:41.79210 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) 2012-06-18_20:51:41.79218 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 2012-06-18_20:51:41.79227 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
2012-06-18_20:51:41.79235 at java.util.concurrent.FutureTask.run(Unknown Source) 2012-06-18_20:51:41.79242 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) 2012-06-18_20:51:41.79250 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 2012-06-18_20:51:41.79259 at java.lang.Thread.run(Unknown Source) {code} was (Author: omid): Thanks for the patch. Offline scrub is indeed very useful. Tried the v3 patches and the scrub didn't complete, possibly because of a different issue, with the following failed assertion: {code} Exception in thread "main" java.lang.AssertionError: Unexpected empty index file: RandomAccessReader(filePath='/var/lib/cassandra/abcd/data/SOMEKSP/C
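The "Last written key >= current key" failure quoted above is the writer enforcing that decorated keys arrive in strictly increasing order; out-of-order rows in a corrupted sstable trip compaction at exactly that check. A hedged sketch of the invariant, plus a scrub-style salvage pass in the spirit of the 0002 patch (names are illustrative, not Cassandra's SSTableWriter):

```java
import java.util.ArrayList;
import java.util.List;

public class OrderedWriter {
    private String lastWrittenKey = null;
    private final List<String> rows = new ArrayList<>();

    // Mirrors the beforeAppend-style check: reject any key that does not
    // sort strictly after the last key written.
    void append(String key) {
        if (lastWrittenKey != null && lastWrittenKey.compareTo(key) >= 0)
            throw new RuntimeException(
                "Last written key " + lastWrittenKey + " >= current key " + key);
        rows.add(key);
        lastWrittenKey = key;
    }

    // Scrub-style salvage: instead of aborting on the first out-of-order row,
    // collect the offenders so they can be re-sorted and rewritten.
    static List<String> scrub(List<String> keys) {
        OrderedWriter w = new OrderedWriter();
        List<String> outOfOrder = new ArrayList<>();
        for (String k : keys) {
            try {
                w.append(k);
            } catch (RuntimeException e) {
                outOfOrder.add(k); // written separately after sorting
            }
        }
        outOfOrder.sort(String::compareTo);
        return outOfOrder;
    }

    public static void main(String[] args) {
        // "0C" and "0B" sort before the already-written "1A", so they are
        // salvaged rather than appended in place.
        System.out.println(scrub(List.of("0A", "1A", "0C", "0B", "2A")));
    }
}
```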
[jira] [Commented] (CASSANDRA-4355) Better debian packaging permissions
[ https://issues.apache.org/jira/browse/CASSANDRA-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396296#comment-13396296 ] Nick Bailey commented on CASSANDRA-4355: I'm also wondering if there are any generally accepted practices regarding giving the cassandra group itself write permissions to these directories/files. >From the perspective of someone writing a monitoring application, I would like >to be able to have our packaging create its own user and add that user to the >cassandra group, and at that point have read/write access to configuration >files/snapshots/other things. > Better debian packaging permissions > --- > > Key: CASSANDRA-4355 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4355 > Project: Cassandra > Issue Type: Bug >Reporter: Nick Bailey >Assignee: Nick Bailey > Attachments: 0001-Better-permissions-in-deb-package.patch > > > The debian package creates a cassandra user for the process to run as. It > chowns /var/lib/cassandra and /var/log/cassandra, but it doesn't grant group > level access to these files. It should do a 'chown cassandra:cassandra ...' > so that users in the cassandra group can also access those files. Also we > should chown /etc/cassandra and any other files/directories created. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4355) Better debian packaging permissions
[ https://issues.apache.org/jira/browse/CASSANDRA-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396292#comment-13396292 ] Nick Bailey commented on CASSANDRA-4355: Actually I was wrong. The 'cassandra:' syntax does make the cassandra group the group for the files. The addition of /etc/cassandra and /usr/share/cassandra is still desirable though. > Better debian packaging permissions > --- > > Key: CASSANDRA-4355 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4355 > Project: Cassandra > Issue Type: Bug >Reporter: Nick Bailey >Assignee: Nick Bailey > Attachments: 0001-Better-permissions-in-deb-package.patch > > > The debian package creates a cassandra user for the process to run as. It > chowns /var/lib/cassandra and /var/log/cassandra, but it doesn't grant group > level access to these files. It should do a 'chown cassandra:cassandra ...' > so that users in the cassandra group can also access those files. Also we > should chown /etc/cassandra and any other files/directories created. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4355) Better debian packaging permissions
[ https://issues.apache.org/jira/browse/CASSANDRA-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-4355: Reviewer: thepaul > Better debian packaging permissions > --- > > Key: CASSANDRA-4355 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4355 > Project: Cassandra > Issue Type: Bug >Reporter: Nick Bailey >Assignee: Nick Bailey > Attachments: 0001-Better-permissions-in-deb-package.patch > > > The debian package creates a cassandra user for the process to run as. It > chowns /var/lib/cassandra and /var/log/cassandra, but it doesn't grant group > level access to these files. It should do a 'chown cassandra:cassandra ...' > so that users in the cassandra group can also access those files. Also we > should chown /etc/cassandra and any other files/directories created. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4355) Better debian packaging permissions
[ https://issues.apache.org/jira/browse/CASSANDRA-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Bailey updated CASSANDRA-4355: --- Attachment: 0001-Better-permissions-in-deb-package.patch > Better debian packaging permissions > --- > > Key: CASSANDRA-4355 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4355 > Project: Cassandra > Issue Type: Bug >Reporter: Nick Bailey >Assignee: Nick Bailey > Attachments: 0001-Better-permissions-in-deb-package.patch > > > The debian package creates a cassandra user for the process to run as. It > chowns /var/lib/cassandra and /var/log/cassandra, but it doesn't grant group > level access to these files. It should do a 'chown cassandra:cassandra ...' > so that users in the cassandra group can also access those files. Also we > should chown /etc/cassandra and any other files/directories created. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4355) Better debian packaging permissions
[ https://issues.apache.org/jira/browse/CASSANDRA-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Bailey updated CASSANDRA-4355: --- Assignee: Nick Bailey > Better debian packaging permissions > --- > > Key: CASSANDRA-4355 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4355 > Project: Cassandra > Issue Type: Bug >Reporter: Nick Bailey >Assignee: Nick Bailey > > The debian package creates a cassandra user for the process to run as. It > chowns /var/lib/cassandra and /var/log/cassandra, but it doesn't grant group > level access to these files. It should do a 'chown cassandra:cassandra ...' > so that users in the cassandra group can also access those files. Also we > should chown /etc/cassandra and any other files/directories created. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-4355) Better debian packaging permissions
Nick Bailey created CASSANDRA-4355: -- Summary: Better debian packaging permissions Key: CASSANDRA-4355 URL: https://issues.apache.org/jira/browse/CASSANDRA-4355 Project: Cassandra Issue Type: Bug Reporter: Nick Bailey The debian package creates a cassandra user for the process to run as. It chowns /var/lib/cassandra and /var/log/cassandra, but it doesn't grant group level access to these files. It should do a 'chown cassandra:cassandra ...' so that users in the cassandra group can also access those files. Also we should chown /etc/cassandra and any other files/directories created. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4347) IP change of node requires assassinate to really remove old IP
[ https://issues.apache.org/jira/browse/CASSANDRA-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396253#comment-13396253 ] Brandon Williams commented on CASSANDRA-4347: - LocationInfo is enough > IP change of node requires assassinate to really remove old IP > -- > > Key: CASSANDRA-4347 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4347 > Project: Cassandra > Issue Type: Bug >Affects Versions: 1.0.10 > Environment: RHEL6, 64bit >Reporter: Karl Mueller >Assignee: Brandon Williams >Priority: Minor > Attachments: dev-cass-post-assassinate-gossipinfo.txt > > > In changing the IP addresses of nodes one-by-one, the node successfully moves > itself and its token. Everything works properly. > However, the node which had its IP changed (but NOT other nodes in the ring) > continues to have some type of state associated with the old IP and produces > log messages like this: > INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 804) > InetAddress /10.12.9.157 is now UP > INFO [GossipStage:1] 2012-06-15 15:25:01,491 StorageService.java (line 1017) > Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the > same token 113427455640312821154458202477256070484. Ignoring /10.12.9.157 > INFO [GossipTasks:1] 2012-06-15 15:25:11,373 Gossiper.java (line 818) > InetAddress /10.12.9.157 is now dead. 
> INFO [GossipTasks:1] 2012-06-15 15:25:32,380 Gossiper.java (line 632) > FatClient /10.12.9.157 has been silent for 3ms, removing from gossip > INFO [GossipStage:1] 2012-06-15 15:26:32,490 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > INFO [GossipStage:1] 2012-06-15 15:26:32,491 Gossiper.java (line 804) > InetAddress /10.12.9.157 is now UP > INFO [GossipStage:1] 2012-06-15 15:26:32,491 StorageService.java (line 1017) > Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the > same token 113427455640312821154458202477256070484. Ignoring /10.12.9.157 > INFO [GossipTasks:1] 2012-06-15 15:26:42,402 Gossiper.java (line 818) > InetAddress /10.12.9.157 is now dead. > INFO [GossipTasks:1] 2012-06-15 15:27:03,410 Gossiper.java (line 632) > FatClient /10.12.9.157 has been silent for 3ms, removing from gossip > INFO [GossipStage:1] 2012-06-15 15:28:04,533 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > Other nodes do NOT have the old IP showing up in logs. It's only the node > that moved. > The old IP doesn't show up in ring anywhere or in any other fashion. The > cluster seems to be fully operational, so I think it's just a cleanup issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4347) IP change of node requires assassinate to really remove old IP
[ https://issues.apache.org/jira/browse/CASSANDRA-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396251#comment-13396251 ] Karl Mueller commented on CASSANDRA-4347: - OK, I'll grab one this week when we do the move. I assume you want the LocationInfo CF, or do you want the entire system keyspace? > IP change of node requires assassinate to really remove old IP > -- > > Key: CASSANDRA-4347 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4347 > Project: Cassandra > Issue Type: Bug >Affects Versions: 1.0.10 > Environment: RHEL6, 64bit >Reporter: Karl Mueller >Assignee: Brandon Williams >Priority: Minor > Attachments: dev-cass-post-assassinate-gossipinfo.txt > > > In changing the IP addresses of nodes one-by-one, the node successfully moves > itself and its token. Everything works properly. > However, the node which had its IP changed (but NOT other nodes in the ring) > continues to have some type of state associated with the old IP and produces > log messages like this: > INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 804) > InetAddress /10.12.9.157 is now UP > INFO [GossipStage:1] 2012-06-15 15:25:01,491 StorageService.java (line 1017) > Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the > same token 113427455640312821154458202477256070484. Ignoring /10.12.9.157 > INFO [GossipTasks:1] 2012-06-15 15:25:11,373 Gossiper.java (line 818) > InetAddress /10.12.9.157 is now dead. 
> INFO [GossipTasks:1] 2012-06-15 15:25:32,380 Gossiper.java (line 632) > FatClient /10.12.9.157 has been silent for 3ms, removing from gossip > INFO [GossipStage:1] 2012-06-15 15:26:32,490 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > INFO [GossipStage:1] 2012-06-15 15:26:32,491 Gossiper.java (line 804) > InetAddress /10.12.9.157 is now UP > INFO [GossipStage:1] 2012-06-15 15:26:32,491 StorageService.java (line 1017) > Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the > same token 113427455640312821154458202477256070484. Ignoring /10.12.9.157 > INFO [GossipTasks:1] 2012-06-15 15:26:42,402 Gossiper.java (line 818) > InetAddress /10.12.9.157 is now dead. > INFO [GossipTasks:1] 2012-06-15 15:27:03,410 Gossiper.java (line 632) > FatClient /10.12.9.157 has been silent for 3ms, removing from gossip > INFO [GossipStage:1] 2012-06-15 15:28:04,533 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > Other nodes do NOT have the old IP showing up in logs. It's only the node > that moved. > The old IP doesn't show up in ring anywhere or in any other fashion. The > cluster seems to be fully operational, so I think it's just a cleanup issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions
[ https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396246#comment-13396246 ] Omid Aladini commented on CASSANDRA-4321: - Thanks for the patch. Offline scrub is indeed very useful. Tried the v3 patches, but the scrub didn't complete, possibly because of a different issue, with the following failed assertion: {code} Exception in thread "main" java.lang.AssertionError: Unexpected empty index file: RandomAccessReader(filePath='/var/lib/cassandra/abcd/data/SOMEKSP/CF3/SOMEKSP-CF3-tmp-hd-33827-Index.db', skipIOCache=true) at org.apache.cassandra.io.sstable.SSTable.estimateRowsFromIndex(SSTable.java:221) at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:376) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:203) at org.apache.cassandra.io.sstable.SSTableReader.openNoValidation(SSTableReader.java:143) at org.apache.cassandra.tools.StandaloneScrubber.main(StandaloneScrubber.java:79) {code} As a consequence, Cassandra encountered a corrupt SSTable during start-up: {code} 2012-06-18_20:36:19.89543 INFO 20:36:19,895 Opening /var/lib/cassandra/abcd/data/SOMEKSP/CF3/SOMEKSP-CF3-hd-24984 (1941993 bytes) 2012-06-18_20:36:19.90217 ERROR 20:36:19,900 Exception in thread Thread[SSTableBatchOpen:9,5,main] 2012-06-18_20:36:19.90222 java.lang.IllegalStateException: SSTable first key DecoratedKey(41255474878128469814942789647212295629, 31303132393937357c3337313730333536) > last key DecoratedKey(41219536226656199861610796307350537953, 31303234323538397c3331383436373338) 2012-06-18_20:36:19.90261 at org.apache.cassandra.io.sstable.SSTableReader.validate(SSTableReader.java:441) 2012-06-18_20:36:19.90275 at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:208) 2012-06-18_20:36:19.90291 at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:153) 2012-06-18_20:36:19.90309 at 
org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:245) 2012-06-18_20:36:19.90324 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 2012-06-18_20:36:19.90389 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 2012-06-18_20:36:19.90391 at java.util.concurrent.FutureTask.run(Unknown Source) 2012-06-18_20:36:19.90391 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) 2012-06-18_20:36:19.90392 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 2012-06-18_20:36:19.90392 at java.lang.Thread.run(Unknown Source) {code} although didn't prevent Cassandra from starting up, but compaction failed subsequently: {code} 2012-06-18_20:51:41.79122 ERROR 20:51:41,790 Exception in thread Thread[CompactionExecutor:81,1,main] 2012-06-18_20:51:41.79131 java.lang.RuntimeException: Last written key DecoratedKey(12341204629749023303706929560940823070, 33363037353338) >= current key DecoratedKey(12167298275958419273792070792442127650, 31363431343537) writing into /var/lib/cassandra/abcd/data/SOMEKSP/CF3/SOMEKSP-CF3-tmp-hd-40992-Data.db 2012-06-18_20:51:41.79161 at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:134) 2012-06-18_20:51:41.79169 at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:153) 2012-06-18_20:51:41.79180 at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:159) 2012-06-18_20:51:41.79189 at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50) 2012-06-18_20:51:41.79199 at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:150) 2012-06-18_20:51:41.79210 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) 2012-06-18_20:51:41.79218 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 2012-06-18_20:51:41.79227 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
2012-06-18_20:51:41.79235 at java.util.concurrent.FutureTask.run(Unknown Source) 2012-06-18_20:51:41.79242 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) 2012-06-18_20:51:41.79250 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 2012-06-18_20:51:41.79259 at java.lang.Thread.run(Unknown Source) {code} > stackoverflow building interval tree & possible sstable corruptions > --- > > Key: CASSANDRA-4321 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4321 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.1.1 >Reporter: Anton Winter >A
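The "Last written key >= current key" error in the trace above comes from the writer's ordering invariant: rows must be appended to an sstable in strictly increasing decorated-key order, or the resulting index no longer matches the data. A minimal sketch of that check follows; the class and method names are hypothetical illustrations, not Cassandra's actual SSTableWriter code.

```java
import java.util.Arrays;

// Sketch of the ordering invariant behind SSTableWriter.beforeAppend's
// "Last written key >= current key" failure: the writer remembers the last
// key it wrote and rejects any key that does not sort strictly after it.
public class OrderedWriterSketch {
    private byte[] lastWrittenKey;

    public void append(byte[] decoratedKey) {
        if (lastWrittenKey != null && compareUnsigned(lastWrittenKey, decoratedKey) >= 0)
            throw new RuntimeException("Last written key " + Arrays.toString(lastWrittenKey)
                    + " >= current key " + Arrays.toString(decoratedKey));
        lastWrittenKey = decoratedKey;
        // ... serialize the row here ...
    }

    // Unsigned lexicographic byte comparison, the order decorated keys sort in.
    static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```

An sstable whose rows slipped past this check (for example via the interval-tree bug being discussed) is exactly what the out-of-order detection in the scrub patches is meant to repair.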
[jira] [Commented] (CASSANDRA-4347) IP change of node requires assassinate to really remove old IP
[ https://issues.apache.org/jira/browse/CASSANDRA-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396232#comment-13396232 ] Brandon Williams commented on CASSANDRA-4347: - Yes, a pre-assassinate capture is what we need. The old IPs showing in the LEFT state means assassinate is working (and they do appear cross-cluster). > IP change of node requires assassinate to really remove old IP > -- > > Key: CASSANDRA-4347 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4347 > Project: Cassandra > Issue Type: Bug >Affects Versions: 1.0.10 > Environment: RHEL6, 64bit >Reporter: Karl Mueller >Assignee: Brandon Williams >Priority: Minor > Attachments: dev-cass-post-assassinate-gossipinfo.txt > > > In changing the IP addresses of nodes one-by-one, the node successfully moves > itself and its token. Everything works properly. > However, the node which had its IP changed (but NOT other nodes in the ring) > continues to have some type of state associated with the old IP and produces > log messages like this: > INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 804) > InetAddress /10.12.9.157 is now UP > INFO [GossipStage:1] 2012-06-15 15:25:01,491 StorageService.java (line 1017) > Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the > same token 113427455640312821154458202477256070484. Ignoring /10.12.9.157 > INFO [GossipTasks:1] 2012-06-15 15:25:11,373 Gossiper.java (line 818) > InetAddress /10.12.9.157 is now dead. 
> INFO [GossipTasks:1] 2012-06-15 15:25:32,380 Gossiper.java (line 632) > FatClient /10.12.9.157 has been silent for 3ms, removing from gossip > INFO [GossipStage:1] 2012-06-15 15:26:32,490 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > INFO [GossipStage:1] 2012-06-15 15:26:32,491 Gossiper.java (line 804) > InetAddress /10.12.9.157 is now UP > INFO [GossipStage:1] 2012-06-15 15:26:32,491 StorageService.java (line 1017) > Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the > same token 113427455640312821154458202477256070484. Ignoring /10.12.9.157 > INFO [GossipTasks:1] 2012-06-15 15:26:42,402 Gossiper.java (line 818) > InetAddress /10.12.9.157 is now dead. > INFO [GossipTasks:1] 2012-06-15 15:27:03,410 Gossiper.java (line 632) > FatClient /10.12.9.157 has been silent for 3ms, removing from gossip > INFO [GossipStage:1] 2012-06-15 15:28:04,533 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > Other nodes do NOT have the old IP showing up in logs. It's only the node > that moved. > The old IP doesn't show up in ring anywhere or in any other fashion. The > cluster seems to be fully operational, so I think it's just a cleanup issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (CASSANDRA-4347) IP change of node requires assassinate to really remove old IP
[ https://issues.apache.org/jira/browse/CASSANDRA-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams reassigned CASSANDRA-4347: --- Assignee: Brandon Williams
[jira] [Commented] (CASSANDRA-4331) sstable2json error
[ https://issues.apache.org/jira/browse/CASSANDRA-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396224#comment-13396224 ] Pavel Yaskevich commented on CASSANDRA-4331: Oh, I'm sorry, +1 then. > sstable2json error > -- > > Key: CASSANDRA-4331 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4331 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.1.1 >Reporter: ganghuang >Assignee: Jonathan Ellis > Fix For: 1.1.2 > > Attachments: 4331.txt > > > /apache-cassandra-1.1.1/bin> ./sstable2json > /home/cassandra/data/pimda/CF_bookmark/pimda-CF_bookmark-hd-48-Data.db > > test.json > ERROR 22:27:14,215 Error in ThreadPoolExecutor > java.lang.ClassCastException: java.math.BigInteger cannot be cast to > java.nio.ByteBuffer > at org.apache.cassandra.db.marshal.UTF8Type.compare(UTF8Type.java:27) > at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:45) > at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:89) > at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:38) > at java.util.TreeMap.getEntry(TreeMap.java:328) > at java.util.TreeMap.containsKey(TreeMap.java:209) > at java.util.TreeSet.contains(TreeSet.java:217) > at > org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:396) > at > org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:187) > at > org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:225) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) > at java.util.concurrent.FutureTask.run(FutureTask.java:138) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > ERROR 22:27:14,219 Error in ThreadPoolExecutor > 
java.lang.ClassCastException: java.math.BigInteger cannot be cast to > java.nio.ByteBuffer > at org.apache.cassandra.db.marshal.UTF8Type.compare(UTF8Type.java:27) > at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:45) > at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:89) > at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:38)
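The ClassCastException in this trace is a comparator type mismatch: a ByteBuffer-based comparator (like UTF8Type.compare) is handed a token whose payload is a BigInteger, so the unconditional cast blows up. A reduced illustration of the failure mode follows; the class and field names are hypothetical, not the actual Cassandra types.

```java
import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.util.Comparator;

// Reduced illustration of the reported ClassCastException: a comparator that
// unconditionally casts its operands to ByteBuffer fails as soon as it is
// handed a payload of another type, e.g. a RandomPartitioner-style BigInteger.
public class TokenCastSketch {
    static final Comparator<Object> BYTEBUFFER_COMPARATOR = (a, b) ->
            ((ByteBuffer) a).compareTo((ByteBuffer) b); // unchecked cast

    public static boolean comparesCleanly(Object a, Object b) {
        try {
            BYTEBUFFER_COMPARATOR.compare(a, b);
            return true;
        } catch (ClassCastException e) {
            // analogous to "java.math.BigInteger cannot be cast to java.nio.ByteBuffer"
            return false;
        }
    }
}
```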
[jira] [Updated] (CASSANDRA-4331) sstable2json error
[ https://issues.apache.org/jira/browse/CASSANDRA-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-4331: -- Fix Version/s: (was: 1.1.1) 1.1.2
[jira] [Reopened] (CASSANDRA-4331) sstable2json error
[ https://issues.apache.org/jira/browse/CASSANDRA-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reopened CASSANDRA-4331: --- Assignee: Jonathan Ellis Not a duplicate, 4289 was for a 1.2-only issue; this is against 1.1
[jira] [Updated] (CASSANDRA-4347) IP change of node requires assassinate to really remove old IP
[ https://issues.apache.org/jira/browse/CASSANDRA-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karl Mueller updated CASSANDRA-4347: Attachment: dev-cass-post-assassinate-gossipinfo.txt This file contains gossipinfo from the 3-node cluster we already moved, after assassinate has run on each node for its own old IP. The new IPs are all 10.93.15.xx and the old IPs are all 10.12.x.x. The old IPs are as follows:
dev-cass00 - 10.12.9.160
dev-cass01 - 10.12.9.157
dev-cass02 - 10.12.9.33
I believe dev-cass00 has restarted since the assassinate, but the others haven't. New IPs are:
dev-cass00 - 10.93.15.10
dev-cass01 - 10.93.15.11
dev-cass02 - 10.93.15.12
[jira] [Commented] (CASSANDRA-4347) IP change of node requires assassinate to really remove old IP
[ https://issues.apache.org/jira/browse/CASSANDRA-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396215#comment-13396215 ] Karl Mueller commented on CASSANDRA-4347: - You mean, before I did the assassinate? All of the nodes at this point are post-assassinate. I'm attaching the gossipinfo from the 3-node cluster in the current state which is showing some old IPs. (I thought assassinate went cross-cluster?) I'm moving another cluster this week, and I'll try to grab a gossipinfo and the system tables during transition from that set. I expect it will have the same issues.
[jira] [Commented] (CASSANDRA-3047) implementations of IPartitioner.describeOwnership() are not DC aware
[ https://issues.apache.org/jira/browse/CASSANDRA-3047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396185#comment-13396185 ] David Alves commented on CASSANDRA-3047: It was supposed to apply to trunk, but was failing mainly because it was built against github's skewed version of trunk. Tested to apply to current trunk. > implementations of IPartitioner.describeOwnership() are not DC aware > > > Key: CASSANDRA-3047 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3047 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Aaron Morton >Assignee: David Alves >Priority: Trivial > Fix For: 1.1.2 > > Attachments: CASSANDRA-3047.patch, CASSANDRA-3047.patch, > CASSANDRA-3047.patch, CASSANDRA-3047.patch > > > see http://www.mail-archive.com/user@cassandra.apache.org/msg16375.html > When a cluster uses the multiple-rings approach to tokens, the output from nodetool > ring is incorrect. > When it uses the interleaved token approach (e.g. dc1, dc2, dc1, dc2) it will > be correct. > It's a bit hacky, but could we special-case (RP) tokens that are off by 1 and > calculate the ownership per DC? I guess another approach would be to add > some parameters so the partitioner can be told about the token assignment > strategy.
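The per-DC ownership idea under discussion can be sketched as: sort each datacenter's tokens separately and give each node the fraction of the ring between its predecessor's token (within the same DC) and its own token. The sketch below assumes a RandomPartitioner-sized ring; the class and method names are illustrative, not the CASSANDRA-3047 patch's actual code.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedSet;

// Sketch of DC-aware ownership: compute each node's share of the ring using
// only the tokens of its own datacenter, so "multiple rings" layouts (dc2's
// tokens offset by 1 from dc1's) report sensible per-DC numbers instead of
// near-zero ownership from a global calculation.
public class DcOwnershipSketch {
    // RandomPartitioner ring size: tokens lie in [0, 2^127).
    static final BigInteger RING = BigInteger.ONE.shiftLeft(127);

    public static Map<BigInteger, Double> describeOwnership(SortedSet<BigInteger> dcTokens) {
        Map<BigInteger, Double> ownership = new HashMap<>();
        List<BigInteger> sorted = new ArrayList<>(dcTokens);
        int n = sorted.size();
        for (int i = 0; i < n; i++) {
            // Range owned by token i: (predecessor token, token i], wrapping the ring.
            BigInteger prev = sorted.get((i + n - 1) % n);
            BigInteger width = (n == 1) ? RING : sorted.get(i).subtract(prev).mod(RING);
            ownership.put(sorted.get(i), width.doubleValue() / RING.doubleValue());
        }
        return ownership;
    }
}
```

With two DCs of three nodes each, running this once per DC yields ~33% per node in each DC, regardless of whether the DCs interleave their tokens or offset them by 1.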
[jira] [Updated] (CASSANDRA-3047) implementations of IPartitioner.describeOwnership() are not DC aware
[ https://issues.apache.org/jira/browse/CASSANDRA-3047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Alves updated CASSANDRA-3047: --- Attachment: CASSANDRA-3047.patch updated patch to apply to trunk
[jira] [Comment Edited] (CASSANDRA-3991) Investigate importance of jsvc in debian packages
[ https://issues.apache.org/jira/browse/CASSANDRA-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396174#comment-13396174 ] Viktor Kuzmin edited comment on CASSANDRA-3991 at 6/18/12 7:23 PM: --- Just to confirm: my problem is not related to jsvc at all. Problem is still there even with simple start-stop-daemon. was (Author: kvaster): Just to confirm: my problem is not related to jsvc at all. > Investigate importance of jsvc in debian packages > - > > Key: CASSANDRA-3991 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3991 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Brandon Williams >Assignee: Eric Evans >Priority: Minor > Fix For: 1.2 > > > jsvc seems to be buggy at best. For instance, if you set a small heap like > 128M it seems to completely ignore this and use as much memory as it wants. > I don't know what this is buying us over launching /usr/bin/cassandra > directly like the redhat scripts do, but I've seen multiple complaints about > its memory usage.
[jira] [Commented] (CASSANDRA-3991) Investigate importance of jsvc in debian packages
[ https://issues.apache.org/jira/browse/CASSANDRA-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396174#comment-13396174 ] Viktor Kuzmin commented on CASSANDRA-3991: -- Just to confirm: my problem is not related to jsvc at all.
[jira] [Commented] (CASSANDRA-3047) implementations of IPartitioner.describeOwnership() are not DC aware
[ https://issues.apache.org/jira/browse/CASSANDRA-3047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396171#comment-13396171 ] Jonathan Ellis commented on CASSANDRA-3047: --- what branch is this against? getting failures against 1.1 and trunk.
[2/3] git commit: Raise a meaningful exception instead of NPE when PFS encounters an unconfigured node patch by jbellis; reviewed by brandonwilliams for CASSANDRA-4349
Raise a meaningful exception instead of NPE when PFS encounters an unconfigured node
patch by jbellis; reviewed by brandonwilliams for CASSANDRA-4349

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/45c8f53a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/45c8f53a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/45c8f53a

Branch: refs/heads/trunk
Commit: 45c8f53a2c42f48317110908734119a7cb24baf1
Parents: 0ba2631
Author: Jonathan Ellis
Authored: Mon Jun 18 14:15:27 2012 -0500
Committer: Jonathan Ellis
Committed: Mon Jun 18 14:15:36 2012 -0500

--
 CHANGES.txt                                     |    2 ++
 .../locator/AbstractNetworkTopologySnitch.java  |    2 --
 .../cassandra/locator/PropertyFileSnitch.java   |    8 ++++++++
 3 files changed, 10 insertions(+), 2 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/45c8f53a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ec03ca6..b6702cb 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.1.2
+ * Raise a meaningful exception instead of NPE when PFS encounters
+   an unconfigured node + no default (CASSANDRA-4349)
 * fix bug in sstable blacklisting with LCS (CASSANDRA-4343)
 * LCS no longer promotes tiny sstables out of L0 (CASSANDRA-4341)
 * skip tombstones during hint replay (CASSANDRA-4320)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/45c8f53a/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java b/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java
index c2df7e4..68404c9 100644
--- a/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java
+++ b/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java
@@ -34,7 +34,6 @@ public abstract class AbstractNetworkTopologySnitch extends AbstractEndpointSnit
      * Return the rack for which an endpoint resides in
      * @param endpoint a specified endpoint
      * @return string of rack
-     * @throws UnknownHostException
      */
     abstract public String getRack(InetAddress endpoint);

@@ -42,7 +41,6 @@ public abstract class AbstractNetworkTopologySnitch extends AbstractEndpointSnit
      * Return the data center for which an endpoint resides in
      * @param endpoint a specified endpoint
      * @return string of data center
-     * @throws UnknownHostException
      */
     abstract public String getDatacenter(InetAddress endpoint);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/45c8f53a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java b/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
index 00adc7e..0bf5850 100644
--- a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
+++ b/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
@@ -84,6 +84,14 @@ public class PropertyFileSnitch extends AbstractNetworkTopologySnitch
      */
     public String[] getEndpointInfo(InetAddress endpoint)
     {
+        String[] rawEndpointInfo = getRawEndpointInfo(endpoint);
+        if (rawEndpointInfo == null)
+            throw new RuntimeException("Unknown host " + endpoint + " with no default configured");
+        return rawEndpointInfo;
+    }
+
+    private String[] getRawEndpointInfo(InetAddress endpoint)
+    {
         String[] value = endpointMap.get(endpoint);
         if (value == null)
         {
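The change above follows a common fail-fast pattern: route the public lookup through a raw lookup that may return null, and convert that null into an exception naming the unconfigured host, instead of letting it propagate and surface later as an NPE. A minimal Python sketch of the same pattern (the function and variable names here are illustrative, not from the Cassandra codebase):

```python
# Sketch of the fail-fast lookup pattern from the PropertyFileSnitch patch:
# the raw lookup may return None; the public entry point converts that None
# into an explicit, descriptive error instead of letting it blow up later.

def get_endpoint_info(endpoint_map, endpoint, default=None):
    """Return (datacenter, rack) for endpoint, or raise with a clear message."""
    raw = endpoint_map.get(endpoint, default)
    if raw is None:
        # Previously a None here would propagate and fail far away as an NPE;
        # failing at the boundary names the unconfigured host directly.
        raise RuntimeError("Unknown host %s with no default configured" % endpoint)
    return raw

topology = {"10.0.0.1": ("DC1", "RAC1")}
print(get_endpoint_info(topology, "10.0.0.1"))  # ('DC1', 'RAC1')
```

The point of the indirection is that callers of the public method never see a null; the only place that has to reason about missing hosts is the boundary itself.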
[1/3] git commit: Merge branch 'cassandra-1.1' into trunk
Updated Branches:
  refs/heads/cassandra-1.1 0ba2631ee -> 45c8f53a2
  refs/heads/trunk a67a03996 -> a89c8b4d3

Merge branch 'cassandra-1.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a89c8b4d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a89c8b4d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a89c8b4d

Branch: refs/heads/trunk
Commit: a89c8b4d335a6da6471d9f6806d8c5ab622f85cf
Parents: a67a039 45c8f53
Author: Jonathan Ellis
Authored: Mon Jun 18 14:17:30 2012 -0500
Committer: Jonathan Ellis
Committed: Mon Jun 18 14:17:30 2012 -0500

--
 CHANGES.txt                                     |    2 ++
 .../locator/AbstractNetworkTopologySnitch.java  |    2 --
 .../cassandra/locator/PropertyFileSnitch.java   |    8 ++++++++
 3 files changed, 10 insertions(+), 2 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a89c8b4d/CHANGES.txt
--
diff --cc CHANGES.txt
index 7943112,b6702cb..c85fd92
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,6 +1,32 @@@
+1.2-dev
+ * update MS protocol with a version handshake + broadcast address id
+   (CASSANDRA-4311)
+ * multithreaded hint replay (CASSANDRA-4189)
+ * add inter-node message compression (CASSANDRA-3127)
+ * enforce 1m min keycache for auto (CASSANDRA-4306)
+ * remove COPP (CASSANDRA-2479)
+ * Track tombstone expiration and compact when tombstone content is
+   higher than a configurable threshold, default 20% (CASSANDRA-3442)
+ * update MurmurHash to version 3 (CASSANDRA-2975)
+ * (CLI) track elapsed time for `delete' operation (CASSANDRA-4060)
+ * (CLI) jline version is bumped to 1.0 to properly support
+   'delete' key function (CASSANDRA-4132)
+ * Save IndexSummary into new SSTable 'Summary' component (CASSANDRA-2392)
+ * Add support for range tombstones (CASSANDRA-3708)
+ * Improve MessagingService efficiency (CASSANDRA-3617)
+ * Avoid ID conflicts from concurrent schema changes (CASSANDRA-3794)
+ * Set thrift HSHA server thread limit to unlimited by default (CASSANDRA-4277)
+ * Avoids double serialization of CF id in RowMutation messages
+   (CASSANDRA-4293)
+ * fix Summary component and caches to use correct partitioner (CASSANDRA-4289)
+ * stream compressed sstables directly with java nio (CASSANDRA-4297)
+ * Support multiple ranges in SliceQueryFilter (CASSANDRA-3885)
+ * Add column metadata to system column families (CASSANDRA-4018)
+
+ 1.1.2
+ * Raise a meaningful exception instead of NPE when PFS encounters
+   an unconfigured node + no default (CASSANDRA-4349)
 * fix bug in sstable blacklisting with LCS (CASSANDRA-4343)
 * LCS no longer promotes tiny sstables out of L0 (CASSANDRA-4341)
 * skip tombstones during hint replay (CASSANDRA-4320)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a89c8b4d/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a89c8b4d/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
--
[jira] [Commented] (CASSANDRA-3991) Investigate importance of jsvc in debian packages
[ https://issues.apache.org/jira/browse/CASSANDRA-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396164#comment-13396164 ]

Brandon Williams commented on CASSANDRA-3991:
---------------------------------------------

Viktor mentioned on irc that his problem is actually not related to jsvc.

> Investigate importance of jsvc in debian packages
> -------------------------------------------------
>
>                 Key: CASSANDRA-3991
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3991
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Brandon Williams
>            Assignee: Eric Evans
>            Priority: Minor
>             Fix For: 1.2
>
>
> jsvc seems to be buggy at best. For instance, if you set a small heap like
> 128M it seems to completely ignore this and use as much memory as it wants.
> I don't know what this is buying us over launching /usr/bin/cassandra
> directly like the redhat scripts do, but I've seen multiple complaints about
> its memory usage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4198) cqlsh: update recognized syntax for cql3
[ https://issues.apache.org/jira/browse/CASSANDRA-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396157#comment-13396157 ]

Hudson commented on CASSANDRA-4198:
-----------------------------------

Integrated in Cassandra #1515 (See [https://builds.apache.org/job/Cassandra/1515/])
    Fix cqlsh ASSUME broken by CASSANDRA-4198. (Revision 0cc168a966bf4dc11db6b61e6b5b5d6771031804)

     Result = ABORTED
brandonwilliams :
Files :
* bin/cqlsh

> cqlsh: update recognized syntax for cql3
> ----------------------------------------
>
>                 Key: CASSANDRA-4198
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4198
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Tools
>    Affects Versions: 1.1.0
>            Reporter: paul cannon
>            Assignee: paul cannon
>            Priority: Minor
>              Labels: cql3, cqlsh
>             Fix For: 1.1.1
>
>         Attachments: 4198.patch.txt
>
>
> cqlsh should recognize cql3 syntax when in cql3 mode; this includes tab
> completing proper syntax and properly quoting any terms in single- or
> double-quotes (current version only knows how to use single quotes).
> also, prefer using the term "TABLE" over "COLUMNFAMILY" wherever one of those
> is generated from cqlsh (like in DESCRIBE output).
> and if it's not too bad, it would help to have the online help strings
> reflect cql3 syntax (maybe with a nod to cql2 restrictions where appropriate).
[jira] [Assigned] (CASSANDRA-3991) Investigate importance of jsvc in debian packages
[ https://issues.apache.org/jira/browse/CASSANDRA-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams reassigned CASSANDRA-3991:
-------------------------------------------

    Assignee: Eric Evans  (was: Brandon Williams)

> Investigate importance of jsvc in debian packages
> -------------------------------------------------
>
>                 Key: CASSANDRA-3991
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3991
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Brandon Williams
>            Assignee: Eric Evans
>            Priority: Minor
>             Fix For: 1.2
>
>
> jsvc seems to be buggy at best. For instance, if you set a small heap like
> 128M it seems to completely ignore this and use as much memory as it wants.
> I don't know what this is buying us over launching /usr/bin/cassandra
> directly like the redhat scripts do, but I've seen multiple complaints about
> its memory usage.
[jira] [Commented] (CASSANDRA-4049) Add generic way of adding SSTable components required custom compaction strategy
[ https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396150#comment-13396150 ]

Jonathan Ellis commented on CASSANDRA-4049:
-------------------------------------------

I'm not sure how you can give them meaningful names while keeping it pluggable and not tied to DSE, but I'm open to suggestions.

> Add generic way of adding SSTable components required custom compaction
> strategy
> ------------------------------------------------------------------------
>
>                 Key: CASSANDRA-4049
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Piotr Kołaczkowski
>            Assignee: Piotr Kołaczkowski
>            Priority: Minor
>              Labels: compaction
>             Fix For: 1.1.2
>
>         Attachments: compaction_strategy_cleanup.patch, component_patch.diff
>
>
> CFS compaction strategy coming up in the next DSE release needs to store some
> important information in Tombstones.db and RemovedKeys.db files, one per
> sstable. However, currently Cassandra issues warnings when these files are
> found in the data directory. Additionally, when switched to
> SizeTieredCompactionStrategy, the files are left in the data directory after
> compaction.
> The attached patch adds new components to the Component class so Cassandra
> knows about those files.
[jira] [Updated] (CASSANDRA-4353) no error propagated to client when updating a column family with an invalid column def
[ https://issues.apache.org/jira/browse/CASSANDRA-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-4353:
--------------------------------------

    Reviewer: xedin
    Assignee: Sam Overton

> no error propagated to client when updating a column family with an invalid
> column def
> ----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-4353
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4353
>             Project: Cassandra
>          Issue Type: Bug
>          Components: API
>    Affects Versions: 1.1.1
>            Reporter: Sam Overton
>            Assignee: Sam Overton
>            Priority: Minor
>         Attachments: 4353.patch
>
>
> CASSANDRA-3761 appears to have introduced a regression which is exposed by
> test_system_column_family_operations in test/system/test_thrift_server.py
> The test fails with this stack trace:
> {noformat}
> ==
> ERROR:
> system.test_thrift_server.TestMutations.test_system_column_family_operations
> --
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/nose/case.py", line 183, in runTest
>     self.test(*self.arg)
>   File "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", line 1469, in test_system_column_family_operations
>     _expect_exception(fail_invalid_field, InvalidRequestException)
>   File "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", line 209, in _expect_exception
>     r = fn()
>   File "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", line 1468, in fail_invalid_field
>     client.system_update_column_family(modified_cf)
>   File "/usr/lib/python2.6/site-packages/cassandra/Cassandra.py", line 1892, in system_update_column_family
>     return self.recv_system_update_column_family()
>   File "/usr/lib/python2.6/site-packages/cassandra/Cassandra.py", line 1903, in recv_system_update_column_family
>     (fname, mtype, rseqid) = self._iprot.readMessageBegin()
>   File "/usr/lib64/python2.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 126, in readMessageBegin
>     sz = self.readI32()
>   File "/usr/lib64/python2.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 203, in readI32
>     buff = self.trans.readAll(4)
>   File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", line 58, in readAll
>     chunk = self.read(sz-have)
>   File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", line 272, in read
>     self.readFrame()
>   File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", line 276, in readFrame
>     buff = self.__trans.readAll(4)
>   File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", line 58, in readAll
>     chunk = self.read(sz-have)
>   File "/usr/lib64/python2.6/site-packages/thrift/transport/TSocket.py", line 108, in read
>     raise TTransportException(type=TTransportException.END_OF_FILE, message='TSocket read 0 bytes')
> TTransportException: TSocket read 0 bytes
> --
> {noformat}
> The logs have the following stack trace:
> {noformat}
> ERROR [Thrift:1] 2012-06-18 18:17:27,865 CustomTThreadPoolServer.java (line 204) Error occurred during processing of message.
> org.apache.cassandra.db.marshal.MarshalException: A long is exactly 8 bytes: 16
>         at org.apache.cassandra.db.marshal.LongType.getString(LongType.java:72)
>         at org.apache.cassandra.cql3.ColumnIdentifier.<init>(ColumnIdentifier.java:47)
>         at org.apache.cassandra.cql3.CFDefinition.<init>(CFDefinition.java:115)
>         at org.apache.cassandra.config.CFMetaData.updateCfDef(CFMetaData.java:1303)
>         at org.apache.cassandra.config.CFMetaData.columnMetadata(CFMetaData.java:228)
>         at org.apache.cassandra.config.CFMetaData.fromThrift(CFMetaData.java:648)
>         at org.apache.cassandra.thrift.CassandraServer.system_update_column_family(CassandraServer.java:1061)
>         at org.apache.cassandra.thrift.Cassandra$Processor$system_update_column_family.getResult(Cassandra.java:3520)
>         at org.apache.cassandra.thrift.Cassandra$Processor$system_update_column_family.getResult(Cassandra.java:3508)
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
>         at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Wor
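The server-side failure above starts with "MarshalException: A long is exactly 8 bytes: 16" — the column name was validated as a LongType value, which must be a big-endian 8-byte integer, and a 16-byte value was rejected. A hedged Python sketch of that length check (illustrative only, not the Cassandra implementation):

```python
import struct

# Illustrative sketch of the check behind "A long is exactly 8 bytes: 16":
# a LongType value is a big-endian 8-byte signed integer, so any other
# length is rejected before decoding is even attempted.

def long_to_string(raw):
    if len(raw) != 8:
        raise ValueError("A long is exactly 8 bytes: %d" % len(raw))
    # ">q" = big-endian signed 64-bit integer
    return str(struct.unpack(">q", raw)[0])

print(long_to_string(struct.pack(">q", 42)))  # "42"
```

The "16" in the logged message is the length of the offending value, which explains why a column definition with a non-long name slips past the Thrift layer but blows up during validation.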
[jira] [Commented] (CASSANDRA-4049) Add generic way of adding SSTable components required custom compaction strategy
[ https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396119#comment-13396119 ]

Piotr Kołaczkowski commented on CASSANDRA-4049:
-----------------------------------------------

Custom1..Custom5 types would be ok, but we'd like them to at least get meaningful names in the DSE code and in the data directory. It can bite us pretty fast if there are files named sblocks-custom1.dat, sblocks-custom2.dat, etc. - the name doesn't say anything about what is inside.

> Add generic way of adding SSTable components required custom compaction
> strategy
> ------------------------------------------------------------------------
>
>                 Key: CASSANDRA-4049
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Piotr Kołaczkowski
>            Assignee: Piotr Kołaczkowski
>            Priority: Minor
>              Labels: compaction
>             Fix For: 1.1.2
>
>         Attachments: compaction_strategy_cleanup.patch, component_patch.diff
>
>
> CFS compaction strategy coming up in the next DSE release needs to store some
> important information in Tombstones.db and RemovedKeys.db files, one per
> sstable. However, currently Cassandra issues warnings when these files are
> found in the data directory. Additionally, when switched to
> SizeTieredCompactionStrategy, the files are left in the data directory after
> compaction.
> The attached patch adds new components to the Component class so Cassandra
> knows about those files.
git commit: typo in CHANGES
Updated Branches:
  refs/heads/trunk 054358bf4 -> a67a03996

typo in CHANGES

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a67a0399
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a67a0399
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a67a0399

Branch: refs/heads/trunk
Commit: a67a0399602fe3235abb8b8f7c105690d317e36a
Parents: 054358b
Author: Brandon Williams
Authored: Mon Jun 18 13:27:52 2012 -0500
Committer: Brandon Williams
Committed: Mon Jun 18 13:27:52 2012 -0500

--
 CHANGES.txt |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a67a0399/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ae1ce7a..7943112 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -15,7 +15,7 @@
  * Add support for range tombstones (CASSANDRA-3708)
  * Improve MessagingService efficiency (CASSANDRA-3617)
  * Avoid ID conflicts from concurrent schema changes (CASSANDRA-3794)
- * Set thrift HSHA server thread limit to unlimet by default (CASSANDRA-4277)
+ * Set thrift HSHA server thread limit to unlimited by default (CASSANDRA-4277)
  * Avoids double serialization of CF id in RowMutation messages (CASSANDRA-4293)
  * fix Summary component and caches to use correct partitioner (CASSANDRA-4289)
[2/3] git commit: cqlsh: add COPY command to load data from CSV flat files Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4012
cqlsh: add COPY command to load data from CSV flat files
Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4012

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0ba2631e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0ba2631e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0ba2631e

Branch: refs/heads/trunk
Commit: 0ba2631ee228bdefaba61a53d723a65107ca044d
Parents: 0cc168a
Author: Brandon Williams
Authored: Mon Jun 18 13:24:32 2012 -0500
Committer: Brandon Williams
Committed: Mon Jun 18 13:24:32 2012 -0500

--
 CHANGES.txt                    |    1 +
 bin/cqlsh                      |  227 +--
 pylib/cqlshlib/cql3handling.py |    4 +-
 pylib/cqlshlib/cqlhandling.py  |   15 ++-
 4 files changed, 233 insertions(+), 14 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ba2631e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 693b03b..ec03ca6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * (cql3) Reject (not yet supported) creation of 2ndardy indexes on
    tables with composite primary keys (CASSANDRA-4328)
  * Set JVM stack size to 160k for java 7 (CASSANDRA-4275)
+ * cqlsh: add COPY command to load data from CSV flat files (CASSANDRA-4012)
 Merged from 1.0:
  * Set gc_grace on index CF to 0 (CASSANDRA-4314)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ba2631e/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index fecd472..842a313 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -36,7 +36,7 @@ version = "2.2.0"

 from StringIO import StringIO
 from itertools import groupby
-from contextlib import contextmanager
+from contextlib import contextmanager, closing
 from glob import glob
 from functools import partial
 from collections import defaultdict
@@ -52,6 +52,7 @@
 import locale
 import re
 import platform
 import warnings
+import csv

 # cqlsh should run correctly when run out of a Cassandra source tree,
 # out of an unpacked Cassandra tarball, and after a proper package install.

(the nonterminal names in the grammar hunks below were stripped by the archive and are left as-is)

@@ -189,6 +190,7 @@ cqlsh_extra_syntax_rules = r'''
                 |
                 |
                 |
+                |
                 |
                 |
                 |
                 |
@@ -220,6 +222,15 @@
  ::= "CAPTURE" ( fname=( | "OFF" ) )?
     ;

+ ::= "COPY" cf=
+        ( "(" [colnames]= ( "," [colnames]= )* ")" )?
+        "FROM" ( fname= | "STDIN" )
+        ( "WITH" ( "AND" )* )?
+    ;
+
+ ::= [optnames]= "=" [optvals]=
+    ;
+
 # avoiding just "DEBUG" so that this rule doesn't get treated as a terminal
  ::= "DEBUG" "THINGS"?
     ;
@@ -272,6 +283,41 @@
 cqlsh_syntax_completer('sourceCommand', 'fname') \
 cqlsh_syntax_completer('captureCommand', 'fname') \
     (complete_source_quoted_filename)

+@cqlsh_syntax_completer('copyCommand', 'fname')
+def copy_fname_completer(ctxt, cqlsh):
+    lasttype = ctxt.get_binding('*LASTTYPE*')
+    if lasttype == 'unclosedString':
+        return complete_source_quoted_filename(ctxt, cqlsh)
+    partial = ctxt.get_binding('partial')
+    if partial == '':
+        return ["'"]
+    return ()
+
+@cqlsh_syntax_completer('copyCommand', 'colnames')
+def complete_copy_column_names(ctxt, cqlsh):
+    existcols = map(cqlsh.cql_unprotect_name, ctxt.get_binding('colnames', ()))
+    ks = cqlsh.cql_unprotect_name(ctxt.get_binding('ksname', None))
+    cf = cqlsh.cql_unprotect_name(ctxt.get_binding('cfname'))
+    colnames = cqlsh.get_column_names(ks, cf)
+    if len(existcols) == 0:
+        return [colnames[0]]
+    return set(colnames[1:]) - set(existcols)
+
+COPY_OPTIONS = ('DELIMITER', 'QUOTE', 'ESCAPE', 'HEADER')
+
+@cqlsh_syntax_completer('copyOption', 'optnames')
+def complete_copy_options(ctxt, cqlsh):
+    optnames = map(str.upper, ctxt.get_binding('optnames', ()))
+    return set(COPY_OPTIONS) - set(optnames)
+
+@cqlsh_syntax_completer('copyOption', 'optvals')
+def complete_copy_opt_values(ctxt, cqlsh):
+    optnames = ctxt.get_binding('optnames', ())
+    lastopt = optnames[-1].lower()
+    if lastopt == 'header':
+        return ['true', 'false']
+    return [cqlhandling.Hint('')]
+
 class NoKeyspaceError(Exception):
     pass

@@ -469,6 +515,22 @@ def show_warning_without_quoting_line(message, category, filename, lineno, file=
 warnings.showwarning = show_warning_without_quoting_line
 warnings.filterwarnings('always', category=cql3handling.UnexpectedTableStructure)

+def describe_interval(seconds):
+    desc = []
+    for length, unit in
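The patch imports Python's csv module and exposes the WITH options listed in its COPY_OPTIONS tuple (DELIMITER, QUOTE, ESCAPE, HEADER). A hedged sketch of how such options map onto csv.reader — the function name and the defaults here are illustrative assumptions, not taken from the patch:

```python
import csv
import io

# Sketch of mapping COPY's WITH options onto Python's csv module (which the
# patch imports in bin/cqlsh). The option names DELIMITER, QUOTE, ESCAPE and
# HEADER come from the patch's COPY_OPTIONS tuple; the function and default
# values here are assumptions for illustration only.

def read_copy_rows(text, delimiter=',', quote='"', escape=None, header=False):
    """Parse CSV text into rows, optionally skipping a header row."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter,
                        quotechar=quote, escapechar=escape)
    rows = list(reader)
    return rows[1:] if header else rows

data = 'id,name\n1,"a,b"\n2,plain\n'
print(read_copy_rows(data, header=True))  # [['1', 'a,b'], ['2', 'plain']]
```

Delegating quoting and escaping to csv.reader rather than splitting on the delimiter by hand is what makes quoted fields containing the delimiter (like "a,b" above) round-trip correctly.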
[1/3] git commit: Merge branch 'cassandra-1.1' into trunk
Updated Branches:
  refs/heads/cassandra-1.1 0cc168a96 -> 0ba2631ee
  refs/heads/trunk d0d21aded -> 054358bf4

Merge branch 'cassandra-1.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/054358bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/054358bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/054358bf

Branch: refs/heads/trunk
Commit: 054358bf44cee49a1b1f5bed8cdc86ae2408b3c1
Parents: d0d21ad 0ba2631
Author: Brandon Williams
Authored: Mon Jun 18 13:24:57 2012 -0500
Committer: Brandon Williams
Committed: Mon Jun 18 13:24:57 2012 -0500

--
 CHANGES.txt                    |    1 +
 bin/cqlsh                      |  227 +--
 pylib/cqlshlib/cql3handling.py |    4 +-
 pylib/cqlshlib/cqlhandling.py  |   15 ++-
 4 files changed, 233 insertions(+), 14 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/054358bf/CHANGES.txt
--
[2/3] git commit: Fix cqlsh ASSUME broken by CASSANDRA-4198. Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4352
Fix cqlsh ASSUME broken by CASSANDRA-4198.
Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4352

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0cc168a9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0cc168a9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0cc168a9

Branch: refs/heads/trunk
Commit: 0cc168a966bf4dc11db6b61e6b5b5d6771031804
Parents: 6dddf36
Author: Brandon Williams
Authored: Mon Jun 18 13:09:50 2012 -0500
Committer: Brandon Williams
Committed: Mon Jun 18 13:09:50 2012 -0500

--
 bin/cqlsh |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cc168a9/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 06e0e13..fecd472 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -901,10 +901,10 @@ class Shell(cmd.Cmd):
         HELP SELECT_LIMIT
         HELP CONSISTENCYLEVEL
         """
-        ksname = parsed.get_binding('selectks')
+        ksname = parsed.get_binding('ksname')
         if ksname is not None:
             ksname = self.cql_unprotect_name(ksname)
-        cfname = self.cql_unprotect_name(parsed.get_binding('selectsource'))
+        cfname = self.cql_unprotect_name(parsed.get_binding('cfname'))
         decoder = self.determine_decoder_for(cfname, ksname=ksname)
         self.perform_statement(parsed.extract_orig(), decoder=decoder)
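The regression fixed here is a stringly-typed binding mismatch: CASSANDRA-4198 renamed the grammar bindings, but this code path still looked them up under the old names ('selectks', 'selectsource'), and get_binding silently returned None instead of failing. A minimal Python sketch of that failure mode (class and binding names are illustrative, mirroring the diff above, not the actual cqlsh internals):

```python
# Illustrative sketch of the binding mismatch behind this fix: a parse result
# keyed by binding name returns None for lookups under a stale name, so the
# breakage surfaces far from the rename instead of at the lookup site.

class ParsedStatement:
    def __init__(self, bindings):
        self._bindings = bindings

    def get_binding(self, name, default=None):
        # Silently falls back to the default for unknown names -- the root
        # of the ASSUME breakage.
        return self._bindings.get(name, default)

parsed = ParsedStatement({'ksname': 'ks1', 'cfname': 'cf1'})  # post-4198 names
print(parsed.get_binding('selectsource'))  # None -- stale pre-4198 name
print(parsed.get_binding('cfname'))        # cf1
```

A dict-backed lookup with a silent default is convenient for optional bindings, but it turns every rename into a latent bug; the fix is simply to update the lookup names to match the grammar.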
[1/3] git commit: Merge branch 'cassandra-1.1' into trunk
Updated Branches: refs/heads/cassandra-1.1 6dddf360e -> 0cc168a96 refs/heads/trunk 72a2e528c -> d0d21aded Merge branch 'cassandra-1.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d0d21ade Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d0d21ade Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d0d21ade Branch: refs/heads/trunk Commit: d0d21aded73af5879ea3e2422269c0023da75d11 Parents: 72a2e52 0cc168a Author: Brandon Williams Authored: Mon Jun 18 13:11:16 2012 -0500 Committer: Brandon Williams Committed: Mon Jun 18 13:11:16 2012 -0500 -- bin/cqlsh |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) --
[3/3] git commit: Fix cqlsh ASSUME broken by CASSANDRA-4198. Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4352
Fix cqlsh ASSUME broken by CASSANDRA-4198. Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4352 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0cc168a9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0cc168a9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0cc168a9 Branch: refs/heads/cassandra-1.1 Commit: 0cc168a966bf4dc11db6b61e6b5b5d6771031804 Parents: 6dddf36 Author: Brandon Williams Authored: Mon Jun 18 13:09:50 2012 -0500 Committer: Brandon Williams Committed: Mon Jun 18 13:09:50 2012 -0500 -- bin/cqlsh |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cc168a9/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 06e0e13..fecd472 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -901,10 +901,10 @@ class Shell(cmd.Cmd): HELP SELECT_LIMIT HELP CONSISTENCYLEVEL """ -ksname = parsed.get_binding('selectks') +ksname = parsed.get_binding('ksname') if ksname is not None: ksname = self.cql_unprotect_name(ksname) -cfname = self.cql_unprotect_name(parsed.get_binding('selectsource')) +cfname = self.cql_unprotect_name(parsed.get_binding('cfname')) decoder = self.determine_decoder_for(cfname, ksname=ksname) self.perform_statement(parsed.extract_orig(), decoder=decoder)
[jira] [Created] (CASSANDRA-4354) Add default range constraint to prevent non-intuitive results when using composite key
Leonid Ilyevsky created CASSANDRA-4354: -- Summary: Add default range constraint to prevent non-intuitive results when using composite key Key: CASSANDRA-4354 URL: https://issues.apache.org/jira/browse/CASSANDRA-4354 Project: Cassandra Issue Type: New Feature Components: Core Affects Versions: 1.1.1 Environment: Any Reporter: Leonid Ilyevsky When ByteOrderedPartitioner is used, and the table has a composite primary key, the result of the query may be logically incorrect if only one inequality is specified. For example, let's say x and y are components of the key. A query with a predicate like "x = ?" will give the correct answer, as will "x = ? and y >= ? and y <= ?". However, the predicate "x = ? and y >= ?" may return rows with different values of x. This behavior is understandable because we know how the composite key is used internally, but it is very confusing for users with SQL experience, and indeed is very inconvenient overall. This can be easily fixed by automatically adding the complementary inequality constraint, using a bit sequence of all zeroes or all ones, depending on the side. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
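The fix proposed above can be sketched as follows. This is an illustration only, not Cassandra code; the function name and the fixed component width are hypothetical.

```python
# Hypothetical illustration (not Cassandra code) of the proposed fix:
# when only one inequality is given for the last component y of a
# composite key (x, y), synthesize the missing bound from all-zero or
# all-one bytes so the byte-ordered scan cannot cross into another x.

def complete_range(x, y_lower=None, y_upper=None, width=8):
    """Return (start, end) byte keys for partition component x;
    a missing y bound is padded to the component width."""
    if y_lower is None:
        y_lower = b"\x00" * width   # smallest possible y for this x
    if y_upper is None:
        y_upper = b"\xff" * width   # largest possible y for this x
    return x + y_lower, x + y_upper

# "x = ? and y >= ?" becomes a closed range ending at the all-ones bound:
start, end = complete_range(b"k1", y_lower=b"\x00" * 7 + b"\x05")
```

Because both synthesized keys share the prefix `x`, the scan can never return a row with a different first component.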
[jira] [Updated] (CASSANDRA-4353) no error propagated to client when updating a column family with an invalid column def
[ https://issues.apache.org/jira/browse/CASSANDRA-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Overton updated CASSANDRA-4353: --- Attachment: 4353.patch Attached patch which re-throws MarshalException as InvalidRequestException in CFMetaData.fromThrift > no error propagated to client when updating a column family with an invalid > column def > -- > > Key: CASSANDRA-4353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4353 > Project: Cassandra > Issue Type: Bug > Components: API >Affects Versions: 1.1.1 >Reporter: Sam Overton >Priority: Minor > Attachments: 4353.patch > > > CASSANDRA-3761 appears to have introduced a regression which is exposed by > test_system_column_family_operations in test/system/test_thrift_server.py > The test fails with this stack trace: > {noformat} > == > ERROR: > system.test_thrift_server.TestMutations.test_system_column_family_operations > -- > Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/nose/case.py", line 183, in runTest > self.test(*self.arg) > File > "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", > line 1469, in test_system_column_family_operations > _expect_exception(fail_invalid_field, InvalidRequestException) > File > "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", > line 209, in _expect_exception > r = fn() > File > "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", > line 1468, in fail_invalid_field > client.system_update_column_family(modified_cf) > File "/usr/lib/python2.6/site-packages/cassandra/Cassandra.py", line 1892, > in system_update_column_family > return self.recv_system_update_column_family() > File "/usr/lib/python2.6/site-packages/cassandra/Cassandra.py", line 1903, > in recv_system_update_column_family > (fname, mtype, rseqid) = self._iprot.readMessageBegin() > File > "/usr/lib64/python2.6/site-packages/thrift/protocol/TBinaryProtocol.py", line > 126, in 
readMessageBegin > sz = self.readI32() > File > "/usr/lib64/python2.6/site-packages/thrift/protocol/TBinaryProtocol.py", line > 203, in readI32 > buff = self.trans.readAll(4) > File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", > line 58, in readAll > chunk = self.read(sz-have) > File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", > line 272, in read > self.readFrame() > File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", > line 276, in readFrame > buff = self.__trans.readAll(4) > File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", > line 58, in readAll > chunk = self.read(sz-have) > File "/usr/lib64/python2.6/site-packages/thrift/transport/TSocket.py", line > 108, in read > raise TTransportException(type=TTransportException.END_OF_FILE, > message='TSocket read 0 bytes') > TTransportException: TSocket read 0 bytes > -- > {noformat} > The logs have the following stack trace: > {noformat} > ERROR [Thrift:1] 2012-06-18 18:17:27,865 CustomTThreadPoolServer.java (line > 204) Error occurred during processing of message. 
> org.apache.cassandra.db.marshal.MarshalException: A long is exactly 8 bytes: > 16 > at > org.apache.cassandra.db.marshal.LongType.getString(LongType.java:72) > at > org.apache.cassandra.cql3.ColumnIdentifier.<init>(ColumnIdentifier.java:47) > at > org.apache.cassandra.cql3.CFDefinition.<init>(CFDefinition.java:115) > at > org.apache.cassandra.config.CFMetaData.updateCfDef(CFMetaData.java:1303) > at > org.apache.cassandra.config.CFMetaData.columnMetadata(CFMetaData.java:228) > at > org.apache.cassandra.config.CFMetaData.fromThrift(CFMetaData.java:648) > at > org.apache.cassandra.thrift.CassandraServer.system_update_column_family(CassandraServer.java:1061) > at > org.apache.cassandra.thrift.Cassandra$Processor$system_update_column_family.getResult(Cassandra.java:3520) > at > org.apache.cassandra.thrift.Cassandra$Processor$system_update_column_family.getResult(Cassandra.java:3508) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34) > at > org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) >
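The pattern of the attached fix (re-throwing the low-level MarshalException as an InvalidRequestException the Thrift layer can return to the client, instead of letting it escape and close the connection) can be illustrated with a small Python sketch. The class and function names here are hypothetical stand-ins, not Cassandra's actual API.

```python
# Hypothetical sketch (not Cassandra's actual API) of the attached fix:
# a validation failure while decoding client-supplied metadata is
# re-raised as a request-level error the protocol can report, instead of
# propagating uncaught and closing the client's socket.

class MarshalError(Exception):
    """Stands in for o.a.c.db.marshal.MarshalException."""

class InvalidRequestError(Exception):
    """Stands in for the Thrift-visible InvalidRequestException."""

def column_metadata_from_thrift(raw_name, width=8):
    # Mimics the validation that failed: a long column name must be 8 bytes.
    if len(raw_name) != width:
        raise MarshalError("A long is exactly %d bytes: %d" % (width, len(raw_name)))
    return raw_name

def update_column_family(raw_name):
    try:
        return column_metadata_from_thrift(raw_name)
    except MarshalError as e:
        # The fix: surface the error to the client as an invalid request.
        raise InvalidRequestError(str(e))
```

With this wrapping, the client sees the validation message rather than "TSocket read 0 bytes".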
[jira] [Created] (CASSANDRA-4353) no error propagated to client when updating a column family with an invalid column def
Sam Overton created CASSANDRA-4353: -- Summary: no error propagated to client when updating a column family with an invalid column def Key: CASSANDRA-4353 URL: https://issues.apache.org/jira/browse/CASSANDRA-4353 Project: Cassandra Issue Type: Bug Components: API Affects Versions: 1.1.1 Reporter: Sam Overton Priority: Minor CASSANDRA-3761 appears to have introduced a regression which is exposed by test_system_column_family_operations in test/system/test_thrift_server.py The test fails with this stack trace: {noformat} == ERROR: system.test_thrift_server.TestMutations.test_system_column_family_operations -- Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/nose/case.py", line 183, in runTest self.test(*self.arg) File "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", line 1469, in test_system_column_family_operations _expect_exception(fail_invalid_field, InvalidRequestException) File "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", line 209, in _expect_exception r = fn() File "/opt/acunu/tests/cassandra-tests.hg/thrift/system/test_thrift_server.py", line 1468, in fail_invalid_field client.system_update_column_family(modified_cf) File "/usr/lib/python2.6/site-packages/cassandra/Cassandra.py", line 1892, in system_update_column_family return self.recv_system_update_column_family() File "/usr/lib/python2.6/site-packages/cassandra/Cassandra.py", line 1903, in recv_system_update_column_family (fname, mtype, rseqid) = self._iprot.readMessageBegin() File "/usr/lib64/python2.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 126, in readMessageBegin sz = self.readI32() File "/usr/lib64/python2.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 203, in readI32 buff = self.trans.readAll(4) File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", line 58, in readAll chunk = self.read(sz-have) File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", line 
272, in read self.readFrame() File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", line 276, in readFrame buff = self.__trans.readAll(4) File "/usr/lib64/python2.6/site-packages/thrift/transport/TTransport.py", line 58, in readAll chunk = self.read(sz-have) File "/usr/lib64/python2.6/site-packages/thrift/transport/TSocket.py", line 108, in read raise TTransportException(type=TTransportException.END_OF_FILE, message='TSocket read 0 bytes') TTransportException: TSocket read 0 bytes -- {noformat} The logs have the following stack trace: {noformat} ERROR [Thrift:1] 2012-06-18 18:17:27,865 CustomTThreadPoolServer.java (line 204) Error occurred during processing of message. org.apache.cassandra.db.marshal.MarshalException: A long is exactly 8 bytes: 16 at org.apache.cassandra.db.marshal.LongType.getString(LongType.java:72) at org.apache.cassandra.cql3.ColumnIdentifier.<init>(ColumnIdentifier.java:47) at org.apache.cassandra.cql3.CFDefinition.<init>(CFDefinition.java:115) at org.apache.cassandra.config.CFMetaData.updateCfDef(CFMetaData.java:1303) at org.apache.cassandra.config.CFMetaData.columnMetadata(CFMetaData.java:228) at org.apache.cassandra.config.CFMetaData.fromThrift(CFMetaData.java:648) at org.apache.cassandra.thrift.CassandraServer.system_update_column_family(CassandraServer.java:1061) at org.apache.cassandra.thrift.Cassandra$Processor$system_update_column_family.getResult(Cassandra.java:3520) at org.apache.cassandra.thrift.Cassandra$Processor$system_update_column_family.getResult(Cassandra.java:3508) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34) at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636)
{noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4352) cqlsh: ASSUME functionality broken by CASSANDRA-4198 fix
[ https://issues.apache.org/jira/browse/CASSANDRA-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-4352: -- Reviewer: brandon.williams > cqlsh: ASSUME functionality broken by CASSANDRA-4198 fix > > > Key: CASSANDRA-4352 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4352 > Project: Cassandra > Issue Type: Bug > Components: Tools >Affects Versions: 1.1.1 >Reporter: paul cannon >Assignee: paul cannon >Priority: Minor > Labels: cqlsh > Fix For: 1.1.2 > > Attachments: 4352.patch.txt > > > All uses of the {{ASSUME}} command in cqlsh now appear to be wholly > ineffective at affecting subsequent value output. > This is due to a change in the grammar definition introduced by the fix for > CASSANDRA-4198, upon which definition the ASSUME functionality relied. > All that's needed to fix is to update the token-binding names used. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4352) cqlsh: ASSUME functionality broken by CASSANDRA-4198 fix
[ https://issues.apache.org/jira/browse/CASSANDRA-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] paul cannon updated CASSANDRA-4352: --- Attachment: 4352.patch.txt Fix attached, or also available in the 4352 patch in my github: https://github.com/thepaul/cassandra/tree/4352 Current revision of patch is tagged as pending/4352. > cqlsh: ASSUME functionality broken by CASSANDRA-4198 fix > > > Key: CASSANDRA-4352 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4352 > Project: Cassandra > Issue Type: Bug > Components: Tools >Affects Versions: 1.1.1 >Reporter: paul cannon >Assignee: paul cannon >Priority: Minor > Labels: cqlsh > Fix For: 1.1.2 > > Attachments: 4352.patch.txt > > > All uses of the {{ASSUME}} command in cqlsh now appear to be wholly > ineffective at affecting subsequent value output. > This is due to a change in the grammar definition introduced by the fix for > CASSANDRA-4198, upon which definition the ASSUME functionality relied. > All that's needed to fix is to update the token-binding names used. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-4352) cqlsh: ASSUME functionality broken by CASSANDRA-4198 fix
paul cannon created CASSANDRA-4352: -- Summary: cqlsh: ASSUME functionality broken by CASSANDRA-4198 fix Key: CASSANDRA-4352 URL: https://issues.apache.org/jira/browse/CASSANDRA-4352 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.1.1 Reporter: paul cannon Assignee: paul cannon Priority: Minor Fix For: 1.1.2 All uses of the {{ASSUME}} command in cqlsh now appear to be wholly ineffective at affecting subsequent value output. This is due to a change in the grammar definition introduced by the fix for CASSANDRA-4198, upon which definition the ASSUME functionality relied. All that's needed to fix is to update the token-binding names used. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4239) Support Thrift SSL socket
[ https://issues.apache.org/jira/browse/CASSANDRA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-4239: -- Reviewer: brandon.williams Assignee: Pavel Yaskevich (was: Brandon Williams) > Support Thrift SSL socket > - > > Key: CASSANDRA-4239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4239 > Project: Cassandra > Issue Type: New Feature > Components: API >Reporter: Jonathan Ellis >Assignee: Pavel Yaskevich >Priority: Minor > Fix For: 1.1.2 > > > Thrift has supported SSL encryption for a while now (THRIFT-106); we should > allow configuring that in cassandra.yaml -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (CASSANDRA-4041) Allow updating column_alias types
[ https://issues.apache.org/jira/browse/CASSANDRA-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-4041: - Assignee: Pavel Yaskevich > Allow updating column_alias types > - > > Key: CASSANDRA-4041 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4041 > Project: Cassandra > Issue Type: Sub-task > Components: API >Reporter: Sylvain Lebresne >Assignee: Pavel Yaskevich >Priority: Minor > Fix For: 1.1.2 > > > CASSANDRA-3657 has added the ability to change comparators (including parts > of a compositeType) when compatible. The code of CQL3 forbids it currently > however so we should lift that limitation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4212) COMPACT STORAGE should not require a value to be aliased
[ https://issues.apache.org/jira/browse/CASSANDRA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396028#comment-13396028 ] Jonathan Ellis commented on CASSANDRA-4212: --- Is this now redundant wrt CASSANDRA-4329 ? > COMPACT STORAGE should not require a value to be aliased > > > Key: CASSANDRA-4212 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4212 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.1.0 >Reporter: Jonathan Ellis >Assignee: Sylvain Lebresne > Labels: cql3 > Fix For: 1.1.2 > > Attachments: 4212.txt > > > It's legitimate to only need the column name in a schema, e.g., > system.NodeIdInfo. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4351) Consider storing more information on peers in system tables
[ https://issues.apache.org/jira/browse/CASSANDRA-4351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396021#comment-13396021 ] Jonathan Ellis commented on CASSANDRA-4351: --- Sounds reasonable. > Consider storing more information on peers in system tables > - > > Key: CASSANDRA-4351 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4351 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Sylvain Lebresne >Priority: Minor > Fix For: 1.2 > > > Currently, the only things we keep in system tables about other peers are their > tokens and IP addresses. We should probably also record the new ring_id, but > since CASSANDRA-4018 makes the system tables easily queryable, maybe it would be > worth adding some more information (basically most of what we gossip could be > a candidate (schema UUID, status, C* version, ...)) as a simple way to expose > the ring state to users (even if it's just a "view" of the ring state from > one specific node, I believe it's still nice). > Of course that means storing information that may not be absolutely needed by > the server, but I'm not sure there is much harm to that. > Note that doing this cleanly may require changing the schema of the current > system tables, but as long as we do that in the 1.2 timeframe it's ok (since > the concerned system tables 'local' and 'peers' are new anyway). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4327) Sorting results when using IN()
[ https://issues.apache.org/jira/browse/CASSANDRA-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-4327: -- Reviewer: xedin > Sorting results when using IN() > > > Key: CASSANDRA-4327 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4327 > Project: Cassandra > Issue Type: Improvement >Affects Versions: 1.1.1 >Reporter: Stephen Powis >Priority: Minor > Labels: cql3, patch, sort > Fix For: 1.1.2 > > Attachments: trunk-4327.txt > > > Using the following test schema: > CREATE TABLE test ( > my_id varchar, > time_id uuid, > value int, > PRIMARY KEY (my_id, time_id) > ); > When you issue a CQL3 query like: > select * from test where my_id in('key1', 'key2') order by time_id; > You receive the error: > "Ordering is only supported if the first part of the PRIMARY KEY is > restricted by an Equal" > I'm including a patch I put together after spending an hour or two poking > thru the code base that sorts the results for these types of queries. I'm > hoping someone with a deeper understanding of Cassandra's code base can take > a look at it, clean it up or use it as a starting place, and include it in an > upcoming release. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3942) ColumnFamilyRecordReader can report progress > 100%
[ https://issues.apache.org/jira/browse/CASSANDRA-3942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395993#comment-13395993 ] Brandon Williams commented on CASSANDRA-3942: - So, is the idea simply to clamp it at 1.0? Since all we have is an estimate, we can't really get any more accurate. > ColumnFamilyRecordReader can report progress > 100% > --- > > Key: CASSANDRA-3942 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3942 > Project: Cassandra > Issue Type: Bug >Affects Versions: 0.6 >Reporter: T Jake Luciani >Assignee: Brandon Williams >Priority: Minor > Fix For: 1.1.2 > > > CFRR.getProgress() can return a value > 1.0 since the totalRowCount is an > estimate. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
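The clamping suggested in the comment above amounts to capping the ratio at 1.0. A minimal sketch, with hypothetical names rather than the actual CFRR code:

```python
# Hypothetical sketch of the clamp discussed above: totalRowCount is only
# an estimate, so the raw ratio rows_read / total can exceed 1.0 and must
# be capped at 1.0 before being reported to the Hadoop framework.

def get_progress(rows_read, estimated_total_rows):
    if estimated_total_rows <= 0:
        return 0.0  # avoid division by zero when no estimate is available
    return min(1.0, rows_read / float(estimated_total_rows))
```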
[jira] [Commented] (CASSANDRA-4349) PFS should give a friendlier error message when a node has not been configured
[ https://issues.apache.org/jira/browse/CASSANDRA-4349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395989#comment-13395989 ] Brandon Williams commented on CASSANDRA-4349: - +1 > PFS should give a friendlier error message when a node has not been configured > -- > > Key: CASSANDRA-4349 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4349 > Project: Cassandra > Issue Type: Bug >Reporter: Jonathan Ellis >Assignee: Jonathan Ellis >Priority: Minor > Fix For: 1.1.2 > > Attachments: 4349.txt > > > see CASSANDRA-4345 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4347) IP change of node requires assassinate to really remove old IP
[ https://issues.apache.org/jira/browse/CASSANDRA-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395983#comment-13395983 ] Brandon Williams commented on CASSANDRA-4347: - Can you attach the system table from a moved node and also the output from nodetool gossipinfo? > IP change of node requires assassinate to really remove old IP > -- > > Key: CASSANDRA-4347 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4347 > Project: Cassandra > Issue Type: Bug >Affects Versions: 1.0.10 > Environment: RHEL6, 64bit >Reporter: Karl Mueller >Priority: Minor > > In changing the IP addresses of nodes one-by-one, the node successfully moves > itself and its token. Everything works properly. > However, the node which had its IP changed (but NOT other nodes in the ring) > continues to have some type of state associated with the old IP and produces > log messages like this: > INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 804) > InetAddress /10.12.9.157 is now UP > INFO [GossipStage:1] 2012-06-15 15:25:01,491 StorageService.java (line 1017) > Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the > same token 113427455640312821154458202477256070484. Ignoring /10.12.9.157 > INFO [GossipTasks:1] 2012-06-15 15:25:11,373 Gossiper.java (line 818) > InetAddress /10.12.9.157 is now dead. 
> INFO [GossipTasks:1] 2012-06-15 15:25:32,380 Gossiper.java (line 632) > FatClient /10.12.9.157 has been silent for 3ms, removing from gossip > INFO [GossipStage:1] 2012-06-15 15:26:32,490 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > INFO [GossipStage:1] 2012-06-15 15:26:32,491 Gossiper.java (line 804) > InetAddress /10.12.9.157 is now UP > INFO [GossipStage:1] 2012-06-15 15:26:32,491 StorageService.java (line 1017) > Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the > same token 113427455640312821154458202477256070484. Ignoring /10.12.9.157 > INFO [GossipTasks:1] 2012-06-15 15:26:42,402 Gossiper.java (line 818) > InetAddress /10.12.9.157 is now dead. > INFO [GossipTasks:1] 2012-06-15 15:27:03,410 Gossiper.java (line 632) > FatClient /10.12.9.157 has been silent for 3ms, removing from gossip > INFO [GossipStage:1] 2012-06-15 15:28:04,533 Gossiper.java (line 838) Node > /10.12.9.157 is now part of the cluster > Other nodes do NOT have the old IP showing up in logs. It's only the node > that moved. > The old IP doesn't show up in ring anywhere or in any other fashion. The > cluster seems to be fully operational, so I think it's just a cleanup issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4259) Bug in SSTableReader.getSampleIndexesForRanges(...) causes uneven InputSplits generation for Hadoop mappers
[ https://issues.apache.org/jira/browse/CASSANDRA-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395970#comment-13395970 ] Jonathan Ellis commented on CASSANDRA-4259: --- To clarify: this was a regression introduced in 1.1.0; it should not affect 1.0.x. > Bug in SSTableReader.getSampleIndexesForRanges(...) causes uneven InputSplits > generation for Hadoop mappers > --- > > Key: CASSANDRA-4259 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4259 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 1.1.0 > Environment: Small cassandra cluster with 2 nodes. Version 1.1.0. > Tokens: 0, 85070591730234615865843651857942052864 > Hadoop 1.0.1 and Pig 0.10.0. >Reporter: Bartłomiej Romański >Assignee: Bartłomiej Romański > Fix For: 1.1.1 > > > Running a simple mapreduce job on a cassandra column family results in creating > multiple small mappers for one half of the ring and one big mapper for the > other half. The upper part (85... - 0) is cut into smaller slices. The lower part (0 > - 85...) generates one big input slice. One mapper processing half of the > ring causes huge inefficiency. Also the progress meter for this mapper is > incorrect - it goes to 100% in a couple of seconds, then stays at 100% for an > hour or two. > I've investigated the problem a bit. I think it is related to incorrect > output of 'nodetool rangekeysample'. On the node responsible for part (0 - > 85...) the output is empty! On the other node it works fine. > I think the bug is in SSTableReader.getSampleIndexesForRanges(...). These two > lines: >RowPosition leftPosition = range.left.maxKeyBound(); >RowPosition rightPosition = range.left.maxKeyBound(); > should be changed to: >RowPosition leftPosition = range.left.maxKeyBound(); >RowPosition rightPosition = range.right.maxKeyBound(); > After that fix the output of nodetool is correct and the whole ring is split > into small mappers. 
> The other half of the ring works fine because of an extra 'if' in the code: >int right = Range.isWrapAround(range.left, range.right)... > This means the bug does not show up in a one-node cluster or in the > "last" ring partition in multi-node clusters. > Can anyone look at it and verify my thoughts? I'm rather new to Cassandra. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
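Why the copy-paste bug described above produces an empty 'rangekeysample' can be shown with a toy example: using the range's left bound for both ends of the interval selects a slice that is empty by construction. This is an illustration, not Cassandra code.

```python
# Toy illustration (not Cassandra code) of the copy-paste bug described
# above: taking the range's left bound for both ends of the interval
# selects an empty slice of the index sample, which is why
# 'nodetool rangekeysample' printed nothing for the non-wrapping half.

samples = [10, 20, 30, 40, 50]   # sorted index-sample positions
left, right = 15, 45             # token range being queried

buggy = [s for s in samples if left < s <= left]    # right bound mistakenly = left
fixed = [s for s in samples if left < s <= right]   # corrected right bound
```

The buggy predicate `left < s <= left` can never be true, so the sample for that range is always empty.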
[jira] [Updated] (CASSANDRA-3942) ColumnFamilyRecordReader can report progress > 100%
[ https://issues.apache.org/jira/browse/CASSANDRA-3942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-3942: -- Fix Version/s: (was: 1.0.11) 1.1.2 Assignee: Brandon Williams (was: T Jake Luciani) > ColumnFamilyRecordReader can report progress > 100% > --- > > Key: CASSANDRA-3942 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3942 > Project: Cassandra > Issue Type: Bug >Affects Versions: 0.6 >Reporter: T Jake Luciani >Assignee: Brandon Williams >Priority: Minor > Fix For: 1.1.2 > > > CFRR.getProgress() can return a value > 1.0 since the totalRowCount is an > estimate. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[3/3] Support multiple ranges in SliceQueryFilter
http://git-wip-us.apache.org/repos/asf/cassandra/blob/d1171ddc/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java -- diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java index f002483..0549f65 100644 --- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java +++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java @@ -22,10 +22,37 @@ import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; import java.nio.charset.CharacterCodingException; -import java.util.*; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.Iterator; +import java.util.LinkedList; +import java.util.List; +import java.util.Random; +import java.util.Set; import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; +import com.google.common.base.Function; +import com.google.common.collect.Iterables; +import org.apache.commons.lang.ArrayUtils; +import org.apache.commons.lang.StringUtils; +import org.junit.Test; + +import static org.junit.Assert.assertNull; +import static junit.framework.Assert.assertEquals; +import static junit.framework.Assert.assertSame; +import static junit.framework.Assert.assertTrue; +import static org.apache.cassandra.Util.column; +import static org.apache.cassandra.Util.dk; +import static org.apache.cassandra.Util.getBytes; +import static org.apache.cassandra.Util.rp; +import static org.apache.cassandra.db.TableTest.assertColumns; +import static org.apache.cassandra.utils.ByteBufferUtil.bytes; +import static org.apache.commons.lang.ArrayUtils.EMPTY_BYTE_ARRAY; + import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.Util; import org.apache.cassandra.config.ColumnDefinition; @@ -35,27 +62,20 @@ import org.apache.cassandra.db.filter.*; import org.apache.cassandra.db.index.SecondaryIndex; import 
org.apache.cassandra.db.marshal.LexicalUUIDType; import org.apache.cassandra.db.marshal.LongType; -import org.apache.cassandra.dht.*; +import org.apache.cassandra.dht.Bounds; +import org.apache.cassandra.dht.ExcludingBounds; +import org.apache.cassandra.dht.IPartitioner; +import org.apache.cassandra.dht.IncludingExcludingBounds; +import org.apache.cassandra.dht.Range; import org.apache.cassandra.io.sstable.Component; import org.apache.cassandra.io.sstable.Descriptor; -import org.apache.cassandra.io.sstable.SSTableReader; import org.apache.cassandra.io.sstable.SSTable; +import org.apache.cassandra.io.sstable.SSTableReader; import org.apache.cassandra.service.StorageService; import org.apache.cassandra.thrift.*; import org.apache.cassandra.utils.ByteBufferUtil; +import org.apache.cassandra.utils.Pair; import org.apache.cassandra.utils.WrappedRunnable; -import org.apache.commons.lang.ArrayUtils; -import org.apache.commons.lang.StringUtils; - -import static junit.framework.Assert.assertEquals; -import static junit.framework.Assert.assertTrue; -import static org.apache.cassandra.Util.column; -import static org.apache.cassandra.Util.getBytes; -import static org.apache.cassandra.Util.rp; -import static org.apache.cassandra.db.TableTest.assertColumns; -import static org.junit.Assert.assertNull; - -import org.junit.Test; public class ColumnFamilyStoreTest extends SchemaLoader { @@ -1009,4 +1029,372 @@ public class ColumnFamilyStoreTest extends SchemaLoader k += " " + ByteBufferUtil.string(r.key.key); return k; } + +@SuppressWarnings("unchecked") +@Test +public void testMultiRangeIndexed() throws Throwable +{ +// in order not to change thrift interfaces at this stage we build SliceQueryFilter +// directly instead of using QueryFilter to build it for us +ColumnSlice[] ranges = new ColumnSlice[] { +new ColumnSlice(ByteBuffer.wrap(EMPTY_BYTE_ARRAY), bytes("colA")), +new ColumnSlice(bytes("colC"), bytes("colE")), +new ColumnSlice(bytes("colG"), bytes("colG")), +new 
ColumnSlice(bytes("colI"), ByteBuffer.wrap(EMPTY_BYTE_ARRAY)) }; + +ColumnSlice[] rangesReversed = new ColumnSlice[] { +new ColumnSlice(ByteBuffer.wrap(EMPTY_BYTE_ARRAY), bytes("colI")), +new ColumnSlice(bytes("colG"), bytes("colG")), +new ColumnSlice(bytes("colE"), bytes("colC")), +new ColumnSlice(bytes("colA"), ByteBuffer.wrap(EMPTY_BYTE_ARRAY)) }; + +String tableName = "Keyspace1"; +String cfName = "Standard1"; +Table table = Table.open(tableName); +ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName); +cfs.clearUnsafe(); + +String[] letters = new String[] { "a", "b", "c", "d", "e", "f", "g", "h", "i" }; +Column[] cols = new Column[letters.lengt
[1/3] git commit: Add missing changelog entry
Updated Branches: refs/heads/trunk 8ea2d2a6a -> 72a2e528c Add missing changelog entry Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72a2e528 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72a2e528 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72a2e528 Branch: refs/heads/trunk Commit: 72a2e528c9f78de341afeff6717960e80417f21c Parents: d1171dd Author: Sylvain Lebresne Authored: Mon Jun 18 17:22:43 2012 +0200 Committer: Sylvain Lebresne Committed: Mon Jun 18 17:22:43 2012 +0200 -- CHANGES.txt |1 + 1 files changed, 1 insertions(+), 0 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/72a2e528/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 5bea5f4..91ba0b9 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -21,6 +21,7 @@ * fix Summary component and caches to use correct partitioner (CASSANDRA-4289) * stream compressed sstables directly with java nio (CASSANDRA-4297) * Support multiple ranges in SliceQueryFilter (CASSANDRA-3885) + * Add column metadata to system column families (CASSANDRA-4018) 1.1.2
[jira] [Commented] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter
[ https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395944#comment-13395944 ] David Alves commented on CASSANDRA-3885: +1 I reviewed the patch, looks good overall. It does not add failures other than the previously mentioned SerializationTest (CompactionsTest fails on a different method, for a different reason on my pc; that's expected, right?). > Support multiple ranges in SliceQueryFilter > --- > > Key: CASSANDRA-3885 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3885 > Project: Cassandra > Issue Type: Sub-task > Components: Core >Reporter: Jonathan Ellis >Assignee: David Alves > Fix For: 1.2 > > Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, > CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch > > > This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow > sub-sub-tasks. > We need to support multiple ranges in a SliceQueryFilter, and we want > querying them to be efficient, i.e., one pass through the row to get all of > the ranges, rather than one pass per range. > Supercolumns are irrelevant since the goal is to replace them anyway. Ignore > supercolumn-related code or rip it out, whichever is easier. > This is ONLY dealing with the storage engine part, not the StorageProxy and > Command intra-node messages or the Thrift or CQL client APIs. Thus, a unit > test should be added to ColumnFamilyStoreTest to demonstrate that it works. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
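The "one pass through the row to get all of the ranges" requirement quoted above can be sketched independently of Cassandra's internals. The class and method names below are hypothetical (not from the patch); the sketch only illustrates the idea of advancing a single range cursor monotonically while iterating the sorted columns once, instead of re-scanning the row for each range:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: collect all columns that fall inside any of several
// sorted, non-overlapping [start, end] ranges in a single pass over a
// sorted row. Both the columns and the ranges are consumed front to back,
// so the whole row is traversed at most once.
public class MultiRangeScan
{
    public static List<String> collect(List<String> sortedColumns, String[][] ranges)
    {
        List<String> result = new ArrayList<>();
        int r = 0; // index of the current range; only ever moves forward
        for (String name : sortedColumns)
        {
            // skip ranges that end before this column
            while (r < ranges.length && ranges[r][1].compareTo(name) < 0)
                r++;
            if (r == ranges.length)
                break; // no ranges left: the pass is done
            if (ranges[r][0].compareTo(name) <= 0)
                result.add(name); // column lies inside the current range
        }
        return result;
    }

    public static void main(String[] args)
    {
        List<String> cols = List.of("colA", "colB", "colC", "colD", "colE", "colG", "colI");
        String[][] ranges = { { "colC", "colE" }, { "colG", "colG" } };
        System.out.println(collect(cols, ranges)); // [colC, colD, colE, colG]
    }
}
```

The real SliceQueryFilter works on on-disk columns and comparators rather than strings, but the cursor discipline is the same: because both inputs are sorted, neither is ever revisited.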
[jira] [Commented] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter
[ https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395838#comment-13395838 ] David Alves commented on CASSANDRA-3885: right, I can confirm that, sorry for the n00biness. patch applies cleanly, running tests. > Support multiple ranges in SliceQueryFilter > --- > > Key: CASSANDRA-3885 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3885 > Project: Cassandra > Issue Type: Sub-task > Components: Core >Reporter: Jonathan Ellis >Assignee: David Alves > Fix For: 1.2 > > Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, > CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch > > > This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow > sub-sub-tasks. > We need to support multiple ranges in a SliceQueryFilter, and we want > querying them to be efficient, i.e., one pass through the row to get all of > the ranges, rather than one pass per range. > Supercolumns are irrelevant since the goal is to replace them anyway. Ignore > supercolumn-related code or rip it out, whichever is easier. > This is ONLY dealing with the storage engine part, not the StorageProxy and > Command intra-node messages or the Thrift or CQL client APIs. Thus, a unit > test should be added to ColumnFamilyStoreTest to demonstrate that it works. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2864) Alternative Row Cache Implementation
[ https://issues.apache.org/jira/browse/CASSANDRA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395837#comment-13395837 ] Sylvain Lebresne commented on CASSANDRA-2864: - Sorry for taking so long to get to this. But as a consequence, the patch will need some rebasing (the hardest part will probably be to account for CASSANDRA-3708). A few remarks based on the current patch: * The serialization format for columns seems only marginally different from our internal one. Maybe it would be worth reusing ColumnSerializer? A priori, this seems to make it more difficult to avoid copying on deserialize, but we could then use ByteBufferUtil.inputStream and specialize ByteBufferUtil.read() to recognize that specific input stream and avoid the copy. * Could be worth making it easier to use variable-length int encoding (i.e. hardcode TypeSizes.NATIVE less). Could give a nice benefit. * This is all serialized in heap, but it would make sense to allow serializing off-heap (have you experimented with that?). That's even the strength of this idea, I think: the in-heap and off-heap caches could be almost identical, except for the ByteBuffer.allocate that would become an allocateDirect in CachedRowSerializer.serialize(). With the big advantage that this off-heap cache wouldn't have to deserialize everything every time, of course. * What is the point of collectTimeOrderedData in RowCacheCollationController? * What's the goal of noMergeNecessary in CachedRowSliceIterator? It feels like the merge-necessary path is not really much slower than the other one. Or rather, it feels like CachedRowSliceIterator.appendRow() can easily be turned into an iterator, which would pretty much be CachedRowSliceIterator. * If we're going to replace the current cache path by this patch, we may want to refactor the code a bit.
For instance, instead of having two collation controllers, we may just want one and have it decide whether it uses the sstable iterators or the cache iterator based on whether the row is cached. And some nits: * In RowCacheCollationController: we don't use underscores in front of variables :) * There are a few places where the code style is not respected (not a big deal at this point, just mentioning it fyi). * In CachedRowSerializer, I'd avoid names like deserializeFromSSTableNoColumns. Now the main problem is counters. As said previously, we will need to be able to distinguish during reads between data that has been merged in the cache and what hasn't been merged yet (the difficulty being to do that during the merge of a memtable). This is probably doable though. > Alternative Row Cache Implementation > > > Key: CASSANDRA-2864 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2864 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Daniel Doubleday >Assignee: Daniel Doubleday > Labels: cache > Fix For: 1.2 > > Attachments: 0001-CASSANDRA-2864-w-out-direct-counter-su.patch > > > We have been working on an alternative implementation to the existing row > cache(s). > We have 2 main goals: > - Decrease memory -> get more rows in the cache without suffering a huge > performance penalty > - Reduce gc pressure > This sounds a lot like we should be using the new serializing cache in 0.8. > Unfortunately our workload consists of loads of updates, which would > invalidate the cache all the time. > *Note: Updated Patch Description (Please check history if you're interested > where this was coming from)* > h3. Rough Idea > - Keep the serialized row (ByteBuffer) in memory, representing the unfiltered but > collated columns of all sstables but not memtable columns > - Writes don't affect the cache at all; they go only to the memtables > - Reads collect columns from memtables and the row cache > - The serialized row is re-written (merged) with memtables when they are flushed > h3.
Some Implementation Details > h4. Reads > - Basically the read logic differs from regular uncached reads only in that a > special CollationController deserializes columns from in-memory > bytes > - In the first version of this cache the serialized in-memory format was the > same as the fs format, but tests showed that performance suffered because a lot > of unnecessary deserialization takes place and column seeks are O(n) > within one block > - To improve on that, a different in-memory format was used. It splits the length > meta info and the data of columns so that the names can be binary searched. > {noformat} > === > Header (24) > === > MaxTimestamp: long > LocalDeletionTime: int > MarkedForDeleteAt: long > NumColumns: int > === > Column Index (num col
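The header layout quoted in the comment above (long + int + long + int = 24 bytes) can be parsed straight from the cached ByteBuffer without deserializing any columns. The sketch below is illustrative only: field order and sizes come from the {noformat} block, while the class and field names (and the default big-endian ByteBuffer encoding) are assumptions:

```java
import java.nio.ByteBuffer;

// Minimal sketch of the 24-byte header described in the patch discussion.
// Only the field order and sizes are taken from the comment; names and
// encoding details are illustrative assumptions.
public class CachedRowHeader
{
    public static final int HEADER_SIZE = 8 + 4 + 8 + 4; // = 24 bytes

    public final long maxTimestamp;
    public final int localDeletionTime;
    public final long markedForDeleteAt;
    public final int numColumns;

    public CachedRowHeader(ByteBuffer row)
    {
        ByteBuffer b = row.duplicate(); // leave the caller's position untouched
        this.maxTimestamp = b.getLong();
        this.localDeletionTime = b.getInt();
        this.markedForDeleteAt = b.getLong();
        this.numColumns = b.getInt();
    }

    // Round-trip helper for the demo below.
    public static ByteBuffer serialize(long maxTs, int ldt, long mfda, int numCols)
    {
        ByteBuffer b = ByteBuffer.allocate(HEADER_SIZE);
        b.putLong(maxTs).putInt(ldt).putLong(mfda).putInt(numCols);
        b.flip();
        return b;
    }

    public static void main(String[] args)
    {
        CachedRowHeader h = new CachedRowHeader(serialize(42L, 0, -1L, 7));
        System.out.println(h.maxTimestamp + " " + h.numColumns); // 42 7
    }
}
```

Reading the header with a fixed 24-byte offset is what makes the subsequent column-index binary search possible: the name offsets start at a known position regardless of row contents.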
[jira] [Commented] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter
[ https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395825#comment-13395825 ] Sylvain Lebresne commented on CASSANDRA-3885: - bq. I always thought we have /cassandra/test/data/serialization/x.x if you want to test the older versions. Ok, I see. So there are different ant targets to run the serialization tests against older versions. I've personally never run that. Is there any reason why we don't just always run the serialization tests on all versions? But anyway, for this patch I'll regenerate the message binaries for 1.2 before committing. > Support multiple ranges in SliceQueryFilter > --- > > Key: CASSANDRA-3885 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3885 > Project: Cassandra > Issue Type: Sub-task > Components: Core >Reporter: Jonathan Ellis >Assignee: David Alves > Fix For: 1.2 > > Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, > CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch > > > This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow > sub-sub-tasks. > We need to support multiple ranges in a SliceQueryFilter, and we want > querying them to be efficient, i.e., one pass through the row to get all of > the ranges, rather than one pass per range. > Supercolumns are irrelevant since the goal is to replace them anyway. Ignore > supercolumn-related code or rip it out, whichever is easier. > This is ONLY dealing with the storage engine part, not the StorageProxy and > Command intra-node messages or the Thrift or CQL client APIs. Thus, a unit > test should be added to ColumnFamilyStoreTest to demonstrate that it works. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter
[ https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395821#comment-13395821 ] Sylvain Lebresne commented on CASSANDRA-3885: - Be sure to check against http://git-wip-us.apache.org/repos/asf/cassandra.git, not any other repository. In particular the github mirror very often get behind. > Support multiple ranges in SliceQueryFilter > --- > > Key: CASSANDRA-3885 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3885 > Project: Cassandra > Issue Type: Sub-task > Components: Core >Reporter: Jonathan Ellis >Assignee: David Alves > Fix For: 1.2 > > Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, > CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch > > > This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow > sub-sub-tasks. > We need to support multiple ranges in a SliceQueryFilter, and we want > querying them to be efficient, i.e., one pass through the row to get all of > the ranges, rather than one pass per range. > Supercolumns are irrelevant since the goal is to replace them anyway. Ignore > supercolumn-related code or rip it out, whichever is easier. > This is ONLY dealing with the storage engine part, not the StorageProxy and > Command intra-node messages or the Thrift or CQL client APIs. Thus, a unit > test should be added to ColumnFamilyStoreTest to demonstrate that it works. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter
[ https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395816#comment-13395816 ] Sylvain Lebresne commented on CASSANDRA-3885: - bq. as far as I can see from the 3885 branch there are other changes in there beyond this patch that are more recent than the last change to trunk. Then you have the wrong version of trunk. The attached 3885-v2.txt patch applies cleanly to current trunk (as of this comment). > Support multiple ranges in SliceQueryFilter > --- > > Key: CASSANDRA-3885 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3885 > Project: Cassandra > Issue Type: Sub-task > Components: Core >Reporter: Jonathan Ellis >Assignee: David Alves > Fix For: 1.2 > > Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, > CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch > > > This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow > sub-sub-tasks. > We need to support multiple ranges in a SliceQueryFilter, and we want > querying them to be efficient, i.e., one pass through the row to get all of > the ranges, rather than one pass per range. > Supercolumns are irrelevant since the goal is to replace them anyway. Ignore > supercolumn-related code or rip it out, whichever is easier. > This is ONLY dealing with the storage engine part, not the StorageProxy and > Command intra-node messages or the Thrift or CQL client APIs. Thus, a unit > test should be added to ColumnFamilyStoreTest to demonstrate that it works. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3047) implementations of IPartitioner.describeOwnership() are not DC aware
[ https://issues.apache.org/jira/browse/CASSANDRA-3047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Alves updated CASSANDRA-3047: --- Attachment: (was: CASSANDRA-3047.patch) > implementations of IPartitioner.describeOwnership() are not DC aware > > > Key: CASSANDRA-3047 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3047 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Aaron Morton >Assignee: David Alves >Priority: Trivial > Fix For: 1.1.2 > > Attachments: CASSANDRA-3047.patch, CASSANDRA-3047.patch, > CASSANDRA-3047.patch > > > see http://www.mail-archive.com/user@cassandra.apache.org/msg16375.html > When a cluster uses the multiple-rings approach to tokens, the output from nodetool > ring is incorrect. > When it uses the interleaved token approach (e.g. dc1, dc2, dc1, dc2) it will > be correct. > It's a bit hacky, but could we special-case (RP) tokens that are off by 1 and > calculate the ownership per DC? I guess another approach would be to add > some parameters so the partitioner can be told about the token assignment > strategy. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
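One hedged reading of the per-DC suggestion above: group the ring's tokens by datacenter and compute each token's ownership within its DC's own sub-ring, rather than against the whole interleaved ring. The sketch below assumes RandomPartitioner-style tokens in [0, 2^127); the grouping input, class, and method names are illustrative and not Cassandra's actual API:

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedSet;
import java.util.TreeSet;

// Illustrative sketch: ownership computed over one DC's token set only,
// assuming RandomPartitioner-style tokens in [0, 2^127). Each token owns
// the arc between it and the previous token in its DC's sub-ring.
public class DcOwnership
{
    private static final BigInteger RING = BigInteger.valueOf(2).pow(127);

    public static Map<BigInteger, Double> describeOwnership(SortedSet<BigInteger> dcTokens)
    {
        Map<BigInteger, Double> ownership = new HashMap<>();
        List<BigInteger> sorted = new ArrayList<>(dcTokens);
        BigInteger prev = sorted.get(sorted.size() - 1); // wrap around the ring
        for (BigInteger t : sorted)
        {
            // arc width = distance from the previous DC token, modulo the ring
            BigInteger width = t.subtract(prev).mod(RING);
            if (width.signum() == 0)
                width = RING; // a lone token owns its whole sub-ring
            ownership.put(t, width.doubleValue() / RING.doubleValue());
            prev = t;
        }
        return ownership;
    }

    public static void main(String[] args)
    {
        SortedSet<BigInteger> dc = new TreeSet<>();
        dc.add(BigInteger.ZERO);
        dc.add(RING.divide(BigInteger.valueOf(2)));
        System.out.println(describeOwnership(dc)); // each token owns 0.5
    }
}
```

With interleaved tokens (dc1, dc2, dc1, dc2, ...) this per-DC view and the global view agree, which matches the observation in the issue that interleaving already produces correct nodetool ring output.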
[jira] [Updated] (CASSANDRA-3047) implementations of IPartitioner.describeOwnership() are not DC aware
[ https://issues.apache.org/jira/browse/CASSANDRA-3047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Alves updated CASSANDRA-3047: --- Attachment: CASSANDRA-3047.patch (previous patch was submitted with wrong license) > implementations of IPartitioner.describeOwnership() are not DC aware > > > Key: CASSANDRA-3047 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3047 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Aaron Morton >Assignee: David Alves >Priority: Trivial > Fix For: 1.1.2 > > Attachments: CASSANDRA-3047.patch, CASSANDRA-3047.patch, > CASSANDRA-3047.patch > > > see http://www.mail-archive.com/user@cassandra.apache.org/msg16375.html > When a cluster uses the multiple-rings approach to tokens, the output from nodetool > ring is incorrect. > When it uses the interleaved token approach (e.g. dc1, dc2, dc1, dc2) it will > be correct. > It's a bit hacky, but could we special-case (RP) tokens that are off by 1 and > calculate the ownership per DC? I guess another approach would be to add > some parameters so the partitioner can be told about the token assignment > strategy. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3047) implementations of IPartitioner.describeOwnership() are not DC aware
[ https://issues.apache.org/jira/browse/CASSANDRA-3047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Alves updated CASSANDRA-3047: --- Attachment: CASSANDRA-3047.patch (reverted import ordering changes / made imports comply with code style) > implementations of IPartitioner.describeOwnership() are not DC aware > > > Key: CASSANDRA-3047 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3047 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Aaron Morton >Assignee: David Alves >Priority: Trivial > Fix For: 1.1.2 > > Attachments: CASSANDRA-3047.patch, CASSANDRA-3047.patch, > CASSANDRA-3047.patch > > > see http://www.mail-archive.com/user@cassandra.apache.org/msg16375.html > When a cluster uses the multiple-rings approach to tokens, the output from nodetool > ring is incorrect. > When it uses the interleaved token approach (e.g. dc1, dc2, dc1, dc2) it will > be correct. > It's a bit hacky, but could we special-case (RP) tokens that are off by 1 and > calculate the ownership per DC? I guess another approach would be to add > some parameters so the partitioner can be told about the token assignment > strategy. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter
[ https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13395799#comment-13395799 ] David Alves commented on CASSANDRA-3885: I still can't apply this cleanly to trunk. As far as I can see from the 3885 branch, there are other changes in there beyond this patch that are more recent than the last change to trunk. Sylvain, do you want me to try and pick that up? (I mean take what you did and make it applicable to trunk) > Support multiple ranges in SliceQueryFilter > --- > > Key: CASSANDRA-3885 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3885 > Project: Cassandra > Issue Type: Sub-task > Components: Core >Reporter: Jonathan Ellis >Assignee: David Alves > Fix For: 1.2 > > Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, > CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch > > > This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow > sub-sub-tasks. > We need to support multiple ranges in a SliceQueryFilter, and we want > querying them to be efficient, i.e., one pass through the row to get all of > the ranges, rather than one pass per range. > Supercolumns are irrelevant since the goal is to replace them anyway. Ignore > supercolumn-related code or rip it out, whichever is easier. > This is ONLY dealing with the storage engine part, not the StorageProxy and > Command intra-node messages or the Thrift or CQL client APIs. Thus, a unit > test should be added to ColumnFamilyStoreTest to demonstrate that it works. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-4351) Consider storing more information on peers in system tables
Sylvain Lebresne created CASSANDRA-4351: --- Summary: Consider storing more information on peers in system tables Key: CASSANDRA-4351 URL: https://issues.apache.org/jira/browse/CASSANDRA-4351 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Sylvain Lebresne Priority: Minor Fix For: 1.2 Currently, the only things we keep in system tables about other peers are their tokens and IP addresses. We should probably also record the new ring_id, but since CASSANDRA-4018 makes the system tables easily queryable, maybe it would be worth adding some more information (basically most of what we gossip could be a candidate: schema UUID, status, C* version, ...) as a simple way to expose the ring state to users (even if it's just a "view" of the ring state from one specific node, I believe it's still nice). Of course, that means storing information that may not be absolutely needed by the server, but I'm not sure there is much harm in that. Note that doing this cleanly may require changing the schema of the current system tables, but as long as we do that in the 1.2 timeframe it's ok (since the concerned system tables 'local' and 'peers' are new anyway). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[2/2] Add column metadata to system column families
http://git-wip-us.apache.org/repos/asf/cassandra/blob/8ea2d2a6/src/java/org/apache/cassandra/service/StorageProxy.java -- diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java index 64aea28..51f5fe2 100644 --- a/src/java/org/apache/cassandra/service/StorageProxy.java +++ b/src/java/org/apache/cassandra/service/StorageProxy.java @@ -368,7 +368,7 @@ public class StorageProxy implements StorageProxyMBean return; } assert hostId != null : "Missing host ID for " + target.getHostAddress(); -RowMutation hintedMutation = RowMutation.hintFor(mutation, ByteBuffer.wrap(UUIDGen.decompose(hostId))); +RowMutation hintedMutation = RowMutation.hintFor(mutation, hostId); hintedMutation.apply(); totalHints.incrementAndGet(); http://git-wip-us.apache.org/repos/asf/cassandra/blob/8ea2d2a6/test/unit/org/apache/cassandra/config/DefsTest.java -- diff --git a/test/unit/org/apache/cassandra/config/DefsTest.java b/test/unit/org/apache/cassandra/config/DefsTest.java index ae6ccc0..ba4c20f 100644 --- a/test/unit/org/apache/cassandra/config/DefsTest.java +++ b/test/unit/org/apache/cassandra/config/DefsTest.java @@ -49,8 +49,8 @@ public class DefsTest extends SchemaLoader @Test public void ensureStaticCFMIdsAreLessThan1000() { -assert CFMetaData.StatusCf.cfId.equals(CFMetaData.getId(Table.SYSTEM_TABLE, SystemTable.STATUS_CF)); -assert CFMetaData.HintsCf.cfId.equals(CFMetaData.getId(Table.SYSTEM_TABLE, HintedHandOffManager.HINTS_CF)); +assert CFMetaData.OldStatusCf.cfId.equals(CFMetaData.getId(Table.SYSTEM_TABLE, SystemTable.OLD_STATUS_CF)); +assert CFMetaData.OldHintsCf.cfId.equals(CFMetaData.getId(Table.SYSTEM_TABLE, SystemTable.OLD_HINTS_CF)); } @Test http://git-wip-us.apache.org/repos/asf/cassandra/blob/8ea2d2a6/test/unit/org/apache/cassandra/thrift/ThriftValidationTest.java -- diff --git a/test/unit/org/apache/cassandra/thrift/ThriftValidationTest.java b/test/unit/org/apache/cassandra/thrift/ThriftValidationTest.java 
index 9c131f9..d57f9c1 100644 --- a/test/unit/org/apache/cassandra/thrift/ThriftValidationTest.java +++ b/test/unit/org/apache/cassandra/thrift/ThriftValidationTest.java @@ -35,6 +35,7 @@ import org.apache.cassandra.db.marshal.AsciiType; import org.apache.cassandra.db.marshal.UTF8Type; import org.apache.cassandra.locator.LocalStrategy; import org.apache.cassandra.locator.NetworkTopologyStrategy; +import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; public class ThriftValidationTest extends SchemaLoader @@ -124,7 +125,7 @@ public class ThriftValidationTest extends SchemaLoader assert !gotException : "got unexpected ConfigurationException"; // add a column with name = "id" -newMetadata.addColumnDefinition(ColumnDefinition.utf8("id", null)); +newMetadata.addColumnDefinition(new ColumnDefinition(ByteBufferUtil.bytes("id"), UTF8Type.instance, null, null, null, null)); gotException = false;