[jira] [Commented] (CASSANDRA-6237) Allow range deletions in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121061#comment-14121061 ] Benjamin Lerer commented on CASSANDRA-6237: --- It is not big, but I would really prefer to have 7016 and 4762 committed first, as I believe they impact some parts of the code that will be reused. Allow range deletions in CQL Key: CASSANDRA-6237 URL: https://issues.apache.org/jira/browse/CASSANDRA-6237 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Benjamin Lerer Priority: Minor Labels: cql, docs Fix For: 3.0 Attachments: CASSANDRA-6237.txt We use RangeTombstones internally in a number of places, but we could expose them more directly too. Typically, given a table like: {noformat} CREATE TABLE events ( id text, created_at timestamp, content text, PRIMARY KEY (id, created_at) ) {noformat} we could allow queries like: {noformat} DELETE FROM events WHERE id='someEvent' AND created_at <= 'Jan 3, 2013'; {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
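[Editor's note] The natural extension of the example above is a slice with bounds on both ends; a hypothetical illustration against the same table (the syntax here is a sketch of the proposal, not the final grammar):

{noformat}
DELETE FROM events
WHERE id = 'someEvent'
  AND created_at > 'Jan 1, 2013'
  AND created_at <= 'Jan 3, 2013';
{noformat}

Internally this would translate to a single RangeTombstone covering the clustering range, rather than one tombstone per row.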
[jira] [Comment Edited] (CASSANDRA-6237) Allow range deletions in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121061#comment-14121061 ] Benjamin Lerer edited comment on CASSANDRA-6237 at 9/4/14 7:31 AM: --- It is not big, but I would really prefer to have 7016 and 4762 committed first, as I believe they impact some parts of the code that will be reused. was (Author: blerer): It is not big but I really would prefer to have 7016 and 4762 commited as I believe that they impact some part of code that will be reused. Allow range deletions in CQL Key: CASSANDRA-6237 URL: https://issues.apache.org/jira/browse/CASSANDRA-6237 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Benjamin Lerer Priority: Minor Labels: cql, docs Fix For: 3.0 Attachments: CASSANDRA-6237.txt We use RangeTombstones internally in a number of places, but we could expose them more directly too. Typically, given a table like: {noformat} CREATE TABLE events ( id text, created_at timestamp, content text, PRIMARY KEY (id, created_at) ) {noformat} we could allow queries like: {noformat} DELETE FROM events WHERE id='someEvent' AND created_at <= 'Jan 3, 2013'; {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7862) KeyspaceTest fails with heap_buffers
[ https://issues.apache.org/jira/browse/CASSANDRA-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121071#comment-14121071 ] Benedict commented on CASSANDRA-7862: - LGTM, +1 KeyspaceTest fails with heap_buffers Key: CASSANDRA-7862 URL: https://issues.apache.org/jira/browse/CASSANDRA-7862 Project: Cassandra Issue Type: Bug Reporter: T Jake Luciani Assignee: T Jake Luciani Priority: Minor Fix For: 2.1.1 Attachments: 7862.txt only happens with heap_buffers {code} [junit] Testcase: testReversedWithFlushing(org.apache.cassandra.db.KeyspaceTest): FAILED [junit] null [junit] junit.framework.AssertionFailedError [junit] at org.apache.cassandra.db.composites.AbstractSimpleCellNameType.compareUnsigned(AbstractSimpleCellNameType.java:89) [junit] at org.apache.cassandra.db.composites.AbstractSimpleCellNameType$2.compare(AbstractSimpleCellNameType.java:47) [junit] at org.apache.cassandra.db.composites.AbstractSimpleCellNameType$2.compare(AbstractSimpleCellNameType.java:44) [junit] at org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:147) [junit] at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) [junit] at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) [junit] at org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:191) [junit] at org.apache.cassandra.db.Memtable.put(Memtable.java:192) [junit] at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1142) [junit] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:388) [junit] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:351) [junit] at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) [junit] at org.apache.cassandra.db.KeyspaceTest.testReversedWithFlushing(KeyspaceTest.java:241) [junit] {code} Looks like a bug related to CASSANDRA-6934 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7360) CQLSSTableWriter consumes all memory for table with compound primary key
[ https://issues.apache.org/jira/browse/CASSANDRA-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121104#comment-14121104 ] Benjamin Lerer commented on CASSANDRA-7360: --- I noticed two things: 1) I think that in BufferedWriter.close() the code should probably try to call super.close() before trying to rethrow the exception, just to make sure that we terminate as cleanly as possible. 2) I get the following error message in my log when I run the unit tests: Exception in thread StorageServiceShutdownHook java.lang.IllegalStateException: No configured daemon at org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:309) at org.apache.cassandra.service.StorageService.shutdownClientServers(StorageService.java:381) at org.apache.cassandra.service.StorageService.access$3(StorageService.java:379) at org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:568) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at java.lang.Thread.run(Thread.java:745) I am not sure that it is a real problem, but ideally it would be nice if we could get rid of it. Personally, instead of storing the error, I would have wrapped it in an unchecked exception (e.g. SyncException), caught it in rawAddRow, and rethrown it as an IOException, as that is more direct. Nevertheless, it is just a matter of taste and I am fine with your solution.
CQLSSTableWriter consumes all memory for table with compound primary key Key: CASSANDRA-7360 URL: https://issues.apache.org/jira/browse/CASSANDRA-7360 Project: Cassandra Issue Type: Bug Components: Core Reporter: Xu Zhongxing Assignee: Sylvain Lebresne Fix For: 2.0.11 Attachments: 7360.txt When using CQLSSTableWriter to write a table with a compound primary key, if the partition key is identical for a huge number of records, the sync() method is never called and memory usage keeps growing until memory is exhausted. Could the code be improved to sync() even when no new row is created? The relevant code is in SSTableSimpleUnsortedWriter.java and AbstractSSTableSimpleWriter.java. I am new to the code and cannot produce a reasonable patch for now. The problem can be reproduced by the following test case:
{code}
import org.apache.cassandra.io.sstable.CQLSSTableWriter;
import org.apache.cassandra.exceptions.InvalidRequestException;
import java.io.IOException;
import java.util.UUID;

class SS
{
    public static void main(String[] args)
    {
        String schema = "create table test.t (x uuid, y uuid, primary key (x, y))";
        String insert = "insert into test.t (x, y) values (?, ?)";
        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                                                  .inDirectory("/tmp/test/t")
                                                  .forTable(schema)
                                                  .withBufferSizeInMB(32)
                                                  .using(insert)
                                                  .build();
        UUID id = UUID.randomUUID();
        try
        {
            for (int i = 0; i < 5000; i++)
            {
                UUID id2 = UUID.randomUUID();
                writer.addRow(id, id2);
            }
            writer.close();
        }
        catch (Exception e)
        {
            System.err.println("hell");
        }
    }
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
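[Editor's note] Benjamin's first point above — call super.close() before rethrowing a deferred sync error — can be sketched as follows. This is a hedged illustration of that ordering, with hypothetical class and field names, not the code from the attached patch:

```java
import java.io.FilterWriter;
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

// Illustrative sketch: a buffered writer whose background sync may have
// failed earlier. On close() we first close the underlying resource, then
// surface the stored exception, so we terminate as cleanly as possible.
class BufferedSketchWriter extends FilterWriter
{
    // Set by a (hypothetical) failed background sync.
    volatile IOException deferredException;

    BufferedSketchWriter(Writer out)
    {
        super(out);
    }

    @Override
    public void close() throws IOException
    {
        try
        {
            super.close(); // release the underlying writer first
        }
        finally
        {
            // then rethrow any error stored by an earlier sync
            if (deferredException != null)
                throw deferredException;
        }
    }
}
```

Note the trade-off: if super.close() itself throws while a deferred exception is pending, the deferred one wins; the important property is that the close attempt always happens.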
[jira] [Commented] (CASSANDRA-7768) Error when creating multiple CQLSSTableWriters for more than one column family in the same keyspace
[ https://issues.apache.org/jira/browse/CASSANDRA-7768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121114#comment-14121114 ] Benjamin Lerer commented on CASSANDRA-7768: --- Could you specify the branch in which you had the issue and add a unit test to CQLSSTableWriterTest to validate your fix? Error when creating multiple CQLSSTableWriters for more than one column family in the same keyspace --- Key: CASSANDRA-7768 URL: https://issues.apache.org/jira/browse/CASSANDRA-7768 Project: Cassandra Issue Type: Bug Components: Hadoop Reporter: Paul Pak Assignee: Paul Pak Priority: Minor Labels: cql3, hadoop Attachments: trunk-7768-v1.txt This occurs because, if the keyspace has already been loaded (due to another column family in the same keyspace having been loaded previously), the CQLSSTableWriter builder only loads the column family via Schema.load(CFMetaData). However, Schema.load(CFMetaData) only adds to Schema.cfIdMap without making the corresponding addition to the CFMetaData map belonging to the KSMetaData map. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7692) Upgrade Cassandra Java Driver to 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121125#comment-14121125 ] Sylvain Lebresne commented on CASSANDRA-7692: --- I would have a small preference for skipping C* 2.1 for now: to the best of my knowledge we don't need driver 2.1 so far, and it's relatively new, so I don't think there is much point in rushing an upgrade to it. I have no problem with trunk, though. Upgrade Cassandra Java Driver to 2.1 Key: CASSANDRA-7692 URL: https://issues.apache.org/jira/browse/CASSANDRA-7692 Project: Cassandra Issue Type: Improvement Reporter: Robert Stupp Assignee: Robert Stupp Fix For: 2.1.1, 3.0 Attachments: 7692-2.1.txt, 7692-3.0.txt UDFs (CASSANDRA-7563) require a Java Driver that supports User Types and the new collection features (at least Java Driver 2.1). May also be handled separately if e.g. the Hadoop stuff requires this (follow-up to CASSANDRA-7618). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7692) Upgrade Cassandra Java Driver to 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7692: Fix Version/s: (was: 2.1.1) Upgrade Cassandra Java Driver to 2.1 Key: CASSANDRA-7692 URL: https://issues.apache.org/jira/browse/CASSANDRA-7692 Project: Cassandra Issue Type: Improvement Reporter: Robert Stupp Assignee: Robert Stupp Fix For: 3.0 Attachments: 7692-2.1.txt, 7692-3.0.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-6651) Repair hanging
[ https://issues.apache.org/jira/browse/CASSANDRA-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121150#comment-14121150 ] Duncan Sands commented on CASSANDRA-6651: --- {quote}I think repair hang issue in 2.0.x is fixed in CASSANDRA-7560. Closing as duplicate for it. If hang still happens after 2.0.10 then please open the new ticket.{quote} Since I seem to have hit it with 2.0.10 (see previous comment), presumably this should be reopened. I'm not able to reopen it myself, so could someone with the power to reopen it please do so. Repair hanging -- Key: CASSANDRA-6651 URL: https://issues.apache.org/jira/browse/CASSANDRA-6651 Project: Cassandra Issue Type: Bug Components: Core Reporter: Eitan Eibschutz Assignee: Yuki Morishita Hi, We have a 12 node cluster in our PROD environment and we've noticed that repairs never finish. The behavior we've observed is that a repair process will run until at some point it hangs, after which no other processing happens. For example, at the moment I have a repair process that has been running for two days without finishing: nodetool tpstats is showing 2 active and 2 pending AntiEntropySessions. nodetool compactionstats is showing: pending tasks: 0 Active compaction remaining time: n/a nodetool netstats is showing: Mode: NORMAL Not sending any streams. Read Repair Statistics: Attempted: 0 Mismatch (Blocking): 142110 Mismatch (Background): 0 Pool Name Active Pending Completed Commands n/a 0 107589657 Responses n/a 0 116430785 The last entry that I see in the log is: INFO [AntiEntropySessions:18] 2014-02-03 04:01:39,145 RepairJob.java (line 116) [repair #ae78c6c0-8c2b-11e3-b950-c3b81a36bc9b] requesting merkle trees for MyCF (to [/x.x.x.x, /y.y.y.y, /z.z.z.z]) The repair started at 4am, so it hung after about 1 minute 40 seconds.
On node y.y.y.y I can see this in the log: INFO [MiscStage:1] 2014-02-03 04:01:38,985 ColumnFamilyStore.java (line 740) Enqueuing flush of Memtable-MyCF@1290890489(2176/5931 serialized/live bytes, 32 ops) INFO [FlushWriter:411] 2014-02-03 04:01:38,986 Memtable.java (line 333) Writing Memtable-MyCF@1290890489(2176/5931 serialized/live bytes, 32 ops) INFO [FlushWriter:411] 2014-02-03 04:01:39,048 Memtable.java (line 373) Completed flushing /var/lib/cassandra/main-db/data/MyKS/MyCF/MyKS-MyCF-jb-518-Data.db (1789 bytes) for commitlog position ReplayPosition(segmentId=1390437013339, position=21868792) INFO [ScheduledTasks:1] 2014-02-03 05:00:04,794 ColumnFamilyStore.java (line 740) Enqueuing flush of Memtable-compaction_history@1649414699(1635/17360 serialized/live bytes, 42 ops) So for some reason the merkle tree for this CF is never sent back to the node being repaired and it's hanging. I've also noticed that sometimes, restarting node y.y.y.y will cause the repair to resume. Another observation is that sometimes when restarting y.y.y.y it will not start with these errors: ERROR 16:34:18,485 Exception encountered during startup java.lang.IllegalStateException: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables. at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:495) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:264) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:461) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:504) java.lang.IllegalStateException: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables. 
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:495) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:264) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:461) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:504) Exception encountered during startup: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables. And it will only restart after manually cleaning the compactions_in-progress folder. I'm not sure if these two issues are related but we've seen both on all the nodes in our cluster. I'll be happy to provide more info if needed as we are not sure what could cause this behavior. Another thing in our environment is that some of the
[jira] [Created] (CASSANDRA-7877) Tunable cross datacenter replication
Hari Sekhon created CASSANDRA-7877: -- Summary: Tunable cross datacenter replication Key: CASSANDRA-7877 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877 Project: Cassandra Issue Type: Improvement Reporter: Hari Sekhon Priority: Minor Right now tunable consistency allows for things like local quorum and quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do a full QUORUM or ALL write level across multiple datacenters. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
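[Editor's note] To make the trade-off concrete, here is a back-of-the-envelope sketch of how many replica acks each write level would block on, under the usual quorum = floor(RF/2) + 1 rule. The class and method names are hypothetical, not a Cassandra API:

```java
import java.util.Map;

// Rough ack-count comparison for the consistency levels discussed in this
// ticket. Not Cassandra code; purely illustrative arithmetic.
class AckMath
{
    // Standard quorum for a given replication factor.
    static int quorum(int rf)
    {
        return rf / 2 + 1;
    }

    // The proposed level: a local quorum plus one ack from another DC.
    static int localQuorumPlusOne(int localRf)
    {
        return quorum(localRf) + 1;
    }

    // EACH_QUORUM: a quorum in every datacenter.
    static int eachQuorum(Map<String, Integer> rfPerDc)
    {
        return rfPerDc.values().stream().mapToInt(AckMath::quorum).sum();
    }
}
```

With RF=3 in each of two DCs, LOCAL_QUORUM waits for 2 acks, the proposed LOCAL_QUORUM + 1 for 3, and EACH_QUORUM for 4 — consistent with the later comment on this ticket that the gain over each_quorum may be modest.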
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across cross datacenters LOCAL_QUORUM + 1
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Summary: Tunable consistency across cross datacenters LOCAL_QUORUM + 1 (was: Tunable cross datacenter replication) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across cross datacenters LOCAL_QUORUM + 1
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Description: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write at both datacenters. was: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write level across multiple datacenters. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across cross datacenters LOCAL_QUORUM + 1
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Description: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write level across multiple datacenters. was: Right now tunable consistency allows for things like local quorum and quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do a full QUORUM or ALL write level across multiple datacenters. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across cross datacenters LOCAL_QUORUM + 1
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Description: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how much more performant this would be... was: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write at both datacenters. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Summary: Tunable consistency across datacenters - LOCAL_QUORUM + 1 (was: Tunable consistency across cross datacenters LOCAL_QUORUM + 1) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Summary: Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC (was: Tunable consistency across datacenters - LOCAL_QUORUM + 1) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants
[ https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7769: Attachment: 7769v2.txt Updated patch with a working Cql.g, including a JUnit test. Currently struggling with cqlsh. Implement pg-style dollar syntax for string constants - Key: CASSANDRA-7769 URL: https://issues.apache.org/jira/browse/CASSANDRA-7769 Project: Cassandra Issue Type: Improvement Reporter: Robert Stupp Assignee: Robert Stupp Fix For: 3.0 Attachments: 7769.txt, 7769v2.txt Follow-up of CASSANDRA-7740: {{$function$...$function$}} in addition to the string-style variant. See also http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING -- This message was sent by Atlassian JIRA (v6.3.4#6332)
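[Editor's note] The lexical rule being added here (per the linked PostgreSQL docs) pairs an opening $tag$ with an identical closing $tag$, with everything in between taken literally. A minimal, hypothetical recognizer sketch of that rule — an illustration only, not the Cql.g grammar from the patch:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of pg-style dollar quoting: $tag$ ... $tag$, where the closing tag
// must match the opening one. The tag may be empty ($$...$$) or an
// identifier-like word ($function$...$function$).
class DollarQuote
{
    // \1 is a backreference forcing the closing tag to equal the opening tag.
    private static final Pattern DOLLAR_QUOTED =
        Pattern.compile("\\$(\\w*)\\$(.*?)\\$\\1\\$", Pattern.DOTALL);

    // Returns the quoted body of the first dollar-quoted constant, or null.
    static String body(String input)
    {
        Matcher m = DOLLAR_QUOTED.matcher(input);
        return m.find() ? m.group(2) : null;
    }
}
```

The point of the syntax is visible in the first case: the body may contain single quotes (or any other characters) without escaping, which is what makes it convenient for UDF bodies.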
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Description: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how much more performant this would be... EDIT: thinking about this more, this is probably reasonably covered by each quorum, given that by the time you write to 1 replica node in the other DC you may as well write to 2, in which case each quorum is probably the way to go. Although if you have 3 or more datacenters, then perhaps there should be an option to do each quorum in local + quorum in at least 1 other DC before confirming the write, to prevent data loss on site failure? was: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how much more performant this would be... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Description: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how much more performant this would be... EDIT: thinking about this more, this is probably reasonably covered by each quorum, given that by the time you write to 1 replica node in the other DC you may as well write to 2, in which case each quorum is probably the way to go. Although if you have 3 or more datacenters, then perhaps there should be an option to do quorum in the local DC + quorum in one other DC before confirming the write, to prevent data loss on site failure? was: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between, where you achieve local quorum + 1 node at the other DC. This would make sure the data is written to the other datacenter for resilience purposes, but be better performing than having to do an each quorum write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how much more performant this would be... EDIT: thinking about this more, this is probably reasonably covered by each quorum, given that by the time you write to 1 replica node in the other DC you may as well write to 2, in which case each quorum is probably the way to go. Although if you have 3 or more datacenters, then perhaps there should be an option to do each quorum in local + quorum in at least 1 other DC before confirming the write, to prevent data loss on site failure? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Summary: Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC (was: Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Description: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. I propose something in between where you achieve local quorum + 1 node at another DC. This would make sure the data is written to the other datacenter for resilience purposes but would perform better than having to do an each_quorum write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how much more performant this would be... EDIT: thinking about this more, this is probably reasonably covered by each_quorum, given that by the time you write to 1 replica node in the other DC you may as well write to 2, in which case each_quorum is probably the way to go. Although if you have 3 or more datacenters then perhaps there should be an option to do local_quorum + quorum in one other DC, but not all of the other DCs, before confirming the write, to prevent data loss on site failure?
Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC -- Key: CASSANDRA-7877 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877 Project: Cassandra Issue Type: Improvement Reporter: Hari Sekhon Priority: Minor -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across multiple datacenters - LOCAL_QUORUM + quorum at 1 out of N other DCs
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Description: Right now tunable consistency allows for things like local quorum and each quorum across multiple datacenters. If you have 3 or more datacenters then perhaps there should be an option to do local_quorum + quorum in one other DC, but not all of the other DCs, before confirming the write, to prevent data loss on site failure? Tunable consistency across multiple datacenters - LOCAL_QUORUM + quorum at 1 out of N other DCs --- Key: CASSANDRA-7877 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877 Project: Cassandra Issue Type: Improvement Reporter: Hari Sekhon Priority: Minor
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7877) Tunable consistency across multiple datacenters - LOCAL_QUORUM + quorum at 1 out of N other DCs
[ https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated CASSANDRA-7877: --- Summary: Tunable consistency across multiple datacenters - LOCAL_QUORUM + quorum at 1 out of N other DCs (was: Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants
[ https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7769: Attachment: 7769v2.txt Implement pg-style dollar syntax for string constants - Key: CASSANDRA-7769 URL: https://issues.apache.org/jira/browse/CASSANDRA-7769 Project: Cassandra Issue Type: Improvement Reporter: Robert Stupp Assignee: Robert Stupp Fix For: 3.0 Attachments: 7769.txt, 7769v2.txt Follow-up of CASSANDRA-7740: {{$function$...$function$}} in addition to string style variant. See also http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants
[ https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7769: Attachment: (was: 7769v2.txt) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants
[ https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121329#comment-14121329 ] Robert Stupp commented on CASSANDRA-7769: - pg-style strings in cqlsh are a bigger thing. I've added this plus some other minor stuff in {{cql3handling.py}}: {noformat} pgStringLiteral ::= /(\$[a-z][a-z0-9_]+\$).*(\$[a-z][a-z0-9_]+\$)/; {noformat} But it should look like this: {noformat} pgStringLiteral ::= /(\$[a-z][a-z0-9_]+\$).*\1/; {noformat} That is, it should use a group reference. But group references are not supported by SafeScanner. I got down to the point where SafeScanner mangles everything into one single group - so group references cannot work at all. I also tried to use named groups, without success. In short: it's not impossible to use pg-style strings in cqlsh, but it's definitely not a trivial thing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
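The difference between the two grammar rules above can be shown with plain Python regexes (a sketch outside cqlsh's SafeScanner, which is exactly where the backreference fails):

```python
import re

# With a group reference, the closing tag is forced to equal the opening tag.
PG_STRING = re.compile(r'(\$[a-z][a-z0-9_]+\$)(.*?)\1', re.DOTALL)

# The two-group form from the first rule above accepts mismatched tags.
NO_BACKREF = re.compile(r'(\$[a-z][a-z0-9_]+\$)(.*?)(\$[a-z][a-z0-9_]+\$)', re.DOTALL)

m = PG_STRING.search("$fn$it's a 'body'$fn$")
mismatched = "$aa$mismatch$bb$"
```

Here `PG_STRING` rejects `mismatched` while `NO_BACKREF` wrongly accepts it, which is why the grammar needs backreference support.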
[jira] [Commented] (CASSANDRA-6651) Repair hanging
[ https://issues.apache.org/jira/browse/CASSANDRA-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121384#comment-14121384 ] Yuki Morishita commented on CASSANDRA-6651: --- [~baldrick] From the log, it looks like there is another repair (#e0c033a0-3010-11e4-b65451c077eaf311) running on /172.18.68.138 for which the node itself is the repair coordinator, since it is receiving merkle trees. Can you check {{nodetool compactionstats}} on the nodes and verify whether validation is running? It is possible that /172.18.68.138 is busy validating others. You can open a new ticket if you continue to observe repairs hanging. Repair hanging -- Key: CASSANDRA-6651 URL: https://issues.apache.org/jira/browse/CASSANDRA-6651 Project: Cassandra Issue Type: Bug Components: Core Reporter: Eitan Eibschutz Assignee: Yuki Morishita Hi, We have a 12-node cluster in our PROD environment and we've noticed that repairs are never finishing. The behavior that we've observed is that a repair process will run until at some point it hangs and no other processing is happening. For example, at the moment, I have a repair process that has been running for two days and not finishing: nodetool tpstats is showing 2 active and 2 pending AntiEntropySessions nodetool compactionstats is showing: pending tasks: 0 Active compaction remaining time :n/a nodetool netstats is showing: Mode: NORMAL Not sending any streams. Read Repair Statistics: Attempted: 0 Mismatch (Blocking): 142110 Mismatch (Background): 0 Pool Name Active Pending Completed Commands n/a 0 107589657 Responses n/a 0 116430785 The last entry that I see in the log is: INFO [AntiEntropySessions:18] 2014-02-03 04:01:39,145 RepairJob.java (line 116) [repair #ae78c6c0-8c2b-11e3-b950-c3b81a36bc9b] requesting merkle trees for MyCF (to [/x.x.x.x, /y.y.y.y, /z.z.z.z]) The repair started at 4am so it stopped after 1 minute 40 seconds.
On node y.y.y.y I can see this in the log: INFO [MiscStage:1] 2014-02-03 04:01:38,985 ColumnFamilyStore.java (line 740) Enqueuing flush of Memtable-MyCF@1290890489(2176/5931 serialized/live bytes, 32 ops) INFO [FlushWriter:411] 2014-02-03 04:01:38,986 Memtable.java (line 333) Writing Memtable-MyCF@1290890489(2176/5931 serialized/live bytes, 32 ops) INFO [FlushWriter:411] 2014-02-03 04:01:39,048 Memtable.java (line 373) Completed flushing /var/lib/cassandra/main-db/data/MyKS/MyCF/MyKS-MyCF-jb-518-Data.db (1789 bytes) for commitlog position ReplayPosition(segmentId=1390437013339, position=21868792) INFO [ScheduledTasks:1] 2014-02-03 05:00:04,794 ColumnFamilyStore.java (line 740) Enqueuing flush of Memtable-compaction_history@1649414699(1635/17360 serialized/live bytes, 42 ops) So for some reason the merkle tree for this CF is never sent back to the node being repaired and it's hanging. I've also noticed that sometimes, restarting node y.y.y.y will cause the repair to resume. Another observation is that sometimes when restarting y.y.y.y it will fail to start, with these errors: ERROR 16:34:18,485 Exception encountered during startup java.lang.IllegalStateException: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables. at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:495) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:264) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:461) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:504) java.lang.IllegalStateException: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables.
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:495) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:264) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:461) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:504) Exception encountered during startup: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables. And it will only restart after manually cleaning the compactions_in-progress folder. I'm not sure if these two issues are related but we've seen both on all the nodes in our cluster. I'll be happy to provide more info if needed as we are not sure what
[jira] [Commented] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants
[ https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121468#comment-14121468 ] Jack Krupansky commented on CASSANDRA-7769: --- Those patterns don't seem to recognize empty sequences or non-name sequences for the delimiter marker. The PG rules allow both. Or even single-letter sequences, for that matter. Or upper case. It would be good to list a set of test use cases, which could also be included in the docs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
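A more PG-faithful tag pattern, together with the kind of test cases the comment above asks for, might look like this (a sketch, not the patch's actual grammar; names are illustrative):

```python
import re

# Tag follows PG identifier rules: optionally empty ($$), a letter or
# underscore first, then letters/digits/underscores; upper case allowed.
PG_TAG = r'\$(?:[A-Za-z_][A-Za-z0-9_]*)?\$'
PG_STRING = re.compile('(' + PG_TAG + r')(.*?)\1', re.DOTALL)

# Candidate test cases for the docs: input -> expected string body.
CASES = {
    "$$empty tag$$": "empty tag",
    "$q$single letter$q$": "single letter",
    "$Q$upper case$Q$": "upper case",
    "$fn_1$underscore and digit$fn_1$": "underscore and digit",
}
```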
[jira] [Reopened] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson reopened CASSANDRA-7594: Reproduced In: 2.0.11, 2.1.1 The patch for this issue appears to have broken the dtest thrift_hsha_test.py:ThriftHSHATest.test_closing_connections on 2.0-HEAD and 2.1-HEAD. Creating the connection pool now simply hangs forever. Disruptor Thrift server worker thread pool not adjustable - Key: CASSANDRA-7594 URL: https://issues.apache.org/jira/browse/CASSANDRA-7594 Project: Cassandra Issue Type: Bug Reporter: Rick Branson Assignee: Pavel Yaskevich Fix For: 2.0.11 Attachments: CASSANDRA-7594.patch, disruptor-thrift-server-0.3.6-SNAPSHOT.jar For the THsHaDisruptorServer, there may not be enough threads to run blocking StorageProxy methods. The current number of worker threads is hardcoded at 2 per selector, so 2 * numAvailableProcessors(), or 64 threads on a 16-core hyperthreaded machine. StorageProxy methods block these threads, so this puts an upper bound on the throughput if hsha is enabled. If operations take 10ms on average, the node can only handle a maximum of 6,400 operations per second. This is a regression from hsha on 1.2.x, where the thread pool was tunable using rpc_min_threads and rpc_max_threads. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
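The throughput ceiling in the description above follows from simple arithmetic (illustrative numbers only; the thread counts come from the hardcoded hsha settings):

```python
# Back-of-envelope for the hardcoded hsha worker pool.
cores = 16
logical_cpus = cores * 2              # hyperthreading doubles availableProcessors()
worker_threads = 2 * logical_cpus     # 2 worker threads per selector -> 64 threads

avg_op_ms = 10                        # a blocking StorageProxy call of ~10 ms
ops_per_thread_per_second = 1000 // avg_op_ms   # 100 ops/s per blocked thread
max_throughput = worker_threads * ops_per_thread_per_second
```

Since the worker count is fixed rather than tunable via rpc_min_threads/rpc_max_threads, the only way to raise this ceiling on 1.2.x-style workloads was the configurable pool.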
[jira] [Created] (CASSANDRA-7878) Fix wrong progress reporting when streaming uncompressed SSTable w/ CRC check
Yuki Morishita created CASSANDRA-7878: - Summary: Fix wrong progress reporting when streaming uncompressed SSTable w/ CRC check Key: CASSANDRA-7878 URL: https://issues.apache.org/jira/browse/CASSANDRA-7878 Project: Cassandra Issue Type: Bug Reporter: Yuki Morishita Assignee: Yuki Morishita Priority: Trivial Fix For: 2.0.11 Attachments: 0001-Fix-wrong-progress-when-streaming-uncompressed.patch Streaming an uncompressed SSTable with CRC validation calculates progress incorrectly: it reports transferred bytes as the sum of all bytes read for CRC validation, so netstats shows progress way over 100%. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
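A toy model of the bug (not Cassandra's streaming code) makes the over-100% report easy to see: bytes re-read for CRC validation get added to the transfer counter.

```python
# Toy model of CASSANDRA-7878: CRC-validation reads inflate progress.

def percent(transferred, total):
    return 100 * transferred / total

section_bytes = 1_000_000      # bytes actually sent to the peer
crc_read_bytes = 1_000_000     # bytes read again to validate checksums

# Buggy: CRC reads are counted as transferred bytes.
buggy = percent(section_bytes + crc_read_bytes, section_bytes)
# Fixed: only bytes written to the peer count toward progress.
fixed = percent(section_bytes, section_bytes)
```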
[jira] [Updated] (CASSANDRA-7864) Repair should do less work when RF=1
[ https://issues.apache.org/jira/browse/CASSANDRA-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-7864: -- Attachment: 7864-2.0.txt Attaching patch to check RF before doing anything for repair. I also removed the check for the keyspace being the system keyspace in favor of the above check, since the system keyspace uses LocalStrategy, which is effectively RF=1. Repair should do less work when RF=1 Key: CASSANDRA-7864 URL: https://issues.apache.org/jira/browse/CASSANDRA-7864 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Tyler Hobbs Assignee: Yuki Morishita Priority: Minor Labels: repair Fix For: 2.0.11, 2.1.1 Attachments: 7864-2.0.txt When the total RF for a keyspace is <= 1, repair still calculates neighbors for each range and does some unnecessary work. We could short-circuit this earlier. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
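The short-circuit the patch describes amounts to a guard like the following (a minimal sketch with a hypothetical helper name; the real check lives in the repair code path):

```python
# Sketch of the RF guard for repair (hypothetical function name).

def needs_repair(replication_factor):
    """With RF <= 1 there are no neighbor replicas that could disagree,
    so repair can return early. LocalStrategy keyspaces (the system
    keyspace) are effectively RF = 1 and are covered by the same check."""
    return replication_factor > 1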
[jira] [Commented] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121583#comment-14121583 ] Pavel Yaskevich commented on CASSANDRA-7594: [~philipthompson] Thanks for catching that, I'll try to take a look at it ASAP. Can you please provide jstack output from the server side while the test is in the hung state? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7514) Support paging in cqlsh
[ https://issues.apache.org/jira/browse/CASSANDRA-7514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Stepura updated CASSANDRA-7514: --- Attachment: CASSANDRA-2.1-7514.patch I'm attaching a patch with the new PAGING command Support paging in cqlsh --- Key: CASSANDRA-7514 URL: https://issues.apache.org/jira/browse/CASSANDRA-7514 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Mikhail Stepura Priority: Minor Labels: cqlsh Fix For: 2.1.1 Attachments: CASSANDRA-2.1-7514.patch Once we've switched cqlsh to use the python driver 2.x (CASSANDRA-7506), we should also make it use paging. Currently cqlsh adds an implicit limit, which is kind of ugly. Instead we should use some reasonably small page size (100 is probably fine) and display one page at a time, adding some NEXT command to query/display following pages. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
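The page-at-a-time display can be sketched as a plain generator (generic illustration; the actual patch drives the python driver's result paging rather than slicing a materialized list):

```python
# Sketch of the page-at-a-time display the PAGING command enables.

def paginate(rows, page_size=100):
    """Yield successive pages of at most page_size rows, in order."""
    for start in range(0, len(rows), page_size):
        yield rows[start:start + page_size]
```

A NEXT-style command would then simply advance this iterator one page per keypress.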
[jira] [Updated] (CASSANDRA-7514) Support paging in cqlsh
[ https://issues.apache.org/jira/browse/CASSANDRA-7514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Stepura updated CASSANDRA-7514: --- Attachment: (was: CASSANDRA-2.1-7514.patch) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121647#comment-14121647 ] Philip Thompson commented on CASSANDRA-7594: I just attached the output from running jstack on C* while it's hanging. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-7594: --- Attachment: jstack.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121729#comment-14121729 ] Pavel Yaskevich commented on CASSANDRA-7594: Looks like the problem is that thread pool is rejecting execution of the selector. {noformat} main prio=9 tid=0x7fd2a4002800 nid=0xd07 waiting on condition [0x00010d915000] java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for 0x0007df31bbd8 (a java.util.concurrent.SynchronousQueue$TransferStack) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359) at java.util.concurrent.SynchronousQueue.offer(SynchronousQueue.java:896) at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:65) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372) at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.execute(DebuggableThreadPoolExecutor.java:150) at com.lmax.disruptor.WorkerPool.start(WorkerPool.java:133) at com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.init(TDisruptorServer.java:525) at com.thinkaurelius.thrift.TDisruptorServer.init(TDisruptorServer.java:221) at org.apache.cassandra.thrift.THsHaDisruptorServer.init(THsHaDisruptorServer.java:51) at org.apache.cassandra.thrift.THsHaDisruptorServer$Factory.buildTServer(THsHaDisruptorServer.java:105) at org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:55) at org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.init(ThriftServer.java:131) at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:58) at 
org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:410) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:470) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) {noformat} [~philipthompson] Does dtest try to stop/start thrift after every test or something similar? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121742#comment-14121742 ] Philip Thompson commented on CASSANDRA-7594: [~xedin] Each test uses a fresh cassandra cluster. This test does call 'nodetool disablethrift' and 'nodetool enablethrift', but it does not make it to that call before the error occurs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
git commit: Expose CFMetaData#isDense()
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 aae9b9101 -> 77b036a87 Expose CFMetaData#isDense() Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77b036a8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77b036a8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77b036a8 Branch: refs/heads/cassandra-2.0 Commit: 77b036a876d64d3cd12902d90db2f652e3e33d88 Parents: aae9b91 Author: Aleksey Yeschenko alek...@apache.org Authored: Thu Sep 4 14:45:59 2014 -0400 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Sep 4 14:45:59 2014 -0400 -- .../org/apache/cassandra/config/CFMetaData.java| 17 +++-- .../cql3/statements/CreateTableStatement.java | 2 +- 2 files changed, 12 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/77b036a8/src/java/org/apache/cassandra/config/CFMetaData.java -- diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java b/src/java/org/apache/cassandra/config/CFMetaData.java index fc8e8c3..e726957 100644 --- a/src/java/org/apache/cassandra/config/CFMetaData.java +++ b/src/java/org/apache/cassandra/config/CFMetaData.java @@ -452,7 +452,7 @@ public final class CFMetaData public CFMetaData populateIoCacheOnFlush(boolean prop) {populateIoCacheOnFlush = prop; return this;} public CFMetaData droppedColumns(Map<ByteBuffer, Long> cols) {droppedColumns = cols; return this;} public CFMetaData triggers(Map<String, TriggerDefinition> prop) {triggers = prop; return this;} -public CFMetaData setDense(Boolean prop) {isDense = prop; return this;} +public CFMetaData isDense(Boolean prop) {isDense = prop; return this;} public CFMetaData(String keyspace, String name, ColumnFamilyType type, AbstractType<?> comp, AbstractType<?>
subcc) { @@ -605,7 +605,7 @@ public final class CFMetaData .populateIoCacheOnFlush(oldCFMD.populateIoCacheOnFlush) .droppedColumns(new HashMap(oldCFMD.droppedColumns)) .triggers(new HashMap(oldCFMD.triggers)) - .setDense(oldCFMD.isDense) + .isDense(oldCFMD.isDense) .rebuild(); } @@ -786,6 +786,11 @@ public final class CFMetaData return droppedColumns; } +public Boolean getIsDense() +{ +return isDense; +} + public boolean equals(Object obj) { if (obj == this) @@ -1117,7 +1122,7 @@ public final class CFMetaData triggers = cfm.triggers; -setDense(cfm.isDense); +isDense(cfm.isDense); rebuild(); logger.debug("application result is {}", this); @@ -1712,7 +1717,7 @@ public final class CFMetaData cfm.populateIoCacheOnFlush(result.getBoolean("populate_io_cache_on_flush")); if (result.has("is_dense")) -cfm.setDense(result.getBoolean("is_dense")); +cfm.isDense(result.getBoolean("is_dense")); /* * The info previously hold by key_aliases, column_aliases and value_alias is now stored in column_metadata (because 1) this @@ -1964,7 +1969,7 @@ public final class CFMetaData { List<ColumnDefinition> pkCols = nullInitializedList(keyValidator.componentsCount()); if (isDense == null) -setDense(isDense(comparator, column_metadata.values())); +isDense(calculateIsDense(comparator, column_metadata.values())); int nbCkCols = isDense ? comparator.componentsCount() : comparator.componentsCount() - (hasCollection() ? 2 : 1); @@ -2087,7 +2092,7 @@ public final class CFMetaData * information for table just created through thrift, nor for table prior to CASSANDRA-7744, so this * method does its best to infer whether the table is dense or not based on other elements. */ -private static boolean isDense(AbstractType<?> comparator, Collection<ColumnDefinition> defs) +private static boolean calculateIsDense(AbstractType<?> comparator, Collection<ColumnDefinition> defs) { /* * As said above, this method is only here because we need to deal with thrift upgrades.
http://git-wip-us.apache.org/repos/asf/cassandra/blob/77b036a8/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java index b7f43d3..efaf36d 100644 --- a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
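The patch above renames the chainable setter from `setDense` to `isDense` and adds a `getIsDense()` getter that returns the boxed tri-state `Boolean`. A stripped-down sketch of that fluent accessor style (the `MetaDataSketch` class here is a toy for illustration only; the real `CFMetaData` has many more chainable fields):

```java
public class MetaDataSketch {
    // Boxed Boolean on purpose: null means "not yet computed", matching
    // the tri-state isDense field in the patch.
    private Boolean isDense;

    // Chainable setter named after the field, as in the patch:
    // returns this so calls can be chained with other setters.
    public MetaDataSketch isDense(Boolean prop) { isDense = prop; return this; }

    // Getter added by the patch; exposes the raw tri-state value.
    public Boolean getIsDense() { return isDense; }

    public static void main(String[] args) {
        MetaDataSketch m = new MetaDataSketch().isDense(Boolean.TRUE);
        System.out.println(m.getIsDense());            // true
        System.out.println(new MetaDataSketch().getIsDense()); // null
    }
}
```

The rename keeps the builder-style call sites (`.isDense(oldCFMD.isDense)`) reading like the other one-liner accessors in the class.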
[jira] [Commented] (CASSANDRA-6237) Allow range deletions in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121759#comment-14121759 ]

Tyler Hobbs commented on CASSANDRA-6237:
----------------------------------------

Okay, waiting on those two seems reasonable to me.

Allow range deletions in CQL

    Key: CASSANDRA-6237
    URL: https://issues.apache.org/jira/browse/CASSANDRA-6237
    Project: Cassandra
    Issue Type: Improvement
    Reporter: Sylvain Lebresne
    Assignee: Benjamin Lerer
    Priority: Minor
    Labels: cql, docs
    Fix For: 3.0
    Attachments: CASSANDRA-6237.txt

We use RangeTombstones internally in a number of places, but we could expose them more directly too. Typically, given a table like:
{noformat}
CREATE TABLE events (
    id text,
    created_at timestamp,
    content text,
    PRIMARY KEY (id, created_at)
)
{noformat}
we could allow queries like:
{noformat}
DELETE FROM events WHERE id='someEvent' AND created_at <= 'Jan 3, 2013';
{noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
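A range deletion marks an entire contiguous slice of clustering keys deleted with a single RangeTombstone rather than one tombstone per row. As a toy model of that semantics (a `TreeMap` standing in for one partition's clustering-ordered cells; this is an illustration, not Cassandra code):

```java
import java.util.TreeMap;

public class RangeDeleteSketch {
    public static void main(String[] args) {
        // One partition, keyed by the clustering column (created_at,
        // modeled as a long) -> content. A range tombstone conceptually
        // clears a contiguous slice of this sorted structure at once.
        TreeMap<Long, String> partition = new TreeMap<>();
        partition.put(1L, "a");
        partition.put(5L, "b");
        partition.put(9L, "c");

        // DELETE ... WHERE created_at <= 5  ~  clear the inclusive head slice.
        partition.headMap(5L, true).clear();

        System.out.println(partition); // {9=c}
    }
}
```

The point of the CQL feature is that the server records only the slice bounds, instead of materializing a tombstone for every row the slice covers.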
[jira] [Commented] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121762#comment-14121762 ]

Pavel Yaskevich commented on CASSANDRA-7594:
--------------------------------------------

[~philipthompson] Got it, I think I have an idea about what is going on: each selector is being too greedy with the shared pool and is trying to use all of the core threads for its workers. I'm attaching a new thrift-server jar, can you please try it out?

Disruptor Thrift server worker thread pool not adjustable
---------------------------------------------------------

    Key: CASSANDRA-7594
    URL: https://issues.apache.org/jira/browse/CASSANDRA-7594
    Project: Cassandra
    Issue Type: Bug
    Reporter: Rick Branson
    Assignee: Pavel Yaskevich
    Fix For: 2.0.11
    Attachments: CASSANDRA-7594.patch, disruptor-thrift-server-0.3.6-SNAPSHOT.jar, jstack.txt

For the THsHaDisruptorServer, there may not be enough threads to run blocking StorageProxy methods. The current number of worker threads is hardcoded at 2 per selector, so 2 * numAvailableProcessors(), or 64 threads on a 16-core hyperthreaded machine. StorageProxy methods block these threads, so this puts an upper bound on the throughput if hsha is enabled. If operations take 10ms on average, the node can only handle a maximum of 6,400 operations per second. This is a regression from hsha on 1.2.x, where the thread pool was tunable using rpc_min_threads and rpc_max_threads.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
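The ceiling quoted in the ticket (64 worker threads, 10 ms average operation latency, at most 6,400 ops/s) is just threads divided by per-request service time when every request blocks a worker for its full duration. A minimal sketch of the arithmetic (class and method names are illustrative, not from the Cassandra codebase):

```java
public class HshaThroughputBound {
    // Upper bound on ops/sec when each request occupies one worker thread
    // for its whole latency: workers * (1000 ms / latency ms).
    static long maxThroughput(int workerThreads, long avgLatencyMillis) {
        return workerThreads * 1000L / avgLatencyMillis;
    }

    public static void main(String[] args) {
        int availableProcessors = 32;          // 16 physical cores, hyperthreaded
        int workers = 2 * availableProcessors; // hardcoded 2 workers per selector
        System.out.println(maxThroughput(workers, 10)); // 6400
    }
}
```

Raising latency or lowering the (untunable) thread count moves the bound directly, which is why the 1.2.x rpc_min_threads/rpc_max_threads knobs mattered.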
[jira] [Updated] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pavel Yaskevich updated CASSANDRA-7594:
---------------------------------------
    Attachment: thrift-server-0.3.7-SNAPSHOT.jar

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[1/2] git commit: Expose CFMetaData#isDense()
Repository: cassandra
Updated Branches: refs/heads/cassandra-2.1.0 1908ae3bc -> 37c6b2f22

Expose CFMetaData#isDense()

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77b036a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77b036a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77b036a8

Branch: refs/heads/cassandra-2.1.0
Commit: 77b036a876d64d3cd12902d90db2f652e3e33d88
Parents: aae9b91
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Sep 4 14:45:59 2014 -0400
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Sep 4 14:45:59 2014 -0400

 .../org/apache/cassandra/config/CFMetaData.java    | 17 +++--
 .../cql3/statements/CreateTableStatement.java      |  2 +-
 2 files changed, 12 insertions(+), 7 deletions(-)
[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
	src/java/org/apache/cassandra/config/CFMetaData.java
	src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37c6b2f2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37c6b2f2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37c6b2f2

Branch: refs/heads/cassandra-2.1.0
Commit: 37c6b2f2260fa7447d3fea9f0f3c2b4102028e91
Parents: 1908ae3 77b036a
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Sep 4 15:04:37 2014 -0400
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Sep 4 15:04:37 2014 -0400

 .../org/apache/cassandra/config/CFMetaData.java   | 21
 .../cql3/statements/CreateTableStatement.java     |  2 +-
 2 files changed, 14 insertions(+), 9 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/37c6b2f2/src/java/org/apache/cassandra/config/CFMetaData.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java
index 7466b56,e726957..c7b48e3
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -483,26 -449,27 +483,26 @@@ public final class CFMetaDat
      public CFMetaData memtableFlushPeriod(int prop) {memtableFlushPeriod = prop; return this;}
      public CFMetaData defaultTimeToLive(int prop) {defaultTimeToLive = prop; return this;}
      public CFMetaData speculativeRetry(SpeculativeRetry prop) {speculativeRetry = prop; return this;}
 -    public CFMetaData populateIoCacheOnFlush(boolean prop) {populateIoCacheOnFlush = prop; return this;}
 -    public CFMetaData droppedColumns(Map<ByteBuffer, Long> cols) {droppedColumns = cols; return this;}
 +    public CFMetaData droppedColumns(Map<ColumnIdentifier, Long> cols) {droppedColumns = cols; return this;}
      public CFMetaData triggers(Map<String, TriggerDefinition> prop) {triggers = prop; return this;}
-     public CFMetaData setDense(Boolean prop) {isDense = prop; return this;}
+     public CFMetaData isDense(Boolean prop) {isDense = prop; return this;}

 -    public CFMetaData(String keyspace, String name, ColumnFamilyType type, AbstractType<?> comp, AbstractType<?> subcc)
 -    {
 -        this(keyspace, name, type, makeComparator(type, comp, subcc));
 -    }
 -
 -    public CFMetaData(String keyspace, String name, ColumnFamilyType type, AbstractType<?> comp)
 +    /**
 +     * Create new ColumnFamily metadata with generated random ID.
 +     * When loading from existing schema, use CFMetaData
 +     *
 +     * @param keyspace keyspace name
 +     * @param name column family name
 +     * @param comp default comparator
 +     */
 +    public CFMetaData(String keyspace, String name, ColumnFamilyType type, CellNameType comp)
      {
 -        this(keyspace, name, type, comp, getId(keyspace, name));
 +        this(keyspace, name, type, comp, UUIDGen.getTimeUUID());
      }

 -    @VisibleForTesting
 -    CFMetaData(String keyspace, String name, ColumnFamilyType type, AbstractType<?> comp, UUID id)
 +    private CFMetaData(String keyspace, String name, ColumnFamilyType type, CellNameType comp, UUID id)
      {
 -        // (subcc may be null for non-supercolumns)
 -        // (comp may also be null for custom indexes, which is kind of broken if you ask me)
 -
 +        cfId = id;
          ksName = keyspace;
          cfName = name;
          cfType = type;
@@@ -657,13 -599,13 +657,13 @@@
               .bloomFilterFpChance(oldCFMD.bloomFilterFpChance)
               .caching(oldCFMD.caching)
               .defaultTimeToLive(oldCFMD.defaultTimeToLive)
 -             .indexInterval(oldCFMD.indexInterval)
 +             .minIndexInterval(oldCFMD.minIndexInterval)
 +             .maxIndexInterval(oldCFMD.maxIndexInterval)
               .speculativeRetry(oldCFMD.speculativeRetry)
               .memtableFlushPeriod(oldCFMD.memtableFlushPeriod)
 -             .populateIoCacheOnFlush(oldCFMD.populateIoCacheOnFlush)
               .droppedColumns(new HashMap<>(oldCFMD.droppedColumns))
               .triggers(new HashMap<>(oldCFMD.triggers))
-              .setDense(oldCFMD.isDense)
+              .isDense(oldCFMD.isDense)
               .rebuild();
      }

@@@ -880,47 -786,55 +880,52 @@@
          return droppedColumns;
      }

+     public Boolean getIsDense()
+     {
+         return isDense;
+     }
+
 -    public boolean equals(Object obj)
 +    @Override
 +    public boolean equals(Object o)
      {
 -        if (obj == this)
 -        {
 +
[3/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76508213 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76508213 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76508213 Branch: refs/heads/cassandra-2.1 Commit: 76508213df7fe8e22a483b2660ab589c8559cc44 Parents: 2001c25 37c6b2f Author: Aleksey Yeschenko alek...@apache.org Authored: Thu Sep 4 15:05:35 2014 -0400 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Sep 4 15:05:35 2014 -0400 -- .../org/apache/cassandra/config/CFMetaData.java | 21 .../cql3/statements/CreateTableStatement.java | 2 +- 2 files changed, 14 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/76508213/src/java/org/apache/cassandra/config/CFMetaData.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/76508213/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java --
[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
	src/java/org/apache/cassandra/config/CFMetaData.java
	src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37c6b2f2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37c6b2f2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37c6b2f2

Branch: refs/heads/cassandra-2.1
Commit: 37c6b2f2260fa7447d3fea9f0f3c2b4102028e91
Parents: 1908ae3 77b036a
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Sep 4 15:04:37 2014 -0400
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Sep 4 15:04:37 2014 -0400

 .../org/apache/cassandra/config/CFMetaData.java   | 21
 .../cql3/statements/CreateTableStatement.java     |  2 +-
 2 files changed, 14 insertions(+), 9 deletions(-)
[1/3] git commit: Expose CFMetaData#isDense()
Repository: cassandra
Updated Branches: refs/heads/cassandra-2.1 2001c2577 -> 76508213d

Expose CFMetaData#isDense()

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77b036a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77b036a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77b036a8

Branch: refs/heads/cassandra-2.1
Commit: 77b036a876d64d3cd12902d90db2f652e3e33d88
Parents: aae9b91
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Sep 4 14:45:59 2014 -0400
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Sep 4 14:45:59 2014 -0400

 .../org/apache/cassandra/config/CFMetaData.java    | 17 +++--
 .../cql3/statements/CreateTableStatement.java      |  2 +-
 2 files changed, 12 insertions(+), 7 deletions(-)
[3/4] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76508213 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76508213 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76508213 Branch: refs/heads/trunk Commit: 76508213df7fe8e22a483b2660ab589c8559cc44 Parents: 2001c25 37c6b2f Author: Aleksey Yeschenko alek...@apache.org Authored: Thu Sep 4 15:05:35 2014 -0400 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Sep 4 15:05:35 2014 -0400 -- .../org/apache/cassandra/config/CFMetaData.java | 21 .../cql3/statements/CreateTableStatement.java | 2 +- 2 files changed, 14 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/76508213/src/java/org/apache/cassandra/config/CFMetaData.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/76508213/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java --
[1/4] git commit: Expose CFMetaData#isDense()
Repository: cassandra
Updated Branches: refs/heads/trunk 9b98fecc6 -> befd0b900

Expose CFMetaData#isDense()

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77b036a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77b036a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77b036a8

Branch: refs/heads/trunk
Commit: 77b036a876d64d3cd12902d90db2f652e3e33d88
Parents: aae9b91
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Sep 4 14:45:59 2014 -0400
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Sep 4 14:45:59 2014 -0400

 .../org/apache/cassandra/config/CFMetaData.java    | 17 +++--
 .../cql3/statements/CreateTableStatement.java      |  2 +-
 2 files changed, 12 insertions(+), 7 deletions(-)
[4/4] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/befd0b90 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/befd0b90 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/befd0b90 Branch: refs/heads/trunk Commit: befd0b900c2cf894191fe5913aab6603a6b80f4f Parents: 9b98fec 7650821 Author: Aleksey Yeschenko alek...@apache.org Authored: Thu Sep 4 15:05:52 2014 -0400 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Sep 4 15:05:52 2014 -0400 -- .../org/apache/cassandra/config/CFMetaData.java | 21 .../cql3/statements/CreateTableStatement.java | 2 +- 2 files changed, 14 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/befd0b90/src/java/org/apache/cassandra/config/CFMetaData.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/befd0b90/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java --
[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Conflicts: src/java/org/apache/cassandra/config/CFMetaData.java src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37c6b2f2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37c6b2f2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37c6b2f2 Branch: refs/heads/trunk Commit: 37c6b2f2260fa7447d3fea9f0f3c2b4102028e91 Parents: 1908ae3 77b036a Author: Aleksey Yeschenko alek...@apache.org Authored: Thu Sep 4 15:04:37 2014 -0400 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Sep 4 15:04:37 2014 -0400 -- .../org/apache/cassandra/config/CFMetaData.java | 21 .../cql3/statements/CreateTableStatement.java | 2 +- 2 files changed, 14 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/37c6b2f2/src/java/org/apache/cassandra/config/CFMetaData.java --
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java index 7466b56,e726957..c7b48e3
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -483,26 -449,27 +483,26 @@@ public final class CFMetaData
     public CFMetaData memtableFlushPeriod(int prop) {memtableFlushPeriod = prop; return this;}
     public CFMetaData defaultTimeToLive(int prop) {defaultTimeToLive = prop; return this;}
     public CFMetaData speculativeRetry(SpeculativeRetry prop) {speculativeRetry = prop; return this;}
-    public CFMetaData populateIoCacheOnFlush(boolean prop) {populateIoCacheOnFlush = prop; return this;}
-    public CFMetaData droppedColumns(Map<ByteBuffer, Long> cols) {droppedColumns = cols; return this;}
+    public CFMetaData droppedColumns(Map<ColumnIdentifier, Long> cols) {droppedColumns = cols; return this;}
     public CFMetaData triggers(Map<String, TriggerDefinition> prop) {triggers = prop; return this;}
-    public CFMetaData setDense(Boolean prop) {isDense = prop; return this;}
+    public CFMetaData isDense(Boolean prop) {isDense = prop; return this;}

-    public CFMetaData(String keyspace, String name, ColumnFamilyType type, AbstractType<?> comp, AbstractType<?> subcc)
-    {
-        this(keyspace, name, type, makeComparator(type, comp, subcc));
-    }
-
-    public CFMetaData(String keyspace, String name, ColumnFamilyType type, AbstractType<?> comp)
+    /**
+     * Create new ColumnFamily metadata with generated random ID.
+     * When loading from existing schema, use CFMetaData
+     *
+     * @param keyspace keyspace name
+     * @param name column family name
+     * @param comp default comparator
+     */
+    public CFMetaData(String keyspace, String name, ColumnFamilyType type, CellNameType comp)
     {
-        this(keyspace, name, type, comp, getId(keyspace, name));
+        this(keyspace, name, type, comp, UUIDGen.getTimeUUID());
     }

-    @VisibleForTesting
-    CFMetaData(String keyspace, String name, ColumnFamilyType type, AbstractType<?> comp, UUID id)
+    private CFMetaData(String keyspace, String name, ColumnFamilyType type, CellNameType comp, UUID id)
     {
-        // (subcc may be null for non-supercolumns)
-        // (comp may also be null for custom indexes, which is kind of broken if you ask me)
-
+        cfId = id;
         ksName = keyspace;
         cfName = name;
         cfType = type;
@@@ -657,13 -599,13 +657,13 @@@
              .bloomFilterFpChance(oldCFMD.bloomFilterFpChance)
              .caching(oldCFMD.caching)
              .defaultTimeToLive(oldCFMD.defaultTimeToLive)
-             .indexInterval(oldCFMD.indexInterval)
+             .minIndexInterval(oldCFMD.minIndexInterval)
+             .maxIndexInterval(oldCFMD.maxIndexInterval)
              .speculativeRetry(oldCFMD.speculativeRetry)
              .memtableFlushPeriod(oldCFMD.memtableFlushPeriod)
-             .populateIoCacheOnFlush(oldCFMD.populateIoCacheOnFlush)
              .droppedColumns(new HashMap<>(oldCFMD.droppedColumns))
              .triggers(new HashMap<>(oldCFMD.triggers))
-             .setDense(oldCFMD.isDense)
+             .isDense(oldCFMD.isDense)
              .rebuild();
     }
@@@ -880,47 -786,55 +880,52 @@@
         return droppedColumns;
     }

+    public Boolean getIsDense()
+    {
+        return isDense;
+    }
+
-    public boolean equals(Object obj)
+    @Override
+    public boolean equals(Object o)
     {
-        if (obj == this)
-        {
+        if (this
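The diff above renames {{setDense}} to {{isDense}} among CFMetaData's chained option setters, where every setter mutates a field and returns {{this}} so options can be composed fluently before {{rebuild()}}. A minimal sketch of that pattern, using a hypothetical {{TableMeta}} class and fields rather than the real CFMetaData:

```java
// Minimal sketch of the fluent-setter pattern used by CFMetaData above.
// "TableMeta" and its fields are hypothetical stand-ins, not Cassandra classes.
public class TableMeta {
    private int memtableFlushPeriod;
    private int defaultTimeToLive;
    private boolean isDense;

    // Each setter mutates one field and returns `this`, so options chain.
    public TableMeta memtableFlushPeriod(int prop) { memtableFlushPeriod = prop; return this; }
    public TableMeta defaultTimeToLive(int prop)  { defaultTimeToLive = prop;  return this; }
    public TableMeta isDense(boolean prop)        { isDense = prop;            return this; }

    public boolean getIsDense() { return isDense; }

    public static void main(String[] args) {
        TableMeta m = new TableMeta()
            .memtableFlushPeriod(3600)
            .defaultTimeToLive(0)
            .isDense(true);
        System.out.println(m.getIsDense()); // true
    }
}
```

Unlike an immutable builder, this style mutates the object in place; it trades safety for brevity when copying many options from an old instance, as the {{oldCFMD}} chain in the diff does.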
[jira] [Commented] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121906#comment-14121906 ] Philip Thompson commented on CASSANDRA-7594: [~xedin] Using the attached thrift-server.jar I see the following stack trace: {code} javax.management.InstanceAlreadyExistsException: org.apache.cassandra.RPC-THREAD-POOL:type=RPC-Thread at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutor.init(JMXEnabledThreadPoolExecutor.java:84) at org.apache.cassandra.thrift.THsHaDisruptorServer$Factory.buildTServer(THsHaDisruptorServer.java:86) at org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:55) at org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.init(ThriftServer.java:131) at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:58) at org.apache.cassandra.service.StorageService.startRPCServer(StorageService.java:309) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112) at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46) at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487) at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97) at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328) at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420) at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) at sun.rmi.transport.Transport$1.run(Transport.java:177) at sun.rmi.transport.Transport$1.run(Transport.java:174) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Transport.java:173) at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} The test no longer hangs when creating the first connection. After disabling and re-enabling thrift, it hangs when attempting to connect again. This is
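The {{InstanceAlreadyExistsException}} in the trace above means the {{RPC-Thread}} pool MBean registered by the first server instance was never unregistered, so the restarted server's registration collides under the same ObjectName. A hedged sketch of one common guard (unregister any stale name before registering), using only the standard javax.management API; the {{Demo}} MBean and the {{demo:type=Demo}} name are illustrative, not Cassandra's:

```java
import javax.management.*;
import java.lang.management.ManagementFactory;

public class SafeRegister {
    // Standard-MBean convention: class Demo must implement an interface named DemoMBean.
    public interface DemoMBean { int getValue(); }
    public static class Demo implements DemoMBean { public int getValue() { return 42; } }

    // Registering the same ObjectName twice throws InstanceAlreadyExistsException,
    // matching the trace above. Dropping a leftover registration first avoids it.
    public static void register(MBeanServer mbs, ObjectName name, Object bean) throws JMException {
        if (mbs.isRegistered(name))
            mbs.unregisterMBean(name); // stale entry from the previous server instance
        mbs.registerMBean(bean, name);
    }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Demo");
        register(mbs, name, new Demo());
        register(mbs, name, new Demo()); // would throw without the isRegistered guard
        System.out.println(mbs.isRegistered(name)); // true
    }
}
```

The alternative fix — unregistering the MBean when the pool shuts down — keeps the same invariant from the other side.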
[jira] [Commented] (CASSANDRA-7828) New node cannot be joined if a value in composite type column is dropped (description updated)
[ https://issues.apache.org/jira/browse/CASSANDRA-7828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121934#comment-14121934 ] Igor Zubchenok commented on CASSANDRA-7828: --- Could anyone comment on this? Thanks New node cannot be joined if a value in composite type column is dropped (description updated) -- Key: CASSANDRA-7828 URL: https://issues.apache.org/jira/browse/CASSANDRA-7828 Project: Cassandra Issue Type: Bug Components: Core Reporter: Igor Zubchenok I get a *RuntimeException* at new node system.log on bootstrapping a new DC: {code:title=system.out - RuntimeException caused by IllegalArgumentException in Buffer.limit|borderStyle=solid} INFO [NonPeriodicTasks:1] 2014-08-26 15:43:01,030 SecondaryIndexManager.java (line 137) Submitting index build of [myColumnFamily.myColumnFamily_myColumn] for data in SSTableReader(path='/var/lib/cassandra/data/testbug/myColumnFamily/testbug-myColumnFamily-jb-1-Data.db') ERROR [CompactionExecutor:2] 2014-08-26 15:43:01,035 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:2,1,main] java.lang.IllegalArgumentException at java.nio.Buffer.limit(Buffer.java:267) at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:587) at org.apache.cassandra.utils.ByteBufferUtil.readBytesWithShortLength(ByteBufferUtil.java:596) at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:61) at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:36) at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:44) at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85) at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36) at java.util.concurrent.ConcurrentSkipListMap.findPredecessor(ConcurrentSkipListMap.java:727) at java.util.concurrent.ConcurrentSkipListMap.findNode(ConcurrentSkipListMap.java:789) at 
java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:828) at java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1626) at org.apache.cassandra.db.Memtable.resolve(Memtable.java:215) at org.apache.cassandra.db.Memtable.put(Memtable.java:173) at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:900) at org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.insert(AbstractSimplePerColumnSecondaryIndex.java:107) at org.apache.cassandra.db.index.SecondaryIndexManager.indexRow(SecondaryIndexManager.java:441) at org.apache.cassandra.db.Keyspace.indexRow(Keyspace.java:413) at org.apache.cassandra.db.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:62) at org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:834) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) ERROR [NonPeriodicTasks:1] 2014-08-26 15:43:01,035 CassandraDaemon.java (line 199) Exception in thread Thread[NonPeriodicTasks:1,5,main] java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:413) at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:142) at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:113) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException at java.util.concurrent.FutureTask.report(FutureTask.java:122) at
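The bare {{IllegalArgumentException}} from {{Buffer.limit}} in the trace above is the JDK's generic failure mode when code computes a limit past the buffer's capacity — plausible here if a length prefix inside the composite claims more bytes than remain after a component was dropped. A small sketch of just that JDK behavior (not Cassandra's {{readBytes}} itself):

```java
import java.nio.ByteBuffer;

public class LimitDemo {
    // ByteBufferUtil.readBytes in the trace reads a short length prefix and then
    // sets the buffer's limit to cover that many bytes. If the length claims more
    // bytes than the buffer holds, Buffer.limit() rejects the out-of-range value
    // with the bare IllegalArgumentException seen at Buffer.java:267.
    public static boolean limitPastCapacityThrows(int capacity, int newLimit) {
        ByteBuffer bb = ByteBuffer.allocate(capacity);
        try {
            bb.limit(newLimit);
            return false;
        } catch (IllegalArgumentException e) {
            return true; // newLimit > capacity (or < 0) is rejected
        }
    }

    public static void main(String[] args) {
        System.out.println(limitPastCapacityThrows(4, 10)); // true
    }
}
```

Because the exception carries no message, the trace alone cannot say which component's length was wrong — only that the serialized bytes and the comparator's expectations disagree.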
[1/3] git commit: Ninja: adjusted cqlsh tests for CASSANDRA-7857
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 76508213d -> c54126bb4 refs/heads/trunk befd0b900 -> 2f3fab14f Ninja: adjusted cqlsh tests for CASSANDRA-7857 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c54126bb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c54126bb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c54126bb Branch: refs/heads/cassandra-2.1 Commit: c54126bb47c90bdb67810d71f61e371088778bca Parents: 7650821 Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:36:13 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:36:13 2014 -0700 -- pylib/cqlshlib/test/test_keyspace_init.cql | 8 1 file changed, 4 insertions(+), 4 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c54126bb/pylib/cqlshlib/test/test_keyspace_init.cql -- diff --git a/pylib/cqlshlib/test/test_keyspace_init.cql b/pylib/cqlshlib/test/test_keyspace_init.cql index b8d600c..cd5ac75 100644 --- a/pylib/cqlshlib/test/test_keyspace_init.cql +++ b/pylib/cqlshlib/test/test_keyspace_init.cql @@ -196,8 +196,8 @@ CREATE TYPE phone_number ( CREATE TABLE users ( login text PRIMARY KEY, name text, -addresses set<address>, -phone_numbers set<phone_number> +addresses set<frozen<address>>, +phone_numbers set<frozen<phone_number>> ); insert into users (login, name, addresses, phone_numbers) @@ -229,8 +229,8 @@ CREATE TYPE tags ( CREATE TABLE songs ( title text PRIMARY KEY, band text, -info band_info_type, -tags tags +info frozen<band_info_type>, +tags frozen<tags> ); insert into songs (title, band, info, tags)
[2/3] git commit: Ninja: adjusted cqlsh tests for CASSANDRA-7857
Ninja: adjusted cqlsh tests for CASSANDRA-7857 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c54126bb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c54126bb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c54126bb Branch: refs/heads/trunk Commit: c54126bb47c90bdb67810d71f61e371088778bca Parents: 7650821 Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:36:13 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:36:13 2014 -0700 -- pylib/cqlshlib/test/test_keyspace_init.cql | 8 1 file changed, 4 insertions(+), 4 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c54126bb/pylib/cqlshlib/test/test_keyspace_init.cql -- diff --git a/pylib/cqlshlib/test/test_keyspace_init.cql b/pylib/cqlshlib/test/test_keyspace_init.cql index b8d600c..cd5ac75 100644 --- a/pylib/cqlshlib/test/test_keyspace_init.cql +++ b/pylib/cqlshlib/test/test_keyspace_init.cql @@ -196,8 +196,8 @@ CREATE TYPE phone_number ( CREATE TABLE users ( login text PRIMARY KEY, name text, -addresses set<address>, -phone_numbers set<phone_number> +addresses set<frozen<address>>, +phone_numbers set<frozen<phone_number>> ); insert into users (login, name, addresses, phone_numbers) @@ -229,8 +229,8 @@ CREATE TYPE tags ( CREATE TABLE songs ( title text PRIMARY KEY, band text, -info band_info_type, -tags tags +info frozen<band_info_type>, +tags frozen<tags> ); insert into songs (title, band, info, tags)
[3/3] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2f3fab14 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2f3fab14 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2f3fab14 Branch: refs/heads/trunk Commit: 2f3fab14f055fe64d5091171d2e7d54899fe63c5 Parents: befd0b9 c54126b Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:36:36 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:36:36 2014 -0700 -- pylib/cqlshlib/test/test_keyspace_init.cql | 8 1 file changed, 4 insertions(+), 4 deletions(-) --
[jira] [Updated] (CASSANDRA-6572) Workload recording / playback
[ https://issues.apache.org/jira/browse/CASSANDRA-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6572: -- Assignee: Carl Yeksigian Workload recording / playback - Key: CASSANDRA-6572 URL: https://issues.apache.org/jira/browse/CASSANDRA-6572 Project: Cassandra Issue Type: New Feature Components: Core, Tools Reporter: Jonathan Ellis Assignee: Carl Yeksigian Fix For: 2.1.1 Attachments: 6572-trunk.diff Write sample mode gets us part way to testing new versions against a real world workload, but we need an easy way to test the query side as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[5/6] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd396ec8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd396ec8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd396ec8 Branch: refs/heads/cassandra-2.1 Commit: bd396ec8acb74436fd84a9cf48542c49e08a17a6 Parents: c54126b dd4fbbc Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:51:39 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:51:39 2014 -0700 -- --
[3/6] git commit: Ninja: cqlsh test fixed for 2.1.0
Ninja: cqlsh test fixed for 2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd4fbbcd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd4fbbcd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd4fbbcd Branch: refs/heads/trunk Commit: dd4fbbcd8cce48d763b4ec93c97b09d80ebd532e Parents: 37c6b2f Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:31:50 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:49:43 2014 -0700 -- pylib/cqlshlib/test/test_cqlsh_output.py | 77 + pylib/cqlshlib/test/test_keyspace_init.cql | 17 ++ 2 files changed, 20 insertions(+), 74 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd4fbbcd/pylib/cqlshlib/test/test_cqlsh_output.py -- diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py index 6fb4f41..072dd23 100644 --- a/pylib/cqlshlib/test/test_cqlsh_output.py +++ b/pylib/cqlshlib/test/test_cqlsh_output.py @@ -101,25 +101,22 @@ class TestCqlshOutput(BaseTestCase): self.assertNoHasColors(c.read_to_next_prompt()) def test_no_prompt_or_colors_output(self): -# CQL queries and number of lines expected in output: -queries = (('select * from has_all_types limit 1;', 7), - ('select * from has_value_encoding_errors limit 1;', 8)) for termname in ('', 'dumb', 'vt100', 'xterm'): cqlshlog.debug('TERM=%r' % termname) -for cql, lines_expected in queries: -output, result = testcall_cqlsh(prompt=None, env={'TERM': termname}, -tty=False, input=cql + '\n') -output = output.splitlines() -for line in output: -self.assertNoHasColors(line) -self.assertNotRegexpMatches(line, r'^cqlsh\S*') -self.assertEqual(len(output), lines_expected, - msg='output: %r' % '\n'.join(output)) -self.assertEqual(output[0], '') -self.assertNicelyFormattedTableHeader(output[1]) -self.assertNicelyFormattedTableRule(output[2]) -self.assertNicelyFormattedTableData(output[3]) 
-self.assertEqual(output[4].strip(), '') +query = 'select * from has_all_types limit 1;' +output, result = testcall_cqlsh(prompt=None, env={'TERM': termname}, +tty=False, input=query + '\n') +output = output.splitlines() +for line in output: +self.assertNoHasColors(line) +self.assertNotRegexpMatches(line, r'^cqlsh\S*') +self.assertTrue(6 <= len(output) <= 8, + msg='output: %r' % '\n'.join(output)) +self.assertEqual(output[0], '') +self.assertNicelyFormattedTableHeader(output[1]) +self.assertNicelyFormattedTableRule(output[2]) +self.assertNicelyFormattedTableData(output[3]) +self.assertEqual(output[4].strip(), '') def test_color_output(self): for termname in ('xterm', 'unknown-garbage'): @@ -449,13 +446,11 @@ class TestCqlshOutput(BaseTestCase): G YYmmY 2 | \x00\x01\x02\x03\x04\x05control chars\x06\x07 G Y - 3 | \xfe\xffbyte order mark - G YYY 4 | fake special chars\x00\n G -(5 rows) +(4 rows) ), ), cqlver=cqlsh.DEFAULT_CQLVER) @@ -525,46 +520,6 @@ class TestCqlshOutput(BaseTestCase): # explicitly generate an exception on the deserialization of type X.. pass -def test_colval_decoding_errors(self): -self.assertCqlverQueriesGiveColoredOutput(( -(select * from has_value_encoding_errors;, r - pkey | utf8col - MMM ---+ - -A | '\x00\xff\x00\xff' -Y RR - - -(1 rows) - - - -Failed to decode value '\x00\xff\x00\xff' (for column 'utf8col') as text: 'utf8' codec can't decode byte 0xff in position 1: invalid start byte -
[4/6] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd396ec8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd396ec8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd396ec8 Branch: refs/heads/trunk Commit: bd396ec8acb74436fd84a9cf48542c49e08a17a6 Parents: c54126b dd4fbbc Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:51:39 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:51:39 2014 -0700 -- --
[6/6] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b74342a8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b74342a8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b74342a8 Branch: refs/heads/trunk Commit: b74342a8aaf775357b5087db35e8eefe2fbd4ed0 Parents: 2f3fab1 bd396ec Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:51:53 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:51:53 2014 -0700 -- --
[1/6] git commit: Ninja: cqlsh test fixed for 2.1.0
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 c54126bb4 -> bd396ec8a refs/heads/cassandra-2.1.0 37c6b2f22 -> dd4fbbcd8 refs/heads/trunk 2f3fab14f -> b74342a8a Ninja: cqlsh test fixed for 2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd4fbbcd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd4fbbcd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd4fbbcd Branch: refs/heads/cassandra-2.1 Commit: dd4fbbcd8cce48d763b4ec93c97b09d80ebd532e Parents: 37c6b2f Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:31:50 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:49:43 2014 -0700 -- pylib/cqlshlib/test/test_cqlsh_output.py | 77 + pylib/cqlshlib/test/test_keyspace_init.cql | 17 ++ 2 files changed, 20 insertions(+), 74 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd4fbbcd/pylib/cqlshlib/test/test_cqlsh_output.py -- diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py index 6fb4f41..072dd23 100644 --- a/pylib/cqlshlib/test/test_cqlsh_output.py +++ b/pylib/cqlshlib/test/test_cqlsh_output.py @@ -101,25 +101,22 @@ class TestCqlshOutput(BaseTestCase): self.assertNoHasColors(c.read_to_next_prompt()) def test_no_prompt_or_colors_output(self): -# CQL queries and number of lines expected in output: -queries = (('select * from has_all_types limit 1;', 7), - ('select * from has_value_encoding_errors limit 1;', 8)) for termname in ('', 'dumb', 'vt100', 'xterm'): cqlshlog.debug('TERM=%r' % termname) -for cql, lines_expected in queries: -output, result = testcall_cqlsh(prompt=None, env={'TERM': termname}, -tty=False, input=cql + '\n') -output = output.splitlines() -for line in output: -self.assertNoHasColors(line) -self.assertNotRegexpMatches(line, r'^cqlsh\S*') -self.assertEqual(len(output), lines_expected, - msg='output: %r' % '\n'.join(output))
-self.assertEqual(output[0], '') -self.assertNicelyFormattedTableHeader(output[1]) -self.assertNicelyFormattedTableRule(output[2]) -self.assertNicelyFormattedTableData(output[3]) -self.assertEqual(output[4].strip(), '') +query = 'select * from has_all_types limit 1;' +output, result = testcall_cqlsh(prompt=None, env={'TERM': termname}, +tty=False, input=query + '\n') +output = output.splitlines() +for line in output: +self.assertNoHasColors(line) +self.assertNotRegexpMatches(line, r'^cqlsh\S*') +self.assertTrue(6 <= len(output) <= 8, + msg='output: %r' % '\n'.join(output)) +self.assertEqual(output[0], '') +self.assertNicelyFormattedTableHeader(output[1]) +self.assertNicelyFormattedTableRule(output[2]) +self.assertNicelyFormattedTableData(output[3]) +self.assertEqual(output[4].strip(), '') def test_color_output(self): for termname in ('xterm', 'unknown-garbage'): @@ -449,13 +446,11 @@ class TestCqlshOutput(BaseTestCase): G YYmmY 2 | \x00\x01\x02\x03\x04\x05control chars\x06\x07 G Y - 3 | \xfe\xffbyte order mark - G YYY 4 | fake special chars\x00\n G -(5 rows) +(4 rows) ), ), cqlver=cqlsh.DEFAULT_CQLVER) @@ -525,46 +520,6 @@ class TestCqlshOutput(BaseTestCase): # explicitly generate an exception on the deserialization of type X.. pass -def test_colval_decoding_errors(self): -self.assertCqlverQueriesGiveColoredOutput(( -(select * from has_value_encoding_errors;, r - pkey | utf8col - MMM ---+ - -A | '\x00\xff\x00\xff' -Y RR - - -(1 rows) - - - -Failed to decode value '\x00\xff\x00\xff' (for column 'utf8col') as text: 'utf8' codec
[2/6] git commit: Ninja: cqlsh test fixed for 2.1.0
Ninja: cqlsh test fixed for 2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd4fbbcd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd4fbbcd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd4fbbcd Branch: refs/heads/cassandra-2.1.0 Commit: dd4fbbcd8cce48d763b4ec93c97b09d80ebd532e Parents: 37c6b2f Author: Mikhail Stepura mish...@apache.org Authored: Thu Sep 4 13:31:50 2014 -0700 Committer: Mikhail Stepura mish...@apache.org Committed: Thu Sep 4 13:49:43 2014 -0700 -- pylib/cqlshlib/test/test_cqlsh_output.py | 77 + pylib/cqlshlib/test/test_keyspace_init.cql | 17 ++ 2 files changed, 20 insertions(+), 74 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd4fbbcd/pylib/cqlshlib/test/test_cqlsh_output.py -- diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py index 6fb4f41..072dd23 100644 --- a/pylib/cqlshlib/test/test_cqlsh_output.py +++ b/pylib/cqlshlib/test/test_cqlsh_output.py @@ -101,25 +101,22 @@ class TestCqlshOutput(BaseTestCase): self.assertNoHasColors(c.read_to_next_prompt()) def test_no_prompt_or_colors_output(self): -# CQL queries and number of lines expected in output: -queries = (('select * from has_all_types limit 1;', 7), - ('select * from has_value_encoding_errors limit 1;', 8)) for termname in ('', 'dumb', 'vt100', 'xterm'): cqlshlog.debug('TERM=%r' % termname) -for cql, lines_expected in queries: -output, result = testcall_cqlsh(prompt=None, env={'TERM': termname}, -tty=False, input=cql + '\n') -output = output.splitlines() -for line in output: -self.assertNoHasColors(line) -self.assertNotRegexpMatches(line, r'^cqlsh\S*') -self.assertEqual(len(output), lines_expected, - msg='output: %r' % '\n'.join(output)) -self.assertEqual(output[0], '') -self.assertNicelyFormattedTableHeader(output[1]) -self.assertNicelyFormattedTableRule(output[2]) 
-self.assertNicelyFormattedTableData(output[3]) -self.assertEqual(output[4].strip(), '') +query = 'select * from has_all_types limit 1;' +output, result = testcall_cqlsh(prompt=None, env={'TERM': termname}, +tty=False, input=query + '\n') +output = output.splitlines() +for line in output: +self.assertNoHasColors(line) +self.assertNotRegexpMatches(line, r'^cqlsh\S*') +self.assertTrue(6 <= len(output) <= 8, + msg='output: %r' % '\n'.join(output)) +self.assertEqual(output[0], '') +self.assertNicelyFormattedTableHeader(output[1]) +self.assertNicelyFormattedTableRule(output[2]) +self.assertNicelyFormattedTableData(output[3]) +self.assertEqual(output[4].strip(), '') def test_color_output(self): for termname in ('xterm', 'unknown-garbage'): @@ -449,13 +446,11 @@ class TestCqlshOutput(BaseTestCase): G YYmmY 2 | \x00\x01\x02\x03\x04\x05control chars\x06\x07 G Y - 3 | \xfe\xffbyte order mark - G YYY 4 | fake special chars\x00\n G -(5 rows) +(4 rows) ), ), cqlver=cqlsh.DEFAULT_CQLVER) @@ -525,46 +520,6 @@ class TestCqlshOutput(BaseTestCase): # explicitly generate an exception on the deserialization of type X.. pass -def test_colval_decoding_errors(self): -self.assertCqlverQueriesGiveColoredOutput(( -(select * from has_value_encoding_errors;, r - pkey | utf8col - MMM ---+ - -A | '\x00\xff\x00\xff' -Y RR - - -(1 rows) - - - -Failed to decode value '\x00\xff\x00\xff' (for column 'utf8col') as text: 'utf8' codec can't decode byte 0xff in position 1: invalid start byte -
[jira] [Commented] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121962#comment-14121962 ] Jonathan Ellis commented on CASSANDRA-7873: --- I think you forgot to include the Accumulator source in that patch. java.util.ConcurrentModificationException seen after selecting from system_auth --- Key: CASSANDRA-7873 URL: https://issues.apache.org/jira/browse/CASSANDRA-7873 Project: Cassandra Issue Type: Bug Environment: OSX and Ubuntu 14.04 Reporter: Philip Thompson Fix For: 3.0 Attachments: 7873.txt The dtest auth_test.py:TestAuth.system_auth_ks_is_alterable_test is failing on trunk only with the following stack trace: {code} Unexpected error in node1 node log: ERROR [Thrift:1] 2014-09-03 15:48:08,389 CustomTThreadPoolServer.java:219 - Error occurred during processing of message. java.util.ConcurrentModificationException: null at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) ~[na:1.7.0_65] at java.util.ArrayList$Itr.next(ArrayList.java:831) ~[na:1.7.0_65] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:71) ~[main/:na] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:28) ~[main/:na] at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:110) ~[main/:na] at org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:144) ~[main/:na] at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1228) ~[main/:na] at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1154) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:256) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212) ~[main/:na] at org.apache.cassandra.auth.Auth.selectUser(Auth.java:257) ~[main/:na] at org.apache.cassandra.auth.Auth.isExistingUser(Auth.java:76) ~[main/:na] at 
org.apache.cassandra.service.ClientState.login(ClientState.java:178) ~[main/:na] at org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1486) ~[main/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3579) ~[thrift/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3563) ~[thrift/:na] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201) ~[main/:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65] {code} That exception is thrown when the following query is sent: {code} SELECT strategy_options FROM system.schema_keyspaces WHERE keyspace_name = 'system_auth' {code} The test alters the RF of the system_auth keyspace, then shuts down and restarts the cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
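A fail-fast {{ConcurrentModificationException}} like the one in {{RowDigestResolver.resolve}} above typically means the response list was structurally modified (for example by a late replica response arriving on another thread) while it was being iterated. A minimal single-threaded sketch of the same failure mode, deliberately unrelated to Cassandra's actual classes:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Returns true if mutating the list while iterating it trips ArrayList's
    // fail-fast check, the same checkForComodification seen in the trace above.
    public static boolean mutateDuringIteration() {
        List<Integer> responses = new ArrayList<>(List.of(1, 2, 3));
        try {
            for (Integer r : responses)
                responses.add(r + 10); // structural modification mid-iteration
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(mutateDuringIteration()); // true
    }
}
```

The usual remedies are a concurrency-safe collection, or an append-only structure whose readers see a stable snapshot — which fits the Accumulator class mentioned in the comment above.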
[jira] [Updated] (CASSANDRA-7824) cqlsh completion for triggers
[ https://issues.apache.org/jira/browse/CASSANDRA-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Stepura updated CASSANDRA-7824: --- Attachment: (was: CASSANDRA-2.1-7824.patch) cqlsh completion for triggers - Key: CASSANDRA-7824 URL: https://issues.apache.org/jira/browse/CASSANDRA-7824 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Mikhail Stepura Priority: Minor Labels: cqlsh Fix For: 2.1.1 It appears cqlsh doesn't have completion for the trigger related statements and we should probably add it. Triggers are also not documented by the {{cql.textile}} file. I could swear we already had a ticket for fixing the doc, but can't find it right now, so unless someone remembers which ticket that is, let's maybe handle this here too. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7824) cqlsh completion for triggers
[ https://issues.apache.org/jira/browse/CASSANDRA-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Stepura updated CASSANDRA-7824: --- Attachment: CASSANDRA-2.1-7824-v2.patch v2 of the patch, with updated driver /cc [~thobbs] cqlsh completion for triggers - Key: CASSANDRA-7824 URL: https://issues.apache.org/jira/browse/CASSANDRA-7824 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Mikhail Stepura Priority: Minor Labels: cqlsh Fix For: 2.1.1 Attachments: CASSANDRA-2.1-7824-v2.patch It appears cqlsh doesn't have completion for the trigger related statements and we should probably add it. Triggers are also not documented by the {{cql.textile}} file. I could swear we already had a ticket for fixing the doc, but can't find it right now, so unless someone remembers which ticket that is, let's maybe handle this here too. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Yaskevich updated CASSANDRA-7594: --- Attachment: (was: thrift-server-0.3.7-SNAPSHOT.jar) Disruptor Thrift server worker thread pool not adjustable - Key: CASSANDRA-7594 URL: https://issues.apache.org/jira/browse/CASSANDRA-7594 Project: Cassandra Issue Type: Bug Reporter: Rick Branson Assignee: Pavel Yaskevich Fix For: 2.0.11 Attachments: CASSANDRA-7594.patch, disruptor-thrift-server-0.3.6-SNAPSHOT.jar, jstack.txt For the THsHaDisruptorServer, there may not be enough threads to run blocking StorageProxy methods. The current number of worker threads is hardcoded at 2 per selector, so 2 * numAvailableProcessors(), or 64 threads on a 16-core hyperthreaded machine. StorageProxy methods block these threads, so this puts an upper bound on the throughput if hsha is enabled. If operations take 10ms on average, the node can only handle a maximum of 6,400 operations per second. This is a regression from hsha on 1.2.x, where the thread pool was tunable using rpc_min_threads and rpc_max_threads. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7594) Disruptor Thrift server worker thread pool not adjustable
[ https://issues.apache.org/jira/browse/CASSANDRA-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Yaskevich updated CASSANDRA-7594: --- Attachment: thrift-server-0.3.7-SNAPSHOT.jar [~philipthompson] I've re-uploaded the jar, which now properly shuts down the invocation pool, because it turns out that Disruptor doesn't do it on WorkerPool drainAndHalt. Please give it a try, this should be the last thing. Thank you! Disruptor Thrift server worker thread pool not adjustable - Key: CASSANDRA-7594 URL: https://issues.apache.org/jira/browse/CASSANDRA-7594 Project: Cassandra Issue Type: Bug Reporter: Rick Branson Assignee: Pavel Yaskevich Fix For: 2.0.11 Attachments: CASSANDRA-7594.patch, disruptor-thrift-server-0.3.6-SNAPSHOT.jar, jstack.txt, thrift-server-0.3.7-SNAPSHOT.jar For the THsHaDisruptorServer, there may not be enough threads to run blocking StorageProxy methods. The current number of worker threads is hardcoded at 2 per selector, so 2 * numAvailableProcessors(), or 64 threads on a 16-core hyperthreaded machine. StorageProxy methods block these threads, so this puts an upper bound on the throughput if hsha is enabled. If operations take 10ms on average, the node can only handle a maximum of 6,400 operations per second. This is a regression from hsha on 1.2.x, where the thread pool was tunable using rpc_min_threads and rpc_max_threads. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-7879) Internal c* datatypes exposed via jmx and method signatures changed in 2.1
Philip S Doctor created CASSANDRA-7879: -- Summary: Internal c* datatypes exposed via jmx and method signatures changed in 2.1 Key: CASSANDRA-7879 URL: https://issues.apache.org/jira/browse/CASSANDRA-7879 Project: Cassandra Issue Type: Bug Reporter: Philip S Doctor In c* 2.0 the StorageService jmx has this signature: {noformat} public void forceKeyspaceCleanup {noformat} but in 2.1 RC6 it is this {noformat} public CompactionManager.AllSSTableOpStatus forceKeyspaceCleanup {noformat} This breaks unmarshalling for any consumer; the return value should be a native Java type. There may be further instances; the JMX API should probably be audited for similar cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7879) Internal c* datatypes exposed via jmx and method signatures changed in 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121999#comment-14121999 ] Brandon Williams commented on CASSANDRA-7879: - Also we agreed at some point to only return simple types over JMX. There are a few instances of it, but CompactionManager.AllSSTableOpStatus is the only violator I see, and it's a simple enum of success/fail so we really don't need it, at least from a JMX standpoint. Internal c* datatypes exposed via jmx and method signatures changed in 2.1 -- Key: CASSANDRA-7879 URL: https://issues.apache.org/jira/browse/CASSANDRA-7879 Project: Cassandra Issue Type: Bug Components: Core Reporter: Philip S Doctor Fix For: 2.1.0 In c* 2.0 the StorageService jmx has this signature: {noformat} public void forceKeyspaceCleanup {noformat} but in 2.1 RC6 it is this {noformat} public CompactionManager.AllSSTableOpStatus forceKeyspaceCleanup {noformat} This makes any consumer have a problem with the unmarshalling and should be a native java type. There may be further instances, the jmx api should probably be audited for similar instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
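The convention Brandon describes, rich types internally but only simple types at the management boundary, can be sketched as follows. This is an illustrative Python analogue only; the names mirror the ticket but are not actual Cassandra or JMX APIs:

```python
from enum import Enum

# Hypothetical sketch: internal operations may return a rich type (here, the
# status enum the ticket complains about), but the management-facing wrapper
# exposes only a simple value that any generic client can unmarshal.
class AllSSTableOpStatus(Enum):
    SUCCESSFUL = 0
    ABORTED = 1

def force_keyspace_cleanup_internal() -> AllSSTableOpStatus:
    # Stand-in for the internal operation; always succeeds in this sketch.
    return AllSSTableOpStatus.SUCCESSFUL

def force_keyspace_cleanup() -> int:
    # Management boundary: convert the enum to a plain int before returning.
    return force_keyspace_cleanup_internal().value

print(force_keyspace_cleanup())  # → 0
```

The design point is that the conversion costs nothing here, since the enum is a simple success/fail signal anyway, while the rich return type forces every JMX consumer to carry Cassandra's classes on its classpath.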
[jira] [Updated] (CASSANDRA-7879) Internal c* datatypes exposed via jmx and method signatures changed in 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-7879: Component/s: Core Fix Version/s: 2.1.0 Assignee: Marcus Eriksson Internal c* datatypes exposed via jmx and method signatures changed in 2.1 -- Key: CASSANDRA-7879 URL: https://issues.apache.org/jira/browse/CASSANDRA-7879 Project: Cassandra Issue Type: Bug Components: Core Reporter: Philip S Doctor Assignee: Marcus Eriksson Fix For: 2.1.0 In c* 2.0 the StorageService jmx has this signature: {noformat} public void forceKeyspaceCleanup {noformat} but in 2.1 RC6 it is this {noformat} public CompactionManager.AllSSTableOpStatus forceKeyspaceCleanup {noformat} This makes any consumer have a problem with the unmarshalling and should be a native java type. There may be further instances, the jmx api should probably be audited for similar instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7879) Internal c* datatypes exposed via jmx and method signatures changed in 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-7879: Priority: Trivial (was: Major) Internal c* datatypes exposed via jmx and method signatures changed in 2.1 -- Key: CASSANDRA-7879 URL: https://issues.apache.org/jira/browse/CASSANDRA-7879 Project: Cassandra Issue Type: Bug Components: Core Reporter: Philip S Doctor Assignee: Marcus Eriksson Priority: Trivial Fix For: 2.1.0 In c* 2.0 the StorageService jmx has this signature: {noformat} public void forceKeyspaceCleanup {noformat} but in 2.1 RC6 it is this {noformat} public CompactionManager.AllSSTableOpStatus forceKeyspaceCleanup {noformat} This makes any consumer have a problem with the unmarshalling and should be a native java type. There may be further instances, the jmx api should probably be audited for similar instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7828) New node cannot be joined if a value in composite type column is dropped (description updated)
[ https://issues.apache.org/jira/browse/CASSANDRA-7828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122069#comment-14122069 ] Michael Shuler commented on CASSANDRA-7828: --- Is this reproducible on 2.0.10? New node cannot be joined if a value in composite type column is dropped (description updated) -- Key: CASSANDRA-7828 URL: https://issues.apache.org/jira/browse/CASSANDRA-7828 Project: Cassandra Issue Type: Bug Components: Core Reporter: Igor Zubchenok I get a *RuntimeException* at new node system.log on bootstrapping a new DC: {code:title=system.out - RuntimeException caused by IllegalArgumentException in Buffer.limit|borderStyle=solid} INFO [NonPeriodicTasks:1] 2014-08-26 15:43:01,030 SecondaryIndexManager.java (line 137) Submitting index build of [myColumnFamily.myColumnFamily_myColumn] for data in SSTableReader(path='/var/lib/cassandra/data/testbug/myColumnFamily/testbug-myColumnFamily-jb-1-Data.db') ERROR [CompactionExecutor:2] 2014-08-26 15:43:01,035 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:2,1,main] java.lang.IllegalArgumentException at java.nio.Buffer.limit(Buffer.java:267) at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:587) at org.apache.cassandra.utils.ByteBufferUtil.readBytesWithShortLength(ByteBufferUtil.java:596) at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:61) at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:36) at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:44) at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85) at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36) at java.util.concurrent.ConcurrentSkipListMap.findPredecessor(ConcurrentSkipListMap.java:727) at java.util.concurrent.ConcurrentSkipListMap.findNode(ConcurrentSkipListMap.java:789) at 
java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:828) at java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1626) at org.apache.cassandra.db.Memtable.resolve(Memtable.java:215) at org.apache.cassandra.db.Memtable.put(Memtable.java:173) at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:900) at org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.insert(AbstractSimplePerColumnSecondaryIndex.java:107) at org.apache.cassandra.db.index.SecondaryIndexManager.indexRow(SecondaryIndexManager.java:441) at org.apache.cassandra.db.Keyspace.indexRow(Keyspace.java:413) at org.apache.cassandra.db.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:62) at org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:834) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) ERROR [NonPeriodicTasks:1] 2014-08-26 15:43:01,035 CassandraDaemon.java (line 199) Exception in thread Thread[NonPeriodicTasks:1,5,main] java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:413) at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:142) at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:113) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException at java.util.concurrent.FutureTask.report(FutureTask.java:122) at
[jira] [Commented] (CASSANDRA-7841) Pass static singleton instances into constructors of dependent classes
[ https://issues.apache.org/jira/browse/CASSANDRA-7841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122195#comment-14122195 ] Blake Eggleston commented on CASSANDRA-7841: I have a couple of issues I could use some input on. You can check out my current progress here: https://github.com/bdeggleston/cassandra/compare/bdeggleston:C7840...C7841?expand=1 1) StorageService TokenMetadata: A lot of the singletons StorageService depends on also depend on StorageService, but only for access to the tokenMetadata, versionedValueFactory, and bootstrap status. In my experimental implementation, I split this stuff into a separate class called ClusterState, which StorageService and several other classes depended on. I'm reaching a point where I'm going to have to do something similar, and wanted to get some input on the specifics. Initially, I was thinking of just duplicating the ClusterState class, but then thought that having separate singletons for the TokenMetadata and VersionedValueFactory, and moving the bootstrap status into the DatabaseDescriptor, might be more flexible. I'm not sure that the 3 necessarily need to be kept together. 2) Interface implementations: IAuthorizer, IEndpointSnitch, and a few others all have implementations with different sets of dependencies, which they will need to be instantiated with. I know that an autowired DI framework isn't something we want to do for DI in general, but it seems like it's the right thing to use in these cases. I'm not married to the idea, but the only alternatives I can think of at this point would be to define a single ultra-constructor which takes all possible dependencies, or to use reflection to work out which constructors are available and pass the dependencies in that way, something the DI frameworks already do better than an ad-hoc implementation. 
Here are some of the IEndpointSnitch and ReplicationStrategy classes, and their dependencies: https://gist.github.com/bdeggleston/d61cfaa89ef6cd8508c4 Pass static singleton instances into constructors of dependent classes -- Key: CASSANDRA-7841 URL: https://issues.apache.org/jira/browse/CASSANDRA-7841 Project: Cassandra Issue Type: Sub-task Reporter: Blake Eggleston Assignee: Blake Eggleston Identify all non-singleton usages of static state (grep for '.instance.'), and refactor to pass dependencies into their constructors from their instantiating services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
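The refactor the ticket proposes, constructor injection in place of static `.instance` access, looks roughly like this. Illustrative Python sketch only; the class names are stand-ins for the real Cassandra ones:

```python
# Hypothetical sketch of the ticket's refactor: instead of a class reaching
# for a global singleton (the '.instance' pattern being grepped for), its
# dependencies are passed in by the instantiating service.
class TokenMetadata:
    """Stand-in for the real token metadata; returns a canned endpoint."""
    def endpoints(self):
        return ["10.0.0.1"]

class ReplicationStrategy:
    def __init__(self, token_metadata: TokenMetadata):
        # Dependency received through the constructor, not via static state,
        # so tests can substitute a fake TokenMetadata.
        self.token_metadata = token_metadata

    def natural_endpoints(self):
        return self.token_metadata.endpoints()

strategy = ReplicationStrategy(TokenMetadata())
print(strategy.natural_endpoints())  # → ['10.0.0.1']
```

The payoff is exactly the testability and decoupling the parent ticket is after: nothing in ReplicationStrategy knows where its TokenMetadata came from.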
[jira] [Updated] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-7873: Attachment: 7873.21.txt Whoops. Attached a new cut with Accumulator included. This one's against 2.1 instead of trunk, but can switch back again depending on what we decide to target this with. java.util.ConcurrentModificationException seen after selecting from system_auth --- Key: CASSANDRA-7873 URL: https://issues.apache.org/jira/browse/CASSANDRA-7873 Project: Cassandra Issue Type: Bug Environment: OSX and Ubuntu 14.04 Reporter: Philip Thompson Fix For: 3.0 Attachments: 7873.21.txt, 7873.txt The dtest auth_test.py:TestAuth.system_auth_ks_is_alterable_test is failing on trunk only with the following stack trace: {code} Unexpected error in node1 node log: ERROR [Thrift:1] 2014-09-03 15:48:08,389 CustomTThreadPoolServer.java:219 - Error occurred during processing of message. java.util.ConcurrentModificationException: null at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) ~[na:1.7.0_65] at java.util.ArrayList$Itr.next(ArrayList.java:831) ~[na:1.7.0_65] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:71) ~[main/:na] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:28) ~[main/:na] at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:110) ~[main/:na] at org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:144) ~[main/:na] at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1228) ~[main/:na] at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1154) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:256) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212) ~[main/:na] at org.apache.cassandra.auth.Auth.selectUser(Auth.java:257) ~[main/:na] at 
org.apache.cassandra.auth.Auth.isExistingUser(Auth.java:76) ~[main/:na] at org.apache.cassandra.service.ClientState.login(ClientState.java:178) ~[main/:na] at org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1486) ~[main/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3579) ~[thrift/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3563) ~[thrift/:na] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201) ~[main/:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65] {code} That exception is thrown when the following query is sent: {code} SELECT strategy_options FROM system.schema_keyspaces WHERE keyspace_name = 'system_auth' {code} The test alters the RF of the system_auth keyspace, then shuts down and restarts the cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
git commit: fix changes
Repository: cassandra Updated Branches: refs/heads/trunk b74342a8a - b4839a388 fix changes Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4839a38 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4839a38 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4839a38 Branch: refs/heads/trunk Commit: b4839a388b35dadc010345bdd13bd6454f35b945 Parents: b74342a Author: Brandon Williams brandonwilli...@apache.org Authored: Thu Sep 4 18:57:51 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Thu Sep 4 18:57:51 2014 -0500 -- CHANGES.txt | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4839a38/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 6f87cf9..4c4fc2c 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -96,7 +96,6 @@ Merged from 2.0: * Don't send schema change responses and events for no-op DDL statements (CASSANDRA-7600) * (Hadoop) fix cluster initialisation for a split fetching (CASSANDRA-7774) - cassandra-2.1 * Configure system.paxos with LeveledCompactionStrategy (CASSANDRA-7753) * Fix ALTER clustering column type from DateType to TimestampType when using DESC clustering order (CASSANRDA-7797) @@ -200,7 +199,7 @@ Merged from 2.0: * Updated memtable_cleanup_threshold and memtable_flush_writers defaults (CASSANDRA-7551) * (Windows) fix startup when WMI memory query fails (CASSANDRA-7505) - * Anti-compaction proceeds if any part of the repair failed (CASANDRA-7521) + * Anti-compaction proceeds if any part of the repair failed (CASSANDRA-7521) * Add missing table name to DROP INDEX responses and notifications (CASSANDRA-7539) * Bump CQL version to 3.2.0 and update CQL documentation (CASSANDRA-7527) * Fix configuration error message when running nodetool ring (CASSANDRA-7508)
[jira] [Commented] (CASSANDRA-7841) Pass static singleton instances into constructors of dependent classes
[ https://issues.apache.org/jira/browse/CASSANDRA-7841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122203#comment-14122203 ] Blake Eggleston commented on CASSANDRA-7841: re #1: it may make sense to move all 3 into the DatabaseDescriptor for now, with the intention of breaking the DatabaseDescriptor into module level config classes once this refactor is complete. Pass static singleton instances into constructors of dependent classes -- Key: CASSANDRA-7841 URL: https://issues.apache.org/jira/browse/CASSANDRA-7841 Project: Cassandra Issue Type: Sub-task Reporter: Blake Eggleston Assignee: Blake Eggleston Identify all non-singleton usages of static state (grep for '.instance.'), and refactor to pass dependencies into their constructors from their instantiating services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7841) Pass static singleton instances into constructors of dependent classes
[ https://issues.apache.org/jira/browse/CASSANDRA-7841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122206#comment-14122206 ] Blake Eggleston commented on CASSANDRA-7841: re #1 again: otoh, this might be a good point to start making module level configs, with the TokenMetaData going into LocatorConfig, and VersionedValueFactory /bootstrap state going into GMSConfig. Sorry for the rapid fire comments. Pass static singleton instances into constructors of dependent classes -- Key: CASSANDRA-7841 URL: https://issues.apache.org/jira/browse/CASSANDRA-7841 Project: Cassandra Issue Type: Sub-task Reporter: Blake Eggleston Assignee: Blake Eggleston Identify all non-singleton usages of static state (grep for '.instance.'), and refactor to pass dependencies into their constructors from their instantiating services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-7880) Create a new system table schema_change_history
Michaël Figuière created CASSANDRA-7880: --- Summary: Create a new system table schema_change_history Key: CASSANDRA-7880 URL: https://issues.apache.org/jira/browse/CASSANDRA-7880 Project: Cassandra Issue Type: New Feature Reporter: Michaël Figuière Priority: Minor The current way Cassandra handles schema modifications can lead to some schema disagreements, as DDL statement execution doesn't come with any absolute guarantee. I understand that entirely seamless schema updates in such a distributed system will be challenging to achieve and probably not a high priority for now. That being said, these disagreements can sometimes lead to challenging situations for scripts or tools that need things to be in order before moving on. To clarify the situation, help the user figure out what's going on, and properly log these sensitive operations, it would be interesting to add a {{schema_change_history}} table in the {{system}} keyspace. I would expect it to be local to a node and to contain the following information: * DDL statement that was executed * User login used for the operation * IP of the client that originated the request * Date/time of the change * Schema version before the change * Schema version after the change Under normal conditions, Cassandra shouldn't handle a massive amount of DDL statements, so this table should grow at a decent pace. Nevertheless, to bound its growth we can consider adding a TTL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
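As a rough illustration of the proposal only, the table might look something like the following. Column names, key layout, and the TTL value are guesses for discussion, not a committed design:

{noformat}
CREATE TABLE system.schema_change_history (
    change_time timeuuid,      -- date/time of the change, unique per event
    statement text,            -- the DDL statement that was executed
    username text,             -- user login used for the operation
    client_ip inet,            -- IP of the originating client
    version_before uuid,       -- schema version before the change
    version_after uuid,        -- schema version after the change
    PRIMARY KEY (change_time)
) WITH default_time_to_live = 7776000;  -- optional 90-day TTL to bound growth
{noformat}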
[jira] [Created] (CASSANDRA-7881) SCHEMA_CHANGE Events and Responses should carry the Schema Version
Michaël Figuière created CASSANDRA-7881: --- Summary: SCHEMA_CHANGE Events and Responses should carry the Schema Version Key: CASSANDRA-7881 URL: https://issues.apache.org/jira/browse/CASSANDRA-7881 Project: Cassandra Issue Type: New Feature Reporter: Michaël Figuière Priority: Minor For similar logging and debugging purpose as exposed in CASSANDRA-7880, it would be helpful to send to the client the previous and new schema version UUID that were in use before and after a schema change operation, in the {{SCHEMA_CHANGE}} events and responses in the protocol v4. This could then be exposed in the client APIs in order to bring much more precise awareness of the actual status of the schema on each node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-7873: Attachment: (was: 7873.21.txt) java.util.ConcurrentModificationException seen after selecting from system_auth --- Key: CASSANDRA-7873 URL: https://issues.apache.org/jira/browse/CASSANDRA-7873 Project: Cassandra Issue Type: Bug Environment: OSX and Ubuntu 14.04 Reporter: Philip Thompson Fix For: 3.0 Attachments: 7873.txt The dtest auth_test.py:TestAuth.system_auth_ks_is_alterable_test is failing on trunk only with the following stack trace: {code} Unexpected error in node1 node log: ERROR [Thrift:1] 2014-09-03 15:48:08,389 CustomTThreadPoolServer.java:219 - Error occurred during processing of message. java.util.ConcurrentModificationException: null at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) ~[na:1.7.0_65] at java.util.ArrayList$Itr.next(ArrayList.java:831) ~[na:1.7.0_65] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:71) ~[main/:na] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:28) ~[main/:na] at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:110) ~[main/:na] at org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:144) ~[main/:na] at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1228) ~[main/:na] at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1154) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:256) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212) ~[main/:na] at org.apache.cassandra.auth.Auth.selectUser(Auth.java:257) ~[main/:na] at org.apache.cassandra.auth.Auth.isExistingUser(Auth.java:76) ~[main/:na] at org.apache.cassandra.service.ClientState.login(ClientState.java:178) ~[main/:na] at 
org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1486) ~[main/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3579) ~[thrift/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3563) ~[thrift/:na] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201) ~[main/:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65] {code} That exception is thrown when the following query is sent: {code} SELECT strategy_options FROM system.schema_keyspaces WHERE keyspace_name = 'system_auth' {code} The test alters the RF of the system_auth keyspace, then shuts down and restarts the cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-7873: Attachment: 7873.21.txt Reattaching with tweaked patch java.util.ConcurrentModificationException seen after selecting from system_auth --- Key: CASSANDRA-7873 URL: https://issues.apache.org/jira/browse/CASSANDRA-7873 Project: Cassandra Issue Type: Bug Environment: OSX and Ubuntu 14.04 Reporter: Philip Thompson Fix For: 3.0 Attachments: 7873.21.txt, 7873.txt The dtest auth_test.py:TestAuth.system_auth_ks_is_alterable_test is failing on trunk only with the following stack trace: {code} Unexpected error in node1 node log: ERROR [Thrift:1] 2014-09-03 15:48:08,389 CustomTThreadPoolServer.java:219 - Error occurred during processing of message. java.util.ConcurrentModificationException: null at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) ~[na:1.7.0_65] at java.util.ArrayList$Itr.next(ArrayList.java:831) ~[na:1.7.0_65] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:71) ~[main/:na] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:28) ~[main/:na] at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:110) ~[main/:na] at org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:144) ~[main/:na] at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1228) ~[main/:na] at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1154) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:256) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212) ~[main/:na] at org.apache.cassandra.auth.Auth.selectUser(Auth.java:257) ~[main/:na] at org.apache.cassandra.auth.Auth.isExistingUser(Auth.java:76) ~[main/:na] at org.apache.cassandra.service.ClientState.login(ClientState.java:178) ~[main/:na] at 
org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1486) ~[main/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3579) ~[thrift/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3563) ~[thrift/:na] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201) ~[main/:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65] {code} That exception is thrown when the following query is sent: {code} SELECT strategy_options FROM system.schema_keyspaces WHERE keyspace_name = 'system_auth' {code} The test alters the RF of the system_auth keyspace, then shuts down and restarts the cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-7873: Attachment: (was: 7873.21.txt) java.util.ConcurrentModificationException seen after selecting from system_auth --- Key: CASSANDRA-7873 URL: https://issues.apache.org/jira/browse/CASSANDRA-7873 Project: Cassandra Issue Type: Bug Environment: OSX and Ubuntu 14.04 Reporter: Philip Thompson Fix For: 3.0 Attachments: 7873.txt The dtest auth_test.py:TestAuth.system_auth_ks_is_alterable_test is failing on trunk only with the following stack trace: {code} Unexpected error in node1 node log: ERROR [Thrift:1] 2014-09-03 15:48:08,389 CustomTThreadPoolServer.java:219 - Error occurred during processing of message. java.util.ConcurrentModificationException: null at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) ~[na:1.7.0_65] at java.util.ArrayList$Itr.next(ArrayList.java:831) ~[na:1.7.0_65] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:71) ~[main/:na] at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:28) ~[main/:na] at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:110) ~[main/:na] at org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:144) ~[main/:na] at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1228) ~[main/:na] at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1154) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:256) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212) ~[main/:na] at org.apache.cassandra.auth.Auth.selectUser(Auth.java:257) ~[main/:na] at org.apache.cassandra.auth.Auth.isExistingUser(Auth.java:76) ~[main/:na] at org.apache.cassandra.service.ClientState.login(ClientState.java:178) ~[main/:na] at 
org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1486) ~[main/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3579) ~[thrift/:na] at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3563) ~[thrift/:na] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[libthrift-0.9.1.jar:0.9.1] at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201) ~[main/:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65] {code} That exception is thrown when the following query is sent: {code} SELECT strategy_options FROM system.schema_keyspaces WHERE keyspace_name = 'system_auth' {code} The test alters the RF of the system_auth keyspace, then shuts down and restarts the cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-7873: Attachment: 7873.21.txt
[jira] [Updated] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-7873: Attachment: (was: 7873.21.txt)
[jira] [Updated] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-7873: Attachment: 7873.21.txt and again...
[jira] [Commented] (CASSANDRA-7870) Cannot execute logged batch when only the coordinator node is alive
[ https://issues.apache.org/jira/browse/CASSANDRA-7870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122255#comment-14122255 ] Aleksey Yeschenko commented on CASSANDRA-7870: -- Agree with Jonathan here. Cannot execute logged batch when only the coordinator node is alive --- Key: CASSANDRA-7870 URL: https://issues.apache.org/jira/browse/CASSANDRA-7870 Project: Cassandra Issue Type: Bug Components: Core Reporter: Sergio Bossa Priority: Critical As per issue summary. This is probably a bug, rather than a consequence of needing to replicate the batchlog, as if only the coordinator is alive the batch cannot be partially executed on other nodes (as there are no other nodes). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122258#comment-14122258 ] Jonathan Ellis commented on CASSANDRA-7873: --- My vote would be for CLQ in 2.1 and Accumulator in 3.0.
[jira] [Commented] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122261#comment-14122261 ] Mikhail Stepura commented on CASSANDRA-7873: For the record - we already use CLQ in 2.1 :)
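The two iterator contracts being weighed here can be shown with a self-contained sketch. This is an illustration of the general ArrayList-vs-ConcurrentLinkedQueue behavior, not Cassandra's actual ReadCallback/RowDigestResolver code; for determinism the "concurrent" modification happens in one thread, whereas in the bug it comes from replica response threads racing with resolve().

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class CmeDemo {
    // ArrayList iterators are fail-fast: any structural modification made
    // after the iterator is created trips checkForComodification(), the
    // exact frame at the top of the stack trace above.
    static boolean arrayListFailsFast() {
        List<Integer> responses = new ArrayList<>(List.of(1, 2, 3));
        try {
            for (Integer r : responses) {
                responses.add(r + 10); // a "late response" arriving mid-iteration
            }
            return false;
        } catch (java.util.ConcurrentModificationException e) {
            return true;
        }
    }

    // ConcurrentLinkedQueue iterators are weakly consistent: they never throw
    // ConcurrentModificationException; elements added after the iterator was
    // created may or may not be observed.
    static int clqIterateWhileAdding() {
        ConcurrentLinkedQueue<Integer> responses =
                new ConcurrentLinkedQueue<>(List.of(1, 2, 3));
        Iterator<Integer> it = responses.iterator();
        responses.add(99); // append after iteration has started
        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
        }
        return seen; // 3 or 4 depending on visibility, but never a CME
    }

    public static void main(String[] args) {
        System.out.println("ArrayList threw CME: " + arrayListFailsFast());
        System.out.println("CLQ elements seen: " + clqIterateWhileAdding());
    }
}
```

The trade-off behind "Accumulator in 3.0" is that weakly consistent iteration papers over, rather than removes, the underlying race; a purpose-built accumulator can make the set of responses visible to resolve() explicit.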
[jira] [Commented] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122265#comment-14122265 ] Benedict commented on CASSANDRA-7873: -
bq. My vote would be for CLQ in 2.1 and Accumulator in 3.0.
SGTM. If you +1 I'll commit to trunk, and separately ninja 2.1 to reduce the four .size() calls in .resolve() to a single call stored to a local variable.
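The ninja fix described above is a standard pattern when a collection can grow under a running method: read size() once into a local so every decision in the method observes the same count. A minimal sketch of the pattern; the class, field, and method names are illustrative, not the actual RowDigestResolver code:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SnapshotSize {
    // Replies may keep arriving while resolve-style code runs.
    final Queue<String> responses = new ConcurrentLinkedQueue<>();

    // Anti-pattern: calling responses.size() repeatedly means the loop bound
    // and any later checks can each see a different value within one call.
    // Pattern: snapshot size() once and use the local everywhere.
    int resolveCount() {
        int n = responses.size(); // single size() call stored to a local
        int processed = 0;
        for (int i = 0; i < n; i++) {
            // ... per-response work (e.g. digest comparison) would go here ...
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        SnapshotSize resolver = new SnapshotSize();
        resolver.responses.add("digest-a");
        resolver.responses.add("digest-b");
        System.out.println(resolver.resolveCount());
    }
}
```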
[jira] [Commented] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122285#comment-14122285 ] Jonathan Ellis commented on CASSANDRA-7873: --- I skimmed it, but can you do an official review [~mishail]?
[jira] [Assigned] (CASSANDRA-7873) java.util.ConcurrentModificationException seen after selecting from system_auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict reassigned CASSANDRA-7873: --- Assignee: Benedict
[jira] [Commented] (CASSANDRA-7519) Further stress improvements to generate more realistic workloads
[ https://issues.apache.org/jira/browse/CASSANDRA-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122314#comment-14122314 ] Benedict commented on CASSANDRA-7519: - Rebased and made the agreed tweaks as two follow-up commits. If you can +1 I'll commit this to 2.1.0 in time for release. Further stress improvements to generate more realistic workloads Key: CASSANDRA-7519 URL: https://issues.apache.org/jira/browse/CASSANDRA-7519 Project: Cassandra Issue Type: Improvement Components: Tools Reporter: Benedict Assignee: Benedict Priority: Minor Labels: tools Fix For: 2.1.1 We generally believe that the most common workload is for reads to exponentially prefer the most recently written data. However, as stress currently behaves, we have two id generation modes: sequential and random (although random can be distributed). I propose introducing a new mode which is somewhat like sequential, except we essentially 'look back' from the current id by some amount defined by a distribution. I may also make the position only increment as it is first written to, so that this mode can be run from a clean slate with a mixed workload. This should allow us to generate workloads that are more representative. At the same time, I will introduce a timestamp value generator for primary key columns that is strictly ascending, i.e. has some random component but is based off of the actual system time (or some shared monotonically increasing state), so that we can again generate a more realistic workload. This may be challenging to tie in with the new procedurally generated partitions, but I'm sure it can be done without too much difficulty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
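The 'look back' mode and the strictly ascending timestamp generator described in CASSANDRA-7519 can be sketched as follows. This is a hypothetical illustration, not cassandra-stress's actual implementation: the method names, the choice of an exponential distribution for the look-back offset, and the AtomicLong-based timestamp state are all assumptions.

```java
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

public class StressSketch {
    // 'Look back' from the current id by an exponentially distributed offset,
    // so reads exponentially prefer the most recently written ids.
    static long lookBackId(long currentId, double meanLookback, Random rng) {
        // Inverse-CDF sample of an exponential with the given mean;
        // 1 - nextDouble() is in (0, 1], so log() is always finite.
        long offset = (long) (-meanLookback * Math.log(1.0 - rng.nextDouble()));
        return Math.max(0L, currentId - offset);
    }

    // Strictly ascending timestamps: based off the actual system time, but
    // never repeating, via shared monotonically increasing state.
    static long nextTimestamp(AtomicLong last) {
        return last.updateAndGet(prev -> Math.max(prev + 1, System.currentTimeMillis()));
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        AtomicLong last = new AtomicLong();
        for (int i = 0; i < 5; i++) {
            System.out.println(lookBackId(1_000_000L, 1_000.0, rng)
                    + " @ " + nextTimestamp(last));
        }
    }
}
```

Making the position increment only as ids are first written would then amount to feeding lookBackId the highest id actually inserted so far, rather than a free-running counter.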