[jira] [Commented] (CASSANDRA-5633) CQL support for updating multiple rows in a partition using CAS
[ https://issues.apache.org/jira/browse/CASSANDRA-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850234#comment-13850234 ] Sylvain Lebresne commented on CASSANDRA-5633: - I'm starting to wonder if another approach to this wouldn't be simpler. Namely, I'd like to suggest the idea of supporting static columns (the initial idea was suggested by jhalliday on irc). That is, we would allow declaring some columns as static to a partition, i.e. their value would be shared by all the rows of the same partition. The reason this is related to this issue is that you could CAS those static columns to update multiple rows of the same partition atomically. Concretely, you could define something like this:
{noformat}
CREATE TABLE t (
    id text,
    version int static,
    insertion_time timeuuid,
    prop1 text,
    prop2 int,
    PRIMARY KEY (id, insertion_time, prop1, prop2)
)
{noformat}
The {{version}} column being static, its value would be shared by all rows having the same {{id}}, so you can use it as a partition version that allows serializing inserts. More precisely, you'd read some row(s) of the partition, then update some other row of the partition conditionally, CASing on the version just read. Though it's not 100% equivalent to the other suggestions on this ticket, I believe this static-column solution would be as general as anything else in terms of what can be done, since you can serialize updates in any order you want. And in fact, for every concrete use case I have in mind for this ticket, this static-column solution seems to provide a more natural/direct solution (of course, it's quite possible there are use cases I haven't thought of for which this idea would be very awkward, but I'd be happy to understand those). Other advantages of this static-column solution that I can think of: # it doesn't require any complex syntax. 
We'll have to define a few rules to govern those static columns (when do they get deleted, etc.), but syntax-wise it would really just be the introduction of the static keyword in table creation. # it has uses outside of CAS, making it less of a narrow use case. There are cases where people want to cram a static and a dynamic table into a single table for efficiency reasons, and this would provide a native way to support that. I'll soon open a separate issue for this static-columns idea, with a bit more detail on the exact semantics and some pointers on how I think it can be implemented, but [~sebastian_schmidt], does that sound like something that would fit your use cases? (And if not, can you try to explain why, if only for the sake of better understanding what we're trying to solve here.) CQL support for updating multiple rows in a partition using CAS --- Key: CASSANDRA-5633 URL: https://issues.apache.org/jira/browse/CASSANDRA-5633 Project: Cassandra Issue Type: Improvement Affects Versions: 2.0 beta 1 Reporter: sankalp kohli Assignee: Sylvain Lebresne Priority: Minor Labels: cql3 Fix For: 2.0.4 This is currently supported via Thrift but not via CQL. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
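The partition-version pattern described above can be sketched in a few lines. This is a hypothetical in-memory model, not Cassandra code (the class and method names are invented for illustration): a per-partition version value, standing in for the proposed static column, is compared-and-set together with the row mutation, so concurrent multi-row updates to one partition serialize.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory model of the proposal: one "static" version value
// shared by all rows of a partition, CAS'ed to serialize multi-row updates.
class StaticVersionSketch {
    static class Partition {
        int version = 0;                          // the would-be static column
        final Map<String, String> rows = new HashMap<>();
    }

    private final Map<String, Partition> table = new HashMap<>();

    // Models: UPDATE t SET version = ?, ... WHERE id = ? IF version = ?
    synchronized boolean casUpdate(String id, int expectedVersion, String row, String value) {
        Partition p = table.computeIfAbsent(id, k -> new Partition());
        if (p.version != expectedVersion)
            return false;                         // CAS lost: caller re-reads and retries
        p.version = expectedVersion + 1;          // bump the shared version...
        p.rows.put(row, value);                   // ...atomically with the row write
        return true;
    }
}
```

Two writers that both read version 0 cannot both succeed: the second casUpdate with expectedVersion 0 fails and must re-read, which is exactly the serialization the static column would provide.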
[jira] [Commented] (CASSANDRA-6496) Endless L0 LCS compactions
[ https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850238#comment-13850238 ] Marcus Eriksson commented on CASSANDRA-6496: - I think this might be a duplicate of CASSANDRA-6284. Endless L0 LCS compactions -- Key: CASSANDRA-6496 URL: https://issues.apache.org/jira/browse/CASSANDRA-6496 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 2.0.3, Linux, 6 nodes, 5 disks per node Reporter: Nikolai Grigoriev Attachments: system.log.1.gz, system.log.gz I first described the problem here: http://stackoverflow.com/questions/20589324/cassandra-2-0-3-endless-compactions-with-no-traffic I think I have really abused my system with the traffic (a mix of reads, heavy updates and some deletes). Now, after stopping the traffic, I see compactions going on endlessly for over 4 days. For a specific CF I have about 4700 sstable data files right now. The compaction estimates are logged as [3312, 4, 0, 0, 0, 0, 0, 0, 0]. sstable_size_in_mb=256. 3214 files are about 256MB (+/- a few megs); the other files are smaller or much smaller than that. No sstables are larger than 256MB. What I observe is that LCS picks 32 sstables from L0 and compacts them into 32 sstables of approximately the same size. So, what my system has been doing for the last 4 days (with no traffic at all) is compacting groups of 32 sstables into groups of 32 sstables without any changes. Seems like a bug to me regardless of what I did to get the system into this state...
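The reported loop is consistent with simple arithmetic: if compaction output files are capped at sstable_size_in_mb and the 32 chosen L0 inputs are already at that cap, the compaction emits 32 files of the same size and the backlog never shrinks. A toy model of that size accounting (hypothetical names, fixed sizes; a sketch, not the actual compaction strategy code):

```java
import java.util.Arrays;

class CompactionLoopSketch {
    // Compact `inputs` (sizes in MB) into output files each capped at maxOutputMb.
    static int[] compact(int[] inputs, int maxOutputMb) {
        long total = 0;
        for (int s : inputs) total += s;
        int n = (int) ((total + maxOutputMb - 1) / maxOutputMb);  // ceil(total / cap)
        int[] outputs = new int[n];
        for (int i = 0; i < n; i++) {
            outputs[i] = (int) Math.min(maxOutputMb, total);
            total -= outputs[i];
        }
        return outputs;
    }

    // Model an L0 backlog of `count` sstables already at the size cap.
    static int[] fullL0(int count, int sizeMb) {
        int[] a = new int[count];
        Arrays.fill(a, sizeMb);
        return a;
    }

    public static void main(String[] args) {
        int[] out = compact(fullL0(32, 256), 256);
        // 32 capped inputs -> 32 identical outputs: zero net progress, forever
        System.out.println(out.length);
    }
}
```

With the cap removed (or raised), the same 32 inputs would merge into fewer, larger files and the backlog would actually drain.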
[jira] [Created] (CASSANDRA-6498) Null pointer exception in custom secondary indexes
Andrés de la Peña created CASSANDRA-6498: Summary: Null pointer exception in custom secondary indexes Key: CASSANDRA-6498 URL: https://issues.apache.org/jira/browse/CASSANDRA-6498 Project: Cassandra Issue Type: Bug Reporter: Andrés de la Peña StorageProxy#estimateResultRowsPerRange raises a null pointer exception when using a custom 2i implementation that does not use a column family as its underlying storage: {code} resultRowsPerRange = highestSelectivityIndex.getIndexCfs().getMeanColumns(); {code} According to the documentation, the method SecondaryIndex#getIndexCfs should return null when no column family is used.
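The failure and the obvious guard can be illustrated with stand-in interfaces (hypothetical, not the actual Cassandra classes): since the SecondaryIndex contract allows getIndexCfs() to return null for implementations without a backing column family, the estimate must check for null before dereferencing.

```java
class IndexEstimateSketch {
    interface IndexCfs { int getMeanColumns(); }
    interface SecondaryIndex { IndexCfs getIndexCfs(); }  // contract: may return null

    // Mirrors the reported bug: NPE for CF-less custom indexes.
    static int estimateUnsafe(SecondaryIndex idx) {
        return idx.getIndexCfs().getMeanColumns();
    }

    // Guarded version: fall back to a default estimate when no CF backs the index.
    static int estimateSafe(SecondaryIndex idx, int fallback) {
        IndexCfs cfs = idx.getIndexCfs();
        return cfs == null ? fallback : cfs.getMeanColumns();
    }
}
```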
[jira] [Commented] (CASSANDRA-6496) Endless L0 LCS compactions
[ https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850381#comment-13850381 ] Nikolai Grigoriev commented on CASSANDRA-6496: -- One thing I forgot to mention about the logs - I had reduced the number of compactors to one when enabling the debugging. Since at that point it was clear that something was wrong, I was looking for clarity, not performance :)
[jira] [Updated] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0
[ https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-6378: --- Attachment: (was: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch) sstableloader does not support client encryption on Cassandra 2.0 - Key: CASSANDRA-6378 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378 Project: Cassandra Issue Type: Bug Reporter: David Laube Assignee: Sam Tunnicliffe Labels: client, encryption, ssl, sstableloader Fix For: 2.0.4 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch We have been testing backup/restore from one ring to another and we recently stumbled upon an issue with sstableloader. When client_enc_enable: true, the exception below is generated. However, when client_enc_enable is set to false, sstableloader is able to get to the point where it discovers endpoints, connects to stream data, etc.
==BEGIN EXCEPTION==
sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 /tmp/import/keyspace_name/columnfamily_name
Exception in thread "main" java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
Caused by: org.apache.thrift.transport.TTransportException: Frame size (352518400) larger than max length (16384000)!
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
at org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
... 2 more
==END EXCEPTION==
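The improbable frame size is itself diagnostic: 352518400 is 0x15030100, exactly the first four bytes of a TLS alert record (content type 21 = alert, version 0x0301 = TLS 1.0, leading length byte 0x00) interpreted by the plaintext TFramedTransport as a big-endian frame length. That suggests the loader spoke unencrypted Thrift to a server that answered with TLS bytes, consistent with the failure appearing only when client encryption is enabled. A quick check of that interpretation:

```java
import java.nio.ByteBuffer;

class FrameSizeDecode {
    public static void main(String[] args) {
        // First 4 bytes of a TLS alert record: type 0x15 (alert),
        // version 0x0301 (TLS 1.0), then the first length byte 0x00.
        byte[] tlsHeader = { 0x15, 0x03, 0x01, 0x00 };
        // ByteBuffer reads big-endian by default, like TFramedTransport's frame length.
        int asFrameLength = ByteBuffer.wrap(tlsHeader).getInt();
        System.out.println(asFrameLength);  // prints 352518400, the size in the exception
    }
}
```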
[jira] [Updated] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0
[ https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-6378: --- Attachment: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch Removed unnecessary exception handling from SSLTransportFactory
[jira] [Comment Edited] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0
[ https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850489#comment-13850489 ] Sam Tunnicliffe edited comment on CASSANDRA-6378 at 12/17/13 2:18 PM: -- Updated patch to remove unnecessary exception handling from SSLTransportFactory was (Author: beobal): Removed unnecessary exception handling from SSLTransportFactory
[jira] [Updated] (CASSANDRA-6488) Batchlog writes consume unnecessarily large amounts of CPU on vnodes clusters
[ https://issues.apache.org/jira/browse/CASSANDRA-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-6488: - Attachment: 6488-fix.txt Separates TM.cloneOnlyTokenMap() and TM.cachedOnlyTokenMap(), and only switches SP.getBatchlogEndpoints() and ARS.getNaturalEndpoints() to use the cached version. They aren't the only methods that *don't* mutate the returned metadata, but going through the rest of the usages and optimizing those can wait. Also fixes a regression from 6435 in TM.cachedOnlyTokenMap(). Batchlog writes consume unnecessarily large amounts of CPU on vnodes clusters - Key: CASSANDRA-6488 URL: https://issues.apache.org/jira/browse/CASSANDRA-6488 Project: Cassandra Issue Type: Bug Reporter: Rick Branson Assignee: Rick Branson Fix For: 1.2.13, 2.0.4 Attachments: 6488-fix.txt, 6488-rbranson-patch.txt, 6488-v2.txt, 6488-v3.txt, graph (21).png The cloneOnlyTokenMap call in StorageProxy.getBatchlogEndpoints causes enormous amounts of CPU to be consumed on clusters with many vnodes. I created a patch to cache this data as a workaround and deployed it to a production cluster with 15,000 tokens. CPU consumption dropped to 1/5th. This highlights the overall issues with cloneOnlyTokenMap() calls on vnodes clusters. I'm including the maybe-not-the-best-quality workaround patch to use as a reference, but cloneOnlyTokenMap is a systemic issue and every place it's called should probably be investigated.
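The cache-plus-invalidation approach the patch takes can be sketched generically (a hypothetical stand-in, not the real TokenMetadata): an expensive read-only snapshot is cloned once, handed out to callers that promise not to mutate it, and dropped whenever the underlying data changes.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class CachedSnapshotSketch {
    private final List<String> ring = new ArrayList<>();
    private List<String> cachedView;        // invalidated whenever the ring changes
    static int clones = 0;                  // instrumentation for this sketch only

    synchronized void addToken(String token) {
        ring.add(token);
        cachedView = null;                  // ring changed: drop the cached snapshot
    }

    // Like cloneOnlyTokenMap(): always pays for a full copy. Safe to mutate.
    synchronized List<String> cloneView() {
        clones++;
        return new ArrayList<>(ring);
    }

    // Like cachedOnlyTokenMap(): copies at most once per ring change.
    // Callers must NOT mutate the returned view (enforced here by wrapping it).
    synchronized List<String> cachedView() {
        if (cachedView == null) {
            clones++;
            cachedView = Collections.unmodifiableList(new ArrayList<>(ring));
        }
        return cachedView;
    }
}
```

On a hot path like batchlog endpoint selection, repeated reads between ring changes then cost zero copies, which is where the reported 5x CPU reduction would come from.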
[jira] [Updated] (CASSANDRA-6496) Endless L0 LCS compactions
[ https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6496: -- Attachment: 6496.txt Patch to remove the sstable output size limit when we're supposed to be doing STCS in L0. - Chose to use LCT w/ unlimited size instead of a normal CT since that seems less fragile (e.g. if we decide CT.level() should return -1) - Some churn to standardize on limiting in bytes rather than MB
[5/8] git commit: caching all calls of cloneOnlyTokenMap is not correct since many callers mutate the result patch by Aleksey Yeschenko; reviewed by jbellis for CASSANDRA-6488
caching all calls of cloneOnlyTokenMap is not correct since many callers mutate the result patch by Aleksey Yeschenko; reviewed by jbellis for CASSANDRA-6488 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/829047af Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/829047af Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/829047af Branch: refs/heads/cassandra-1.2 Commit: 829047af58fb1735b5d12b74f06ed0d4f04b2c0d Parents: 54a1955 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 08:46:35 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 08:46:35 2013 -0600 -- CHANGES.txt | 6 ++--- .../locator/AbstractReplicationStrategy.java| 2 +- .../apache/cassandra/locator/TokenMetadata.java | 28 +--- .../apache/cassandra/service/StorageProxy.java | 2 +- 4 files changed, 29 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/829047af/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4816d70..22a121e 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,9 +1,8 @@ -1.2.14 - * Improved error message on bad properties in DDL queries (CASSANDRA-6453) - 1.2.13 + * Improved error message on bad properties in DDL queries (CASSANDRA-6453) * Randomize batchlog candidates selection (CASSANDRA-6481) * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345, 6485) + * Improve batchlog write performance with vnodes (CASSANDRA-6488) * Optimize FD phi calculation (CASSANDRA-6386) * Improve initial FD phi estimate when starting up (CASSANDRA-6385) * Don't list CQL3 table in CLI describe even if named explicitely @@ -20,7 +19,6 @@ (CASSANDRA-6413) * (Hadoop) add describe_local_ring (CASSANDRA-6268) * Fix handling of concurrent directory creation failure (CASSANDRA-6459) - * Improve batchlog write performance with vnodes (CASSANDRA-6488) 1.2.12 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/829047af/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java -- diff --git a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java index 85e229c..a48bec9 100644 --- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java +++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java @@ -107,7 +107,7 @@ public abstract class AbstractReplicationStrategy ArrayList<InetAddress> endpoints = getCachedEndpoints(keyToken); if (endpoints == null) { -TokenMetadata tm = tokenMetadata.cloneOnlyTokenMap(); +TokenMetadata tm = tokenMetadata.cachedOnlyTokenMap(); // if our cache got invalidated, it's possible there is a new token to account for too keyToken = TokenMetadata.firstToken(tm.sortedTokens(), searchToken); endpoints = new ArrayList<InetAddress>(calculateNaturalEndpoints(searchToken, tm)); http://git-wip-us.apache.org/repos/asf/cassandra/blob/829047af/src/java/org/apache/cassandra/locator/TokenMetadata.java -- diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java b/src/java/org/apache/cassandra/locator/TokenMetadata.java index be0f7c7..cf0c472 100644 --- a/src/java/org/apache/cassandra/locator/TokenMetadata.java +++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java @@ -591,12 +591,31 @@ public class TokenMetadata /** * Create a copy of TokenMetadata with only tokenToEndpointMap. That is, pending ranges, * bootstrap tokens and leaving endpoints are not included in the copy. - * - * This uses a cached copy that is invalided when the ring changes, so in the common case - * no extra locking is required. 
*/ public TokenMetadata cloneOnlyTokenMap() { +lock.readLock().lock(); +try +{ +return new TokenMetadata(SortedBiMultiValMap.<Token, InetAddress>create(tokenToEndpointMap, null, inetaddressCmp), + HashBiMap.create(endpointToHostIdMap), + new Topology(topology)); +} +finally +{ +lock.readLock().unlock(); +} +} + +/** + * Return a cached TokenMetadata with only tokenToEndpointMap, i.e., the same as cloneOnlyTokenMap but + * uses a cached copy that is invalided when the ring changes, so in the common case + * no extra locking is required. + * + * Callers must *NOT*
[7/8] git commit: merge from 1.2
merge from 1.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35902754 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35902754 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35902754 Branch: refs/heads/cassandra-2.0 Commit: 359027549fd81ce2357defbb270e752f3acbb5e8 Parents: bdff106 829047a Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 08:49:28 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 08:49:28 2013 -0600 -- CHANGES.txt | 4 +-- src/java/org/apache/cassandra/cql3/Cql.g| 10 +-- .../locator/AbstractReplicationStrategy.java| 2 +- .../apache/cassandra/locator/TokenMetadata.java | 28 +--- .../apache/cassandra/service/StorageProxy.java | 2 +- 5 files changed, 37 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/35902754/CHANGES.txt -- diff --cc CHANGES.txt index 89ef6e1,22a121e..5450b8a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,18 -1,12 +1,20 @@@ -1.2.13 +2.0.4 + * Fix assertion failure in filterColdSSTables (CASSANDRA-6483) + * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008) + * Fix cleanup ClassCastException (CASSANDRA-6462) + * Reduce gossip memory use by interning VersionedValue strings (CASSANDRA-6410) + * Allow specifying datacenters to participate in a repair (CASSANDRA-6218) + * Fix divide-by-zero in PCI (CASSANDRA-6403) + * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284) + * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395) + * Expose a total memtable size metric for a CF (CASSANDRA-6391) + * cqlsh: handle symlinks properly (CASSANDRA-6425) + * Don't resubmit counter mutation runnables internally (CASSANDRA-6427) +Merged from 1.2: + * Improved error message on bad properties in DDL queries (CASSANDRA-6453) * Randomize batchlog candidates selection (CASSANDRA-6481) * Fix thundering herd on endpoint cache 
invalidation (CASSANDRA-6345, 6485) + * Improve batchlog write performance with vnodes (CASSANDRA-6488) - * Optimize FD phi calculation (CASSANDRA-6386) - * Improve initial FD phi estimate when starting up (CASSANDRA-6385) - * Don't list CQL3 table in CLI describe even if named explicitely - (CASSANDRA-5750) * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172) * Improve gossip performance for typical messages (CASSANDRA-6409) * Throw IRE if a prepared statement has more markers than supported @@@ -25,47 -19,9 +27,45 @@@ (CASSANDRA-6413) * (Hadoop) add describe_local_ring (CASSANDRA-6268) * Fix handling of concurrent directory creation failure (CASSANDRA-6459) - * Randomize batchlog candidates selection (CASSANDRA-6481) - * Improve batchlog write performance with vnodes (CASSANDRA-6488) -1.2.12 +2.0.3 + * Fix FD leak on slice read path (CASSANDRA-6275) + * Cancel read meter task when closing SSTR (CASSANDRA-6358) + * free off-heap IndexSummary during bulk (CASSANDRA-6359) + * Recover from IOException in accept() thread (CASSANDRA-6349) + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338) + * Fix trying to hint timed out counter writes (CASSANDRA-6322) + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809) + * Avoid flushing compaction_history after each operation (CASSANDRA-6287) + * Fix repair assertion error when tombstones expire (CASSANDRA-6277) + * Skip loading corrupt key cache (CASSANDRA-6260) + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274) + * Compact hottest sstables first and optionally omit coldest from + compaction entirely (CASSANDRA-6109) + * Fix modifying column_metadata from thrift (CASSANDRA-6182) + * cqlsh: fix LIST USERS output (CASSANDRA-6242) + * Add IRequestSink interface (CASSANDRA-6248) + * Update memtable size while flushing (CASSANDRA-6249) + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252) + * Require Permission.SELECT for CAS updates 
(CASSANDRA-6247) + * New CQL-aware SSTableWriter (CASSANDRA-5894) + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270) + * Correctly throw error when frame too large (CASSANDRA-5981) + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299) + * Fix CQL3 table validation in Thrift (CASSANDRA-6140) + * Fix bug missing results with IN clauses (CASSANDRA-6327) + * Fix paging with reversed slices (CASSANDRA-6343) + * Set minTimestamp correctly to be able to drop expired sstables (CASSANDRA-6337) + * Support NaN and Infinity as float literals (CASSANDRA-6003) + * Remove RF from
[8/8] git commit: Merge branch 'cassandra-2.0' into trunk
Merge branch 'cassandra-2.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c821d8b0 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c821d8b0 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c821d8b0 Branch: refs/heads/trunk Commit: c821d8b08e6cb5b20b8e324a578fb9534a3de212 Parents: 2ee6b8f 3590275 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 08:49:47 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 08:49:47 2013 -0600 -- CHANGES.txt | 4 +-- src/java/org/apache/cassandra/cql3/Cql.g| 10 +-- .../locator/AbstractReplicationStrategy.java| 2 +- .../apache/cassandra/locator/TokenMetadata.java | 28 +--- .../apache/cassandra/service/StorageProxy.java | 2 +- 5 files changed, 37 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c821d8b0/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c821d8b0/src/java/org/apache/cassandra/cql3/Cql.g -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c821d8b0/src/java/org/apache/cassandra/locator/TokenMetadata.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c821d8b0/src/java/org/apache/cassandra/service/StorageProxy.java --
[4/8] git commit: caching all calls of cloneOnlyTokenMap is not correct since many callers mutate the result patch by Aleksey Yeschenko; reviewed by jbellis for CASSANDRA-6488
caching all calls of cloneOnlyTokenMap is not correct since many callers mutate the result patch by Aleksey Yeschenko; reviewed by jbellis for CASSANDRA-6488 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/829047af Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/829047af Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/829047af Branch: refs/heads/trunk Commit: 829047af58fb1735b5d12b74f06ed0d4f04b2c0d Parents: 54a1955 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 08:46:35 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 08:46:35 2013 -0600 -- CHANGES.txt | 6 ++--- .../locator/AbstractReplicationStrategy.java| 2 +- .../apache/cassandra/locator/TokenMetadata.java | 28 +--- .../apache/cassandra/service/StorageProxy.java | 2 +- 4 files changed, 29 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/829047af/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4816d70..22a121e 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,9 +1,8 @@ -1.2.14 - * Improved error message on bad properties in DDL queries (CASSANDRA-6453) - 1.2.13 + * Improved error message on bad properties in DDL queries (CASSANDRA-6453) * Randomize batchlog candidates selection (CASSANDRA-6481) * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345, 6485) + * Improve batchlog write performance with vnodes (CASSANDRA-6488) * Optimize FD phi calculation (CASSANDRA-6386) * Improve initial FD phi estimate when starting up (CASSANDRA-6385) * Don't list CQL3 table in CLI describe even if named explicitely @@ -20,7 +19,6 @@ (CASSANDRA-6413) * (Hadoop) add describe_local_ring (CASSANDRA-6268) * Fix handling of concurrent directory creation failure (CASSANDRA-6459) - * Improve batchlog write performance with vnodes (CASSANDRA-6488) 1.2.12 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/829047af/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java -- diff --git a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java index 85e229c..a48bec9 100644 --- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java +++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java @@ -107,7 +107,7 @@ public abstract class AbstractReplicationStrategy ArrayList<InetAddress> endpoints = getCachedEndpoints(keyToken); if (endpoints == null) { -TokenMetadata tm = tokenMetadata.cloneOnlyTokenMap(); +TokenMetadata tm = tokenMetadata.cachedOnlyTokenMap(); // if our cache got invalidated, it's possible there is a new token to account for too keyToken = TokenMetadata.firstToken(tm.sortedTokens(), searchToken); endpoints = new ArrayList<InetAddress>(calculateNaturalEndpoints(searchToken, tm)); http://git-wip-us.apache.org/repos/asf/cassandra/blob/829047af/src/java/org/apache/cassandra/locator/TokenMetadata.java -- diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java b/src/java/org/apache/cassandra/locator/TokenMetadata.java index be0f7c7..cf0c472 100644 --- a/src/java/org/apache/cassandra/locator/TokenMetadata.java +++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java @@ -591,12 +591,31 @@ public class TokenMetadata /** * Create a copy of TokenMetadata with only tokenToEndpointMap. That is, pending ranges, * bootstrap tokens and leaving endpoints are not included in the copy. - * - * This uses a cached copy that is invalided when the ring changes, so in the common case - * no extra locking is required. 
*/ public TokenMetadata cloneOnlyTokenMap() { +lock.readLock().lock(); +try +{ +return new TokenMetadata(SortedBiMultiValMap.<Token, InetAddress>create(tokenToEndpointMap, null, inetaddressCmp), + HashBiMap.create(endpointToHostIdMap), + new Topology(topology)); +} +finally +{ +lock.readLock().unlock(); +} +} + +/** + * Return a cached TokenMetadata with only tokenToEndpointMap, i.e., the same as cloneOnlyTokenMap but + * uses a cached copy that is invalidated when the ring changes, so in the common case + * no extra locking is required. + * + * Callers must *NOT* mutate
[1/8] git commit: Slightly improved message when parsing properties for DDL queries
Updated Branches: refs/heads/cassandra-1.2 54a1955d2 -> 829047af5 refs/heads/cassandra-2.0 bdff106aa -> 359027549 refs/heads/trunk 2ee6b8fd9 -> c821d8b08 Slightly improved message when parsing properties for DDL queries patch by boneill42; reviewed by slebresne for CASSANDRA-6453 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54a1955d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54a1955d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54a1955d Branch: refs/heads/cassandra-2.0 Commit: 54a1955d254bfc89e48389d5d0d94c79d027d470 Parents: 4be9e67 Author: Sylvain Lebresne sylv...@datastax.com Authored: Mon Dec 16 10:53:22 2013 +0100 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Mon Dec 16 10:53:22 2013 +0100 -- CHANGES.txt | 3 +++ src/java/org/apache/cassandra/cql3/Cql.g | 10 -- 2 files changed, 11 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/54a1955d/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index b55393b..4816d70 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,3 +1,6 @@ +1.2.14 + * Improved error message on bad properties in DDL queries (CASSANDRA-6453) + 1.2.13 * Randomize batchlog candidates selection (CASSANDRA-6481) * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345, 6485) http://git-wip-us.apache.org/repos/asf/cassandra/blob/54a1955d/src/java/org/apache/cassandra/cql3/Cql.g -- diff --git a/src/java/org/apache/cassandra/cql3/Cql.g b/src/java/org/apache/cassandra/cql3/Cql.g index 7101c71..ea6844f 100644 --- a/src/java/org/apache/cassandra/cql3/Cql.g +++ b/src/java/org/apache/cassandra/cql3/Cql.g @@ -93,12 +93,18 @@ options { if (!(entry.left instanceof Constants.Literal)) { -addRecognitionError("Invalid property name: " + entry.left); +String msg = "Invalid property name: " + entry.left; +if (entry.left instanceof AbstractMarker.Raw) +msg += " (bind variables are not supported in DDL queries)"; +addRecognitionError(msg); break; } if (!(entry.right instanceof Constants.Literal)) { -addRecognitionError("Invalid property value: " + entry.right); +String msg = "Invalid property value: " + entry.right + " for property: " + entry.left; +if (entry.right instanceof AbstractMarker.Raw) +msg += " (bind variables are not supported in DDL queries)"; +addRecognitionError(msg); break; }
[6/8] git commit: merge from 1.2
merge from 1.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35902754 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35902754 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35902754 Branch: refs/heads/trunk Commit: 359027549fd81ce2357defbb270e752f3acbb5e8 Parents: bdff106 829047a Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 08:49:28 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 08:49:28 2013 -0600 -- CHANGES.txt | 4 +-- src/java/org/apache/cassandra/cql3/Cql.g| 10 +-- .../locator/AbstractReplicationStrategy.java| 2 +- .../apache/cassandra/locator/TokenMetadata.java | 28 +--- .../apache/cassandra/service/StorageProxy.java | 2 +- 5 files changed, 37 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/35902754/CHANGES.txt -- diff --cc CHANGES.txt index 89ef6e1,22a121e..5450b8a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,18 -1,12 +1,20 @@@ -1.2.13 +2.0.4 + * Fix assertion failure in filterColdSSTables (CASSANDRA-6483) + * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008) + * Fix cleanup ClassCastException (CASSANDRA-6462) + * Reduce gossip memory use by interning VersionedValue strings (CASSANDRA-6410) + * Allow specifying datacenters to participate in a repair (CASSANDRA-6218) + * Fix divide-by-zero in PCI (CASSANDRA-6403) + * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284) + * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395) + * Expose a total memtable size metric for a CF (CASSANDRA-6391) + * cqlsh: handle symlinks properly (CASSANDRA-6425) + * Don't resubmit counter mutation runnables internally (CASSANDRA-6427) +Merged from 1.2: + * Improved error message on bad properties in DDL queries (CASSANDRA-6453) * Randomize batchlog candidates selection (CASSANDRA-6481) * Fix thundering herd on endpoint cache 
invalidation (CASSANDRA-6345, 6485) + * Improve batchlog write performance with vnodes (CASSANDRA-6488) - * Optimize FD phi calculation (CASSANDRA-6386) - * Improve initial FD phi estimate when starting up (CASSANDRA-6385) - * Don't list CQL3 table in CLI describe even if named explicitely - (CASSANDRA-5750) * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172) * Improve gossip performance for typical messages (CASSANDRA-6409) * Throw IRE if a prepared statement has more markers than supported @@@ -25,47 -19,9 +27,45 @@@ (CASSANDRA-6413) * (Hadoop) add describe_local_ring (CASSANDRA-6268) * Fix handling of concurrent directory creation failure (CASSANDRA-6459) - * Randomize batchlog candidates selection (CASSANDRA-6481) - * Improve batchlog write performance with vnodes (CASSANDRA-6488) -1.2.12 +2.0.3 + * Fix FD leak on slice read path (CASSANDRA-6275) + * Cancel read meter task when closing SSTR (CASSANDRA-6358) + * free off-heap IndexSummary during bulk (CASSANDRA-6359) + * Recover from IOException in accept() thread (CASSANDRA-6349) + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338) + * Fix trying to hint timed out counter writes (CASSANDRA-6322) + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809) + * Avoid flushing compaction_history after each operation (CASSANDRA-6287) + * Fix repair assertion error when tombstones expire (CASSANDRA-6277) + * Skip loading corrupt key cache (CASSANDRA-6260) + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274) + * Compact hottest sstables first and optionally omit coldest from + compaction entirely (CASSANDRA-6109) + * Fix modifying column_metadata from thrift (CASSANDRA-6182) + * cqlsh: fix LIST USERS output (CASSANDRA-6242) + * Add IRequestSink interface (CASSANDRA-6248) + * Update memtable size while flushing (CASSANDRA-6249) + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252) + * Require Permission.SELECT for CAS updates 
(CASSANDRA-6247) + * New CQL-aware SSTableWriter (CASSANDRA-5894) + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270) + * Correctly throw error when frame too large (CASSANDRA-5981) + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299) + * Fix CQL3 table validation in Thrift (CASSANDRA-6140) + * Fix bug missing results with IN clauses (CASSANDRA-6327) + * Fix paging with reversed slices (CASSANDRA-6343) + * Set minTimestamp correctly to be able to drop expired sstables (CASSANDRA-6337) + * Support NaN and Infinity as float literals (CASSANDRA-6003) + * Remove RF from nodetool ring
[3/3] git commit: Merge branch 'cassandra-2.0' into trunk
Merge branch 'cassandra-2.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f943433a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f943433a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f943433a Branch: refs/heads/trunk Commit: f943433ae8f62f2ecb2c21e7be924ded09d669f2 Parents: c821d8b 22d8744 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Dec 17 18:26:46 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Dec 17 18:26:46 2013 +0300 -- .../apache/cassandra/locator/TokenMetadata.java | 18 -- 1 file changed, 4 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f943433a/src/java/org/apache/cassandra/locator/TokenMetadata.java --
[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/22d87444 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/22d87444 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/22d87444 Branch: refs/heads/trunk Commit: 22d87444cef2a0fe4f9a01eea313526e69c70521 Parents: 3590275 13348c4 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Dec 17 18:26:10 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Dec 17 18:26:10 2013 +0300 -- .../apache/cassandra/locator/TokenMetadata.java | 18 -- 1 file changed, 4 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/22d87444/src/java/org/apache/cassandra/locator/TokenMetadata.java --
git commit: Simplify TokenMetadata.cachedOnlyTokenMap()
Updated Branches: refs/heads/cassandra-1.2 829047af5 -> 13348c47a Simplify TokenMetadata.cachedOnlyTokenMap() Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/13348c47 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/13348c47 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/13348c47 Branch: refs/heads/cassandra-1.2 Commit: 13348c47a415bb0887ee722af33384cf18362497 Parents: 829047a Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Dec 17 18:25:31 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Dec 17 18:25:31 2013 +0300 -- .../apache/cassandra/locator/TokenMetadata.java | 18 -- 1 file changed, 4 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/13348c47/src/java/org/apache/cassandra/locator/TokenMetadata.java -- diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java b/src/java/org/apache/cassandra/locator/TokenMetadata.java index cf0c472..22a9042 100644 --- a/src/java/org/apache/cassandra/locator/TokenMetadata.java +++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java @@ -620,25 +620,15 @@ public class TokenMetadata if (tm != null) return tm; -// synchronize is to prevent thundering herd (CASSANDRA-6345); lock.readLock is for correctness vs updates to our internals +// synchronize to prevent thundering herd (CASSANDRA-6345) synchronized (this) { if ((tm = cachedTokenMap.get()) != null) return tm; -lock.readLock().lock(); -try -{ -tm = new TokenMetadata(SortedBiMultiValMap.<Token, InetAddress>create(tokenToEndpointMap, null, inetaddressCmp), - HashBiMap.create(endpointToHostIdMap), - new Topology(topology)); -cachedTokenMap.set(tm); -return tm; -} -finally -{ -lock.readLock().unlock(); -} +tm = cloneOnlyTokenMap(); +cachedTokenMap.set(tm); +return tm; } }
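For reference, the double-checked caching shape this commit converges on can be sketched as follows. This is a simplified, hypothetical stand-in (class name TokenMapCache and the plain Map are assumptions for illustration, not the real TokenMetadata internals):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the pattern: an AtomicReference holds the cached clone, and a
// synchronized block keeps a thundering herd of threads from all cloning at
// once (CASSANDRA-6345), while the actual copy is delegated to
// cloneOnlyTokenMap().
public class TokenMapCache {
    private final Map<Long, String> tokenToEndpointMap = new TreeMap<>();
    private final AtomicReference<Map<Long, String>> cachedTokenMap = new AtomicReference<>();

    public Map<Long, String> cloneOnlyTokenMap() {
        // the real method also takes lock.readLock() around the copy
        return new TreeMap<>(tokenToEndpointMap);
    }

    public Map<Long, String> cachedOnlyTokenMap() {
        Map<Long, String> tm = cachedTokenMap.get();
        if (tm != null)
            return tm;
        // synchronize to prevent thundering herd (CASSANDRA-6345)
        synchronized (this) {
            if ((tm = cachedTokenMap.get()) != null)
                return tm;
            tm = cloneOnlyTokenMap();
            cachedTokenMap.set(tm);
            return tm;
        }
    }

    public void invalidateCachedRings() {
        cachedTokenMap.set(null); // called whenever the ring changes
    }
}
```

Because all callers of cachedOnlyTokenMap() share one snapshot, they must treat the returned map as read-only; callers that need to mutate go through cloneOnlyTokenMap() instead.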
[jira] [Updated] (CASSANDRA-4911) Lift limitation that order by columns must be selected for IN queries
[ https://issues.apache.org/jira/browse/CASSANDRA-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-4911: Attachment: 4911.txt Attaching a patch on trunk for this. Since no-one has complained about this so far and since it's slightly risky, let's target it for 2.1. We do need it a bit more in 2.1 following CASSANDRA-5417 (as explained in that ticket, we have one dtest failure otherwise). Lift limitation that order by columns must be selected for IN queries - Key: CASSANDRA-4911 URL: https://issues.apache.org/jira/browse/CASSANDRA-4911 Project: Cassandra Issue Type: Improvement Affects Versions: 1.2.0 beta 1 Reporter: Sylvain Lebresne Priority: Minor Fix For: 2.1 Attachments: 4911.txt This is the followup of CASSANDRA-4645. We should remove the limitation that for IN queries, you must have the columns on which you have an ORDER BY in the select clause. For that, we'll need to automatically add the columns on which we have an ORDER BY to the ones queried internally, and remove them afterwards (once the sorting is done) from the resultSet. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
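The mechanics described in the ticket — fetching the ORDER BY column internally even when it is not selected, sorting on it, then trimming it from the result set — can be sketched like this (a hypothetical helper, not the actual Cassandra code path; rows are modeled as plain maps for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of CASSANDRA-4911's approach: rows arrive with the ORDER BY column
// included internally, are sorted on it, and the extra column is then
// stripped before the result is returned to the client.
public class OrderByTrim {
    @SuppressWarnings("unchecked")
    public static List<Map<String, Object>> sortAndTrim(List<Map<String, Object>> rows,
                                                        List<String> selectedColumns,
                                                        String orderByColumn) {
        List<Map<String, Object>> sorted = new ArrayList<>(rows);
        // sort on the internally-fetched ORDER BY column
        sorted.sort((a, b) -> ((Comparable<Object>) a.get(orderByColumn))
                                  .compareTo(b.get(orderByColumn)));
        // project back down to only the columns the user actually selected
        List<Map<String, Object>> result = new ArrayList<>();
        for (Map<String, Object> row : sorted) {
            Map<String, Object> projected = new LinkedHashMap<>();
            for (String c : selectedColumns)
                projected.put(c, row.get(c));
            result.add(projected);
        }
        return result;
    }
}
```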
[Cassandra Wiki] Update of MavenPlugin by bhamail
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The MavenPlugin page has been changed by bhamail: https://wiki.apache.org/cassandra/MavenPlugin?action=diffrev1=5rev2=6 Comment: fix typos in load.script example and then using your favourite editor, create a file called {{{load.script}}} in the {{{webapp/src/cassandra/cli}}} directory. {{{ - reate keyspace TestKeyspace + create keyspace WebappKeyspace with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = {replication_factor:1}; use WebappKeyspace;
[jira] [Commented] (CASSANDRA-6488) Batchlog writes consume unnecessarily large amounts of CPU on vnodes clusters
[ https://issues.apache.org/jira/browse/CASSANDRA-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850623#comment-13850623 ] Michael Shuler commented on CASSANDRA-6488: --- cassandra-1.2 branch, commit 13348c4, is passing these 4 unit tests: - http://cassci.datastax.com/job/cassandra-1.2_test/35/console (pending comment edit when 2.0 finishes) Batchlog writes consume unnecessarily large amounts of CPU on vnodes clusters - Key: CASSANDRA-6488 URL: https://issues.apache.org/jira/browse/CASSANDRA-6488 Project: Cassandra Issue Type: Bug Reporter: Rick Branson Assignee: Rick Branson Fix For: 1.2.13, 2.0.4 Attachments: 6488-fix.txt, 6488-rbranson-patch.txt, 6488-v2.txt, 6488-v3.txt, graph (21).png The cloneTokenOnlyMap call in StorageProxy.getBatchlogEndpoints causes enormous amounts of CPU to be consumed on clusters with many vnodes. I created a patch to cache this data as a workaround and deployed it to a production cluster with 15,000 tokens. CPU consumption drop to 1/5th. This highlights the overall issues with cloneOnlyTokenMap() calls on vnodes clusters. I'm including the maybe-not-the-best-quality workaround patch to use as a reference, but cloneOnlyTokenMap is a systemic issue and every place it's called should probably be investigated. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Assigned] (CASSANDRA-6495) LOCAL_SERIAL use QUORUM consistency level to validate expected columns
[ https://issues.apache.org/jira/browse/CASSANDRA-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sankalp kohli reassigned CASSANDRA-6495: Assignee: sankalp kohli LOCAL_SERIAL use QUORUM consistency level to validate expected columns --- Key: CASSANDRA-6495 URL: https://issues.apache.org/jira/browse/CASSANDRA-6495 Project: Cassandra Issue Type: Bug Components: Core Reporter: sankalp kohli Assignee: sankalp kohli Priority: Minor If CAS is done at LOCAL_SERIAL consistency level, only the nodes from the local data center should be involved. Here we are using QUORUM to validate the expected columns. This will require nodes from more than one DC. We should use LOCAL_QUORUM here when CAS is done at LOCAL_SERIAL. Also, if we have 2 DCs with DC1:3,DC2:3, a single DC being down will cause CAS to not work even for LOCAL_SERIAL. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Comment Edited] (CASSANDRA-6488) Batchlog writes consume unnecessarily large amounts of CPU on vnodes clusters
[ https://issues.apache.org/jira/browse/CASSANDRA-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850623#comment-13850623 ] Michael Shuler edited comment on CASSANDRA-6488 at 12/17/13 4:22 PM: - cassandra-1.2 branch, commit 13348c4, is passing these 4 unit tests: - http://cassci.datastax.com/job/cassandra-1.2_test/35/console cassandra-2.0 is passing these, also - http://cassci.datastax.com/job/cassandra-2.0_test/50/console Thanks all! was (Author: mshuler): cassandra-1.2 branch, commit 13348c4, is passing these 4 unit tests: - http://cassci.datastax.com/job/cassandra-1.2_test/35/console (pending comment edit when 2.0 finishes) Batchlog writes consume unnecessarily large amounts of CPU on vnodes clusters - Key: CASSANDRA-6488 URL: https://issues.apache.org/jira/browse/CASSANDRA-6488 Project: Cassandra Issue Type: Bug Reporter: Rick Branson Assignee: Rick Branson Fix For: 1.2.13, 2.0.4 Attachments: 6488-fix.txt, 6488-rbranson-patch.txt, 6488-v2.txt, 6488-v3.txt, graph (21).png The cloneTokenOnlyMap call in StorageProxy.getBatchlogEndpoints causes enormous amounts of CPU to be consumed on clusters with many vnodes. I created a patch to cache this data as a workaround and deployed it to a production cluster with 15,000 tokens. CPU consumption drop to 1/5th. This highlights the overall issues with cloneOnlyTokenMap() calls on vnodes clusters. I'm including the maybe-not-the-best-quality workaround patch to use as a reference, but cloneOnlyTokenMap is a systemic issue and every place it's called should probably be investigated. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-2812) Allow changing comparator between compatible collations
[ https://issues.apache.org/jira/browse/CASSANDRA-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850639#comment-13850639 ] Nicolas Lalevée commented on CASSANDRA-2812: I tried to find out whether Cassandra can or will support changing the comparator type, and the only reference to such a feature I have found is this Jira issue. So I am logging here what I have found out, in case other people walk by this. As far as I can see (in the code), Cassandra 1.2 does support it, but the types must not only share the same sort algorithm (obviously), they must also have compatible validators. For instance, in my use case I want to fix a mistake where I specified the 'byte' type instead of the 'utf8' type: even though utf8 sorting is the same as byte sorting, and even though all my clients encode in utf8, Cassandra is conservative and refuses to do the migration (better safe than sorry). Allow changing comparator between compatible collations --- Key: CASSANDRA-2812 URL: https://issues.apache.org/jira/browse/CASSANDRA-2812 Project: Cassandra Issue Type: Bug Reporter: Jonathan Ellis Priority: Minor Attachments: 2812.txt Normally we should not allow changing comparators, but anything that sorts in lexical byte order (Bytes, Ascii, UTF8, LexicalUUID) is compatible. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
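The compatibility rule the ticket relies on — that the byte-ordered types (Bytes, Ascii, UTF8, LexicalUUID) all sort values by unsigned lexical byte comparison, so reinterpreting the same bytes under another of these types never changes ordering — can be illustrated with a small self-contained sketch (a hypothetical helper, not Cassandra's actual comparator code):

```java
import java.nio.charset.StandardCharsets;

// Minimal unsigned lexical byte comparison, the ordering shared by
// BytesType, AsciiType, UTF8Type and LexicalUUIDType.
public class LexicalByteOrder {
    public static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = (a[i] & 0xff) - (b[i] & 0xff); // compare bytes as unsigned
            if (cmp != 0)
                return cmp;
        }
        return a.length - b.length; // on a common prefix, the shorter value sorts first
    }

    public static void main(String[] args) {
        byte[] x = "apple".getBytes(StandardCharsets.UTF_8);
        byte[] y = "banana".getBytes(StandardCharsets.UTF_8);
        // whether the bytes are read as raw bytes or as UTF-8 text, x sorts before y
        System.out.println(compareUnsigned(x, y) < 0);
    }
}
```

This is also why the migration refused in the comment above is about validators rather than ordering: the sort order of 'byte' and 'utf8' is identical, but not every byte sequence is valid UTF-8.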
[1/6] Rename Column to Cell
Updated Branches: refs/heads/trunk 362cc0535 - e50d6af12 http://git-wip-us.apache.org/repos/asf/cassandra/blob/e50d6af1/test/unit/org/apache/cassandra/db/RemoveSubColumnTest.java -- diff --git a/test/unit/org/apache/cassandra/db/RemoveSubColumnTest.java b/test/unit/org/apache/cassandra/db/RemoveSubColumnTest.java deleted file mode 100644 index e112b1b..000 --- a/test/unit/org/apache/cassandra/db/RemoveSubColumnTest.java +++ /dev/null @@ -1,100 +0,0 @@ -/* -* Licensed to the Apache Software Foundation (ASF) under one -* or more contributor license agreements. See the NOTICE file -* distributed with this work for additional information -* regarding copyright ownership. The ASF licenses this file -* to you under the Apache License, Version 2.0 (the -* License); you may not use this file except in compliance -* with the License. You may obtain a copy of the License at -* -*http://www.apache.org/licenses/LICENSE-2.0 -* -* Unless required by applicable law or agreed to in writing, -* software distributed under the License is distributed on an -* AS IS BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -* KIND, either express or implied. See the License for the -* specific language governing permissions and limitations -* under the License. 
-*/ -package org.apache.cassandra.db; - -import java.nio.ByteBuffer; -import java.util.concurrent.TimeUnit; - -import org.junit.Test; - -import static org.junit.Assert.assertNull; -import org.apache.cassandra.db.composites.*; -import org.apache.cassandra.db.filter.QueryFilter; -import org.apache.cassandra.db.marshal.CompositeType; -import static org.apache.cassandra.Util.getBytes; -import org.apache.cassandra.Util; -import org.apache.cassandra.SchemaLoader; -import org.apache.cassandra.utils.ByteBufferUtil; - -import com.google.common.util.concurrent.Uninterruptibles; - - -public class RemoveSubColumnTest extends SchemaLoader -{ -@Test -public void testRemoveSubColumn() -{ -Keyspace keyspace = Keyspace.open(Keyspace1); -ColumnFamilyStore store = keyspace.getColumnFamilyStore(Super1); -RowMutation rm; -DecoratedKey dk = Util.dk(key1); - -// add data -rm = new RowMutation(Keyspace1, dk.key); -Util.addMutation(rm, Super1, SC1, 1, asdf, 0); -rm.apply(); -store.forceBlockingFlush(); - -CellName cname = CellNames.compositeDense(ByteBufferUtil.bytes(SC1), getBytes(1L)); -// remove -rm = new RowMutation(Keyspace1, dk.key); -rm.delete(Super1, cname, 1); -rm.apply(); - -ColumnFamily retrieved = store.getColumnFamily(QueryFilter.getIdentityFilter(dk, Super1, System.currentTimeMillis())); -assert retrieved.getColumn(cname).isMarkedForDelete(System.currentTimeMillis()); -assertNull(Util.cloneAndRemoveDeleted(retrieved, Integer.MAX_VALUE)); -} - -@Test -public void testRemoveSubColumnAndContainer() -{ -Keyspace keyspace = Keyspace.open(Keyspace1); -ColumnFamilyStore store = keyspace.getColumnFamilyStore(Super1); -RowMutation rm; -DecoratedKey dk = Util.dk(key2); - -// add data -rm = new RowMutation(Keyspace1, dk.key); -Util.addMutation(rm, Super1, SC1, 1, asdf, 0); -rm.apply(); -store.forceBlockingFlush(); - -// remove the SC -ByteBuffer scName = ByteBufferUtil.bytes(SC1); -CellName cname = CellNames.compositeDense(scName, getBytes(1L)); -rm = new RowMutation(Keyspace1, dk.key); 
-rm.deleteRange(Super1, SuperColumns.startOf(scName), SuperColumns.endOf(scName), 1); -rm.apply(); - -// Mark current time and make sure the next insert happens at least -// one second after the previous one (since gc resolution is the second) -QueryFilter filter = QueryFilter.getIdentityFilter(dk, Super1, System.currentTimeMillis()); -Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS); - -// remove the column itself -rm = new RowMutation(Keyspace1, dk.key); -rm.delete(Super1, cname, 2); -rm.apply(); - -ColumnFamily retrieved = store.getColumnFamily(filter); -assert retrieved.getColumn(cname).isMarkedForDelete(System.currentTimeMillis()); -assertNull(Util.cloneAndRemoveDeleted(retrieved, Integer.MAX_VALUE)); -} -} http://git-wip-us.apache.org/repos/asf/cassandra/blob/e50d6af1/test/unit/org/apache/cassandra/db/RowCacheTest.java -- diff --git a/test/unit/org/apache/cassandra/db/RowCacheTest.java b/test/unit/org/apache/cassandra/db/RowCacheTest.java index 6c3a620..238f61e 100644 --- a/test/unit/org/apache/cassandra/db/RowCacheTest.java +++ b/test/unit/org/apache/cassandra/db/RowCacheTest.java @@ -72,15 +72,15 @@ public class RowCacheTest extends SchemaLoader assert
[jira] [Updated] (CASSANDRA-6464) Paging queries with IN on the partition key is broken
[ https://issues.apache.org/jira/browse/CASSANDRA-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-6464: - Reviewer: Aleksey Yeschenko Paging queries with IN on the partition key is broken - Key: CASSANDRA-6464 URL: https://issues.apache.org/jira/browse/CASSANDRA-6464 Project: Cassandra Issue Type: Bug Reporter: Sylvain Lebresne Assignee: Sylvain Lebresne Fix For: 2.0.4 Attachments: 6464.txt Feels like MultiPartitionPager (which handles paging queries when there is an IN on the partition key) has completely missed CASSANDRA-5714's train. As a result, it is completely broken and will typically loop infinitely. Attaching patch to fix. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6496) Endless L0 LCS compactions
[ https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850687#comment-13850687 ] Nikolai Grigoriev commented on CASSANDRA-6496: -- Cool! I have got the source tagged 2.0.3, applied the patch, recompiled, restarted the node. Clearly now it compacts the groups of 32 L0 sstables into one large one. I see that it just did one round and created an 8Gb sstable from 32 256Mb ones. Thanks a lot for the patch! I will revert the compaction settings to give it enough resources and let it complete its job to see the end results before I restart the test traffic. Endless L0 LCS compactions -- Key: CASSANDRA-6496 URL: https://issues.apache.org/jira/browse/CASSANDRA-6496 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 2.0.3, Linux, 6 nodes, 5 disks per node Reporter: Nikolai Grigoriev Assignee: Jonathan Ellis Labels: compaction Fix For: 2.0.4 Attachments: 6496.txt, system.log.1.gz, system.log.gz I have first described the problem here: http://stackoverflow.com/questions/20589324/cassandra-2-0-3-endless-compactions-with-no-traffic I think I have really abused my system with the traffic (mix of reads, heavy updates and some deletes). Now after stopping the traffic I see the compactions that are going on endlessly for over 4 days. For a specific CF I have about 4700 sstable data files right now. The compaction estimates are logged as [3312, 4, 0, 0, 0, 0, 0, 0, 0]. sstable_size_in_mb=256. 3214 files are about 256Mb (+/- a few megs), other files are smaller or much smaller than that. No sstables are larger than 256Mb. What I observe is that LCS picks 32 sstables from L0 and compacts them into 32 sstables of approximately the same size. So, what my system has been doing for the last 4 days (no traffic at all) is compacting groups of 32 sstables into groups of 32 sstables without any changes. Seems like a bug to me regardless of what I did to get the system into this state... 
-- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Comment Edited] (CASSANDRA-6496) Endless L0 LCS compactions
[ https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850687#comment-13850687 ] Nikolai Grigoriev edited comment on CASSANDRA-6496 at 12/17/13 5:24 PM: Cool! I have got the source tagged 2.0.3, applied the patch, recompiled, restarted the node. Clearly now it compacts the groups of 32 L0 sstables into large ones. I see that it just did one round and created 8Gb sstable from 32 256Mb ones. Thanks a lot for the patch! I will revert the compaction settings to give it enough resources and let it complete its job to see the end results before I restart the test traffic. was (Author: ngrigoriev): Cool! I have got the source tagged 2.0.3, applied the patch, recompiled, restarted the node. Clearly now it compacts the groups of 32 L0 sstables into one large one. I see that it just did one round and created 8Gb sstable from 32 256Mb ones. Thanks a lot for the patch! I will revert the compaction settings to give it enough resources and let it complete its job to see the end results before I restart the test traffic. Endless L0 LCS compactions -- Key: CASSANDRA-6496 URL: https://issues.apache.org/jira/browse/CASSANDRA-6496 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 2.0.3, Linux, 6 nodes, 5 disks per node Reporter: Nikolai Grigoriev Assignee: Jonathan Ellis Labels: compaction Fix For: 2.0.4 Attachments: 6496.txt, system.log.1.gz, system.log.gz I have first described the problem here: http://stackoverflow.com/questions/20589324/cassandra-2-0-3-endless-compactions-with-no-traffic I think I have really abused my system with the traffic (mix of reads, heavy updates and some deletes). Now after stopping the traffic I see the compactions that are going on endlessly for over 4 days. For a specific CF I have about 4700 sstable data files right now. The compaction estimates are logged as [3312, 4, 0, 0, 0, 0, 0, 0, 0]. sstable_size_in_mb=256. 
3214 files are about 256Mb (+/- a few megs), other files are smaller or much smaller than that. No sstables are larger than 256Mb. What I observe is that LCS picks 32 sstables from L0 and compacts them into 32 sstables of approximately the same size. So, what my system has been doing for the last 4 days (no traffic at all) is compacting groups of 32 sstables into groups of 32 sstables without any changes. Seems like a bug to me regardless of what I did to get the system into this state... -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Created] (CASSANDRA-6499) Shuffle fails if PasswordAuthenticator is enabled
Adam Hattrell created CASSANDRA-6499: Summary: Shuffle fails if PasswordAuthenticator is enabled Key: CASSANDRA-6499 URL: https://issues.apache.org/jira/browse/CASSANDRA-6499 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Adam Hattrell Priority: Minor If you attempt to run shuffle whilst authenticator: org.apache.cassandra.auth.PasswordAuthenticator is set in the cassandra.yaml you get the following error: Exception in thread main java.lang.RuntimeException: InvalidRequestException(why:You have not logged in) at org.apache.cassandra.tools.Shuffle.executeCqlQuery(Shuffle.java:516) at org.apache.cassandra.tools.Shuffle.shuffle(Shuffle.java:359) at org.apache.cassandra.tools.Shuffle.main(Shuffle.java:681) Caused by: InvalidRequestException(why:You have not logged in) at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:37849) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1562) at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1547) at org.apache.cassandra.tools.CassandraClient.execute_cql_query(Shuffle.java:736) at org.apache.cassandra.tools.Shuffle.executeCqlQuery(Shuffle.java:502) ... 2 more I've logged this as Minor as I wouldn't really recommend using shuffle in production. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing
[ https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850697#comment-13850697 ] Nikolai Grigoriev commented on CASSANDRA-6008: -- Not sure it is related, but I have noticed that I often have this issue when the node shuts down with this exception: {code} INFO [RMI TCP Connection(8)-10.3.45.158] 2013-12-17 17:22:31,782 StorageService.java (line 941) DRAINED ERROR [CompactionExecutor:2008] 2013-12-17 17:22:36,615 CassandraDaemon.java (line 187) Exception in thread Thread[CompactionExecutor:2008,1,main] java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@16e10a93 rejected from org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@107d44a1[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 130876] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:629) at org.apache.cassandra.io.sstable.SSTableDeletingTask.schedule(SSTableDeletingTask.java:66) at org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1105) at org.apache.cassandra.db.DataTracker.removeOldSSTablesSize(DataTracker.java:388) at org.apache.cassandra.db.DataTracker.postReplace(DataTracker.java:353) at org.apache.cassandra.db.DataTracker.replace(DataTracker.java:347) at org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:252) at org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:1078) at 
org.apache.cassandra.db.compaction.CompactionTask.replaceCompactedSSTables(CompactionTask.java:296) at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:242) at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) {code} I do disable thrift,gossip and drain the node before stopping Cassandra process. Getting 'This should never happen' error at startup due to sstables missing --- Key: CASSANDRA-6008 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008 Project: Cassandra Issue Type: Bug Components: Core Reporter: John Carrino Assignee: Tyler Hobbs Fix For: 2.0.4 Attachments: 6008-2.0-part2.patch, 6008-2.0-v1.patch, 6008-trunk-v1.patch Exception encountered during startup: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables This happens when sstables that have been compacted away are removed, but they still have entries in the system.compactions_in_progress table. Normally this should not happen because the entries in system.compactions_in_progress are deleted before the old sstables are deleted. 
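The stack trace quoted above is the generic JDK failure mode of handing a task to an executor that has already been torn down: after DRAINED, Cassandra's DebuggableScheduledThreadPoolExecutor is terminated, so the late SSTableDeletingTask.schedule() call hits the default AbortPolicy. A standalone sketch of that behavior with plain JDK classes (not Cassandra's wrapper; names here are illustrative):

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledThreadPoolExecutor;

public class DrainRace {
    // Returns true if scheduling after shutdown is rejected -- the same
    // race as a compaction finishing after the node has been drained.
    static boolean scheduleAfterShutdown() {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
        executor.shutdown();            // roughly what drain does to the pool
        try {
            executor.submit(() -> {});  // late task, e.g. an sstable deletion
            return false;
        } catch (RejectedExecutionException e) {
            return true;                // default AbortPolicy throws
        }
    }

    public static void main(String[] args) {
        System.out.println("rejected: " + scheduleAfterShutdown());
    }
}
```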
However at startup recovery time, old sstables are deleted (NOT BEFORE they are removed from the compactions_in_progress table) and then after that is done it does a truncate using SystemKeyspace.discardCompactionsInProgress. We ran into a case where the disk filled up and the node died and was bounced and then failed to truncate this table on startup, and then got stuck hitting this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers. Maybe on startup we can delete from this table incrementally as we clean stuff up, in the same way that compactions delete from this table before they delete old sstables.
[jira] [Commented] (CASSANDRA-5633) CQL support for updating multiple rows in a partition using CAS
[ https://issues.apache.org/jira/browse/CASSANDRA-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850698#comment-13850698 ] Sebastian Schmidt commented on CASSANDRA-5633: -- This indeed seems to be a solution to our problem and we could certainly model our data using this approach. I am not sold that we need to introduce yet another concept just for syntactic ease, but this solution does offer us a very straightforward way to specify this specific CAS use case. CQL support for updating multiple rows in a partition using CAS --- Key: CASSANDRA-5633 URL: https://issues.apache.org/jira/browse/CASSANDRA-5633 Project: Cassandra Issue Type: Improvement Affects Versions: 2.0 beta 1 Reporter: sankalp kohli Assignee: Sylvain Lebresne Priority: Minor Labels: cql3 Fix For: 2.0.4 This is currently supported via Thrift but not via CQL. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-5839) Save repair data to system table
[ https://issues.apache.org/jira/browse/CASSANDRA-5839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850694#comment-13850694 ] Jimmy Mårdell commented on CASSANDRA-5839: -- Thanks for your input, it's very helpful! I didn't know about the default_time_to_live actually among other things (my CQL is very bad, but I speak thrift almost natively ;) ) I've made the suggested changes (patch coming), but I realize that perhaps the logging code should be in SystemKeyspace or (more likely) a new static class SystemKeyspaceDistributed. I'm not a fan of so many static methods but I am a fan of consistency. What do you think? Save repair data to system table Key: CASSANDRA-5839 URL: https://issues.apache.org/jira/browse/CASSANDRA-5839 Project: Cassandra Issue Type: New Feature Components: Core, Tools Reporter: Jonathan Ellis Assignee: Jimmy Mårdell Priority: Minor Fix For: 2.0.4 Attachments: 2.0.4-5839-draft.patch As noted in CASSANDRA-2405, it would be useful to store repair results, particularly with sub-range repair available (CASSANDRA-5280). -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing
[ https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850705#comment-13850705 ] Tyler Hobbs commented on CASSANDRA-6008: bq. This means that instead of throwing an error if we restart before removeUnfinishedCompactionLeftovers finishes, we'll leave both old and new sstables from unfinished compactions live, which defeats the purpose for counters. D'oh, you're right. bq. Switch back to delete-first, and add a debug line instead of IllegalStateException. (Can delete from compaction_log incrementally too to reduce the window of inconsistency.) [~yukim]'s patch on CASSANDRA-6086 basically does this (except for deleting incrementally) so we should pick one ticket or the other to do that under. Getting 'This should never happen' error at startup due to sstables missing --- Key: CASSANDRA-6008 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008 Project: Cassandra Issue Type: Bug Components: Core Reporter: John Carrino Assignee: Tyler Hobbs Fix For: 2.0.4 Attachments: 6008-2.0-part2.patch, 6008-2.0-v1.patch, 6008-trunk-v1.patch Exception encountered during startup: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables This happens when sstables that have been compacted away are removed, but they still have entries in the system.compactions_in_progress table. Normally this should not happen because the entries in system.compactions_in_progress are deleted before the old sstables are deleted. 
However at startup recovery time, old sstables are deleted (NOT BEFORE they are removed from the compactions_in_progress table) and then after that is done it does a truncate using SystemKeyspace.discardCompactionsInProgress We ran into a case where the disk filled up and the node died and was bounced and then failed to truncate this table on startup, and then got stuck hitting this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers. Maybe on startup we can delete from this table incrementally as we clean stuff up in the same way that compactions delete from this table before they delete old sstables. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
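The crash-safety argument in this ticket comes down to the order of two non-atomic steps. A toy model (my sketch, not Cassandra's actual code) showing why removing the log entry before the file is safe, while the reverse ordering can leave the dangling log entry that triggers the "This should never happen" check:

```java
import java.util.HashSet;
import java.util.Set;

public class CompactionLogOrdering {
    // The startup check fails when a compaction-log entry references an
    // sstable that no longer exists on disk.
    static boolean consistent(Set<String> log, Set<String> files) {
        return files.containsAll(log);
    }

    // Delete-first: forget the compaction, then delete the file. A crash
    // between the steps leaves a file with no log entry, which is harmless.
    static void deleteLogFirst(String s, Set<String> log, Set<String> files,
                               boolean crashBetween) {
        log.remove(s);
        if (crashBetween) return;   // simulated crash between the two steps
        files.remove(s);
    }

    // File-first: the startup-recovery ordering the ticket describes. A
    // crash between the steps leaves the inconsistent state.
    static void deleteFileFirst(String s, Set<String> log, Set<String> files,
                                boolean crashBetween) {
        files.remove(s);
        if (crashBetween) return;   // simulated crash between the two steps
        log.remove(s);
    }

    public static void main(String[] args) {
        Set<String> log = new HashSet<>(Set.of("sstable-1"));
        Set<String> files = new HashSet<>(Set.of("sstable-1"));
        deleteLogFirst("sstable-1", log, files, true);
        System.out.println("log-first after crash, consistent: " + consistent(log, files));

        log = new HashSet<>(Set.of("sstable-1"));
        files = new HashSet<>(Set.of("sstable-1"));
        deleteFileFirst("sstable-1", log, files, true);
        System.out.println("file-first after crash, consistent: " + consistent(log, files));
    }
}
```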
[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing
[ https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850709#comment-13850709 ] Tyler Hobbs commented on CASSANDRA-6008: bq. Another possible issue may be when doing restore from backup. If you do a shutdown while there are rows in compaction_log and then clear the current tables and replace with new ones you will get this error also. [~johnyoh] yes, that's another good argument for approach #1, in my opinion. Getting 'This should never happen' error at startup due to sstables missing --- Key: CASSANDRA-6008 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008 Project: Cassandra Issue Type: Bug Components: Core Reporter: John Carrino Assignee: Tyler Hobbs Fix For: 2.0.4 Attachments: 6008-2.0-part2.patch, 6008-2.0-v1.patch, 6008-trunk-v1.patch Exception encountered during startup: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables This happens when sstables that have been compacted away are removed, but they still have entries in the system.compactions_in_progress table. Normally this should not happen because the entries in system.compactions_in_progress are deleted before the old sstables are deleted. However at startup recovery time, old sstables are deleted (NOT BEFORE they are removed from the compactions_in_progress table) and then after that is done it does a truncate using SystemKeyspace.discardCompactionsInProgress We ran into a case where the disk filled up and the node died and was bounced and then failed to truncate this table on startup, and then got stuck hitting this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers. Maybe on startup we can delete from this table incrementally as we clean stuff up in the same way that compactions delete from this table before they delete old sstables. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
git commit: Fix dereference after null check
Updated Branches: refs/heads/trunk e50d6af12 - 6e6730d87 Fix dereference after null check Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6e6730d8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6e6730d8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6e6730d8 Branch: refs/heads/trunk Commit: 6e6730d875cbb47e9af128a59467f4a02a3a43fb Parents: e50d6af Author: Yuki Morishita yu...@apache.org Authored: Tue Dec 17 11:43:57 2013 -0600 Committer: Yuki Morishita yu...@apache.org Committed: Tue Dec 17 11:43:57 2013 -0600 -- src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6e6730d8/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java -- diff --git a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java index d4b0b77..d8166ad 100644 --- a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java +++ b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java @@ -64,8 +64,8 @@ public class SSTableMetadataViewer out.printf("Estimated droppable tombstones: %s%n", stats.getEstimatedDroppableTombstoneRatio((int) (System.currentTimeMillis() / 1000))); out.printf("SSTable Level: %d%n", stats.sstableLevel); out.println(stats.replayPosition); +printHistograms(stats, out); } -printHistograms(stats, out); } }
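The one-line move in this commit is the classic fix for a dereference-after-null-check bug: the guarded variable must only be used inside the guarded block. A minimal illustration of the pattern (hypothetical names, not the SSTableMetadataViewer code itself):

```java
public class NullCheckScope {
    static String describe(int[] stats) {
        StringBuilder out = new StringBuilder();
        if (stats != null) {
            out.append("n=").append(stats.length);
            // Fixed version: stats is only dereferenced inside the
            // null-checked block, mirroring the commit that moved
            // printHistograms(stats, out) inside the guarded region.
            out.append(" first=").append(stats.length > 0 ? stats[0] : -1);
        }
        // The buggy version would dereference stats here, after the block:
        // out.append(stats.length);  // NPE when stats == null
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(describe(new int[]{7}));
        System.out.println(describe(null));  // safe: prints an empty line
    }
}
```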
[jira] [Commented] (CASSANDRA-4268) Expose full stop() operation through JMX
[ https://issues.apache.org/jira/browse/CASSANDRA-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850719#comment-13850719 ] Tyler Hobbs commented on CASSANDRA-4268: We want this operation to stop the process, so it basically needs to end with {{System.exit(0)}}. In that case, make sure that {{CassandraDaemon.destroy()}} gets called as well. +1 on adding a nodetool command. There's a typo in the stopdaemon help, though: {{badUse(stopserver does not take arguments.)}} Expose full stop() operation through JMX Key: CASSANDRA-4268 URL: https://issues.apache.org/jira/browse/CASSANDRA-4268 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Tyler Hobbs Assignee: Lyuben Todorov Priority: Minor Labels: jmx Fix For: 2.0.4 Attachments: 4268_cassandra-2.0.patch We already expose ways to stop just the RPC server or gossip. This would fully shutdown the process. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Resolved] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing
[ https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-6008. --- Resolution: Duplicate Fix Version/s: (was: 2.0.4) bq. Yuki Morishita's patch on CASSANDRA-6086 basically does this (except for deleting incrementally) so we should pick one ticket or the other to do that under. All right, resolving this one as duplicate. Getting 'This should never happen' error at startup due to sstables missing --- Key: CASSANDRA-6008 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008 Project: Cassandra Issue Type: Bug Components: Core Reporter: John Carrino Assignee: Tyler Hobbs Attachments: 6008-2.0-part2.patch, 6008-2.0-v1.patch, 6008-trunk-v1.patch Exception encountered during startup: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables This happens when sstables that have been compacted away are removed, but they still have entries in the system.compactions_in_progress table. Normally this should not happen because the entries in system.compactions_in_progress are deleted before the old sstables are deleted. However at startup recovery time, old sstables are deleted (NOT BEFORE they are removed from the compactions_in_progress table) and then after that is done it does a truncate using SystemKeyspace.discardCompactionsInProgress We ran into a case where the disk filled up and the node died and was bounced and then failed to truncate this table on startup, and then got stuck hitting this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers. Maybe on startup we can delete from this table incrementally as we clean stuff up in the same way that compactions delete from this table before they delete old sstables. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[1/3] git commit: Fix size-tiered compaction in LCS L0 patch by jbellis; reviewed by Marcus Eriksson and tested by Nikolai Grigoriev for CASSANDRA-6496
Updated Branches: refs/heads/cassandra-2.0 09c7dee25 - ecec863d1 refs/heads/trunk 216139ff6 - 90e585dde Fix size-tiered compaction in LCS L0 patch by jbellis; reviewed by Marcus Eriksson and tested by Nikolai Grigoriev for CASSANDRA-6496 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ecec863d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ecec863d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ecec863d Branch: refs/heads/cassandra-2.0 Commit: ecec863d1fe3b1b249b7d2948b482104f5ff1ef3 Parents: 09c7dee Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 14:31:04 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 14:31:14 2013 -0600 -- CHANGES.txt | 1 + .../compaction/AbstractCompactionStrategy.java | 2 +- .../cassandra/db/compaction/CompactionTask.java | 2 +- .../compaction/LeveledCompactionStrategy.java | 20 +++- .../db/compaction/LeveledCompactionTask.java| 8 +++ .../db/compaction/LeveledManifest.java | 25 .../SizeTieredCompactionStrategy.java | 2 +- .../cassandra/db/compaction/Upgrader.java | 9 +-- .../cassandra/tools/StandaloneScrubber.java | 2 +- 9 files changed, 39 insertions(+), 32 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ecec863d/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 5450b8a..c2cd052 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.4 + * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496) * Fix assertion failure in filterColdSSTables (CASSANDRA-6483) * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008) * Fix cleanup ClassCastException (CASSANDRA-6462) http://git-wip-us.apache.org/repos/asf/cassandra/blob/ecec863d/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java index b63caab..f101998 100644 --- a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java +++ b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java @@ -167,7 +167,7 @@ public abstract class AbstractCompactionStrategy /** * @return size in bytes of the largest sstables for this strategy */ -public abstract long getMaxSSTableSize(); +public abstract long getMaxSSTableBytes(); public boolean isEnabled() { http://git-wip-us.apache.org/repos/asf/cassandra/blob/ecec863d/src/java/org/apache/cassandra/db/compaction/CompactionTask.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java index 6c6f852..2a23966 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java @@ -118,7 +118,7 @@ public class CompactionTask extends AbstractCompactionTask long totalkeysWritten = 0; long estimatedTotalKeys = Math.max(cfs.metadata.getIndexInterval(), SSTableReader.getApproximateKeyCount(actuallyCompact, cfs.metadata)); -long estimatedSSTables = Math.max(1, SSTable.getTotalBytes(actuallyCompact) / strategy.getMaxSSTableSize()); +long estimatedSSTables = Math.max(1, SSTable.getTotalBytes(actuallyCompact) / strategy.getMaxSSTableBytes()); long keysPerSSTable = (long) Math.ceil((double) estimatedTotalKeys / estimatedSSTables); if (logger.isDebugEnabled()) logger.debug(Expected bloom filter size : + keysPerSSTable); http://git-wip-us.apache.org/repos/asf/cassandra/blob/ecec863d/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java index e992003..8e60223 100644 --- 
a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java +++ b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java @@ -38,7 +38,6 @@ import org.apache.cassandra.notifications.INotification; import org.apache.cassandra.notifications.INotificationConsumer; import org.apache.cassandra.notifications.SSTableAddedNotification; import
[3/3] git commit: merge from 2.0
merge from 2.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/90e585dd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/90e585dd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/90e585dd Branch: refs/heads/trunk Commit: 90e585dde10189bc8e7044837ce2db91720ea2ce Parents: 216139f ecec863 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 14:35:34 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 14:35:34 2013 -0600 -- CHANGES.txt | 1 + .../compaction/AbstractCompactionStrategy.java | 2 +- .../cassandra/db/compaction/CompactionTask.java | 2 +- .../compaction/LeveledCompactionStrategy.java | 20 +++- .../db/compaction/LeveledCompactionTask.java| 8 +++ .../db/compaction/LeveledManifest.java | 25 .../SizeTieredCompactionStrategy.java | 2 +- .../cassandra/db/compaction/Upgrader.java | 2 +- .../cassandra/tools/StandaloneScrubber.java | 2 +- 9 files changed, 39 insertions(+), 25 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/90e585dd/CHANGES.txt -- diff --cc CHANGES.txt index dcc7e33,c2cd052..6c9f2e1 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,25 -1,5 +1,26 @@@ +2.1 + * Multithreaded commitlog (CASSANDRA-3578) + * allocate fixed index summary memory pool and resample cold index summaries + to use less memory (CASSANDRA-5519) + * Removed multithreaded compaction (CASSANDRA-6142) + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337) + * change logging from log4j to logback (CASSANDRA-5883) + * switch to LZ4 compression for internode communication (CASSANDRA-5887) + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971) + * Remove 1.2 network compatibility code (CASSANDRA-5960) + * Remove leveled json manifest migration code (CASSANDRA-5996) + * Remove CFDefinition (CASSANDRA-6253) + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278) + * User-defined types for 
CQL3 (CASSANDRA-5590) + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871, 6406) + * Batch read from OTC's queue and cleanup (CASSANDRA-1632) + * Secondary index support for collections (CASSANDRA-4511) + * SSTable metadata(Stats.db) format change (CASSANDRA-6356) + * Push composites support in the storage engine (CASSANDRA-5417) + + 2.0.4 + * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496) * Fix assertion failure in filterColdSSTables (CASSANDRA-6483) * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008) * Fix cleanup ClassCastException (CASSANDRA-6462) http://git-wip-us.apache.org/repos/asf/cassandra/blob/90e585dd/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/90e585dd/src/java/org/apache/cassandra/db/compaction/CompactionTask.java -- diff --cc src/java/org/apache/cassandra/db/compaction/CompactionTask.java index 59f2f2f,2a23966..cabe486 --- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java @@@ -119,12 -118,14 +119,12 @@@ public class CompactionTask extends Abs long totalkeysWritten = 0; long estimatedTotalKeys = Math.max(cfs.metadata.getIndexInterval(), SSTableReader.getApproximateKeyCount(actuallyCompact, cfs.metadata)); - long estimatedSSTables = Math.max(1, SSTableReader.getTotalBytes(actuallyCompact) / strategy.getMaxSSTableSize()); -long estimatedSSTables = Math.max(1, SSTable.getTotalBytes(actuallyCompact) / strategy.getMaxSSTableBytes()); ++long estimatedSSTables = Math.max(1, SSTableReader.getTotalBytes(actuallyCompact) / strategy.getMaxSSTableBytes()); long keysPerSSTable = (long) Math.ceil((double) estimatedTotalKeys / estimatedSSTables); if (logger.isDebugEnabled()) -logger.debug(Expected bloom filter size : + keysPerSSTable); +logger.debug(Expected bloom filter size : {}, keysPerSSTable); -AbstractCompactionIterable ci = 
DatabaseDescriptor.isMultithreadedCompaction() - ? new ParallelCompactionIterable(compactionType, strategy.getScanners(actuallyCompact), controller) - : new CompactionIterable(compactionType, strategy.getScanners(actuallyCompact), controller); +AbstractCompactionIterable ci = new CompactionIterable(compactionType,
[2/3] git commit: Fix size-tiered compaction in LCS L0 patch by jbellis; reviewed by Marcus Eriksson and tested by Nikolai Grigoriev for CASSANDRA-6496
Fix size-tiered compaction in LCS L0 patch by jbellis; reviewed by Marcus Eriksson and tested by Nikolai Grigoriev for CASSANDRA-6496 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ecec863d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ecec863d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ecec863d Branch: refs/heads/trunk Commit: ecec863d1fe3b1b249b7d2948b482104f5ff1ef3 Parents: 09c7dee Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 14:31:04 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 14:31:14 2013 -0600 -- CHANGES.txt | 1 + .../compaction/AbstractCompactionStrategy.java | 2 +- .../cassandra/db/compaction/CompactionTask.java | 2 +- .../compaction/LeveledCompactionStrategy.java | 20 +++- .../db/compaction/LeveledCompactionTask.java| 8 +++ .../db/compaction/LeveledManifest.java | 25 .../SizeTieredCompactionStrategy.java | 2 +- .../cassandra/db/compaction/Upgrader.java | 9 +-- .../cassandra/tools/StandaloneScrubber.java | 2 +- 9 files changed, 39 insertions(+), 32 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ecec863d/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 5450b8a..c2cd052 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.4 + * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496) * Fix assertion failure in filterColdSSTables (CASSANDRA-6483) * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008) * Fix cleanup ClassCastException (CASSANDRA-6462) http://git-wip-us.apache.org/repos/asf/cassandra/blob/ecec863d/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java index b63caab..f101998 100644 ---
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java +++ b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java @@ -167,7 +167,7 @@ public abstract class AbstractCompactionStrategy /** * @return size in bytes of the largest sstables for this strategy */ -public abstract long getMaxSSTableSize(); +public abstract long getMaxSSTableBytes(); public boolean isEnabled() { http://git-wip-us.apache.org/repos/asf/cassandra/blob/ecec863d/src/java/org/apache/cassandra/db/compaction/CompactionTask.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java index 6c6f852..2a23966 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java @@ -118,7 +118,7 @@ public class CompactionTask extends AbstractCompactionTask long totalkeysWritten = 0; long estimatedTotalKeys = Math.max(cfs.metadata.getIndexInterval(), SSTableReader.getApproximateKeyCount(actuallyCompact, cfs.metadata)); -long estimatedSSTables = Math.max(1, SSTable.getTotalBytes(actuallyCompact) / strategy.getMaxSSTableSize()); +long estimatedSSTables = Math.max(1, SSTable.getTotalBytes(actuallyCompact) / strategy.getMaxSSTableBytes()); long keysPerSSTable = (long) Math.ceil((double) estimatedTotalKeys / estimatedSSTables); if (logger.isDebugEnabled()) logger.debug(Expected bloom filter size : + keysPerSSTable); http://git-wip-us.apache.org/repos/asf/cassandra/blob/ecec863d/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java index e992003..8e60223 100644 --- a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java +++ 
b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java @@ -38,7 +38,6 @@ import org.apache.cassandra.notifications.INotification; import org.apache.cassandra.notifications.INotificationConsumer; import org.apache.cassandra.notifications.SSTableAddedNotification; import org.apache.cassandra.notifications.SSTableListChangedNotification; -import org.apache.cassandra.utils.Pair; public class LeveledCompactionStrategy extends
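The rename in the hunks above (getMaxSSTableSize → getMaxSSTableBytes) feeds the output-size estimate in CompactionTask. A standalone sketch of that arithmetic — class and method names are ours, not Cassandra's actual code:

```java
// Sketch of the compaction output estimate used in CompactionTask:
// split total input bytes across sstables capped at the strategy's
// per-sstable byte limit, clamped to at least one output sstable.
public class CompactionEstimate {
    // maxSSTableBytes corresponds to the renamed getMaxSSTableBytes()
    static long estimatedSSTables(long totalInputBytes, long maxSSTableBytes) {
        return Math.max(1, totalInputBytes / maxSSTableBytes);
    }

    // keys-per-sstable estimate used to size each bloom filter
    static long keysPerSSTable(long estimatedTotalKeys, long estimatedSSTables) {
        return (long) Math.ceil((double) estimatedTotalKeys / estimatedSSTables);
    }

    public static void main(String[] args) {
        // e.g. 1 GiB of input with a 160 MiB cap -> 6 output sstables
        System.out.println(estimatedSSTables(1L << 30, 160L << 20)); // 6
        System.out.println(keysPerSSTable(1_000_000, 6)); // 166667
    }
}
```

The clamp to 1 matters: with fewer input bytes than the cap, integer division would otherwise yield zero and the keys-per-sstable division would fail.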
[jira] [Updated] (CASSANDRA-6464) Paging queries with IN on the partition key is broken
[ https://issues.apache.org/jira/browse/CASSANDRA-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-6464: - Attachment: redundant-stuff.txt +1. Attaching a minor nitty patch, removing some redundant constructors and variables. Paging queries with IN on the partition key is broken - Key: CASSANDRA-6464 URL: https://issues.apache.org/jira/browse/CASSANDRA-6464 Project: Cassandra Issue Type: Bug Reporter: Sylvain Lebresne Assignee: Sylvain Lebresne Fix For: 2.0.4 Attachments: 6464.txt, redundant-stuff.txt Feels like MultiPartitionPager (which handles paging queries when there is an IN on the partition key) has completely missed CASSANDRA-5714's train. As a result, it is completely broken and will typically loop infinitely. Attaching a patch to fix. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Assigned] (CASSANDRA-6440) Repair should allow repairing particular endpoints to reduce WAN usage.
[ https://issues.apache.org/jira/browse/CASSANDRA-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-6440: - Assignee: sankalp kohli Repair should allow repairing particular endpoints to reduce WAN usage. Key: CASSANDRA-6440 URL: https://issues.apache.org/jira/browse/CASSANDRA-6440 Project: Cassandra Issue Type: New Feature Reporter: sankalp kohli Assignee: sankalp kohli Priority: Minor Attachments: JIRA-6440.diff The way we send out data that does not match over the WAN can be improved. Example: Say there are four nodes (A, B, C, D) which are replicas of a range we are repairing. A and B are in DC1, C and D are in DC2. If A does not have the data which the other replicas have, then we will have the following streams 1) A to B and back 2) A to C and back (goes over WAN) 3) A to D and back (goes over WAN) One of the ways to reduce WAN traffic is this. 1) Repair A and B only with each other, and C and D with each other, starting at the same time t. 2) Once these repairs have finished, A,B and C,D are in sync with respect to time t. 3) Now run a repair between A and C; the streams exchanged as a result of the diff will also be streamed to B and D via A and C (C and D behave like a proxy for the streams). For a replication of DC1:2,DC2:2, the WAN traffic will be reduced by 50%, and even more for higher replication factors. Another easy way to do this is to have the repair command take the nodes with which you want to repair. Then we can do something like this. 1) Run repair between (A and B) and (C and D) 2) Run repair between (A and C) 3) Run repair between (A and B) and (C and D) But this will increase the traffic inside the DC as we won't be doing proxying. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock
[ https://issues.apache.org/jira/browse/CASSANDRA-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850879#comment-13850879 ] Jonathan Ellis commented on CASSANDRA-5745: --- bq. if 2 sstables are in the deadlock criteria, they will be close in levels and will in fact get compacted relatively quickly in practice, so I'm not sure you can get them to deadlock for long enough that it's a problem in practice Right, although we have some suboptimal behavior there currently (CASSANDRA-6216). A pretty simple tweak we could make would be to allow tombstone compactions to include L+1 overlaps. Then any tombstone-heavy ranges would bubble up to the top from L1 and ultimately be squashed. Just not sure how much extra overhead this introduces in extreme cases like everything is TTL'd. Minor compaction tombstone-removal deadlock --- Key: CASSANDRA-5745 URL: https://issues.apache.org/jira/browse/CASSANDRA-5745 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jonathan Ellis Fix For: 2.0.4 From a discussion with Axel Liljencrantz, If you have two SSTables that have temporally overlapping data, you can get lodged into a state where a compaction of SSTable A can't drop tombstones because SSTable B contains older data *and vice versa*. Once that's happened, Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps with tombstone removal. The only way to break the wedge would be to perform a compaction containing both SSTable A and SSTable B. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
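The mutual-blocking condition described above can be sketched as a simple predicate — a deliberate simplification of ours, not Cassandra's actual droppability check: an sstable's tombstones are only safely purgeable if no overlapping sstable holds data older than them.

```java
import java.util.List;

public class TombstoneDeadlock {
    // Simplified sstable: oldest live-data timestamp and newest tombstone timestamp.
    record SSTable(long minDataTimestamp, long maxTombstoneTimestamp) {}

    // A tombstone is only droppable if no other overlapping sstable holds data
    // older than it; otherwise the deleted data could "resurrect" on reads.
    static boolean canDropTombstones(SSTable s, List<SSTable> overlapping) {
        for (SSTable o : overlapping)
            if (o != s && o.minDataTimestamp() < s.maxTombstoneTimestamp())
                return false;
        return true;
    }

    public static void main(String[] args) {
        SSTable a = new SSTable(1, 10); // old data, newer tombstones
        SSTable b = new SSTable(2, 9);  // likewise
        // Each blocks the other: only a compaction containing both A and B
        // (the fix proposed in the ticket) can drop either set of tombstones.
        System.out.println(canDropTombstones(a, List.of(a, b))); // false
        System.out.println(canDropTombstones(b, List.of(a, b))); // false
    }
}
```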
[jira] [Updated] (CASSANDRA-6440) Repair should allow repairing particular endpoints to reduce WAN usage.
[ https://issues.apache.org/jira/browse/CASSANDRA-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6440: -- Reviewer: Lyuben Todorov Repair should allow repairing particular endpoints to reduce WAN usage. Key: CASSANDRA-6440 URL: https://issues.apache.org/jira/browse/CASSANDRA-6440 Project: Cassandra Issue Type: New Feature Reporter: sankalp kohli Priority: Minor Attachments: JIRA-6440.diff The way we send out data that does not match over WAN can be improved. Example: Say there are four nodes(A,B,C,D) which are replica of a range we are repairing. A, B is in DC1 and C,D is in DC2. If A does not have the data which other replicas have, then we will have following streams 1) A to B and back 2) A to C and back(Goes over WAN) 3) A to D and back(Goes over WAN) One of the ways of doing it to reduce WAN traffic is this. 1) Repair A and B only with each other and C and D with each other starting at same time t. 2) Once these repairs have finished, A,B and C,D are in sync with respect to time t. 3) Now run a repair between A and C, the streams which are exchanged as a result of the diff will also be streamed to B and D via A and C(C and D behaves like a proxy to the streams). For a replication of DC1:2,DC2:2, the WAN traffic will get reduced by 50% and even more for higher replication factors. Another easy way to do this is to have repair command take nodes with which you want to repair with. Then we can do something like this. 1) Run repair between (A and B) and (C and D) 2) Run repair between (A and C) 3) Run repair between (A and B) and (C and D) But this will increase the traffic inside the DC as we wont be doing proxy. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
git commit: ninja-correcting the constant for LOCAL_ONE.
Updated Branches: refs/heads/cassandra-1.2 13348c47a - 1b4c9b45c ninja-correcting the the constant for LOCAL_ONE. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b4c9b45 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b4c9b45 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b4c9b45 Branch: refs/heads/cassandra-1.2 Commit: 1b4c9b45cbf32a72318c42c1ec6154dc1371e8e2 Parents: 13348c4 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Dec 17 11:02:42 2013 -0800 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Dec 17 11:02:42 2013 -0800 -- doc/native_protocol.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b4c9b45/doc/native_protocol.spec -- diff --git a/doc/native_protocol.spec b/doc/native_protocol.spec index 0d2ff05..08cb91e 100644 --- a/doc/native_protocol.spec +++ b/doc/native_protocol.spec @@ -201,7 +201,7 @@ Table of Contents 0x0005ALL 0x0006LOCAL_QUORUM 0x0007EACH_QUORUM - 0x0010LOCAL_ONE + 0x000ALOCAL_ONE [string map] A [short] n, followed by n pair kv where k and v are [string].
[jira] [Commented] (CASSANDRA-5742) Add command list snapshots to nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850774#comment-13850774 ] Lyuben Todorov commented on CASSANDRA-5742: --- The output seems confusing, can we go for something along the lines of
{noformat}
Snapshot Details:
Snapshot Name   Keyspace    Column Family   TrueDiskSpaceUsed   TotalDiskSpaceUsed
1387304478196   Keyspace1   Standard1       0 bytes             308.66 MB
1387304417755   Keyspace1   Standard1       0 bytes             107.21 MB
1387305820866   Keyspace1   Standard2       0 bytes             41.69 MB
                Keyspace1   Standard1       0 bytes             308.66 MB
{noformat}
Add command list snapshots to nodetool Key: CASSANDRA-5742 URL: https://issues.apache.org/jira/browse/CASSANDRA-5742 Project: Cassandra Issue Type: New Feature Components: Tools Affects Versions: 1.2.1 Reporter: Geert Schuring Assignee: sankalp kohli Priority: Minor Labels: lhf Attachments: JIRA-5742.diff, new_file.diff It would be nice if the nodetool could tell me which snapshots are present on the system instead of me having to browse the filesystem to fetch the names of the snapshots. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3bd65965 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3bd65965 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3bd65965 Branch: refs/heads/cassandra-2.0 Commit: 3bd65965fd4049614676b6b13ca349401d4e034e Parents: 22d8744 1b4c9b4 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Dec 17 11:04:03 2013 -0800 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Dec 17 11:04:03 2013 -0800 -- doc/native_protocol_v1.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bd65965/doc/native_protocol_v1.spec -- diff --cc doc/native_protocol_v1.spec index 0d2ff05,000..08cb91e mode 100644,00..100644 --- a/doc/native_protocol_v1.spec +++ b/doc/native_protocol_v1.spec @@@ -1,636 -1,0 +1,636 @@@ + + CQL BINARY PROTOCOL v1 + + +Table of Contents + + 1. Overview + 2. Frame header +2.1. version +2.2. flags +2.3. stream +2.4. opcode +2.5. length + 3. Notations + 4. Messages +4.1. Requests + 4.1.1. STARTUP + 4.1.2. CREDENTIALS + 4.1.3. OPTIONS + 4.1.4. QUERY + 4.1.5. PREPARE + 4.1.6. EXECUTE + 4.1.7. REGISTER +4.2. Responses + 4.2.1. ERROR + 4.2.2. READY + 4.2.3. AUTHENTICATE + 4.2.4. SUPPORTED + 4.2.5. RESULT +4.2.5.1. Void +4.2.5.2. Rows +4.2.5.3. Set_keyspace +4.2.5.4. Prepared +4.2.5.5. Schema_change + 4.2.6. EVENT + 5. Compression + 6. Collection types + 7. Error codes + + +1. Overview + + The CQL binary protocol is a frame based protocol. Frames are defined as: + + 0 8162432 + +-+-+-+-+ + | version | flags | stream | opcode | + +-+-+-+-+ + |length | + +-+-+-+-+ + | | + .... body ... . + . . + . . + + + + The protocol is big-endian (network byte order). + + Each frame contains a fixed size header (8 bytes) followed by a variable size + body. The header is described in Section 2. 
The content of the body depends + on the header opcode value (the body can in particular be empty for some + opcode values). The list of allowed opcode is defined Section 2.3 and the + details of each corresponding message is described Section 4. + + The protocol distinguishes 2 types of frames: requests and responses. Requests + are those frame sent by the clients to the server, response are the ones sent + by the server. Note however that while communication are initiated by the + client with the server responding to request, the protocol may likely add + server pushes in the future, so responses does not obligatory come right after + a client request. + + Note to client implementors: clients library should always assume that the + body of a given frame may contain more data than what is described in this + document. It will however always be safe to ignore the remaining of the frame + body in such cases. The reason is that this may allow to sometimes extend the + protocol with optional features without needing to change the protocol + version. + + +2. Frame header + +2.1. version + + The version is a single byte that indicate both the direction of the message + (request or response) and the version of the protocol in use. The up-most bit + of version is used to define the direction of the message: 0 indicates a + request, 1 indicates a responses. This can be useful for protocol analyzers to + distinguish the nature of the packet from the direction which it is moving. + The rest of that byte is the protocol version (1 for the protocol defined in + this document). In other words, for this version of the protocol, version will + have one of: +0x01Request frame for this protocol version +0x81Response frame for this protocol version + + +2.2. flags + + Flags applying to this frame. The flags have the following meaning (described + by the mask that allow to select them): +0x01: Compression flag. If set, the frame body is compressed. 
The actual + compression to use should have been set up beforehand through the + Startup message (which
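The 8-byte v1 frame header quoted above decodes with plain big-endian reads. A minimal sketch of a parser — class and field names are our own choosing, not any driver's API:

```java
import java.nio.ByteBuffer;

// Decodes the fixed 8-byte native-protocol v1 frame header:
// version (1 byte), flags (1 byte), stream (1 byte), opcode (1 byte),
// body length (4 bytes), all big-endian as the spec requires.
public class FrameHeader {
    final boolean isResponse;     // top bit of the version byte: 0 request, 1 response
    final int protocolVersion;    // remaining 7 bits (1 for this spec)
    final boolean compressed;     // flags bit 0x01
    final int stream;
    final int opcode;
    final long bodyLength;        // unsigned 32-bit length of the body

    FrameHeader(byte[] header) {
        ByteBuffer buf = ByteBuffer.wrap(header); // ByteBuffer is big-endian by default
        int version = buf.get() & 0xFF;
        isResponse = (version & 0x80) != 0;
        protocolVersion = version & 0x7F;
        compressed = (buf.get() & 0x01) != 0;
        stream = buf.get();
        opcode = buf.get() & 0xFF;
        bodyLength = buf.getInt() & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        // 0x81 = response frame, protocol v1; compressed; opcode 0x08; 42-byte body
        FrameHeader h = new FrameHeader(new byte[]{(byte) 0x81, 0x01, 0x00, 0x08, 0x00, 0x00, 0x00, 0x2A});
        System.out.println(h.isResponse + " v" + h.protocolVersion + " opcode=" + h.opcode + " len=" + h.bodyLength);
    }
}
```

Note the spec's advice to client implementors applies here too: a parser should tolerate body bytes beyond what it understands rather than rejecting the frame.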
[1/3] git commit: ninja-correcting the constant for LOCAL_ONE.
Updated Branches: refs/heads/cassandra-2.0 22d87444c - 09c7dee25 ninja-correcting the the constant for LOCAL_ONE. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b4c9b45 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b4c9b45 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b4c9b45 Branch: refs/heads/cassandra-2.0 Commit: 1b4c9b45cbf32a72318c42c1ec6154dc1371e8e2 Parents: 13348c4 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Dec 17 11:02:42 2013 -0800 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Dec 17 11:02:42 2013 -0800 -- doc/native_protocol.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b4c9b45/doc/native_protocol.spec -- diff --git a/doc/native_protocol.spec b/doc/native_protocol.spec index 0d2ff05..08cb91e 100644 --- a/doc/native_protocol.spec +++ b/doc/native_protocol.spec @@ -201,7 +201,7 @@ Table of Contents 0x0005ALL 0x0006LOCAL_QUORUM 0x0007EACH_QUORUM - 0x0010LOCAL_ONE + 0x000ALOCAL_ONE [string map] A [short] n, followed by n pair kv where k and v are [string].
[jira] [Commented] (CASSANDRA-6086) Node refuses to start with exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers when find that some to be removed files are already removed
[ https://issues.apache.org/jira/browse/CASSANDRA-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850854#comment-13850854 ] Yuki Morishita commented on CASSANDRA-6086: --- [~thobbs] Added a test and slightly changed the message: https://github.com/yukim/cassandra/commits/6086 I'm not good at wording, so suggestions are welcome. Node refuses to start with exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers when find that some to be removed files are already removed - Key: CASSANDRA-6086 URL: https://issues.apache.org/jira/browse/CASSANDRA-6086 Project: Cassandra Issue Type: Bug Components: Core Reporter: Oleg Anastasyev Assignee: Yuki Morishita Fix For: 2.0.4 Attachments: 6086-v2.txt, removeUnfinishedCompactionLeftovers.txt Node refuses to start with {code} Caused by: java.lang.IllegalStateException: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables. at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:544) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:262) {code} IMO, there is no reason to refuse to start upon discovering that files that must be removed are already removed. It looks like pure bug-diagnostic code and means nothing to the operator (nor can he do anything about it). Replaced the throw of the exception with a diagnostic warning and continue startup. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[3/3] git commit: ninja-correcting the constant for LOCAL_ONE in v2 of the native_protocol doc.
ninja-correcting the the constant for LOCAL_ONE in v2 of the native_protocol doc. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/09c7dee2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/09c7dee2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/09c7dee2 Branch: refs/heads/cassandra-2.0 Commit: 09c7dee2554f6732505e603b16127a2c0b426d49 Parents: 3bd6596 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Dec 17 11:05:06 2013 -0800 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Dec 17 11:05:06 2013 -0800 -- doc/native_protocol_v2.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/09c7dee2/doc/native_protocol_v2.spec -- diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec index 9ec2463..44061da 100644 --- a/doc/native_protocol_v2.spec +++ b/doc/native_protocol_v2.spec @@ -220,7 +220,7 @@ Table of Contents 0x0007EACH_QUORUM 0x0008SERIAL 0x0009LOCAL_SERIAL - 0x0010LOCAL_ONE + 0x000ALOCAL_ONE [string map] A [short] n, followed by n pair kv where k and v are [string].
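The one-character diff above is why the fix matters on the wire: LOCAL_ONE is the tenth consistency code, i.e. decimal 10 = 0x000A, whereas the old spec text's 0x0010 reads as decimal 16. A sketch of the corrected v2 code-to-name mapping (the method is ours; the codes are from the spec table above):

```java
// Maps native-protocol v2 consistency codes to their names,
// including the corrected LOCAL_ONE = 0x000A.
public class ConsistencyCode {
    static String name(int code) {
        switch (code) {
            case 0x0000: return "ANY";
            case 0x0001: return "ONE";
            case 0x0002: return "TWO";
            case 0x0003: return "THREE";
            case 0x0004: return "QUORUM";
            case 0x0005: return "ALL";
            case 0x0006: return "LOCAL_QUORUM";
            case 0x0007: return "EACH_QUORUM";
            case 0x0008: return "SERIAL";
            case 0x0009: return "LOCAL_SERIAL";
            case 0x000A: return "LOCAL_ONE"; // corrected value; 0x0010 would be decimal 16
            default: throw new IllegalArgumentException("unknown consistency code: " + code);
        }
    }
}
```

A client built against the old spec text would have sent 0x0010 and been rejected by the server, which is what makes this doc-only fix worth a ninja commit.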
[2/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3bd65965 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3bd65965 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3bd65965 Branch: refs/heads/trunk Commit: 3bd65965fd4049614676b6b13ca349401d4e034e Parents: 22d8744 1b4c9b4 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Dec 17 11:04:03 2013 -0800 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Dec 17 11:04:03 2013 -0800 -- doc/native_protocol_v1.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bd65965/doc/native_protocol_v1.spec -- diff --cc doc/native_protocol_v1.spec index 0d2ff05,000..08cb91e mode 100644,00..100644 --- a/doc/native_protocol_v1.spec +++ b/doc/native_protocol_v1.spec @@@ -1,636 -1,0 +1,636 @@@ + + CQL BINARY PROTOCOL v1 + + +Table of Contents + + 1. Overview + 2. Frame header +2.1. version +2.2. flags +2.3. stream +2.4. opcode +2.5. length + 3. Notations + 4. Messages +4.1. Requests + 4.1.1. STARTUP + 4.1.2. CREDENTIALS + 4.1.3. OPTIONS + 4.1.4. QUERY + 4.1.5. PREPARE + 4.1.6. EXECUTE + 4.1.7. REGISTER +4.2. Responses + 4.2.1. ERROR + 4.2.2. READY + 4.2.3. AUTHENTICATE + 4.2.4. SUPPORTED + 4.2.5. RESULT +4.2.5.1. Void +4.2.5.2. Rows +4.2.5.3. Set_keyspace +4.2.5.4. Prepared +4.2.5.5. Schema_change + 4.2.6. EVENT + 5. Compression + 6. Collection types + 7. Error codes + + +1. Overview + + The CQL binary protocol is a frame based protocol. Frames are defined as: + + 0 8162432 + +-+-+-+-+ + | version | flags | stream | opcode | + +-+-+-+-+ + |length | + +-+-+-+-+ + | | + .... body ... . + . . + . . + + + + The protocol is big-endian (network byte order). + + Each frame contains a fixed size header (8 bytes) followed by a variable size + body. The header is described in Section 2. 
The content of the body depends + on the header opcode value (the body can in particular be empty for some + opcode values). The list of allowed opcode is defined Section 2.3 and the + details of each corresponding message is described Section 4. + + The protocol distinguishes 2 types of frames: requests and responses. Requests + are those frame sent by the clients to the server, response are the ones sent + by the server. Note however that while communication are initiated by the + client with the server responding to request, the protocol may likely add + server pushes in the future, so responses does not obligatory come right after + a client request. + + Note to client implementors: clients library should always assume that the + body of a given frame may contain more data than what is described in this + document. It will however always be safe to ignore the remaining of the frame + body in such cases. The reason is that this may allow to sometimes extend the + protocol with optional features without needing to change the protocol + version. + + +2. Frame header + +2.1. version + + The version is a single byte that indicate both the direction of the message + (request or response) and the version of the protocol in use. The up-most bit + of version is used to define the direction of the message: 0 indicates a + request, 1 indicates a responses. This can be useful for protocol analyzers to + distinguish the nature of the packet from the direction which it is moving. + The rest of that byte is the protocol version (1 for the protocol defined in + this document). In other words, for this version of the protocol, version will + have one of: +0x01Request frame for this protocol version +0x81Response frame for this protocol version + + +2.2. flags + + Flags applying to this frame. The flags have the following meaning (described + by the mask that allow to select them): +0x01: Compression flag. If set, the frame body is compressed. 
The actual + compression to use should have been set up beforehand through the + Startup message (which thus
[3/4] git commit: ninja-correcting the constant for LOCAL_ONE in v2 of the native_protocol doc.
ninja-correcting the the constant for LOCAL_ONE in v2 of the native_protocol doc. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/09c7dee2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/09c7dee2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/09c7dee2 Branch: refs/heads/trunk Commit: 09c7dee2554f6732505e603b16127a2c0b426d49 Parents: 3bd6596 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Dec 17 11:05:06 2013 -0800 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Dec 17 11:05:06 2013 -0800 -- doc/native_protocol_v2.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/09c7dee2/doc/native_protocol_v2.spec -- diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec index 9ec2463..44061da 100644 --- a/doc/native_protocol_v2.spec +++ b/doc/native_protocol_v2.spec @@ -220,7 +220,7 @@ Table of Contents 0x0007EACH_QUORUM 0x0008SERIAL 0x0009LOCAL_SERIAL - 0x0010LOCAL_ONE + 0x000ALOCAL_ONE [string map] A [short] n, followed by n pair kv where k and v are [string].
[jira] [Created] (CASSANDRA-6500) SSTableSimpleWriters are not writing Summary.db
Yuki Morishita created CASSANDRA-6500: - Summary: SSTableSimpleWriters are not writing Summary.db Key: CASSANDRA-6500 URL: https://issues.apache.org/jira/browse/CASSANDRA-6500 Project: Cassandra Issue Type: Bug Reporter: Yuki Morishita Priority: Minor I noticed ERROR from one of test in ColumnFamilyStoreTest reporting Summary.db is missing: ERROR 10:08:15,122 Missing component: build/test/cassandra/data/Keyspace1/Standard3/Keyspace1-Standard3-jb-1-Summary.db Looks like this is due to the change in CASSANDRA-5894. SSTableSimpleWriter#close changed to call SSTableWriter#close instead of SSTW#closeAndOpenReader, which does not call SSTableReader.saveSummary anymore. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[1/4] git commit: ninja-correcting the constant for LOCAL_ONE.
Updated Branches: refs/heads/trunk 6e6730d87 -> 216139ff6 ninja-correcting the constant for LOCAL_ONE. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b4c9b45 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b4c9b45 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b4c9b45 Branch: refs/heads/trunk Commit: 1b4c9b45cbf32a72318c42c1ec6154dc1371e8e2 Parents: 13348c4 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Dec 17 11:02:42 2013 -0800 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Dec 17 11:02:42 2013 -0800 -- doc/native_protocol.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b4c9b45/doc/native_protocol.spec -- diff --git a/doc/native_protocol.spec b/doc/native_protocol.spec index 0d2ff05..08cb91e 100644 --- a/doc/native_protocol.spec +++ b/doc/native_protocol.spec @@ -201,7 +201,7 @@ Table of Contents 0x0005 ALL 0x0006 LOCAL_QUORUM 0x0007 EACH_QUORUM - 0x0010 LOCAL_ONE + 0x000A LOCAL_ONE [string map] A [short] n, followed by n pair <k><v> where <k> and <v> are [string].
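The fix is easy to mis-read in the flattened diffs: LOCAL_ONE moves from 0x0010 to 0x000A, the next value after LOCAL_SERIAL (0x0009). A tiny lookup makes the corrected mapping explicit (illustrative helper only, not part of the Cassandra codebase; only codes visible in the diffs above are included):

```java
import java.util.HashMap;
import java.util.Map;

// Maps the [consistency] short codes from the native protocol spec diffs
// above to their names. Hypothetical helper class for illustration.
class ConsistencyCodes {
    private static final Map<Integer, String> CODES = new HashMap<Integer, String>();
    static {
        CODES.put(0x0005, "ALL");
        CODES.put(0x0006, "LOCAL_QUORUM");
        CODES.put(0x0007, "EACH_QUORUM");
        CODES.put(0x0008, "SERIAL");       // v2 only
        CODES.put(0x0009, "LOCAL_SERIAL"); // v2 only
        CODES.put(0x000A, "LOCAL_ONE");    // previously mis-listed as 0x0010
    }

    public static String name(int code) {
        String n = CODES.get(code);
        return n == null ? "UNKNOWN" : n;
    }
}
```

With the correction applied, 0x0010 is no longer a valid consistency code.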
[jira] [Commented] (CASSANDRA-6487) Log WARN on large batch sizes
[ https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850893#comment-13850893 ] Jonathan Ellis commented on CASSANDRA-6487: --- Can you make it configurable by 1s instead of 1000s? Bikeshed: would prefer format of {noformat} Batch of statements for [test.cf, test.cf2, test2.cf] is of size 11024, exceeding specified threshold of 7168 {noformat} Log WARN on large batch sizes - Key: CASSANDRA-6487 URL: https://issues.apache.org/jira/browse/CASSANDRA-6487 Project: Cassandra Issue Type: Improvement Reporter: Patrick McFadin Assignee: Lyuben Todorov Priority: Minor Attachments: 6487_trunk.patch Large batches on a coordinator can cause a lot of node stress. I propose adding a WARN log entry if batch sizes go beyond a configurable size. This will give more visibility to operators on something that can happen on the developer side. New yaml setting with 5k default. {{# Log WARN on any batch size exceeding this value. 5k by default.}} {{# Caution should be taken on increasing the size of this threshold as it can lead to node instability.}} {{batch_size_warn_threshold: 5k}} -- This message was sent by Atlassian JIRA (v6.1.4#6159)
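As a sketch of what the patch is being asked to do, the check could look like the following, using the message format Jonathan Ellis suggests; the class and field names here are hypothetical, not the names in the actual patch:

```java
import java.util.List;

// Hedged sketch of the proposed batch-size warning. The threshold is in
// bytes (so 5k = 5 * 1024 = 5120). Names are illustrative only.
class BatchSizeWarner {
    private final long batchSizeWarnThreshold; // bytes

    BatchSizeWarner(long thresholdBytes) {
        this.batchSizeWarnThreshold = thresholdBytes;
    }

    // Returns the WARN line to log, or null if the batch is under the threshold.
    String check(List<String> tables, long batchSizeBytes) {
        if (batchSizeBytes <= batchSizeWarnThreshold)
            return null;
        return String.format(
            "Batch of statements for %s is of size %d, exceeding specified threshold of %d",
            tables, batchSizeBytes, batchSizeWarnThreshold);
    }
}
```

A batch of 11024 bytes against a 7168-byte threshold would produce exactly the example line from the comment above.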
[4/4] git commit: Merge branch 'cassandra-2.0' into trunk
Merge branch 'cassandra-2.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/216139ff Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/216139ff Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/216139ff Branch: refs/heads/trunk Commit: 216139ff69dbd83907b3cd20d9c0f223135d10f5 Parents: 6e6730d 09c7dee Author: Jason Brown jasedbr...@gmail.com Authored: Tue Dec 17 11:06:01 2013 -0800 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Dec 17 11:06:01 2013 -0800 -- doc/native_protocol_v1.spec | 2 +- doc/native_protocol_v2.spec | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) --
[jira] [Commented] (CASSANDRA-6210) Repair hangs when a new datacenter is added to a cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850912#comment-13850912 ] Yuki Morishita commented on CASSANDRA-6210: --- I followed the steps below: {code} Node 1 up dc1 Stress Node 2 up dc2 Alter keyspace repair on node1 {code} And with auto_bootstrap: false, I got the following and repair hung: {code} ERROR [AntiEntropyStage:1] 2013-12-17 15:03:08,945 CassandraDaemon.java (line 187) Exception in thread Thread[AntiEntropyStage:1,5,main] java.lang.AssertionError: Unknown keyspace Keyspace1 at org.apache.cassandra.db.Keyspace.init(Keyspace.java:262) at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110) at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88) at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:46) at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) {code} We should 'catch-all' in RepairVerbHandler to prevent the hang, at least. It was not the same exception. [~rspitzer], can you reproduce with 'log4j.logger.org.apache.cassandra.streaming=DEBUG' in your log4j-server.properties and attach the log here? Repair hangs when a new datacenter is added to a cluster Key: CASSANDRA-6210 URL: https://issues.apache.org/jira/browse/CASSANDRA-6210 Project: Cassandra Issue Type: Bug Components: Core Environment: Amazon Ec2 2 M1.large nodes Reporter: Russell Alexander Spitzer Assignee: Yuki Morishita Attempting to add a new datacenter to a cluster seems to cause repair operations to break. I've been reproducing this with ~20-node clusters but can get it to reliably occur on 2-node setups.
{code} ##Basic Steps to reproduce #Node 1 is started using GossipingPropertyFileSnitch as dc1 #Cassandra-stress is used to insert a minimal amount of data $CASSANDRA_STRESS -t 100 -R org.apache.cassandra.locator.NetworkTopologyStrategy --num-keys=1000 --columns=10 --consistency-level=LOCAL_QUORUM --average-size-values --compaction-strategy='LeveledCompactionStrategy' -O dc1:1 --operation=COUNTER_ADD #Alter Keyspace1 ALTER KEYSPACE Keyspace1 WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 1 , 'dc2': 1 }; #Add node 2 using GossipingPropertyFileSnitch as dc2 run repair on node 1 run repair on node 2 {code} The repair task on node 1 never completes and while there are no exceptions in the logs of node1, netstat reports the following repair tasks {code} Mode: NORMAL Repair 4e71a250-36b4-11e3-bedc-1d1bb5c9abab Repair 6c64ded0-36b4-11e3-bedc-1d1bb5c9abab Read Repair Statistics: Attempted: 0 Mismatch (Blocking): 0 Mismatch (Background): 0 Pool Name Active Pending Completed Commands n/a 0 10239 Responses n/a 0 3839 {code} Checking on node 2 we see the following exceptions {code} ERROR [STREAM-IN-/10.171.122.130] 2013-10-16 22:42:58,961 StreamSession.java (line 410) [Stream #4e71a250-36b4-11e3-bedc-1d1bb5c9abab] Streaming error occurred java.lang.NullPointerException at org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:174) at org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:436) at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:358) at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:293) at java.lang.Thread.run(Thread.java:724) ...
ERROR [STREAM-IN-/10.171.122.130] 2013-10-16 22:43:49,214 StreamSession.java (line 410) [Stream #6c64ded0-36b4-11e3-bedc-1d1bb5c9abab] Streaming error occurred java.lang.NullPointerException at org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:174) at org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:436) at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:358) at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:293) at java.lang.Thread.run(Thread.java:724) {code} Netstats on node 2 reports {code} automaton@ip-10-171-15-234:~$ nodetool netstats Mode: NORMAL Repair 4e71a250-36b4-11e3-bedc-1d1bb5c9abab Read Repair Statistics: Attempted: 0 Mismatch (Blocking): 0 Mismatch (Background): 0 Pool
[jira] [Commented] (CASSANDRA-5351) Avoid repairing already-repaired data by default
[ https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850945#comment-13850945 ] Jonathan Ellis commented on CASSANDRA-5351: --- It looks like the sstable being replaced doesn't actually exist in the current View. Look at the debug log to see what happened to sstable {{-1-}}. I bet it got compacted by another thread, indicating that it's not getting locked properly by repair/anticompaction. Avoid repairing already-repaired data by default Key: CASSANDRA-5351 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351 Project: Cassandra Issue Type: Task Components: Core Reporter: Jonathan Ellis Assignee: Lyuben Todorov Labels: repair Fix For: 2.1 Repair has always built its merkle tree from all the data in a columnfamily, which is guaranteed to work but is inefficient. We can improve this by remembering which sstables have already been successfully repaired, and only repairing sstables new since the last repair. (This automatically makes CASSANDRA-3362 much less of a problem too.) The tricky part is, compaction will (if not taught otherwise) mix repaired data together with non-repaired. So we should segregate unrepaired sstables from the repaired ones. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6496) Endless L0 LCS compactions
[ https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850765#comment-13850765 ] Marcus Eriksson commented on CASSANDRA-6496: [~ngrigoriev] thanks for testing and patch lgtm, +1 Endless L0 LCS compactions -- Key: CASSANDRA-6496 URL: https://issues.apache.org/jira/browse/CASSANDRA-6496 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 2.0.3, Linux, 6 nodes, 5 disks per node Reporter: Nikolai Grigoriev Assignee: Jonathan Ellis Labels: compaction Fix For: 2.0.4 Attachments: 6496.txt, system.log.1.gz, system.log.gz I have first described the problem here: http://stackoverflow.com/questions/20589324/cassandra-2-0-3-endless-compactions-with-no-traffic I think I have really abused my system with the traffic (mix of reads, heavy updates and some deletes). Now after stopping the traffic I see the compactions that are going on endlessly for over 4 days. For a specific CF I have about 4700 sstable data files right now. The compaction estimates are logged as [3312, 4, 0, 0, 0, 0, 0, 0, 0]. sstable_size_in_mb=256. 3214 files are about 256Mb (+/- a few megs), other files are smaller or much smaller than that. No sstables are larger than 256Mb. What I observe is that LCS picks 32 sstables from L0 and compacts them into 32 sstables of approximately the same size. So, what my system is doing for the last 4 days (no traffic at all) is compacting groups of 32 sstables into groups of 32 sstables without any changes. Seems like a bug to me regardless of what I did to get the system into this state... -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6216) Level Compaction should persist last compacted key per level
[ https://issues.apache.org/jira/browse/CASSANDRA-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850971#comment-13850971 ] Jonathan Ellis commented on CASSANDRA-6216: --- We should be going by size-on-disk, not sstable count. They do tend to correspond but it's common to get leftover sstables smaller than the max, which can be meaningful as max gets larger. We can also get slightly more sophisticated and count bytes that would be written after accounting for expired tombstones. (See {{findDroppableSSTable}} for example of using tombstone stats.) Also, this is actually backwards -- we want the *least* overlapping, since that means we spend less time rewriting data that doesn't change (less write amplification). See improved compaction here: http://hackingdistributed.com/2013/06/17/hyperleveldb/, although I think they did overcomplicate the solution (as I mentioned in my comment on that page). Level Compaction should persist last compacted key per level Key: CASSANDRA-6216 URL: https://issues.apache.org/jira/browse/CASSANDRA-6216 Project: Cassandra Issue Type: Improvement Components: Core Reporter: sankalp kohli Assignee: sankalp kohli Priority: Minor Attachments: JIRA-6216.diff Level compaction does not persist the last compacted key per level. This is important for higher levels. The sstables with higher token and in higher levels wont get a chance to compact as the last compacted key will get reset after a restart. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6216) Level Compaction should persist last compacted key per level
[ https://issues.apache.org/jira/browse/CASSANDRA-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850982#comment-13850982 ] Jonathan Ellis commented on CASSANDRA-6216: --- That is, we should prefer smallest write amplification where wa = (overlapped size-minus-tombstones + candidate size-minus-tombstones) / candidate size-minus-tombstones Level Compaction should persist last compacted key per level Key: CASSANDRA-6216 URL: https://issues.apache.org/jira/browse/CASSANDRA-6216 Project: Cassandra Issue Type: Improvement Components: Core Reporter: sankalp kohli Assignee: sankalp kohli Priority: Minor Attachments: JIRA-6216.diff Level compaction does not persist the last compacted key per level. This is important for higher levels. The sstables with higher token and in higher levels wont get a chance to compact as the last compacted key will get reset after a restart. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
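The preference can be stated directly in code. This sketch computes wa exactly as in the comment above and picks the candidate that minimizes it (illustrative only; the real LeveledCompactionStrategy logic is more involved and would subtract expired-tombstone bytes from each size, per the earlier comment):

```java
// Write-amplification scoring for compaction candidate selection.
// Sizes are bytes already net of droppable tombstones (an assumption here).
class WriteAmplification {
    // wa = (overlapped size + candidate size) / candidate size
    static double wa(long overlappedBytes, long candidateBytes) {
        if (candidateBytes <= 0)
            throw new IllegalArgumentException("candidate size must be positive");
        return (overlappedBytes + candidateBytes) / (double) candidateBytes;
    }

    // Index of the candidate with the *least* write amplification,
    // i.e. the one whose overlap in the next level is smallest relative
    // to its own size.
    static int pickCandidate(long[] overlapped, long[] candidates) {
        int best = 0;
        for (int i = 1; i < candidates.length; i++)
            if (wa(overlapped[i], candidates[i]) < wa(overlapped[best], candidates[best]))
                best = i;
        return best;
    }
}
```

A candidate with no overlap scores wa = 1.0 (its bytes are written exactly once), which is why minimizing wa reduces time spent rewriting unchanged data.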
[jira] [Commented] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0
[ https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851009#comment-13851009 ] Jonathan Ellis commented on CASSANDRA-6378: --- Can you review, [~mishail]? sstableloader does not support client encryption on Cassandra 2.0 - Key: CASSANDRA-6378 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378 Project: Cassandra Issue Type: Bug Reporter: David Laube Assignee: Sam Tunnicliffe Labels: client, encryption, ssl, sstableloader Fix For: 2.0.4 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch We have been testing backup/restore from one ring to another and we recently stumbled upon an issue with sstableloader. When client_enc_enable: true, the exception below is generated. However, when client_enc_enable is set to false, the sstableloader is able to get to the point where it discovers endpoints, connects to stream data, etc. ==BEGIN EXCEPTION== sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 /tmp/import/keyspace_name/columnfamily_name Exception in thread "main" java.lang.RuntimeException: Could not retrieve endpoint ranges: at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226) at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149) at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68) Caused by: org.apache.thrift.transport.TTransportException: Frame size (352518400) larger than max length (16384000)!
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137) at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362) at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284) at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292) at org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280) at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199) ... 2 more ==END EXCEPTION== -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[2/5] git commit: cleanup + debug logging
cleanup + debug logging Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d605281 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d605281 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d605281 Branch: refs/heads/trunk Commit: 1d6052810df9363ed8dee308444b8466be112b5d Parents: ecec863 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 16:34:53 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 16:37:05 2013 -0600 -- .../apache/cassandra/net/MessagingService.java | 48 +--- 1 file changed, 22 insertions(+), 26 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d605281/src/java/org/apache/cassandra/net/MessagingService.java -- diff --git a/src/java/org/apache/cassandra/net/MessagingService.java b/src/java/org/apache/cassandra/net/MessagingService.java index 20cad82..b2c8014 100644 --- a/src/java/org/apache/cassandra/net/MessagingService.java +++ b/src/java/org/apache/cassandra/net/MessagingService.java @@ -37,7 +37,6 @@ import com.google.common.collect.Lists; import org.cliffc.high_scale_lib.NonBlockingHashMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor; import org.apache.cassandra.concurrent.Stage; import org.apache.cassandra.concurrent.StageManager; import org.apache.cassandra.concurrent.TracingAwareExecutorService; @@ -391,7 +390,7 @@ public final class MessagingService implements MessagingServiceMBean public void listen(InetAddress localEp) throws ConfigurationException { callbacks.reset(); // hack to allow tests to stop/restart MS -for (ServerSocket ss : getServerSocket(localEp)) +for (ServerSocket ss : getServerSockets(localEp)) { SocketThread th = new SocketThread(ss, "ACCEPT-" + localEp); th.start(); @@ -400,7 +399,7 @@ public final class MessagingService implements MessagingServiceMBean listenGate.signalAll(); } -private 
List<ServerSocket> getServerSocket(InetAddress localEp) throws ConfigurationException +private List<ServerSocket> getServerSockets(InetAddress localEp) throws ConfigurationException { final List<ServerSocket> ss = new ArrayList<ServerSocket>(2); if (DatabaseDescriptor.getServerEncryptionOptions().internode_encryption != ServerEncryptionOptions.InternodeEncryption.none) @@ -834,36 +833,31 @@ public final class MessagingService implements MessagingServiceMBean try { socket = server.accept(); -if (authenticate(socket)) -{ -socket.setKeepAlive(true); -// determine the connection type to decide whether to buffer -DataInputStream in = new DataInputStream(socket.getInputStream()); -MessagingService.validateMagic(in.readInt()); -int header = in.readInt(); -boolean isStream = MessagingService.getBits(header, 3, 1) == 1; -int version = MessagingService.getBits(header, 15, 8); -logger.debug("Connection version {} from {}", version, socket.getInetAddress()); - -if (isStream) -{ -new IncomingStreamingConnection(version, socket).start(); -} -else -{ -boolean compressed = MessagingService.getBits(header, 2, 1) == 1; -new IncomingTcpConnection(version, compressed, socket).start(); -} -} -else +if (!authenticate(socket)) { +logger.debug("remote failed to authenticate"); socket.close(); +continue; } + +socket.setKeepAlive(true); +// determine the connection type to decide whether to buffer +DataInputStream in = new DataInputStream(socket.getInputStream()); +MessagingService.validateMagic(in.readInt()); +int header = in.readInt(); +boolean isStream = MessagingService.getBits(header, 3, 1) == 1; +int version = MessagingService.getBits(header, 15, 8); +logger.debug("Connection version {} from {}", version, socket.getInetAddress()); + +Thread thread = isStream
[3/5] git commit: loop based on isClosed to accommodate SSL sockets patch by Mikhail Stepura; reviewed by jbellis for CASSANDRA-6349
loop based on isClosed to accommodate SSL sockets patch by Mikhail Stepura; reviewed by jbellis for CASSANDRA-6349 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53af91e6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53af91e6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53af91e6 Branch: refs/heads/trunk Commit: 53af91e650d3fd881df6e811f5d4c5e46a039119 Parents: 1d60528 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 16:37:43 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 16:38:32 2013 -0600 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/net/MessagingService.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/53af91e6/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c2cd052..b8757d7 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.4 + * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468) * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496) * Fix assertion failure in filterColdSSTables (CASSANDRA-6483) * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008) http://git-wip-us.apache.org/repos/asf/cassandra/blob/53af91e6/src/java/org/apache/cassandra/net/MessagingService.java -- diff --git a/src/java/org/apache/cassandra/net/MessagingService.java b/src/java/org/apache/cassandra/net/MessagingService.java index b2c8014..21c9345 100644 --- a/src/java/org/apache/cassandra/net/MessagingService.java +++ b/src/java/org/apache/cassandra/net/MessagingService.java @@ -827,7 +827,7 @@ public final class MessagingService implements MessagingServiceMBean public void run() { -while (true) +while (!server.isClosed()) { Socket socket = null; try
[4/5] git commit: loop based on isClosed to accommodate SSL sockets patch by Mikhail Stepura; reviewed by jbellis for CASSANDRA-6349
loop based on isClosed to accommodate SSL sockets patch by Mikhail Stepura; reviewed by jbellis for CASSANDRA-6349 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53af91e6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53af91e6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53af91e6 Branch: refs/heads/cassandra-2.0 Commit: 53af91e650d3fd881df6e811f5d4c5e46a039119 Parents: 1d60528 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 16:37:43 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 16:38:32 2013 -0600 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/net/MessagingService.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/53af91e6/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c2cd052..b8757d7 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.4 + * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468) * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496) * Fix assertion failure in filterColdSSTables (CASSANDRA-6483) * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008) http://git-wip-us.apache.org/repos/asf/cassandra/blob/53af91e6/src/java/org/apache/cassandra/net/MessagingService.java -- diff --git a/src/java/org/apache/cassandra/net/MessagingService.java b/src/java/org/apache/cassandra/net/MessagingService.java index b2c8014..21c9345 100644 --- a/src/java/org/apache/cassandra/net/MessagingService.java +++ b/src/java/org/apache/cassandra/net/MessagingService.java @@ -827,7 +827,7 @@ public final class MessagingService implements MessagingServiceMBean public void run() { -while (true) +while (!server.isClosed()) { Socket socket = null; try
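The one-line change in the diff is the whole fix: the accept loop terminates on server.isClosed() rather than relying on accept() to throw after shutdown, which SSL server sockets apparently do not guarantee. A standalone illustration of the corrected loop shape (not the MessagingService code itself):

```java
import java.io.IOException;
import java.net.ServerSocket;

// Illustrative accept loop with the corrected exit condition. With the old
// "while (true)" form, exiting depended on accept() throwing after close(),
// which is what reportedly broke for SSL sockets.
class AcceptLoopSketch {
    static int acceptUntilClosed(ServerSocket server) {
        int accepted = 0;
        while (!server.isClosed()) {   // was: while (true)
            try {
                server.accept().close();
                accepted++;
            } catch (IOException e) {
                // socket closed underneath us, or a transient accept failure
            }
        }
        return accepted;
    }
}
```

Calling acceptUntilClosed on an already-closed ServerSocket returns immediately instead of spinning or hanging.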
[1/5] git commit: cleanup + debug logging
Updated Branches: refs/heads/cassandra-2.0 ecec863d1 -> 53af91e65 refs/heads/trunk 90e585dde -> 6635cde3a cleanup + debug logging Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d605281 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d605281 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d605281 Branch: refs/heads/cassandra-2.0 Commit: 1d6052810df9363ed8dee308444b8466be112b5d Parents: ecec863 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 16:34:53 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 16:37:05 2013 -0600 -- .../apache/cassandra/net/MessagingService.java | 48 +--- 1 file changed, 22 insertions(+), 26 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d605281/src/java/org/apache/cassandra/net/MessagingService.java -- diff --git a/src/java/org/apache/cassandra/net/MessagingService.java b/src/java/org/apache/cassandra/net/MessagingService.java index 20cad82..b2c8014 100644 --- a/src/java/org/apache/cassandra/net/MessagingService.java +++ b/src/java/org/apache/cassandra/net/MessagingService.java @@ -37,7 +37,6 @@ import com.google.common.collect.Lists; import org.cliffc.high_scale_lib.NonBlockingHashMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor; import org.apache.cassandra.concurrent.Stage; import org.apache.cassandra.concurrent.StageManager; import org.apache.cassandra.concurrent.TracingAwareExecutorService; @@ -391,7 +390,7 @@ public final class MessagingService implements MessagingServiceMBean public void listen(InetAddress localEp) throws ConfigurationException { callbacks.reset(); // hack to allow tests to stop/restart MS -for (ServerSocket ss : getServerSocket(localEp)) +for (ServerSocket ss : getServerSockets(localEp)) { SocketThread th = new SocketThread(ss, "ACCEPT-" + localEp); th.start(); @@ -400,7 
+399,7 @@ public final class MessagingService implements MessagingServiceMBean listenGate.signalAll(); } -private List<ServerSocket> getServerSocket(InetAddress localEp) throws ConfigurationException +private List<ServerSocket> getServerSockets(InetAddress localEp) throws ConfigurationException { final List<ServerSocket> ss = new ArrayList<ServerSocket>(2); if (DatabaseDescriptor.getServerEncryptionOptions().internode_encryption != ServerEncryptionOptions.InternodeEncryption.none) @@ -834,36 +833,31 @@ public final class MessagingService implements MessagingServiceMBean try { socket = server.accept(); -if (authenticate(socket)) -{ -socket.setKeepAlive(true); -// determine the connection type to decide whether to buffer -DataInputStream in = new DataInputStream(socket.getInputStream()); -MessagingService.validateMagic(in.readInt()); -int header = in.readInt(); -boolean isStream = MessagingService.getBits(header, 3, 1) == 1; -int version = MessagingService.getBits(header, 15, 8); -logger.debug("Connection version {} from {}", version, socket.getInetAddress()); - -if (isStream) -{ -new IncomingStreamingConnection(version, socket).start(); -} -else -{ -boolean compressed = MessagingService.getBits(header, 2, 1) == 1; -new IncomingTcpConnection(version, compressed, socket).start(); -} -} -else +if (!authenticate(socket)) { +logger.debug("remote failed to authenticate"); socket.close(); +continue; } + +socket.setKeepAlive(true); +// determine the connection type to decide whether to buffer +DataInputStream in = new DataInputStream(socket.getInputStream()); +MessagingService.validateMagic(in.readInt()); +int header = in.readInt(); +boolean isStream = MessagingService.getBits(header, 3, 1) == 1; +int version = MessagingService.getBits(header, 15, 8); +
[5/5] git commit: Merge branch 'cassandra-2.0' into trunk
Merge branch 'cassandra-2.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6635cde3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6635cde3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6635cde3 Branch: refs/heads/trunk Commit: 6635cde3a0eb6fa2e0599aa9d970596a8664cd63 Parents: 90e585d 53af91e Author: Jonathan Ellis jbel...@apache.org Authored: Tue Dec 17 16:38:46 2013 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Dec 17 16:38:46 2013 -0600 -- CHANGES.txt | 1 + .../apache/cassandra/net/MessagingService.java | 50 +--- 2 files changed, 24 insertions(+), 27 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6635cde3/CHANGES.txt -- diff --cc CHANGES.txt index 6c9f2e1,b8757d7..1c88431 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,25 -1,5 +1,26 @@@ +2.1 + * Multithreaded commitlog (CASSANDRA-3578) + * allocate fixed index summary memory pool and resample cold index summaries + to use less memory (CASSANDRA-5519) + * Removed multithreaded compaction (CASSANDRA-6142) + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337) + * change logging from log4j to logback (CASSANDRA-5883) + * switch to LZ4 compression for internode communication (CASSANDRA-5887) + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971) + * Remove 1.2 network compatibility code (CASSANDRA-5960) + * Remove leveled json manifest migration code (CASSANDRA-5996) + * Remove CFDefinition (CASSANDRA-6253) + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278) + * User-defined types for CQL3 (CASSANDRA-5590) + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871, 6406) + * Batch read from OTC's queue and cleanup (CASSANDRA-1632) + * Secondary index support for collections (CASSANDRA-4511) + * SSTable metadata(Stats.db) format change (CASSANDRA-6356) + * Push composites support in the storage engine 
(CASSANDRA-5417) + + 2.0.4 + * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468) * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496) * Fix assertion failure in filterColdSSTables (CASSANDRA-6483) * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008) http://git-wip-us.apache.org/repos/asf/cassandra/blob/6635cde3/src/java/org/apache/cassandra/net/MessagingService.java --
[jira] [Assigned] (CASSANDRA-6472) Node hangs when Drop Keyspace / Table is executed
[ https://issues.apache.org/jira/browse/CASSANDRA-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-6472: - Assignee: Benedict (was: amorton) Node hangs when Drop Keyspace / Table is executed - Key: CASSANDRA-6472 URL: https://issues.apache.org/jira/browse/CASSANDRA-6472 Project: Cassandra Issue Type: Bug Components: Core Reporter: amorton Assignee: Benedict Fix For: 2.1 from http://www.mail-archive.com/user@cassandra.apache.org/msg33566.html CommitLogSegmentManager.flushDataFrom() returns a FutureTask to wait on the flushes, but the task is not started in flushDataFrom(). The CLSM manager thread does not use the result, and forceRecycleAll (eventually called when making schema mods) does not start it, so it hangs when calling get(). Plan to patch so that flushDataFrom() returns a Future. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
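The hang described above is the standard FutureTask pitfall: get() blocks until something actually runs the task. A minimal illustration, unrelated to the actual CommitLogSegmentManager code:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

// A FutureTask returned from a method is inert until run() is invoked
// (directly or via an executor). Calling get() on an un-started task
// blocks forever -- the shape of the CASSANDRA-6472 hang.
class FutureTaskHang {
    static String runThenGet() throws Exception {
        FutureTask<String> task = new FutureTask<String>(new Callable<String>() {
            public String call() {
                return "flushed";
            }
        });
        task.run();        // without this line, task.get() below never returns
        return task.get();
    }
}
```

The proposed fix follows from this: return a Future whose work has already been submitted somewhere, rather than an un-started FutureTask.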
[jira] [Commented] (CASSANDRA-5906) Avoid allocating over-large bloom filters
[ https://issues.apache.org/jira/browse/CASSANDRA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851049#comment-13851049 ] Jonathan Ellis commented on CASSANDRA-5906: --- Most changes to CompactionTask are cosmetic -- split out to a separate commit? Why would {{metadata.cardinalityEstimator}} be null if we've already checked {{newStatsFile}}? Otherwise +1. Avoid allocating over-large bloom filters - Key: CASSANDRA-5906 URL: https://issues.apache.org/jira/browse/CASSANDRA-5906 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Assignee: Yuki Morishita Fix For: 2.1 Attachments: 5906.txt We conservatively estimate the number of partitions post-compaction to be the total number of partitions pre-compaction. That is, we assume the worst-case scenario of no partition overlap at all. This can result in substantial memory wasted in sstables resulting from highly overlapping compactions. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6471) Executing a prepared CREATE KEYSPACE multiple times doesn't work
[ https://issues.apache.org/jira/browse/CASSANDRA-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851074#comment-13851074 ] Jonathan Ellis commented on CASSANDRA-6471: --- +1 Executing a prepared CREATE KEYSPACE multiple times doesn't work Key: CASSANDRA-6471 URL: https://issues.apache.org/jira/browse/CASSANDRA-6471 Project: Cassandra Issue Type: Bug Reporter: Sylvain Lebresne Assignee: Sylvain Lebresne Priority: Trivial Fix For: 1.2.13 Attachments: 6471.txt See user reports on the java driver JIRA: https://datastax-oss.atlassian.net/browse/JAVA-223. Preparing CREATE KEYSPACE queries is not particularly useful, but there is no reason for it to be broken. The reason is that the KSPropDef/CFPropDef.validate() methods are not idempotent. Attaching a simple patch to fix this. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
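For readers unfamiliar with the bug class: a prepared statement re-runs validation on every execution, so any validate() that destructively mutates its own state fails the second time. A minimal sketch with illustrative names (not the actual KSPropDef/CFPropDef code):

```java
import java.util.HashMap;
import java.util.Map;

public class IdempotencyDemo {
    // Non-idempotent: validate() consumes the option it checks,
    // so a second call sees null and throws.
    static class BadProps {
        Map<String, String> opts = new HashMap<>(Map.of("class", "SimpleStrategy"));
        String strategy;
        void validate() {
            strategy = opts.remove("class"); // destructive read
            if (strategy == null)
                throw new IllegalStateException("missing replication class");
        }
    }

    // Idempotent: validate() only reads, so it is safe to call repeatedly.
    static class GoodProps {
        final Map<String, String> opts = Map.of("class", "SimpleStrategy");
        void validate() {
            if (opts.get("class") == null)
                throw new IllegalStateException("missing replication class");
        }
    }

    public static void main(String[] args) {
        BadProps bad = new BadProps();
        bad.validate();
        try {
            bad.validate(); // re-execution of the prepared statement
        } catch (IllegalStateException e) {
            System.out.println("bad: fails on second validate");
        }

        GoodProps good = new GoodProps();
        good.validate();
        good.validate();
        System.out.println("good: idempotent");
    }
}
```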
[jira] [Commented] (CASSANDRA-6157) Selectively Disable hinted handoff for a data center
[ https://issues.apache.org/jira/browse/CASSANDRA-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851076#comment-13851076 ] Jonathan Ellis commented on CASSANDRA-6157: --- Can you review, [~lyubent]? Selectively Disable hinted handoff for a data center Key: CASSANDRA-6157 URL: https://issues.apache.org/jira/browse/CASSANDRA-6157 Project: Cassandra Issue Type: Improvement Components: Core Reporter: sankalp kohli Assignee: sankalp kohli Priority: Minor Fix For: 2.0.4 Attachments: trunk-6157.txt Cassandra supports disabling the hints or reducing the window for hints. It would be helpful to have a switch which stops hints to a down data center but continues hints to other DCs. This is helpful during data center failover, as hints will put unnecessary extra pressure on the DC taking double traffic. Also, since Cassandra is then running with reduced redundancy, we don't want to disable hints within the DC. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Updated] (CASSANDRA-6487) Log WARN on large batch sizes
[ https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lyuben Todorov updated CASSANDRA-6487: -- Attachment: 6487_trunk_v2.patch Sure thing, changed from kb to bytes and updated the warning message in v2. Log WARN on large batch sizes - Key: CASSANDRA-6487 URL: https://issues.apache.org/jira/browse/CASSANDRA-6487 Project: Cassandra Issue Type: Improvement Reporter: Patrick McFadin Assignee: Lyuben Todorov Priority: Minor Attachments: 6487_trunk.patch, 6487_trunk_v2.patch Large batches on a coordinator can cause a lot of node stress. I propose adding a WARN log entry if batch sizes go beyond a configurable size. This will give more visibility to operators on something that can happen on the developer side. New yaml setting with 5k default. {{# Log WARN on any batch size exceeding this value. 5k by default.}} {{# Caution should be taken on increasing the size of this threshold as it can lead to node instability.}} {{batch_size_warn_threshold: 5k}} -- This message was sent by Atlassian JIRA (v6.1.4#6159)
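A rough sketch of what the coordinator-side check amounts to, with illustrative names and a hard-coded threshold (the real patch would wire this to the batch_size_warn_threshold yaml setting and Cassandra's logger; this is not the attached patch):

```java
public class BatchSizeWarning {
    // Illustrative stand-in for the batch_size_warn_threshold setting (5k default, in bytes per v2).
    static final long WARN_THRESHOLD_BYTES = 5 * 1024;

    // Returns true when the summed serialized size of the batch's mutations crosses
    // the threshold, i.e. when the coordinator should emit a WARN log entry.
    static boolean shouldWarn(long[] mutationSizesInBytes) {
        long total = 0;
        for (long size : mutationSizesInBytes)
            total += size;
        return total > WARN_THRESHOLD_BYTES;
    }

    public static void main(String[] args) {
        long[] batch = { 2048, 2048, 2048 }; // a 6 KB batch
        if (shouldWarn(batch))
            System.out.println("WARN: batch exceeds batch_size_warn_threshold");
    }
}
```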
[jira] [Commented] (CASSANDRA-6157) Selectively Disable hinted handoff for a data center
[ https://issues.apache.org/jira/browse/CASSANDRA-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851083#comment-13851083 ] Lyuben Todorov commented on CASSANDRA-6157: --- [~jbellis] Yep! -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6487) Log WARN on large batch sizes
[ https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851084#comment-13851084 ] Jonathan Ellis commented on CASSANDRA-6487: --- Oops, I skimmed too fast and thought we were counting statements not bytes. Is that what you were thinking when you estimated 5k [~pmcfadin]? -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Updated] (CASSANDRA-6158) Nodetool command to purge hints
[ https://issues.apache.org/jira/browse/CASSANDRA-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sankalp kohli updated CASSANDRA-6158: - Attachment: JIRA-6158-v2.diff Nodetool command to purge hints --- Key: CASSANDRA-6158 URL: https://issues.apache.org/jira/browse/CASSANDRA-6158 Project: Cassandra Issue Type: Improvement Components: Core Reporter: sankalp kohli Assignee: sankalp kohli Priority: Minor Attachments: JIRA-6158-v2.diff, trunk-6158.txt The only way to truncate all hints in Cassandra is to truncate the hints CF in system table. It would be cleaner to have a nodetool command for it. Also ability to selectively remove hints by host or DC would also be nice rather than removing all the hints. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6158) Nodetool command to purge hints
[ https://issues.apache.org/jira/browse/CASSANDRA-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851099#comment-13851099 ] sankalp kohli commented on CASSANDRA-6158: -- Truncate hints is now blocking on both JMX and Nodetool. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-5742) Add command list snapshots to nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851114#comment-13851114 ] sankalp kohli commented on CASSANDRA-5742: -- [~lyubent] The output in my comment was not properly formatted. The output now is similar to yours. [~jbellis] A node can have many snapshots with many overlapping files, and I am not sure how to display those since there could be many combinations. Most people want to see the overall true disk space used by all snapshots, so I can print a total at the end which sums the TrueDiskSpace across all snapshots. Add command list snapshots to nodetool Key: CASSANDRA-5742 URL: https://issues.apache.org/jira/browse/CASSANDRA-5742 Project: Cassandra Issue Type: New Feature Components: Tools Affects Versions: 1.2.1 Reporter: Geert Schuring Assignee: sankalp kohli Priority: Minor Labels: lhf Attachments: JIRA-5742.diff, new_file.diff It would be nice if nodetool could tell me which snapshots are present on the system instead of me having to browse the filesystem to fetch the names of the snapshots. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-5742) Add command list snapshots to nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851117#comment-13851117 ] Jonathan Ellis commented on CASSANDRA-5742: --- Right, that's pretty much what I was proposing initially. :) -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6210) Repair hangs when a new datacenter is added to a cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851378#comment-13851378 ] Russell Alexander Spitzer commented on CASSANDRA-6210: -- Sure, it was a while back so things have most likely changed in the interim. I'll get the test running tomorrow with the DEBUG statements turned on. Repair hangs when a new datacenter is added to a cluster Key: CASSANDRA-6210 URL: https://issues.apache.org/jira/browse/CASSANDRA-6210 Project: Cassandra Issue Type: Bug Components: Core Environment: Amazon EC2, 2 m1.large nodes Reporter: Russell Alexander Spitzer Assignee: Yuki Morishita Attempting to add a new datacenter to a cluster seems to cause repair operations to break. I've been reproducing this with ~20 node clusters but can get it to reliably occur on 2 node setups.
{code}
## Basic steps to reproduce
# Node 1 is started using GossipingPropertyFileSnitch as dc1
# cassandra-stress is used to insert a minimal amount of data
$CASSANDRA_STRESS -t 100 -R org.apache.cassandra.locator.NetworkTopologyStrategy --num-keys=1000 --columns=10 --consistency-level=LOCAL_QUORUM --average-size-values --compaction-strategy='LeveledCompactionStrategy' -O dc1:1 --operation=COUNTER_ADD
# Alter Keyspace1
ALTER KEYSPACE Keyspace1 WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 1, 'dc2': 1};
# Add node 2 using GossipingPropertyFileSnitch as dc2
# Run repair on node 1
# Run repair on node 2
{code}
The repair task on node 1 never completes, and while there are no exceptions in the logs of node 1, netstats reports the following repair tasks:
{code}
Mode: NORMAL
Repair 4e71a250-36b4-11e3-bedc-1d1bb5c9abab
Repair 6c64ded0-36b4-11e3-bedc-1d1bb5c9abab
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name  Active  Pending  Completed
Commands   n/a     0        10239
Responses  n/a     0        3839
{code}
Checking on node 2 we see the following exceptions:
{code}
ERROR [STREAM-IN-/10.171.122.130] 2013-10-16 22:42:58,961 StreamSession.java (line 410) [Stream #4e71a250-36b4-11e3-bedc-1d1bb5c9abab] Streaming error occurred
java.lang.NullPointerException
	at org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:174)
	at org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:436)
	at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:358)
	at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:293)
	at java.lang.Thread.run(Thread.java:724)
...
ERROR [STREAM-IN-/10.171.122.130] 2013-10-16 22:43:49,214 StreamSession.java (line 410) [Stream #6c64ded0-36b4-11e3-bedc-1d1bb5c9abab] Streaming error occurred
java.lang.NullPointerException
	at org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:174)
	at org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:436)
	at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:358)
	at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:293)
	at java.lang.Thread.run(Thread.java:724)
{code}
Netstats on node 2 reports:
{code}
automaton@ip-10-171-15-234:~$ nodetool netstats
Mode: NORMAL
Repair 4e71a250-36b4-11e3-bedc-1d1bb5c9abab
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name  Active  Pending  Completed
Commands   n/a     0        2562
Responses  n/a     0        4284
{code}
-- This message was sent by Atlassian JIRA (v6.1.4#6159)
Git Push Summary
Updated Tags: refs/tags/1.2.13-tentative [deleted] 4be9e6720
[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851480#comment-13851480 ] Cyril Scetbon commented on CASSANDRA-6421: -- Sorry, I didn't see the comment :( [~lyubent] that's what you should get. I'm on OSX and using bash-completion 1.3 from Homebrew:
{code}
$ brew list bash-completion
/usr/local/Cellar/bash-completion/1.3/etc/bash_completion.d/ (180 files)
/usr/local/Cellar/bash-completion/1.3/etc/profile.d/bash_completion.sh
/usr/local/Cellar/bash-completion/1.3/etc/bash_completion
{code}
have is a bash-completion function. Tell me the ./nodetool is not executing the bash completion script :) To use it, you have to:
- place the file in your bash_completion.d directory (in my case):
{code}
$ ls /usr/local/etc/bash_completion.d/node*
/usr/local/etc/bash_completion.d/nodetool
{code}
- add the following (use an absolute path if you don't use Homebrew) in your ~/.bash_profile:
{code}
if [ -f `brew --prefix`/etc/bash_completion ]; then
  . `brew --prefix`/etc/bash_completion
fi
{code}
- start a new bash session and try:
{code}
nodetool cfh[TAB]
nodetool cfhistograms [TAB][TAB]
pns_fr  system  system_auth  system_traces  test
nodetool cfhistograms system
HintsColumnFamily  Migrations  batchlog  peer_events  schema_columnfamilies
IndexInfo  NodeIdInfo  hints  peers  schema_columns
LocationInfo  Schema  local  range_xfers  schema_keyspaces
{code}
As you see, after cfh has been completed to cfhistograms, if you add 2 more \[TAB\] you get the names of keyspaces, and if you add the keyspace name system and 2 more \[TAB\] you get the names of column families :) The first word is the nodetool script from Cassandra, not the bash completion script. Add bash completion to nodetool --- Key: CASSANDRA-6421 URL: https://issues.apache.org/jira/browse/CASSANDRA-6421 Project: Cassandra Issue Type: Improvement Components: Tools Reporter: Cyril Scetbon Assignee: Cyril Scetbon Priority: Trivial Fix For: 2.0.4 You can find the patch from my commit here: https://github.com/cscetbon/cassandra/commit/07a10b99778f14362ac05c70269c108870555bf3.patch It uses cqlsh to get keyspaces and namespaces, and could use an environment variable (not implemented) to get access via cqlsh if authentication is needed. But I think that's really a good start :) -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0
[ https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851484#comment-13851484 ] Mikhail Stepura commented on CASSANDRA-6378: The only minor comment I have is that the {{opts}} parameter for {{org.apache.cassandra.tools.BulkLoader.LoaderOptions.getTransportFactory()}} is never used. sstableloader does not support client encryption on Cassandra 2.0 - Key: CASSANDRA-6378 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378 Project: Cassandra Issue Type: Bug Reporter: David Laube Assignee: Sam Tunnicliffe Labels: client, encryption, ssl, sstableloader Fix For: 2.0.4 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch We have been testing backup/restore from one ring to another and we recently stumbled upon an issue with sstableloader. When client_enc_enable: true, the exception below is generated. However, when client_enc_enable is set to false, sstableloader is able to get to the point where it discovers endpoints, connects to stream data, etc.
==BEGIN EXCEPTION==
sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 /tmp/import/keyspace_name/columnfamily_name
Exception in thread "main" java.lang.RuntimeException: Could not retrieve endpoint ranges:
	at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
	at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
	at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
Caused by: org.apache.thrift.transport.TTransportException: Frame size (352518400) larger than max length (16384000)!
	at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
	at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
	at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
	at org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
	at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
	... 2 more
==END EXCEPTION==
-- This message was sent by Atlassian JIRA (v6.1.4#6159)
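One plausible reading of that error (an assumption on my part, not stated in the ticket): with client encryption enabled on the server, the plain TFramedTransport reads the server's TLS bytes and interprets the first four as a big-endian frame length. Decoding 352518400 back into bytes gives 15 03 01 00, which matches a TLS record header (0x15 = alert, 0x0301 = TLS 1.0), consistent with an unencrypted Thrift client hitting an SSL-only port:

```java
public class FrameSizeDecode {
    public static void main(String[] args) {
        int frameSize = 352518400; // the "frame size" from the TTransportException

        // Split the int back into the four on-the-wire bytes
        // (big-endian, the order Thrift reads a frame length).
        int b0 = (frameSize >>> 24) & 0xFF;
        int b1 = (frameSize >>> 16) & 0xFF;
        int b2 = (frameSize >>> 8) & 0xFF;
        int b3 = frameSize & 0xFF;

        System.out.printf("%02x %02x %02x %02x%n", b0, b1, b2, b3); // prints: 15 03 01 00
    }
}
```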