git commit: Copy tokenMetadata to avoid assertion error in getTopology
Updated Branches:
  refs/heads/trunk a500e2835 -> f1aec3d5f

Copy tokenMetadata to avoid assertion error in getTopology

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f1aec3d5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f1aec3d5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f1aec3d5

Branch: refs/heads/trunk
Commit: f1aec3d5f6eb817d883325d8bb9204ce8c91bf27
Parents: a500e28
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Mon Sep 17 08:44:31 2012 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Mon Sep 17 08:44:31 2012 +0200

----------------------------------------------------------------------
 .../org/apache/cassandra/service/StorageProxy.java | 5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f1aec3d5/src/java/org/apache/cassandra/service/StorageProxy.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index e3d7de5..08aad56 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -451,8 +451,9 @@ public class StorageProxy implements StorageProxyMBean
      */
     private static Collection<InetAddress> getBatchlogEndpoints(String localDataCenter) throws UnavailableException
     {
-        // will include every known node including localhost.
-        Collection<InetAddress> localMembers = StorageService.instance.getTokenMetadata().getTopology().getDatacenterEndpoints().get(localDataCenter);
+        // will include every known node in the DC, including localhost.
+        TokenMetadata.Topology topology = StorageService.instance.getTokenMetadata().cloneOnlyTokenMap().getTopology();
+        Collection<InetAddress> localMembers = topology.getDatacenterEndpoints().get(localDataCenter);
         // special case for single-node datacenters
         if (localMembers.size() == 1)
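The patch above swaps a read of the live token metadata for a read of a private copy (cloneOnlyTokenMap()), so the topology is derived from a stable snapshot rather than a structure that may mutate mid-read. A minimal sketch of that snapshot-before-read pattern, using hypothetical class and method names rather than Cassandra's actual types:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for TokenMetadata: readers take a cheap copy of the
// shared state instead of iterating the live, concurrently mutated structure.
class RingState {
    private final List<String> endpoints = new ArrayList<>();

    synchronized void addEndpoint(String ep) {
        endpoints.add(ep);
    }

    // Analogue of cloneOnlyTokenMap(): copy under the lock, then let the
    // caller derive its "topology" from the stable copy.
    synchronized List<String> snapshot() {
        return Collections.unmodifiableList(new ArrayList<>(endpoints));
    }
}

public class SnapshotDemo {
    public static void main(String[] args) {
        RingState ring = new RingState();
        ring.addEndpoint("10.0.0.1");
        List<String> view = ring.snapshot();
        ring.addEndpoint("10.0.0.2");    // a later mutation...
        System.out.println(view.size()); // ...does not disturb the snapshot
    }
}
```

The copy costs an allocation per call, but it removes the window in which the assertion in getTopology could observe a half-updated ring.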
[jira] [Commented] (CASSANDRA-4542) add atomic_batch_mutate method
[ https://issues.apache.org/jira/browse/CASSANDRA-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456839#comment-13456839 ]

Sylvain Lebresne commented on CASSANDRA-4542:
---------------------------------------------

Committed in f1aec3d.

add atomic_batch_mutate method
------------------------------

    Key: CASSANDRA-4542
    URL: https://issues.apache.org/jira/browse/CASSANDRA-4542
    Project: Cassandra
    Issue Type: Sub-task
    Components: API, Core
    Reporter: Jonathan Ellis
    Assignee: Aleksey Yeschenko
    Fix For: 1.2.0 beta 1
    Attachments: 4542-diff.txt, 4542-v4.txt, 4542-v5.patch, CASSANDRA-4542-4543-4544-v3.patch, GET-TOPOLOGY-FIX.patch

atomic_batch_mutate will have the same parameters as batch_mutate, but will write to the batchlog before attempting distribution to the batch rows' replicas.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4579) CQL queries using LIMIT sometimes missing results
[ https://issues.apache.org/jira/browse/CASSANDRA-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-4579:
----------------------------------------

    Attachment: (was: 0002-Fix-LIMIT-for-NamesQueryFilter.txt)

CQL queries using LIMIT sometimes missing results
-------------------------------------------------

    Key: CASSANDRA-4579
    URL: https://issues.apache.org/jira/browse/CASSANDRA-4579
    Project: Cassandra
    Issue Type: Bug
    Components: Core
    Affects Versions: 1.2.0 beta 1
    Reporter: paul cannon
    Assignee: Sylvain Lebresne
    Labels: cql, cql3
    Fix For: 1.2.0 beta 1

In certain conditions, CQL queries using LIMIT clauses are not being given all of the expected results (whether unset column values or missing rows). Here are the condition sets I've been able to identify:

First mode: all rows are returned, but in the last row of results, all columns which are not part of the primary key receive no values, except for the first non-primary-key column. Conditions:
* Table has a multi-component primary key
* Table has more than one column which is not a component of the primary key
* The number of results which would be returned by a query is equal to or more than the specified LIMIT

Second mode: the result has fewer rows than it should, lower than both the LIMIT and the actual number of matching rows. Conditions:
* Table has a single-column primary key
* Table has more than one column which is not a component of the primary key
* The number of results which would be returned by a query is equal to or more than the specified LIMIT

It would make sense to me that this would have started with CASSANDRA-4329, but bisecting indicates that this behavior started with commit 91bdf7fb4220b27e9566c6673bf5dbd14153017c, implementing CASSANDRA-3647.
Test case for the first failure mode:

{noformat}
DROP KEYSPACE test;
CREATE KEYSPACE test WITH strategy_class = 'SimpleStrategy'
  AND strategy_options:replication_factor = 1;
USE test;
CREATE TABLE testcf (
  a int,
  b int,
  c int,
  d int,
  e int,
  PRIMARY KEY (a, b)
);
INSERT INTO testcf (a, b, c, d, e) VALUES (1, 11, 111, , 1);
INSERT INTO testcf (a, b, c, d, e) VALUES (2, 22, 222, , 2);
INSERT INTO testcf (a, b, c, d, e) VALUES (3, 33, 333, , 3);
INSERT INTO testcf (a, b, c, d, e) VALUES (4, 44, 444, , 4);
SELECT * FROM testcf;
SELECT * FROM testcf LIMIT 1; -- columns d and e in result row are null
SELECT * FROM testcf LIMIT 2; -- columns d and e in last result row are null
SELECT * FROM testcf LIMIT 3; -- columns d and e in last result row are null
SELECT * FROM testcf LIMIT 4; -- columns d and e in last result row are null
SELECT * FROM testcf LIMIT 5; -- results are correct (4 rows returned)
{noformat}

Test case for the second failure mode:

{noformat}
CREATE KEYSPACE test WITH strategy_class = 'SimpleStrategy'
  AND strategy_options:replication_factor = 1;
USE test;
CREATE TABLE testcf (
  a int primary key,
  b int,
  c int,
);
INSERT INTO testcf (a, b, c) VALUES (1, 11, 111);
INSERT INTO testcf (a, b, c) VALUES (2, 22, 222);
INSERT INTO testcf (a, b, c) VALUES (3, 33, 333);
INSERT INTO testcf (a, b, c) VALUES (4, 44, 444);
SELECT * FROM testcf;
SELECT * FROM testcf LIMIT 1; -- gives 1 row
SELECT * FROM testcf LIMIT 2; -- gives 1 row
SELECT * FROM testcf LIMIT 3; -- gives 2 rows
SELECT * FROM testcf LIMIT 4; -- gives 2 rows
SELECT * FROM testcf LIMIT 5; -- gives 3 rows
{noformat}
[jira] [Updated] (CASSANDRA-4579) CQL queries using LIMIT sometimes missing results
[ https://issues.apache.org/jira/browse/CASSANDRA-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-4579:
----------------------------------------

    Attachment: (was: 0001-Add-all-columns-from-a-prefix-group-before-stopping.txt)
[jira] [Updated] (CASSANDRA-4579) CQL queries using LIMIT sometimes missing results
[ https://issues.apache.org/jira/browse/CASSANDRA-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-4579:
----------------------------------------

    Attachment: 0002-Fix-LIMIT-for-NamesQueryFilter.txt
                0001-Add-all-columns-from-a-prefix-group-before-stopping.txt

Rebased patches attached.
[jira] [Created] (CASSANDRA-4670) LeveledCompaction destroys secondary indexes
Roland Gude created CASSANDRA-4670:
-----------------------------------

    Summary: LeveledCompaction destroys secondary indexes
    Key: CASSANDRA-4670
    URL: https://issues.apache.org/jira/browse/CASSANDRA-4670
    Project: Cassandra
    Issue Type: Bug
    Affects Versions: 1.1.5, 1.1.4
    Reporter: Roland Gude

When LeveledCompactionStrategy is active on a ColumnFamily with an index enabled on TTL columns, the index does not work correctly, because compaction throws away index data very aggressively.

Steps to reproduce:

Create a cluster with a column family with an indexed column and leveled compaction:

create column family CorruptIndex
  with column_type = 'Standard'
  and comparator = 'TimeUUIDType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.5
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and compaction_strategy = 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
  and caching = 'NONE'
  and column_metadata = [
    {column_name : '0003--1000--',
     validation_class : BytesType,
     index_name : 'idx_corrupt',
     index_type : 0}];

1. In that cf, insert expiring data (the expiration date should be in the far future for the sake of this test).
2. Query the data by index: get CorruptIndex where 0003--1000--=utf8('value')
   The results should be correct for some time.
3. Wait for leveled compaction to compact the index.
4. Query the data by index again: the results are empty.
5. Trigger an index rebuild via nodetool.
6. Query the data by index: the results should be correct again.
7. Wait for leveled compaction to compact the index.
8. Query the data by index: the results are empty again.
9. Repeat until bored.
[jira] [Commented] (CASSANDRA-4261) [patch] Support consistency-latency prediction in nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456871#comment-13456871 ]

Matt Blair commented on CASSANDRA-4261:
---------------------------------------

So now that CASSANDRA-1123 has been resolved, will this get merged in time for 1.2?

[patch] Support consistency-latency prediction in nodetool
----------------------------------------------------------

    Key: CASSANDRA-4261
    URL: https://issues.apache.org/jira/browse/CASSANDRA-4261
    Project: Cassandra
    Issue Type: New Feature
    Components: Tools
    Affects Versions: 1.2.0 beta 1
    Reporter: Peter Bailis
    Attachments: demo-pbs-v3.sh, pbs-nodetool-v3.patch

h3. Introduction

Cassandra supports a variety of replication configurations: ReplicationFactor is set per-ColumnFamily and ConsistencyLevel is set per-request. Setting {{ConsistencyLevel}} to {{QUORUM}} for reads and writes ensures strong consistency, but {{QUORUM}} is often slower than {{ONE}}, {{TWO}}, or {{THREE}}. What should users choose?

This patch provides a latency-consistency analysis within {{nodetool}}. Users can accurately predict Cassandra's behavior in their production environments without interfering with performance. What's the probability that we'll read a write t seconds after it completes? What about reading one of the last k writes?
This patch provides answers via {{nodetool predictconsistency}}:

{{nodetool predictconsistency ReplicationFactor TimeAfterWrite Versions}}

{code:title=Example output|borderStyle=solid}
//N == ReplicationFactor
//R == read ConsistencyLevel
//W == write ConsistencyLevel

user@test:$ nodetool predictconsistency 3 100 1
Performing consistency prediction
100ms after a given write, with maximum version staleness of k=1

N=3, R=1, W=1
Probability of consistent reads: 0.678900
Average read latency: 5.377900ms (99.900th %ile 40ms)
Average write latency: 36.971298ms (99.900th %ile 294ms)

N=3, R=1, W=2
Probability of consistent reads: 0.791600
Average read latency: 5.372500ms (99.900th %ile 39ms)
Average write latency: 303.630890ms (99.900th %ile 357ms)

N=3, R=1, W=3
Probability of consistent reads: 1.00
Average read latency: 5.426600ms (99.900th %ile 42ms)
Average write latency: 1382.650879ms (99.900th %ile 629ms)

N=3, R=2, W=1
Probability of consistent reads: 0.915800
Average read latency: 11.091000ms (99.900th %ile 348ms)
Average write latency: 42.663101ms (99.900th %ile 284ms)

N=3, R=2, W=2
Probability of consistent reads: 1.00
Average read latency: 10.606800ms (99.900th %ile 263ms)
Average write latency: 310.117615ms (99.900th %ile 335ms)

N=3, R=3, W=1
Probability of consistent reads: 1.00
Average read latency: 52.657501ms (99.900th %ile 565ms)
Average write latency: 39.949799ms (99.900th %ile 237ms)
{code}

h3. Demo

Here's an example scenario you can run using [ccm|https://github.com/pcmanus/ccm]. The prediction is fast:

{code:borderStyle=solid}
cd cassandra-source-dir with patch applied
ant
ccm create consistencytest --cassandra-dir=.
ccm populate -n 5
ccm start

# if start fails, you might need to initialize more loopback interfaces
# e.g., sudo ifconfig lo0 alias 127.0.0.2

# use stress to get some sample latency data
tools/bin/stress -d 127.0.0.1 -l 3 -n 1 -o insert
tools/bin/stress -d 127.0.0.1 -l 3 -n 1 -o read

bin/nodetool -h 127.0.0.1 -p 7100 predictconsistency 3 100 1
{code}

h3. What and Why

We've implemented [Probabilistically Bounded Staleness|http://pbs.cs.berkeley.edu/#demo], a new technique for predicting consistency-latency trade-offs within Cassandra. Our [paper|http://arxiv.org/pdf/1204.6082.pdf] will appear in [VLDB 2012|http://www.vldb2012.org/], and, in it, we've used PBS to profile a range of Dynamo-style data store deployments at places like LinkedIn and Yammer in addition to profiling our own Cassandra deployments. In our experience, prediction is both accurate and much more lightweight than profiling and manually testing each possible replication configuration (especially in production!).

This analysis is important for the many users we've talked to and heard about who use partial quorum operation (e.g., non-{{QUORUM}} {{ConsistencyLevel}}). Should they use CL={{ONE}}? CL={{TWO}}? It likely depends on their runtime environment and, short of profiling in production, there's no existing way to answer these questions. (Keep in mind, Cassandra defaults to CL={{ONE}}, meaning users don't know how stale their data will be.)

We outline limitations of the current approach after describing how it's done. We believe that this is a useful feature that can provide guidance and fairly accurate estimation for most users.

h3. Interface

This patch allows users to perform this prediction in production using
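A toy Monte Carlo simulation in the spirit of PBS can make the N/R/W trade-off concrete. The latency model below (exponentially distributed replica arrival times, independent replicas, a uniformly chosen read quorum) is our simplification for illustration only, not the model used by the paper or the patch:

```java
import java.util.Arrays;
import java.util.Random;

// Toy PBS-style estimator: probability that a read issued t ms after a write
// completes observes that write, for N replicas, write CL W, read CL R.
public class PbsToy {
    static double pConsistent(int n, int r, int w, double t, int trials, long seed) {
        Random rnd = new Random(seed);
        int ok = 0;
        for (int i = 0; i < trials; i++) {
            // time at which the write reaches each replica (exp, mean 10ms)
            double[] arrive = new double[n];
            for (int j = 0; j < n; j++)
                arrive[j] = -Math.log(1.0 - rnd.nextDouble()) * 10.0;

            // the write "completes" once w replicas have acked
            double[] sorted = arrive.clone();
            Arrays.sort(sorted);
            double writeDone = sorted[w - 1];

            // the read contacts r randomly chosen replicas at writeDone + t
            int[] idx = new int[n];
            for (int j = 0; j < n; j++) idx[j] = j;
            for (int j = n - 1; j > 0; j--) { // Fisher-Yates shuffle
                int k = rnd.nextInt(j + 1);
                int tmp = idx[j]; idx[j] = idx[k]; idx[k] = tmp;
            }
            boolean consistent = false;
            for (int j = 0; j < r; j++)
                if (arrive[idx[j]] <= writeDone + t) { consistent = true; break; }
            if (consistent) ok++;
        }
        return (double) ok / trials;
    }
}
```

Note that the model reproduces the boundary cases visible in the example output: R=N or W=N forces a probability of 1.0, while R=1, W=1 with small t leaves a real chance of stale reads.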
[jira] [Commented] (CASSANDRA-1337) parallelize fetching rows for low-cardinality indexes
[ https://issues.apache.org/jira/browse/CASSANDRA-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456908#comment-13456908 ]

David Alves commented on CASSANDRA-1337:
----------------------------------------

Cool, that means things are probably kosher, at least semantically, right? I mean the bug present in v1 is no longer present and all other related tests pass.

parallelize fetching rows for low-cardinality indexes
-----------------------------------------------------

    Key: CASSANDRA-1337
    URL: https://issues.apache.org/jira/browse/CASSANDRA-1337
    Project: Cassandra
    Issue Type: Improvement
    Reporter: Jonathan Ellis
    Assignee: David Alves
    Priority: Minor
    Fix For: 1.2.1
    Attachments: 1137-bugfix.patch, 1337.patch, 1337-v4.patch, ASF.LICENSE.NOT.GRANTED--0001-CASSANDRA-1337-scan-concurrently-depending-on-num-rows.txt, CASSANDRA-1337.patch
    Original Estimate: 8h
    Remaining Estimate: 8h

Currently, we read the indexed rows from the first node (in partitioner order); if that does not have enough matching rows, we read the rows from the next, and so forth. We should use the statistics from CASSANDRA-1155 to query multiple nodes in parallel, such that we have a high chance of getting enough rows w/o having to do another round of queries (but, if our estimate is incorrect, we do need to loop and do more rounds until we have enough data or we have fetched from each node).
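The strategy the ticket describes — fan out to several nodes at once, then loop for further rounds if the estimate was off or a node returned too few rows — can be sketched as below. The node abstraction and integer "rows" are hypothetical; the real patch works against Cassandra's StorageProxy machinery:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Hypothetical sketch: each "node" is a function from a row budget to the
// rows it returns. We query up to 'concurrency' nodes in parallel per round,
// looping until we have enough rows or every node has been asked.
public class ParallelIndexScan {
    static List<Integer> fetch(List<Function<Integer, List<Integer>>> nodes,
                               int wanted, int concurrency) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        List<Integer> rows = new ArrayList<>();
        try {
            int next = 0; // next node to query, in partitioner order
            while (rows.size() < wanted && next < nodes.size()) {
                List<Future<List<Integer>>> round = new ArrayList<>();
                for (int i = 0; i < concurrency && next < nodes.size(); i++, next++) {
                    Function<Integer, List<Integer>> node = nodes.get(next);
                    int remaining = wanted - rows.size();
                    round.add(pool.submit(() -> node.apply(remaining)));
                }
                for (Future<List<Integer>> f : round)
                    rows.addAll(f.get()); // gather this round's results
            }
        } finally {
            pool.shutdown();
        }
        return rows.size() > wanted ? new ArrayList<>(rows.subList(0, wanted)) : rows;
    }
}
```

The per-round concurrency is where the CASSANDRA-1155 cardinality statistics would plug in: a good estimate means one round usually suffices, while the outer loop covers the case where it does not.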
[jira] [Updated] (CASSANDRA-4656) StorageProxy histograms
[ https://issues.apache.org/jira/browse/CASSANDRA-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexey Zotov updated CASSANDRA-4656:
------------------------------------

    Attachment: StorageProxy_histograms_with_empty_values_v11.patch

StorageProxy histograms
-----------------------

    Key: CASSANDRA-4656
    URL: https://issues.apache.org/jira/browse/CASSANDRA-4656
    Project: Cassandra
    Issue Type: Improvement
    Components: Core
    Affects Versions: 1.1.4
    Reporter: Alexey Zotov
    Priority: Minor
    Attachments: Empty_values_in_histograms_v12.patch, StorageProxy_histograms_with_empty_values_v11.patch

I suggest two improvements:

1. StorageProxy histograms need to be added to the cli. In my opinion that statistic is very important, because it shows real server response time (accounting for additional requests to other nodes). It can be useful for gathering server response time statistics with monitoring systems (without using additional JMX modules).

2. The output of the 'nodetool cfhistograms' command has empty values in the 'SSTables' column. The output is not parseable because of that. I suggest inserting '0' instead of the empty values.

Old output:
{code}
Offset      SSTables     Write Latency     Read Latency     Row Size     Column Count
1             109298                 0                0            0        128700943
2778                                 0                0            0                0
1597                                 0              505            0                0
1916                                 0              566            0                0
{code}

New output:
{code}
Offset      SSTables     Write Latency     Read Latency     Row Size     Column Count
1             109298                 0                0            0        128700943
2778               0                 0                0            0                0
1597               0                 0              505            0                0
1916               0                 0              566            0                0
{code}

PS: I've attached a patch that fixes all the described problems.
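The second improvement boils down to never emitting an empty cell, so every row has the same number of whitespace-separated fields and stays machine-parseable. A sketch of the substitution (the row-rendering helper is hypothetical; the real patch lives in nodetool's histogram printing):

```java
// Hypothetical sketch of the '0'-for-empty fix: render each histogram cell,
// substituting "0" where a bucket has no recorded value (modeled as null),
// so column counts line up for downstream parsers.
public class HistogramRow {
    static String render(long offset, Long[] cells) {
        StringBuilder sb = new StringBuilder(String.format("%-10d", offset));
        for (Long cell : cells)
            sb.append(String.format("%14d", cell == null ? 0L : cell));
        return sb.toString();
    }
}
```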
[jira] [Updated] (CASSANDRA-4656) StorageProxy histograms
[ https://issues.apache.org/jira/browse/CASSANDRA-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexey Zotov updated CASSANDRA-4656:
------------------------------------

    Attachment: (was: StorageProxy_histograms_with_empty_values_v11.patch)
[jira] [Updated] (CASSANDRA-4656) StorageProxy histograms
[ https://issues.apache.org/jira/browse/CASSANDRA-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexey Zotov updated CASSANDRA-4656:
------------------------------------

    Attachment: (was: StorageProxy_histograms.patch)
[jira] [Updated] (CASSANDRA-4656) StorageProxy histograms
[ https://issues.apache.org/jira/browse/CASSANDRA-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexey Zotov updated CASSANDRA-4656:
------------------------------------

    Attachment: StorageProxy_histograms_with_empty_values_v11.patch
[jira] [Commented] (CASSANDRA-4656) StorageProxy histograms
[ https://issues.apache.org/jira/browse/CASSANDRA-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456915#comment-13456915 ]

Alexey Zotov commented on CASSANDRA-4656:
-----------------------------------------

I've backported the 'nodetool proxyhistograms' functionality to the 1.1 version. All patches are attached.
[jira] [Commented] (CASSANDRA-4616) add test to make sure KEYS indexes handle row deletions
[ https://issues.apache.org/jira/browse/CASSANDRA-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456925#comment-13456925 ]

Sam Tunnicliffe commented on CASSANDRA-4616:
--------------------------------------------

Doesn't ColumnFamilyStoreTest.testIndexDeletions cover this? Or did you mean to add a row delete check to CFST.testDeleteOfInconsistentValuesInKeysIndex?

add test to make sure KEYS indexes handle row deletions
-------------------------------------------------------

    Key: CASSANDRA-4616
    URL: https://issues.apache.org/jira/browse/CASSANDRA-4616
    Project: Cassandra
    Issue Type: Test
    Affects Versions: 1.2.0 beta 1
    Reporter: Jonathan Ellis
    Assignee: Sam Tunnicliffe
    Fix For: 1.2.0

Pretty sure we lost this in the refactoring we did for CASSANDRA-2897.
[jira] [Commented] (CASSANDRA-4545) add cql support for batchlog
[ https://issues.apache.org/jira/browse/CASSANDRA-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456932#comment-13456932 ]

Sylvain Lebresne commented on CASSANDRA-4545:
---------------------------------------------

As a side note, let me remark that a BATCH is not synonymous with non-atomic, since everything within the same partition key will be atomic and isolated (and in a fair amount of cases, those batches can be done with a single INSERT/UPDATE). It would be nice if whatever choice of syntax we make here doesn't suggest too strongly that what is atomic is not. And as a side side note, it would be nice to make batchlog writes skip the batchlog path if the batch is on only one partition key.

add cql support for batchlog
----------------------------

Key: CASSANDRA-4545
URL: https://issues.apache.org/jira/browse/CASSANDRA-4545
Project: Cassandra
Issue Type: Sub-task
Reporter: Jonathan Ellis
Assignee: Aleksey Yeschenko

Need to expose the equivalent of atomic_batch_mutate (CASSANDRA-4542) to CQL3.
[jira] [Updated] (CASSANDRA-4656) StorageProxy histograms
[ https://issues.apache.org/jira/browse/CASSANDRA-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-4656:
--------------------------------------
Reviewer: brandon.williams  (was: yukim)
[jira] [Commented] (CASSANDRA-4545) add cql support for batchlog
[ https://issues.apache.org/jira/browse/CASSANDRA-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457027#comment-13457027 ]

Jonathan Ellis commented on CASSANDRA-4545:
-------------------------------------------

bq. it would be nice to make batchlog writes skip the batchlog path if the batch is on only one partition key

True, although that does mean clients need to be aware of the difference as well (clients need to retry if it times out during the batchlog write, or if it times out during the data write and there is only one key), so maybe it's simpler to just say clients should be responsible for this optimization in the first place.
[jira] [Commented] (CASSANDRA-4545) add cql support for batchlog
[ https://issues.apache.org/jira/browse/CASSANDRA-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457049#comment-13457049 ]

Sylvain Lebresne commented on CASSANDRA-4545:
---------------------------------------------

bq. maybe it's simpler to just say clients should be responsible for this optimization in the first place

I disagree. I don't want users to think: "Hmm, I know I need atomicity for that query, but wait... I'm only writing to one partition key, so I'd better not use the ATOMIC keyword in my query because that's slower and those lazy C* devs didn't bother adding a one-line 'if' to optimize the atomic path. Oh, and wait, I'd better add a comment that this needs to be atomic, since someone else reading my query would think it's not required." Besides, the details of when to retry should hopefully largely be hidden by the client library, but that optimization cannot be, or at least not easily, since it would require parsing the query.
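The "one line 'if'" both comments allude to would simply check whether the batch spans more than one partition key before taking the batchlog path. A standalone sketch of that check, where `Mutation` is a stand-in type assumed for this example (not Cassandra's actual class):

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: an atomic batch needs the batchlog write-ahead
// step only when it spans more than one partition key, since writes within
// a single partition are already applied atomically.
class BatchPlanner
{
    static class Mutation
    {
        final String partitionKey;

        Mutation(String partitionKey)
        {
            this.partitionKey = partitionKey;
        }
    }

    static boolean needsBatchlog(Collection<Mutation> batch)
    {
        Set<String> keys = new HashSet<String>();
        for (Mutation m : batch)
            keys.add(m.partitionKey);
        // single partition => skip the batchlog path entirely
        return keys.size() > 1;
    }
}
```

Doing this server-side keeps clients oblivious: they write `BEGIN BATCH` with atomicity semantics either way, and the coordinator silently drops the redundant batchlog round-trip when it can.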
git commit: Fix typo
Updated Branches:
  refs/heads/trunk f1aec3d5f -> 20d53c095

Fix typo

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20d53c09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20d53c09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20d53c09

Branch: refs/heads/trunk
Commit: 20d53c095d51650ffefbac5f13c888d8faffc05d
Parents: f1aec3d
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Mon Sep 17 18:00:48 2012 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Mon Sep 17 18:00:48 2012 +0200

----------------------------------------------------------------------
 .../org/apache/cassandra/transport/DataType.java |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20d53c09/src/java/org/apache/cassandra/transport/DataType.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/transport/DataType.java b/src/java/org/apache/cassandra/transport/DataType.java
index 0cb9d2d..80528e7 100644
--- a/src/java/org/apache/cassandra/transport/DataType.java
+++ b/src/java/org/apache/cassandra/transport/DataType.java
@@ -159,7 +159,7 @@ public enum DataType implements OptionCodec.Codecable<DataType>
             else
             {
                 assert type instanceof SetType;
-                return Pair.<DataType, Object>create(LIST, ((SetType)type).elements);
+                return Pair.<DataType, Object>create(SET, ((SetType)type).elements);
             }
         }
         return Pair.<DataType, Object>create(CUSTOM, type.toString());
[jira] [Commented] (CASSANDRA-4237) Add back 0.8-style memtable_lifetime feature
[ https://issues.apache.org/jira/browse/CASSANDRA-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457134#comment-13457134 ] Jonathan Ellis commented on CASSANDRA-4237: --- bq. There is expire check when flushed memtable is clean and reschedule another flush Hmm... is there a race here where if user manually calls force flush while it's expired, we get two scheduled tasks added? Maybe just inlining a simple runnable instead of using forceFlush would be cleaner. (We don't need to worry about indexes being dirty independently of the base memtable in this case, either.) bq. 10sec brute force would work if memtable_flush_period is set longer (seconds or minutes). Good point. bq. Here, you mean the period of brute force flush? Yes. Add back 0.8-style memtable_lifetime feature Key: CASSANDRA-4237 URL: https://issues.apache.org/jira/browse/CASSANDRA-4237 Project: Cassandra Issue Type: New Feature Components: Core Affects Versions: 1.0.0 Reporter: Jonathan Ellis Assignee: Yuki Morishita Priority: Minor Fix For: 1.2.0 Attachments: 4237.txt Back in 0.8 we had a memtable_lifetime_in_minutes setting. We got rid of it in 1.0 when we added CASSANDRA-2427, which is a better way to ensure flushing on low-activity memtables. However, at the same time we also added the ability to disable durable writes. So it's entirely possible to configure a low-activity memtable, that isn't part of the commitlog. So, we should add back a memtable lifetime setting. An additional motive is pointed out by http://www.fsl.cs.sunysb.edu/~pshetty/socc11-gtssl.pdf: if you have a *high* activity columnfamily, and don't require absolute durability, the commitlog is redundant if you are flushing faster than the commitlog sync period. So, disabling durable writes but setting memtable lifetime to the same as the commitlog sync would be a reasonable optimization. 
Thus, when we add back memtable lifetime, I think we should measure it in seconds or possibly even milliseconds (to match commitlog_sync_period) rather than minutes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
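The race Jonathan raises above (a manual force flush and the expiration task both enqueueing a flush for the same memtable) is the classic double-schedule problem. One common shape for the guard, sketched here with hypothetical names rather than Cassandra's actual flush machinery:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hedged sketch, not Cassandra's code: a compare-and-set guard ensures that
// when a manual forceFlush races with an expired-memtable flush task, only
// one of the two callers actually enqueues a flush for the memtable.
class FlushGuard
{
    private final AtomicBoolean flushQueued = new AtomicBoolean(false);

    /** @return true for the single caller that should enqueue the flush task */
    boolean tryQueueFlush()
    {
        return flushQueued.compareAndSet(false, true);
    }

    /** called once the flush task has run, re-arming the guard */
    void flushed()
    {
        flushQueued.set(false);
    }
}
```

Inlining a simple runnable that consults such a guard (instead of calling forceFlush directly from the scheduled task) would avoid queueing two flushes for one memtable.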
[1/5] git commit: Merge branch 'cassandra-1.1' into trunk
Updated Branches:
  refs/heads/cassandra-1.1 6a8ba0731 -> aa7dafaca
  refs/heads/trunk 20d53c095 -> 85fc72be8

Merge branch 'cassandra-1.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/85fc72be
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/85fc72be
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/85fc72be

Branch: refs/heads/trunk
Commit: 85fc72be8629827b2e92b6aaf2d35a2132f1ab27
Parents: 20d53c0 aa7dafa
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Mon Sep 17 11:44:07 2012 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Mon Sep 17 11:44:07 2012 -0500

----------------------------------------------------------------------
 build.xml                                          |    4
 .../apache/cassandra/thrift/ITransportFactory.java |    2 --
 2 files changed, 4 insertions(+), 2 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/85fc72be/build.xml
----------------------------------------------------------------------
[4/5] git commit: cqlsh: check for non-empty history file before loading Patch by Brian O'Neill, reviewed by brandonwilliams for CASSANDRA-4669
cqlsh: check for non-empty history file before loading

Patch by Brian O'Neill, reviewed by brandonwilliams for CASSANDRA-4669

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6a8ba073
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6a8ba073
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6a8ba073

Branch: refs/heads/trunk
Commit: 6a8ba07313e2fad8c5a6cf7566bbd77321274c60
Parents: 5551d9c
Author: Brandon Williams <brandonwilli...@apache.org>
Authored: Fri Sep 14 16:42:54 2012 -0500
Committer: Brandon Williams <brandonwilli...@apache.org>
Committed: Fri Sep 14 16:42:54 2012 -0500

----------------------------------------------------------------------
 bin/cqlsh |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6a8ba073/bin/cqlsh
----------------------------------------------------------------------
diff --git a/bin/cqlsh b/bin/cqlsh
index 3bef142..5f99e45 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -2690,7 +2690,7 @@ def setup_cqlruleset(cqlmodule):

 def main(options, hostname, port):
     setup_cqlruleset(options.cqlmodule)
-    if os.path.exists(HISTORY) and readline is not None:
+    if os.path.exists(HISTORY) and readline is not None and readline.get_history_length() > 0:
         readline.read_history_file(HISTORY)
     delims = readline.get_completer_delims()
     delims.replace("'", "")
[3/5] git commit: Add ITransportFactory to cassandra-thrift jar patch by Sam Tunnicliffe for CASSANDRA-4668
Add ITransportFactory to cassandra-thrift jar

patch by Sam Tunnicliffe for CASSANDRA-4668

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aa7dafac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aa7dafac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aa7dafac

Branch: refs/heads/trunk
Commit: aa7dafaca4843fcd2eea70bca51bd303910f756e
Parents: 6a8ba07
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Mon Sep 17 11:42:46 2012 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Mon Sep 17 11:43:00 2012 -0500

----------------------------------------------------------------------
 build.xml                                          |    4
 .../apache/cassandra/thrift/ITransportFactory.java |    2 --
 2 files changed, 4 insertions(+), 2 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa7dafac/build.xml
----------------------------------------------------------------------
diff --git a/build.xml b/build.xml
index b498d12..6a97687 100644
--- a/build.xml
+++ b/build.xml
@@ -739,6 +739,10 @@
               file="${build.dir}/${ant.project.name}-thrift-${version}.pom"/>
         <jar jarfile="${build.dir}/${ant.project.name}-thrift-${version}.jar"
              basedir="${build.classes.thrift}">
+          <fileset dir="${build.classes.main}">
+            <include name="org/apache/cassandra/thrift/ITransportFactory.class" />
+            <include name="org/apache/cassandra/thrift/TFramedTransportFactory.class" />
+          </fileset>
           <manifest>
             <attribute name="Implementation-Title" value="Cassandra"/>
             <attribute name="Implementation-Version" value="${version}"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa7dafac/src/java/org/apache/cassandra/thrift/ITransportFactory.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/thrift/ITransportFactory.java b/src/java/org/apache/cassandra/thrift/ITransportFactory.java
index 47cd034..4940fc6 100644
--- a/src/java/org/apache/cassandra/thrift/ITransportFactory.java
+++ b/src/java/org/apache/cassandra/thrift/ITransportFactory.java
@@ -21,13 +21,11 @@ package org.apache.cassandra.thrift;
  *
  */

-import org.apache.hadoop.conf.Configuration;
 import org.apache.thrift.transport.TSocket;
 import org.apache.thrift.transport.TTransport;
 import org.apache.thrift.transport.TTransportException;

 import javax.security.auth.login.LoginException;

-import java.io.IOException;

 public interface ITransportFactory
[5/5] git commit: cqlsh: combine multiline statements into single line history Patch by Matthew Horsfall, reviewed by brandonwilliams for CASSANDRA-4666
cqlsh: combine multiline statements into single line history

Patch by Matthew Horsfall, reviewed by brandonwilliams for CASSANDRA-4666

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5551d9c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5551d9c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5551d9c3

Branch: refs/heads/trunk
Commit: 5551d9c396d63cc844ce2db2c7d94aa9ca942d6b
Parents: 751e58d
Author: Brandon Williams <brandonwilli...@apache.org>
Authored: Thu Sep 13 15:47:26 2012 -0500
Committer: Brandon Williams <brandonwilli...@apache.org>
Committed: Thu Sep 13 15:47:26 2012 -0500

----------------------------------------------------------------------
 bin/cqlsh |   11 +++
 1 files changed, 11 insertions(+), 0 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5551d9c3/bin/cqlsh
----------------------------------------------------------------------
diff --git a/bin/cqlsh b/bin/cqlsh
index b2a11e3..3bef142 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -552,6 +552,7 @@ class Shell(cmd.Cmd):
     show_line_nums = False
     debug = False
     stop = False
+    last_hist = None
     shunted_query_out = None
     csv_dialect_defaults = dict(delimiter=',', doublequote=False,
                                 escapechar='\\', quotechar='"')
@@ -941,6 +942,16 @@ class Shell(cmd.Cmd):
             self.do_exit()

     def handle_statement(self, tokens, srcstr):
+        # Concat multi-line statements and insert into history
+        if readline is not None:
+            nl_count = srcstr.count("\n")
+
+            new_hist = srcstr.replace("\n", " ").rstrip()
+
+            if nl_count > 1 and self.last_hist != new_hist:
+                readline.add_history(new_hist)
+
+            self.last_hist = new_hist
         cmdword = tokens[0][1]
         if cmdword == '?':
             cmdword = 'help'
[2/5] git commit: Add ITransportFactory to cassandra-thrift jar patch by Sam Tunnicliffe for CASSANDRA-4668
Add ITransportFactory to cassandra-thrift jar

patch by Sam Tunnicliffe for CASSANDRA-4668

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aa7dafac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aa7dafac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aa7dafac

Branch: refs/heads/cassandra-1.1
Commit: aa7dafaca4843fcd2eea70bca51bd303910f756e
Parents: 6a8ba07
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Mon Sep 17 11:42:46 2012 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Mon Sep 17 11:43:00 2012 -0500

----------------------------------------------------------------------
 build.xml                                          |    4
 .../apache/cassandra/thrift/ITransportFactory.java |    2 --
 2 files changed, 4 insertions(+), 2 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa7dafac/build.xml
----------------------------------------------------------------------
diff --git a/build.xml b/build.xml
index b498d12..6a97687 100644
--- a/build.xml
+++ b/build.xml
@@ -739,6 +739,10 @@
               file="${build.dir}/${ant.project.name}-thrift-${version}.pom"/>
         <jar jarfile="${build.dir}/${ant.project.name}-thrift-${version}.jar"
              basedir="${build.classes.thrift}">
+          <fileset dir="${build.classes.main}">
+            <include name="org/apache/cassandra/thrift/ITransportFactory.class" />
+            <include name="org/apache/cassandra/thrift/TFramedTransportFactory.class" />
+          </fileset>
           <manifest>
             <attribute name="Implementation-Title" value="Cassandra"/>
             <attribute name="Implementation-Version" value="${version}"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa7dafac/src/java/org/apache/cassandra/thrift/ITransportFactory.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/thrift/ITransportFactory.java b/src/java/org/apache/cassandra/thrift/ITransportFactory.java
index 47cd034..4940fc6 100644
--- a/src/java/org/apache/cassandra/thrift/ITransportFactory.java
+++ b/src/java/org/apache/cassandra/thrift/ITransportFactory.java
@@ -21,13 +21,11 @@ package org.apache.cassandra.thrift;
  *
  */

-import org.apache.hadoop.conf.Configuration;
 import org.apache.thrift.transport.TSocket;
 import org.apache.thrift.transport.TTransport;
 import org.apache.thrift.transport.TTransportException;

 import javax.security.auth.login.LoginException;

-import java.io.IOException;

 public interface ITransportFactory
[jira] [Updated] (CASSANDRA-4668) Add ITransportFactory to cassandra-thrift jar
[ https://issues.apache.org/jira/browse/CASSANDRA-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-4668:
--------------------------------------
Affects Version/s: (was: 1.1.6)
                   1.1.5

(ITF introduced by CASSANDRA-4558)

Add ITransportFactory to cassandra-thrift jar
---------------------------------------------

Key: CASSANDRA-4668
URL: https://issues.apache.org/jira/browse/CASSANDRA-4668
Project: Cassandra
Issue Type: Task
Components: Drivers, Packaging
Affects Versions: 1.1.5
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Fix For: 1.1.6
Attachments: CASSANDRA-4668.txt

o.a.c.thrift.ITransportFactory should be included in the cassandra-thrift jar so that clients such as the cassandra-jdbc driver can provide or use custom implementations. o.a.c.thrift.TFramedTransportFactory could also be included in the jar as a default option for clients.
[jira] [Commented] (CASSANDRA-4644) Compaction error with Cassandra 1.1.4 and LCS
[ https://issues.apache.org/jira/browse/CASSANDRA-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457147#comment-13457147 ]

Sylvain Lebresne commented on CASSANDRA-4644:
---------------------------------------------

Regarding Jonathan's patch: that's a good idea, though the condition in the patch is reversed. It should be:
{noformat}
if (previous != null && current.first.compareTo(previous.last) <= 0)
{noformat}
(i.e. we send the sstable back if the new start is smaller than the previous end). Other than that, lgtm.

Compaction error with Cassandra 1.1.4 and LCS
---------------------------------------------

Key: CASSANDRA-4644
URL: https://issues.apache.org/jira/browse/CASSANDRA-4644
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.1.4
Environment: Cassandra 1.1.4, Ubuntu Lucid (2.6.32-346), Amazon EC2 m1.xlarge
Reporter: Rudolf VanderLeeden
Assignee: Jonathan Ellis
Attachments: 4644.txt

In our 1.1.4 testcluster of 3 nodes with RF=3, KS=1, and CF=5, we are getting an assertion error when running 'nodetool compact highscores leaderboard'. This stops compactions on the 'leaderboard' CF, summing up to 11835 pending compactions. The error is seen on only one node. The SSTables were originally created on a 1.1.2 cluster with STCS and then copied to the testcluster, also with 1.1.2. Repair, cleanup, and compact were OK with STCS. Next, we changed to LCS and again ran repair, cleanup, and compact with success. Then we started to use this LCS-based testcluster intensively and created lots of data, including large keys with millions of columns.
The assertion error in system.log:

{noformat}
 INFO [CompactionExecutor:8] 2012-09-11 14:20:45,043 CompactionController.java (line 172) Compacting large row highscores/leaderboard:4c422d64626331353166372d363464612d343235342d396130322d6535616365343337373532332d676c6f62616c2d30 (72589650 bytes) incrementally
ERROR [CompactionExecutor:8] 2012-09-11 14:20:50,336 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
java.lang.AssertionError
    at org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:214)
    at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:158)
    at org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:531)
    at org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:254)
    at org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:992)
    at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:200)
    at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50)
    at org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:288)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
{noformat}
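The corrected condition from the review is easy to check in isolation: once sstables are sorted by first key, an sstable overlaps its predecessor exactly when its first key compares less than or equal to the predecessor's last key. A self-contained illustration, modeling key ranges as int pairs instead of real DecoratedKeys/SSTableReaders:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Standalone illustration of the overlap check discussed above: ranges are
// int[]{first, last}; after sorting by first key, a range that starts at or
// before the previous kept range's end is flagged as overlapping (in
// LeveledManifest such sstables get sent back to L0).
class OverlapCheck
{
    static List<int[]> findOverlapping(List<int[]> sstables)
    {
        List<int[]> sorted = new ArrayList<int[]>(sstables);
        sorted.sort(Comparator.comparingInt(r -> r[0]));
        List<int[]> outOfOrder = new ArrayList<int[]>();
        int[] previous = null;
        for (int[] current : sorted)
        {
            if (previous != null && current[0] <= previous[1])
                outOfOrder.add(current); // overlaps its predecessor
            else
                previous = current;
        }
        return outOfOrder;
    }
}
```

With the comparison written the other way around, non-overlapping level-N sstables would be flagged and genuinely overlapping ones kept, which is why the review calls the patched condition "reversed".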
[jira] [Created] (CASSANDRA-4671) Improve removal of gcable tombstones during minor compaction
Sylvain Lebresne created CASSANDRA-4671:
----------------------------------------

Summary: Improve removal of gcable tombstones during minor compaction
Key: CASSANDRA-4671
URL: https://issues.apache.org/jira/browse/CASSANDRA-4671
Project: Cassandra
Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor

When we minor compact, we only purge a row if we know it's not present in any of the sstables that are not in the compaction set. It is however possible to have scenarios where this leads us to keep irrelevant tombstones for longer than necessary (and I suspect LCS makes those scenarios a little more likely). We could however purge tombstones if we know that the non-compacted sstables don't have any info that is older than the tombstones we're about to purge (since then we know that the tombstones we'll consider can't delete data in non-compacted sstables). In other words, we should force CompactionController.shouldPurge() to return true if min_timestamp(non-compacted-overlapping-sstables) > max_timestamp(compacted-sstables). This does require us to record the min timestamp of an sstable first, though (we only record the max timestamp so far).
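The timestamp rule proposed above reduces to a tiny predicate: tombstones being compacted are safe to purge when even the oldest data outside the compaction is newer than everything inside it. A hedged sketch with hypothetical names (not CompactionController's actual API):

```java
// Sketch only: purging is safe when the minimum of the non-compacted
// overlapping sstables' min timestamps is strictly greater than the maximum
// of the compacted sstables' max timestamps — any tombstone in the
// compaction set is then older than all data it could possibly shadow
// outside the set.
class PurgeRule
{
    static boolean canForcePurge(long[] nonCompactedMinTimestamps, long[] compactedMaxTimestamps)
    {
        long min = Long.MAX_VALUE;
        for (long t : nonCompactedMinTimestamps)
            min = Math.min(min, t);
        long max = Long.MIN_VALUE;
        for (long t : compactedMaxTimestamps)
            max = Math.max(max, t);
        return min > max;
    }
}
```

As the ticket notes, applying this in practice requires sstables to record their minimum timestamp, since only the maximum was recorded at the time.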
[jira] [Updated] (CASSANDRA-4261) [patch] Support consistency-latency prediction in nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-4261:
--------------------------------------
Attachment: 4261-v4.txt

v4 attached, mostly rebased to trunk. "Mostly" means that I'm not sure what to do with the message timestamps. CASSANDRA-2858 added the timestamp to QueuedMessage / MessageDeliveryTask instead of MessageOut/MessageIn. In the case of MessageOut/QM, QM is definitely the right place, since MO construction is relatively expensive, so we avoid any per-replica information there. It is less clear for MessageIn/MDT: if we leave it in MDT, we either need to add the timestamp to every IVerbHandler for this one special case, or do some hackish contortions to pass it just to ResponseVerbHandler (and thence to PBSPredictor). OTOH, putting it in MessageIn breaks the MI/MO symmetry, which is confusing. Thoughts?

Also: is there any way we can leverage the metrics from CASSANDRA-4009 instead of storing a second copy of certain metrics in PBS?

[patch] Support consistency-latency prediction in nodetool
----------------------------------------------------------

Key: CASSANDRA-4261
URL: https://issues.apache.org/jira/browse/CASSANDRA-4261
Project: Cassandra
Issue Type: New Feature
Components: Tools
Affects Versions: 1.2.0 beta 1
Reporter: Peter Bailis
Attachments: 4261-v4.txt, demo-pbs-v3.sh, pbs-nodetool-v3.patch

h3. Introduction

Cassandra supports a variety of replication configurations: ReplicationFactor is set per-ColumnFamily and ConsistencyLevel is set per-request. Setting {{ConsistencyLevel}} to {{QUORUM}} for reads and writes ensures strong consistency, but {{QUORUM}} is often slower than {{ONE}}, {{TWO}}, or {{THREE}}. What should users choose?

This patch provides a latency-consistency analysis within {{nodetool}}. Users can accurately predict Cassandra's behavior in their production environments without interfering with performance. What's the probability that we'll read a write t seconds after it completes? What about reading one of the last k writes?
This patch provides answers via {{nodetool predictconsistency}}:

{{nodetool predictconsistency ReplicationFactor TimeAfterWrite Versions}}

{code:title=Example output|borderStyle=solid}
// N == ReplicationFactor
// R == read ConsistencyLevel
// W == write ConsistencyLevel

user@test:$ nodetool predictconsistency 3 100 1
Performing consistency prediction
100ms after a given write, with maximum version staleness of k=1

N=3, R=1, W=1
Probability of consistent reads: 0.678900
Average read latency: 5.377900ms (99.900th %ile 40ms)
Average write latency: 36.971298ms (99.900th %ile 294ms)

N=3, R=1, W=2
Probability of consistent reads: 0.791600
Average read latency: 5.372500ms (99.900th %ile 39ms)
Average write latency: 303.630890ms (99.900th %ile 357ms)

N=3, R=1, W=3
Probability of consistent reads: 1.00
Average read latency: 5.426600ms (99.900th %ile 42ms)
Average write latency: 1382.650879ms (99.900th %ile 629ms)

N=3, R=2, W=1
Probability of consistent reads: 0.915800
Average read latency: 11.091000ms (99.900th %ile 348ms)
Average write latency: 42.663101ms (99.900th %ile 284ms)

N=3, R=2, W=2
Probability of consistent reads: 1.00
Average read latency: 10.606800ms (99.900th %ile 263ms)
Average write latency: 310.117615ms (99.900th %ile 335ms)

N=3, R=3, W=1
Probability of consistent reads: 1.00
Average read latency: 52.657501ms (99.900th %ile 565ms)
Average write latency: 39.949799ms (99.900th %ile 237ms)
{code}

h3. Demo

Here's an example scenario you can run using [ccm|https://github.com/pcmanus/ccm]. The prediction is fast:

{code:borderStyle=solid}
cd cassandra-source-dir with patch applied
ant
ccm create consistencytest --cassandra-dir=.
ccm populate -n 5
ccm start

# if start fails, you might need to initialize more loopback interfaces
# e.g., sudo ifconfig lo0 alias 127.0.0.2

# use stress to get some sample latency data
tools/bin/stress -d 127.0.0.1 -l 3 -n 1 -o insert
tools/bin/stress -d 127.0.0.1 -l 3 -n 1 -o read

bin/nodetool -h 127.0.0.1 -p 7100 predictconsistency 3 100 1
{code}

h3. What and Why

We've implemented [Probabilistically Bounded Staleness|http://pbs.cs.berkeley.edu/#demo], a new technique for predicting consistency-latency trade-offs within Cassandra. Our [paper|http://arxiv.org/pdf/1204.6082.pdf] will appear in [VLDB 2012|http://www.vldb2012.org/], and, in it, we've used PBS to profile a range of Dynamo-style data store deployments at places like LinkedIn and Yammer in addition to profiling our own Cassandra deployments. In our experience, prediction is both accurate and much more lightweight than profiling and manually testing each possible replication configuration (especially in production!).
[1/3] git commit: Merge branch 'cassandra-1.1' into trunk
Updated Branches:
  refs/heads/cassandra-1.1 aa7dafaca -> 6ad7d45ad
  refs/heads/trunk 85fc72be8 -> 0666e1952

Merge branch 'cassandra-1.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0666e195
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0666e195
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0666e195

Branch: refs/heads/trunk
Commit: 0666e1952b92fec067ae255021fa384b02ac51f0
Parents: 85fc72b 6ad7d45
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Mon Sep 17 12:37:19 2012 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Mon Sep 17 12:37:19 2012 -0500

----------------------------------------------------------------------
 CHANGES.txt                                        |    1 +
 .../cassandra/db/compaction/LeveledManifest.java   |   39 +++
 .../apache/cassandra/tools/StandaloneScrubber.java |   27 +--
 3 files changed, 31 insertions(+), 36 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0666e195/CHANGES.txt
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0666e195/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0666e195/src/java/org/apache/cassandra/tools/StandaloneScrubber.java
----------------------------------------------------------------------
[3/3] git commit: Automatic fixing of overlapping leveled sstables patch by jbellis; reviewed by slebresne for CASSANDRA-4644
Automatic fixing of overlapping leveled sstables patch by jbellis; reviewed by slebresne for CASSANDRA-4644

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ad7d45a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ad7d45a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ad7d45a
Branch: refs/heads/cassandra-1.1
Commit: 6ad7d45ad56482707ecd541984894e4e2a278cfb
Parents: aa7dafa
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Sep 17 12:37:01 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Mon Sep 17 12:37:01 2012 -0500
--
 CHANGES.txt                                        |    1 +
 .../cassandra/db/compaction/LeveledManifest.java   |   39 +++
 .../apache/cassandra/tools/StandaloneScrubber.java |   27 +--
 3 files changed, 31 insertions(+), 36 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ad7d45a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7098320..0530134 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,6 +3,7 @@
    access permissions and grant/revoke commands (CASSANDRA-4490)
  * fix assumption error in CLI when updating/describing keyspace (CASSANDRA-4322)
  * Adds offline sstablescrub to debian packaging (CASSANDRA-4642)
+ * Automatic fixing of overlapping leveled sstables (CASSANDRA-4644)

 1.1.5

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ad7d45a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index b54fe93..493fd9f 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -204,19 +204,38 @@ public class LeveledManifest
         for (SSTableReader ssTableReader : added)
             add(ssTableReader, newLevel);

+        // Fix overlapping sstables from CASSANDRA-4321/4411
         if (newLevel != 0)
+            repairOverlappingSSTables(newLevel);
+
+        serialize();
+    }
+
+    public synchronized void repairOverlappingSSTables(int level)
+    {
+        SSTableReader previous = null;
+        Collections.sort(generations[level], SSTable.sstableComparator);
+        List<SSTableReader> outOfOrderSSTables = new ArrayList<SSTableReader>();
+        for (SSTableReader current : generations[level])
         {
-            // Integerity check
-            DecoratedKey last = null;
-            Collections.sort(generations[newLevel], SSTable.sstableComparator);
-            for (SSTableReader sstable : generations[newLevel])
+            if (previous != null && current.first.compareTo(previous.last) <= 0)
+            {
+                logger.error(String.format("At level %d, %s [%s, %s] overlaps %s [%s, %s].  This is caused by a bug in Cassandra 1.1.0 .. 1.1.3.  Sending back to L0.  If you have not yet run scrub, you should do so since you may also have rows out-of-order within an sstable",
+                                           level, previous, previous.first, previous.last, current, current.first, current.last));
+                outOfOrderSSTables.add(current);
+            }
+            else
             {
-                assert last == null || sstable.first.compareTo(last) > 0;
-                last = sstable.last;
+                previous = current;
             }
         }
-        serialize();
+        if (!outOfOrderSSTables.isEmpty())
+        {
+            for (SSTableReader sstable : outOfOrderSSTables)
+                sendBackToL0(sstable);
+            serialize();
+        }
     }

     public synchronized void replace(Iterable<SSTableReader> removed, Iterable<SSTableReader> added)
@@ -235,12 +254,10 @@ public class LeveledManifest
         serialize();
     }

-    public synchronized void sendBackToL0(SSTableReader sstable)
+    private synchronized void sendBackToL0(SSTableReader sstable)
     {
         remove(sstable);
         add(sstable, 0);
-
-        serialize();
     }

     private String toString(Iterable<SSTableReader> sstables)
@@ -593,4 +610,6 @@ public class LeveledManifest
             new Object[] {Arrays.asList(estimated), cfs.table.name, cfs.columnFamily});
         return Ints.checkedCast(tasks);
     }
+
+
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ad7d45a/src/java/org/apache/cassandra/tools/StandaloneScrubber.java
--
diff --git a/src/java/org/apache/cassandra/tools/StandaloneScrubber.java
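For reference, the overlap scan this commit adds can be mimicked in a standalone sketch. The `Range` type below is a hypothetical stand-in for an sstable's [first, last] key span, not the actual Cassandra classes; it only illustrates the scan order and the "keep the previous non-overlapping sstable" rule from repairOverlappingSSTables:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class OverlapScan {
    // Hypothetical stand-in for an sstable's [first, last] key range.
    static class Range {
        final String name;
        final int first, last;
        Range(String name, int first, int last) { this.name = name; this.first = first; this.last = last; }
    }

    // Sorts by first key, then flags any range whose first key falls at or
    // before the previous kept range's last key -- these are the sstables
    // the real code would send back to L0.
    static List<Range> findOutOfOrder(List<Range> level) {
        List<Range> sorted = new ArrayList<>(level);
        sorted.sort(Comparator.comparingInt(r -> r.first));
        List<Range> outOfOrder = new ArrayList<>();
        Range previous = null;
        for (Range current : sorted) {
            if (previous != null && current.first <= previous.last)
                outOfOrder.add(current);   // overlaps the previous kept range
            else
                previous = current;        // becomes the new comparison point
        }
        return outOfOrder;
    }

    public static void main(String[] args) {
        List<Range> level = Arrays.asList(
            new Range("a", 0, 10), new Range("b", 5, 20), new Range("c", 21, 30));
        for (Range r : findOutOfOrder(level))
            System.out.println(r.name);   // prints "b"
    }
}
```

Note that, as in the patch, `previous` is not advanced past an overlapping sstable, so a run of sstables all overlapping one predecessor are each flagged against it.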
[2/3] git commit: Automatic fixing of overlapping leveled sstables patch by jbellis; reviewed by slebresne for CASSANDRA-4644
Automatic fixing of overlapping leveled sstables patch by jbellis; reviewed by slebresne for CASSANDRA-4644

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ad7d45a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ad7d45a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ad7d45a
Branch: refs/heads/trunk
Commit: 6ad7d45ad56482707ecd541984894e4e2a278cfb
Parents: aa7dafa
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Sep 17 12:37:01 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Mon Sep 17 12:37:01 2012 -0500
--
 CHANGES.txt                                        |    1 +
 .../cassandra/db/compaction/LeveledManifest.java   |   39 +++
 .../apache/cassandra/tools/StandaloneScrubber.java |   27 +--
 3 files changed, 31 insertions(+), 36 deletions(-)
--
[jira] [Resolved] (CASSANDRA-4644) Compaction error with Cassandra 1.1.4 and LCS
[ https://issues.apache.org/jira/browse/CASSANDRA-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-4644.
---
Resolution: Cannot Reproduce
Fix Version/s: 1.1.6

Committed. (Marking this issue Cannot Reproduce, referring to the original report of overlapping sstables caused by 1.1.4.)

Compaction error with Cassandra 1.1.4 and LCS
--
Key: CASSANDRA-4644
URL: https://issues.apache.org/jira/browse/CASSANDRA-4644
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.1.4
Environment: Cassandra 1.1.4, Ubuntu Lucid (2.6.32-346), Amazon EC2 m1.xlarge
Reporter: Rudolf VanderLeeden
Assignee: Jonathan Ellis
Fix For: 1.1.6
Attachments: 4644.txt

In our 1.1.4 test cluster of 3 nodes with RF=3, KS=1, and CF=5, we are getting an assertion error when running 'nodetool compact highscores leaderboard'. This stops compactions on the 'leaderboard' CF, summing up to 11835 pending compactions. This error is seen on only one node. The SSTables were originally created on a 1.1.2 cluster with STCS and then copied to the test cluster, also with 1.1.2. Repair, cleanup, and compact were OK with STCS. Next, we changed to LCS and again ran repair, cleanup, and compact with success. Then we started to use this LCS-based test cluster intensively and created lots of data, including large keys with millions of columns.

The assertion error in system.log:

INFO [CompactionExecutor:8] 2012-09-11 14:20:45,043 CompactionController.java (line 172) Compacting large row highscores/leaderboard:4c422d64626331353166372d363464612d343235342d396130322d6535616365343337373532332d676c6f62616c2d30 (72589650 bytes) incrementally
ERROR [CompactionExecutor:8] 2012-09-11 14:20:50,336 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
java.lang.AssertionError
    at org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:214)
    at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:158)
    at org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:531)
    at org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:254)
    at org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:992)
    at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:200)
    at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50)
    at org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:288)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (CASSANDRA-4644) Compaction error with Cassandra 1.1.4 and LCS
[ https://issues.apache.org/jira/browse/CASSANDRA-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457180#comment-13457180 ]

Jonathan Ellis edited comment on CASSANDRA-4644 at 9/18/12 4:38 AM:

Committed with <= fix. (Marking this issue Cannot Reproduce, referring to the original report of overlapping sstables caused by 1.1.4.)

was (Author: jbellis): Committed. (Marking this issue Cannot Reproduce, referring to the original report of overlapping sstables caused by 1.1.4.)
[jira] [Created] (CASSANDRA-4672) _TRACE verb is not droppable which causes an AssertionError
David Alves created CASSANDRA-4672: -- Summary: _TRACE verb is not droppable which causes an AssertionError Key: CASSANDRA-4672 URL: https://issues.apache.org/jira/browse/CASSANDRA-4672 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: David Alves Priority: Trivial Fix For: 1.2.0 When a big enough statement is traced (like select *) an assertion error is fired because the _TRACE verb is not droppable. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4672) _TRACE verb is not droppable which causes an AssertionError
[ https://issues.apache.org/jira/browse/CASSANDRA-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Alves updated CASSANDRA-4672:
---
Assignee: David Alves
[jira] [Updated] (CASSANDRA-4672) _TRACE verb is not droppable which causes an AssertionError
[ https://issues.apache.org/jira/browse/CASSANDRA-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Alves updated CASSANDRA-4672:
---
Attachment: 4672.patch

attaching trivial fix
[jira] [Commented] (CASSANDRA-4545) add cql support for batchlog
[ https://issues.apache.org/jira/browse/CASSANDRA-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457239#comment-13457239 ]

Aleksey Yeschenko commented on CASSANDRA-4545:
--

Another thing to remember is that counter mutations are not allowed in atomic batches, so the two statements won't be functionally the same anyway.

add cql support for batchlog
----
Key: CASSANDRA-4545
URL: https://issues.apache.org/jira/browse/CASSANDRA-4545
Project: Cassandra
Issue Type: Sub-task
Reporter: Jonathan Ellis
Assignee: Aleksey Yeschenko

Need to expose the equivalent of atomic_batch_mutate (CASSANDRA-4542) to CQL3.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4545) add cql support for batchlog
[ https://issues.apache.org/jira/browse/CASSANDRA-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457252#comment-13457252 ]

Jonathan Ellis commented on CASSANDRA-4545:
---

we allow counters in normal batches? that's a bug waiting to happen since you can't replay a counter update safely...
[jira] [Commented] (CASSANDRA-4545) add cql support for batchlog
[ https://issues.apache.org/jira/browse/CASSANDRA-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457257#comment-13457257 ]

Aleksey Yeschenko commented on CASSANDRA-4545:
--

bq. we allow counters in normal batches? that's a bug waiting to happen since you can't replay a counter update safely...

We do. At least in the thrift API and cql3, most likely in cql2, too.
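The concern in this exchange is that a counter update is a delta rather than an idempotent overwrite, so replaying it (as batchlog-based atomic batches may do) double-counts. A minimal sketch of the distinction, using a hypothetical in-memory store rather than Cassandra's storage engine:

```java
import java.util.HashMap;
import java.util.Map;

public class CounterReplay {
    // An overwrite ("set") mutation is idempotent: applying it twice leaves
    // the same final state, so replay is safe.
    static void applySet(Map<String, Long> store, String key, long value) {
        store.put(key, value);
    }

    // A counter mutation is a delta: each application changes the result,
    // so replaying it is not safe.
    static void applyIncrement(Map<String, Long> store, String key, long delta) {
        store.merge(key, delta, Long::sum);
    }

    public static void main(String[] args) {
        Map<String, Long> store = new HashMap<>();

        applySet(store, "col", 7);
        applySet(store, "col", 7);              // replay: still 7
        System.out.println(store.get("col"));   // prints 7

        applyIncrement(store, "hits", 1);
        applyIncrement(store, "hits", 1);       // replay: now 2, not 1
        System.out.println(store.get("hits"));  // prints 2
    }
}
```

This is why replaying a logged batch containing counter increments can silently inflate counts, while replaying overwrite-style mutations cannot.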
[jira] [Commented] (CASSANDRA-4671) Improve removal of gcable tombstones during minor compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457262#comment-13457262 ]

rene kochen commented on CASSANDRA-4671:
---

http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/minor-compaction-and-delete-expired-column-tombstones-td7582425.html

Improve removal of gcable tombstones during minor compaction
--
Key: CASSANDRA-4671
URL: https://issues.apache.org/jira/browse/CASSANDRA-4671
Project: Cassandra
Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor

When we minor compact, we only purge a row if we know it's not present in any of the sstables that are not in the compaction set. It is however possible to have scenarios where this leads us to keep irrelevant tombstones longer than necessary (and I suspect LCS makes those scenarios a little more likely). We could however purge tombstones if we know that the non-compacted sstables don't have any info that is older than the tombstones we're about to purge (since then we know that the tombstones we'll consider can't delete data in non-compacted sstables). In other words, we should force CompactionController.shouldPurge() to return true if min_timestamp(non-compacted-overlapping-sstables) > max_timestamp(compacted-sstables). This does require us to record the min timestamp of an sstable first, though (we only record the max timestamp so far).

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
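The purge rule proposed in this ticket reduces to a single timestamp comparison. A sketch of that predicate (the method name here is hypothetical; the actual change would live in CompactionController.shouldPurge()):

```java
public class PurgeCheck {
    // If every non-compacted sstable overlapping the row only holds data
    // strictly newer than anything in the compacting set, a gcable tombstone
    // in the compacting set cannot be shadowing data outside it, so it is
    // safe to purge.
    static boolean shouldForcePurge(long minTimestampNonCompacted, long maxTimestampCompacting) {
        return minTimestampNonCompacted > maxTimestampCompacting;
    }

    public static void main(String[] args) {
        // Outside data is all newer than the compacting set: safe to purge.
        System.out.println(shouldForcePurge(200, 100)); // prints true
        // Outside data may be older than the tombstone: must keep it.
        System.out.println(shouldForcePurge(50, 100));  // prints false
    }
}
```

As the ticket notes, applying this requires sstables to track their minimum timestamp, not just the maximum.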
[jira] [Commented] (CASSANDRA-4208) ColumnFamilyOutputFormat should support writing to multiple column families
[ https://issues.apache.org/jira/browse/CASSANDRA-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457310#comment-13457310 ] Michael Kjellman commented on CASSANDRA-4208: - any additional updates on this? Robbie -- what direction did you decide to pursue? ColumnFamilyOutputFormat should support writing to multiple column families --- Key: CASSANDRA-4208 URL: https://issues.apache.org/jira/browse/CASSANDRA-4208 Project: Cassandra Issue Type: Improvement Components: Hadoop Affects Versions: 1.1.0 Reporter: Robbie Strickland Attachments: cassandra-1.1-4208.txt, cassandra-1.1-4208-v2.txt, cassandra-1.1-4208-v3.txt, trunk-4208.txt, trunk-4208-v2.txt It is not currently possible to output records to more than one column family in a single reducer. Considering that writing values to Cassandra often involves multiple column families (i.e. updating your index when you insert a new value), this seems overly restrictive. I am submitting a patch that moves the specification of column family from the job configuration to the write() call in ColumnFamilyRecordWriter. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4208) ColumnFamilyOutputFormat should support writing to multiple column families
[ https://issues.apache.org/jira/browse/CASSANDRA-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457343#comment-13457343 ]

Robbie Strickland commented on CASSANDRA-4208:
--

The attached patch works, and we have it running in production. I'm not sure why I haven't received any response since May on whether this will be included in some future release. I presume everyone is busy on other features.
[jira] [Commented] (CASSANDRA-4579) CQL queries using LIMIT sometimes missing results
[ https://issues.apache.org/jira/browse/CASSANDRA-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457346#comment-13457346 ] Pavel Yaskevich commented on CASSANDRA-4579: +1 CQL queries using LIMIT sometimes missing results - Key: CASSANDRA-4579 URL: https://issues.apache.org/jira/browse/CASSANDRA-4579 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 beta 1 Reporter: paul cannon Assignee: Sylvain Lebresne Labels: cql, cql3 Fix For: 1.2.0 beta 1 Attachments: 0001-Add-all-columns-from-a-prefix-group-before-stopping.txt, 0002-Fix-LIMIT-for-NamesQueryFilter.txt In certain conditions, CQL queries using LIMIT clauses are not being given all of the expected results (whether unset column values or missing rows). Here are the condition sets I've been able to identify: First mode: all rows are returned, but in the last row of results, all columns which are not part of the primary key receive no values, except for the first non-primary-key column. Conditions: * Table has a multi-component primary key * Table has more than one column which is not a component of the primary key * The number of results which would be returned by a query is equal to or more than the specified LIMIT Second mode: result has fewer rows than it should, lower than both the LIMIT and the actual number of matching rows. Conditions: * Table has a single-column primary key * Table has more than one column which is not a component of the primary key * The number of results which would be returned by a query is equal to or more than the specified LIMIT It would make sense to me that this would have started with CASSANDRA-4329, but bisecting indicates that this behavior started with commit 91bdf7fb4220b27e9566c6673bf5dbd14153017c, implementing CASSANDRA-3647. 
Test case for the first failure mode:

{noformat}
DROP KEYSPACE test;
CREATE KEYSPACE test WITH strategy_class = 'SimpleStrategy' AND strategy_options:replication_factor = 1;
USE test;
CREATE TABLE testcf (
    a int,
    b int,
    c int,
    d int,
    e int,
    PRIMARY KEY (a, b)
);
INSERT INTO testcf (a, b, c, d, e) VALUES (1, 11, 111, , 1);
INSERT INTO testcf (a, b, c, d, e) VALUES (2, 22, 222, , 2);
INSERT INTO testcf (a, b, c, d, e) VALUES (3, 33, 333, , 3);
INSERT INTO testcf (a, b, c, d, e) VALUES (4, 44, 444, , 4);
SELECT * FROM testcf;
SELECT * FROM testcf LIMIT 1; -- columns d and e in result row are null
SELECT * FROM testcf LIMIT 2; -- columns d and e in last result row are null
SELECT * FROM testcf LIMIT 3; -- columns d and e in last result row are null
SELECT * FROM testcf LIMIT 4; -- columns d and e in last result row are null
SELECT * FROM testcf LIMIT 5; -- results are correct (4 rows returned)
{noformat}

Test case for the second failure mode:

{noformat}
CREATE KEYSPACE test WITH strategy_class = 'SimpleStrategy' AND strategy_options:replication_factor = 1;
USE test;
CREATE TABLE testcf (
    a int primary key,
    b int,
    c int,
);
INSERT INTO testcf (a, b, c) VALUES (1, 11, 111);
INSERT INTO testcf (a, b, c) VALUES (2, 22, 222);
INSERT INTO testcf (a, b, c) VALUES (3, 33, 333);
INSERT INTO testcf (a, b, c) VALUES (4, 44, 444);
SELECT * FROM testcf;
SELECT * FROM testcf LIMIT 1; -- gives 1 row
SELECT * FROM testcf LIMIT 2; -- gives 1 row
SELECT * FROM testcf LIMIT 3; -- gives 2 rows
SELECT * FROM testcf LIMIT 4; -- gives 2 rows
SELECT * FROM testcf LIMIT 5; -- gives 3 rows
{noformat}

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4609) Add thrift transport factory impl to cassandra-cli
[ https://issues.apache.org/jira/browse/CASSANDRA-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457464#comment-13457464 ]

Jason Brown commented on CASSANDRA-4609:
---

Sorry for the delay. Ok, part of the reason there's a little bit of a dance going on here is that it wasn't clear to me that the ticket name 'Add thrift transport factory' didn't map 1-to-1 to the existing ITransportFactory interface. Hence, the submitted code may look a little contorted, trying to shoehorn that interface into a more general high-level factory for thrift transports. I'll rework this patch, as well as CASSANDRA-4608/CASSANDRA-4662.

Add thrift transport factory impl to cassandra-cli
--
Key: CASSANDRA-4609
URL: https://issues.apache.org/jira/browse/CASSANDRA-4609
Project: Cassandra
Issue Type: Sub-task
Reporter: T Jake Luciani
Assignee: Jason Brown
Fix For: 1.1.6
Attachments: 0003-CASSANDRA-4609-add-thrift-transport-factory-support-.patch

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected
[ https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457504#comment-13457504 ]

Bartłomiej Romański commented on CASSANDRA-4417:
---

Is it possible to predict how dangerous this bug could be? We are already experiencing very serious problems with CASSANDRA-4639. Our counter values suddenly became a few times higher than expected. As you can imagine, this is a disaster from the business point of view. We are already seriously thinking about going back to SQL databases :/

I wonder how (if) this bug (and possibly other counter-related bugs) can affect us. We rely heavily on counters. Can this bug possibly lead to incorrect counter values? Temporarily or permanently - will running repair fix it? How incorrect could counter values be? Losing a couple of increments immediately preceding a node failure is probably acceptable in most cases. Is it possible to lose more increments? Or end up with completely incorrect counter values as in CASSANDRA-4639? What exactly would happen after hitting this bug - should running repair fix it? Would the self-healing mechanism actually make counters consistent again? Or will we get these error messages over and over?

Sorry for writing a comment full of questions, but I've got very limited knowledge of Cassandra internals. I'll be very thankful if someone could refer to the questions above.

invalid counter shard detected
---
Key: CASSANDRA-4417
URL: https://issues.apache.org/jira/browse/CASSANDRA-4417
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.1.1
Environment: Amazon Linux
Reporter: Senthilvel Rangaswamy

Seeing errors like these:

2012-07-06_07:00:27.22662 ERROR 07:00:27,226 invalid counter shard detected; (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 13) and (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 1) differ only in count; will pick highest to self-heal; this indicates a bug or corruption generated a bad counter shard

What does it mean ?
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-4673) Compaction of HintsColumnFamily stucks
Bartłomiej Romański created CASSANDRA-4673:
--
Summary: Compaction of HintsColumnFamily stucks
Key: CASSANDRA-4673
URL: https://issues.apache.org/jira/browse/CASSANDRA-4673
Project: Cassandra
Issue Type: Bug
Affects Versions: 1.1.1
Environment: We've got a 24-node cluster with 3 virtual data centers. We've got 7 nodes with SSD drives. We are operating under very heavy read/write load. We typically see CPU usage >90% on our machines. We are using 1.1.2 - why is this version not listed in the 'Affected Version' drop-down?
Reporter: Bartłomiej Romański

On some nodes the compaction of HintsColumnFamily got stuck. Here is a typical output of 'nodetool compactionstats':

pending tasks: 1
compaction type   keyspace   column family       bytes compacted   bytes total   progress
Compaction        system     HintsColumnFamily   346205828         346909662     99.80%
Active compaction remaining time: 0h00m00s

Rebooting a node does not help. The compaction starts immediately after booting and gets stuck at the same point.

If this can be related: we are also experiencing a problem described in CASSANDRA-4639.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected
[ https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457511#comment-13457511 ]

Bartłomiej Romański commented on CASSANDRA-4417:
---

In the previous comment I wanted to point directly to CASSANDRA-4436 - I've mixed up the numbers.

One more thing: could hinted handoff possibly be related to this issue somehow? We've got a problem with it (CASSANDRA-4673) which was discovered at (more or less) the same time as our counter problems. Is there a possibility that sending a hinted handoff a few times ends up incrementing counters a few times?
[jira] [Created] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3
Aleksey Yeschenko created CASSANDRA-4674: Summary: cqlsh COPY TO and COPY FROM don't work with cql3 Key: CASSANDRA-4674 URL: https://issues.apache.org/jira/browse/CASSANDRA-4674 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.0 Reporter: Aleksey Yeschenko Assignee: Aleksey Yeschenko cqlsh COPY TO and COPY FROM don't work with cql3 due to previous cql3 changes.
[jira] [Updated] (CASSANDRA-4594) COPY TO and COPY FROM don't default to consistent ordering of columns
[ https://issues.apache.org/jira/browse/CASSANDRA-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-4594: - Fix Version/s: (was: 1.1.6) 1.2.0 COPY TO and COPY FROM don't default to consistent ordering of columns - Key: CASSANDRA-4594 URL: https://issues.apache.org/jira/browse/CASSANDRA-4594 Project: Cassandra Issue Type: Bug Environment: Happens in CQLSH 2, may or may not happen in CQLSH 3 Reporter: Tyler Patterson Assignee: paul cannon Priority: Minor Labels: cqlsh Fix For: 1.1.6 Here is the input: {code} CREATE KEYSPACE test WITH strategy_class = 'SimpleStrategy' AND strategy_options:replication_factor = 1; USE test; CREATE TABLE airplanes ( name text PRIMARY KEY, manufacturer ascii, year int, mach float ); INSERT INTO airplanes (name, manufacturer, year, mach) VALUES ('P38-Lightning', 'Lockheed', 1937, '.7'); COPY airplanes TO 'temp.cfg' WITH HEADER=true; TRUNCATE airplanes; COPY airplanes FROM 'temp.cfg' WITH HEADER=true; SELECT * FROM airplanes; {code} Here is what happens when executed. Note how it tried to import the float into the int column: {code} cqlsh:test DROP KEYSPACE test; cqlsh:test CREATE KEYSPACE test WITH strategy_class = 'SimpleStrategy' AND strategy_options:replication_factor = 1; cqlsh:test USE test; cqlsh:test cqlsh:test CREATE TABLE airplanes ( ... name text PRIMARY KEY, ... manufacturer ascii, ... year int, ... mach float ... ); cqlsh:test cqlsh:test INSERT INTO airplanes (name, manufacturer, year, mach) VALUES ('P38-Lightning', 'Lockheed', 1937, '.7'); cqlsh:test cqlsh:test COPY airplanes TO 'temp.cfg' WITH HEADER=true; 1 rows exported in 0.003 seconds. cqlsh:test TRUNCATE airplanes; cqlsh:test cqlsh:test COPY airplanes FROM 'temp.cfg' WITH HEADER=true; Bad Request: unable to make int from '0.7' Aborting import at record #0 (line 1). Previously-inserted values still present. 0 rows imported in 0.002 seconds. {code} -- This message is automatically generated by JIRA. 
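The failure mode above is positional: COPY TO emits columns in one order while COPY FROM applies values by position in another, so '0.7' is fed to the int column year. A generic Python csv illustration (not cqlsh's actual code) of resolving columns by header name instead of position:

```python
import csv
import io

# One row from the airplanes example in the report.
rows = [{"name": "P38-Lightning", "manufacturer": "Lockheed",
         "year": "1937", "mach": "0.7"}]

# COPY TO analogue: write an explicit header naming each column.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "manufacturer", "year", "mach"])
writer.writeheader()
writer.writerows(rows)

# COPY FROM analogue: map values back by header name, not by position,
# so "0.7" lands in mach (float) rather than year (int) even when the
# file's column order differs from the table's.
buf.seek(0)
imported = list(csv.DictReader(buf))
```

With HEADER=true on both sides, the header gives enough information to do this name-based mapping consistently.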
[jira] [Updated] (CASSANDRA-4594) COPY TO and COPY FROM don't default to consistent ordering of columns
[ https://issues.apache.org/jira/browse/CASSANDRA-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-4594: - Fix Version/s: (was: 1.2.0) 1.1.6
[jira] [Updated] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3
[ https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-4674: - Attachment: FIX-CQLSH-COPY.patch Labels: cqlsh Attachments: FIX-CQLSH-COPY.patch
[jira] [Commented] (CASSANDRA-4261) [patch] Support consistency-latency prediction in nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457525#comment-13457525 ] Peter Bailis commented on CASSANDRA-4261: - Jonathan, Thanks for the rebase! Looking at the updated code, we can still log the start of the operation in MessagingService.sendRR() but move the reply timestamp logging from the ResponseVerbHandler to MessagingService.receive(). This won't be too bad, and we can filter the MessageIn instances passed to PBSPredictor by both the verb type and/or by id. Does that make sense? Also, re: CASSANDRA-4009, it should be possible to use this code, but there are two issues: 1.) We need finer-granularity tracing than what is currently implemented. We need to know how long it takes to hit a given node and not just the end-to-end round-trip latencies. 2.) Using a histogram instead of keeping around the actual latencies will reduce the fidelity of the predictions. The impact of this depends on the bucket size and distribution. Let us know what you think! [patch] Support consistency-latency prediction in nodetool -- Key: CASSANDRA-4261 URL: https://issues.apache.org/jira/browse/CASSANDRA-4261 Project: Cassandra Issue Type: New Feature Components: Tools Affects Versions: 1.2.0 beta 1 Reporter: Peter Bailis Attachments: 4261-v4.txt, demo-pbs-v3.sh, pbs-nodetool-v3.patch h3. Introduction Cassandra supports a variety of replication configurations: ReplicationFactor is set per-ColumnFamily and ConsistencyLevel is set per-request. Setting {{ConsistencyLevel}} to {{QUORUM}} for reads and writes ensures strong consistency, but {{QUORUM}} is often slower than {{ONE}}, {{TWO}}, or {{THREE}}. What should users choose? This patch provides a latency-consistency analysis within {{nodetool}}. Users can accurately predict Cassandra's behavior in their production environments without interfering with performance. 
What's the probability that we'll read a write t seconds after it completes? What about reading one of the last k writes? This patch provides answers via {{nodetool predictconsistency}}: {{nodetool predictconsistency ReplicationFactor TimeAfterWrite Versions}} \\ \\ {code:title=Example output|borderStyle=solid} //N == ReplicationFactor //R == read ConsistencyLevel //W == write ConsistencyLevel user@test:$ nodetool predictconsistency 3 100 1 Performing consistency prediction 100ms after a given write, with maximum version staleness of k=1 N=3, R=1, W=1 Probability of consistent reads: 0.678900 Average read latency: 5.377900ms (99.900th %ile 40ms) Average write latency: 36.971298ms (99.900th %ile 294ms) N=3, R=1, W=2 Probability of consistent reads: 0.791600 Average read latency: 5.372500ms (99.900th %ile 39ms) Average write latency: 303.630890ms (99.900th %ile 357ms) N=3, R=1, W=3 Probability of consistent reads: 1.00 Average read latency: 5.426600ms (99.900th %ile 42ms) Average write latency: 1382.650879ms (99.900th %ile 629ms) N=3, R=2, W=1 Probability of consistent reads: 0.915800 Average read latency: 11.091000ms (99.900th %ile 348ms) Average write latency: 42.663101ms (99.900th %ile 284ms) N=3, R=2, W=2 Probability of consistent reads: 1.00 Average read latency: 10.606800ms (99.900th %ile 263ms) Average write latency: 310.117615ms (99.900th %ile 335ms) N=3, R=3, W=1 Probability of consistent reads: 1.00 Average read latency: 52.657501ms (99.900th %ile 565ms) Average write latency: 39.949799ms (99.900th %ile 237ms) {code} h3. Demo Here's an example scenario you can run using [ccm|https://github.com/pcmanus/ccm]. The prediction is fast: {code:borderStyle=solid} cd cassandra-source-dir with patch applied ant ccm create consistencytest --cassandra-dir=. 
ccm populate -n 5 ccm start # if start fails, you might need to initialize more loopback interfaces # e.g., sudo ifconfig lo0 alias 127.0.0.2 # use stress to get some sample latency data tools/bin/stress -d 127.0.0.1 -l 3 -n 1 -o insert tools/bin/stress -d 127.0.0.1 -l 3 -n 1 -o read bin/nodetool -h 127.0.0.1 -p 7100 predictconsistency 3 100 1 {code} h3. What and Why We've implemented [Probabilistically Bounded Staleness|http://pbs.cs.berkeley.edu/#demo], a new technique for predicting consistency-latency trade-offs within Cassandra. Our [paper|http://arxiv.org/pdf/1204.6082.pdf] will appear in [VLDB 2012|http://www.vldb2012.org/], and, in it, we've used PBS to profile a range of Dynamo-style data store deployments at places like LinkedIn and Yammer in addition to profiling our own Cassandra deployments. In our experience, prediction is both accurate and much more lightweight than profiling and manually testing each possible
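The prediction itself amounts to a Monte Carlo simulation over per-message latencies: when did the write reach each replica, when did acks return, and which replicas answer a read issued t ms later. A rough, self-contained sketch of that idea (the real implementation is the Java PBSPredictor; the exponential latency distribution here is a placeholder, not what the patch samples):

```python
import random

def predict_consistency(n, r, w, t, trials=20000, seed=42):
    """Estimate P(a read at CL r returns the value of a write that
    completed at CL w, issued t ms after the write) over n replicas.
    Latencies are drawn from an illustrative exponential (~10 ms mean);
    the real predictor uses measured per-message latencies instead."""
    rng = random.Random(seed)
    lat = lambda: rng.expovariate(1.0 / 10.0)
    consistent = 0
    for _ in range(trials):
        W = [lat() for _ in range(n)]   # write reaches replica i at W[i]
        A = [lat() for _ in range(n)]   # ack travels back in A[i]
        # The write "completes" when the w-th fastest ack arrives.
        commit = sorted(W[i] + A[i] for i in range(n))[w - 1]
        read_start = commit + t         # client reads t ms later
        R = [lat() for _ in range(n)]   # read request to replica i
        S = [lat() for _ in range(n)]   # response travels back
        # The r fastest round trips answer the read.
        responders = sorted(range(n), key=lambda i: R[i] + S[i])[:r]
        # Consistent if any responder already held the write when the
        # read request arrived at it.
        if any(W[i] <= read_start + R[i] for i in responders):
            consistent += 1
    return consistent / trials
```

Note that with r = n or w = n the estimate is exactly 1.0, matching the R=3 and W=3 rows in the example output above.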
[jira] [Created] (CASSANDRA-4675) NPE in NTS when using LQ against a node (DC) that doesn't have replica
Jackson Chung created CASSANDRA-4675: Summary: NPE in NTS when using LQ against a node (DC) that doesn't have replica Key: CASSANDRA-4675 URL: https://issues.apache.org/jira/browse/CASSANDRA-4675 Project: Cassandra Issue Type: Bug Reporter: Jackson Chung Priority: Minor In a NetworkTopologyStrategy where there are 2 DCs: {panel} Address DC RackStatus State LoadOwns Token 85070591730234615865843651857942052864 127.0.0.1 dc1 r1 Up Normal 115.78 KB 50.00% 0 127.0.0.2 dc2 r1 Up Normal 129.3 KB50.00% 85070591730234615865843651857942052864 {panel} I have a KS that has a replica in 1 of the DCs (dc1): {panel} [default@unknown] describe Keyspace3; Keyspace: Keyspace3: Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy Durable Writes: true Options: [dc1:1] Column Families: ColumnFamily: testcf {panel} But if I connect to a node in dc2, using LOCAL_QUORUM, I get an NPE in the Cassandra node's log: {panel} [default@unknown] consistencylevel as LOCAL_QUORUM; Consistency level is set to 'LOCAL_QUORUM'.
[default@unknown] use Keyspace3; Authenticated to keyspace: Keyspace3 [default@Keyspace3] get testcf[utf8('k1')][utf8('c1')]; Internal error processing get org.apache.thrift.TApplicationException: Internal error processing get at org.apache.thrift.TApplicationException.read(TApplicationException.java:108) at org.apache.cassandra.thrift.Cassandra$Client.recv_get(Cassandra.java:511) at org.apache.cassandra.thrift.Cassandra$Client.get(Cassandra.java:492) at org.apache.cassandra.cli.CliClient.executeGet(CliClient.java:648) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:209) at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:220) at org.apache.cassandra.cli.CliMain.main(CliMain.java:348) {panel} node2's log: {panel} ERROR [Thrift:3] 2012-09-17 18:15:16,868 Cassandra.java (line 2999) Internal error processing get java.lang.NullPointerException at org.apache.cassandra.locator.NetworkTopologyStrategy.getReplicationFactor(NetworkTopologyStrategy.java:142) at org.apache.cassandra.service.DatacenterReadCallback.determineBlockFor(DatacenterReadCallback.java:90) at org.apache.cassandra.service.ReadCallback.init(ReadCallback.java:67) at org.apache.cassandra.service.DatacenterReadCallback.init(DatacenterReadCallback.java:63) at org.apache.cassandra.service.StorageProxy.getReadCallback(StorageProxy.java:775) at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:609) at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:564) at org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:128) at org.apache.cassandra.thrift.CassandraServer.internal_get(CassandraServer.java:383) at org.apache.cassandra.thrift.CassandraServer.get(CassandraServer.java:401) at org.apache.cassandra.thrift.Cassandra$Processor$get.process(Cassandra.java:2989) at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889) at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) {panel} I could workaround it by adding dc2:0 to the option: {panel} [default@Keyspace3] describe Keyspace3; Keyspace: Keyspace3: Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy Durable Writes: true Options: [dc2:0, dc1:1] Column Families: ColumnFamily: testcf {panel} Now you get UA: {panel} [default@Keyspace3] get testcf[utf8('k1')][utf8('c1')]; null UnavailableException() at org.apache.cassandra.thrift.Cassandra$get_result.read(Cassandra.java:6506) at org.apache.cassandra.thrift.Cassandra$Client.recv_get(Cassandra.java:519) at org.apache.cassandra.thrift.Cassandra$Client.get(Cassandra.java:492) at org.apache.cassandra.cli.CliClient.executeGet(CliClient.java:648) at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:209)
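The stack trace points at NetworkTopologyStrategy.getReplicationFactor looking up a DC that has no entry in the strategy options. A hedged, language-neutral sketch in Python (the real fix belongs in the Java class; names here are illustrative) of a guard that makes an unconfigured DC behave like the dc2:0 workaround:

```python
class NetworkTopologyStrategySketch:
    """Illustrative model of NTS replication options, not Cassandra code."""

    def __init__(self, dc_replication):
        # e.g. {"dc1": 1} -- dc2 intentionally absent, as in the report.
        self.dc_replication = dict(dc_replication)

    def get_replication_factor_unsafe(self, dc):
        # Analogue of the buggy path: an unknown DC blows up (the
        # KeyError here plays the role of the Java NPE).
        return self.dc_replication[dc]

    def get_replication_factor(self, dc):
        # Guarded path: treat a DC with no configured replicas as RF 0,
        # mirroring the "dc2:0" workaround, so a LOCAL_QUORUM read in
        # that DC surfaces UnavailableException instead of an NPE.
        return self.dc_replication.get(dc, 0)

nts = NetworkTopologySketch = NetworkTopologyStrategySketch({"dc1": 1})
nts.get_replication_factor("dc2")  # 0 instead of an error
```

With RF 0, determineBlockFor in DatacenterReadCallback can conclude the request is unsatisfiable rather than dereferencing a null replication factor.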
[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected
[ https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457528#comment-13457528 ] Bartłomiej Romański commented on CASSANDRA-4417: And the last comment: could this be related to CASSANDRA-4071? If I understand the description correctly, any topology change (adding a node, moving a node) when the counter is spread across more than one sstable can result in the invalid counter shard detected error message during reads. Am I right?
[jira] [Reopened] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko reopened CASSANDRA-4669: -- This patch breaks cqlsh history loading. Also, the original issue isn't an issue at all - readline handles empty history files perfectly well. Reverting the commit should fix the new issue. Empty .cqlsh_history file causes cqlsh to crash on startup. --- Key: CASSANDRA-4669 URL: https://issues.apache.org/jira/browse/CASSANDRA-4669 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.1.5 Environment: Python 2.7.1 on Mac OSX Reporter: Brian ONeill Assignee: Brian ONeill Priority: Minor Fix For: 1.1.6 Attachments: trunk-4669.txt Not sure how I got it, but I ended up with an empty .cqlsh_history file. In that state, when starting cqlsh, you end up with: bone@zen:~/dev/boneill42/cassandra- bin/cqlsh Traceback (most recent call last): File bin/cqlsh, line 2588, in module main(*read_options(sys.argv[1:], os.environ)) File bin/cqlsh, line 2543, in main readline.read_history_file(HISTORY) IOError: [Errno 22] Invalid argument Its a simple fix to check for a non-empty history file. I'll attach the patch.
[2/3] git commit: Revert cqlsh: check for non-empty history file before loading
Revert cqlsh: check for non-empty history file before loading This reverts commit 6a8ba07313e2fad8c5a6cf7566bbd77321274c60. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d600f33 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d600f33 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d600f33 Branch: refs/heads/trunk Commit: 4d600f338186e2e9c598ccff3e4c6d21327c20c2 Parents: 6ad7d45 Author: Brandon Williams brandonwilli...@apache.org Authored: Mon Sep 17 20:44:02 2012 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Mon Sep 17 20:44:12 2012 -0500 -- bin/cqlsh |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d600f33/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 5f99e45..3bef142 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -2690,7 +2690,7 @@ def setup_cqlruleset(cqlmodule): def main(options, hostname, port): setup_cqlruleset(options.cqlmodule) -if os.path.exists(HISTORY) and readline is not None and readline.get_history_length() > 0: +if os.path.exists(HISTORY) and readline is not None: readline.read_history_file(HISTORY) delims = readline.get_completer_delims() delims.replace(', )
[3/3] git commit: Revert cqlsh: check for non-empty history file before loading
Revert cqlsh: check for non-empty history file before loading This reverts commit 6a8ba07313e2fad8c5a6cf7566bbd77321274c60. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d600f33 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d600f33 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d600f33 Branch: refs/heads/cassandra-1.1 Commit: 4d600f338186e2e9c598ccff3e4c6d21327c20c2 Parents: 6ad7d45 Author: Brandon Williams brandonwilli...@apache.org Authored: Mon Sep 17 20:44:02 2012 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Mon Sep 17 20:44:12 2012 -0500 -- bin/cqlsh |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d600f33/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 5f99e45..3bef142 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -2690,7 +2690,7 @@ def setup_cqlruleset(cqlmodule): def main(options, hostname, port): setup_cqlruleset(options.cqlmodule) -if os.path.exists(HISTORY) and readline is not None and readline.get_history_length() > 0: +if os.path.exists(HISTORY) and readline is not None: readline.read_history_file(HISTORY) delims = readline.get_completer_delims() delims.replace(', )
[1/3] git commit: Merge branch 'cassandra-1.1' into trunk
Updated Branches: refs/heads/cassandra-1.1 6ad7d45ad - 4d600f338 refs/heads/trunk 0666e1952 - 1f3990f12 Merge branch 'cassandra-1.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1f3990f1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1f3990f1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1f3990f1 Branch: refs/heads/trunk Commit: 1f3990f122cc77c3f52f72189a7430119fc5125e Parents: 0666e19 4d600f3 Author: Brandon Williams brandonwilli...@apache.org Authored: Mon Sep 17 20:44:20 2012 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Mon Sep 17 20:44:20 2012 -0500 -- bin/cqlsh |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1f3990f1/bin/cqlsh --
[jira] [Resolved] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams resolved CASSANDRA-4669. - Resolution: Fixed Reverted.
[jira] [Commented] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457538#comment-13457538 ] Brian ONeill commented on CASSANDRA-4669: - I trust you guys, but you may just want to double-check that this isn't an issue. I can easily reproduce the problem. Here is a log. You can see it working initially; then I truncate the file, and no joy. bone@zen:~/dev/boneill42/cassandra- bin/cqlsh Connected to Test Cluster at localhost:9160. [cqlsh 2.2.0 | Cassandra 1.1.5 | CQL spec 3.0.0 | Thrift protocol 19.32.0] Use HELP for help. cqlsh quit bone@zen:~/dev/boneill42/cassandra- rm -fr ~/.cqlsh_history bone@zen:~/dev/boneill42/cassandra- touch ~/.cqlsh_history bone@zen:~/dev/boneill42/cassandra- bin/cqlsh Traceback (most recent call last): File bin/cqlsh, line 2588, in module main(*read_options(sys.argv[1:], os.environ)) File bin/cqlsh, line 2543, in main readline.read_history_file(HISTORY) IOError: [Errno 22] Invalid argument bone@zen:~/dev/boneill42/cassandra-
[jira] [Commented] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457540#comment-13457540 ] Brandon Williams commented on CASSANDRA-4669: - Maybe we should just catch any error from loading the history, issue a warning, and move along.
[jira] [Commented] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457547#comment-13457547 ] Aleksey Yeschenko commented on CASSANDRA-4669: -- Brian, what are your python/readline versions and OS? {quote} ➤ bin/cqlsh Connected to Test Cluster at localhost:9160. [cqlsh 2.2.0 | Cassandra unknown | CQL spec 3.0.0 | Thrift protocol 19.34.0] Use HELP for help. cqlsh quit ➤ rm -fr ~/.cqlsh_history ➤ touch ~/.cqlsh_history ➤ bin/cqlsh Connected to Test Cluster at localhost:9160. [cqlsh 2.2.0 | Cassandra unknown | CQL spec 3.0.0 | Thrift protocol 19.34.0] Use HELP for help. cqlsh {quote} Also, the shell will write an empty .cqlsh_history when you open it for the first time and close it without entering any commands, and it will open just fine next time. Brandon: or do that, yes. Then we can get rid of the os.path.exists(HISTORY) check.
[jira] [Commented] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457553#comment-13457553 ] Brian ONeill commented on CASSANDRA-4669: - Not sure how to tell the exact version of readline. (I'm a Java head =) This is on OSX. Looks like the version is the one provided by Apple in the Python install. Here is what I've got: bone@zen:~/dev/boneill42/cassandra- python Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin Type help, copyright, credits or license for more information. import readline readline module 'readline' from '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/readline.so' FWIW, this can be a really annoying error unless you can read Python. It doesn't get far enough to recreate the history file, so the problem persists. Your average user would be stuck. I first reinstalled Cassandra because I thought something was corrupt. Fortunately, Python isn't compiled. =)
[jira] [Updated] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-4669: - Attachment: fix-cqlsh-history.patch
[jira] [Commented] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457559#comment-13457559 ]

Aleksey Yeschenko commented on CASSANDRA-4669:
----------------------------------------------

Ah, OS X. Maybe it really does fail on OS X; it works on Ubuntu, though. Attached a patch that catches IOError when reading/writing from/to the history file.
[jira] [Commented] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457560#comment-13457560 ]

Brian ONeill commented on CASSANDRA-4669:
-----------------------------------------

Beautiful. Thanks, guys.
[1/3] git commit: Merge branch 'cassandra-1.1' into trunk
Updated Branches:
  refs/heads/cassandra-1.1 4d600f338 -> e7f28f303
  refs/heads/trunk 1f3990f12 -> 12691ae3a

Merge branch 'cassandra-1.1' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12691ae3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12691ae3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12691ae3

Branch: refs/heads/trunk
Commit: 12691ae3a677ffde37866c1f72648f2ed690b938
Parents: 1f3990f e7f28f3
Author: Brandon Williams <brandonwilli...@apache.org>
Authored: Mon Sep 17 21:42:25 2012 -0500
Committer: Brandon Williams <brandonwilli...@apache.org>
Committed: Mon Sep 17 21:42:25 2012 -0500
----------------------------------------------------------------------
 bin/cqlsh | 26 ++
 1 files changed, 18 insertions(+), 8 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12691ae3/bin/cqlsh
----------------------------------------------------------------------
[2/3] git commit: cqlsh: catch IOError on history loading Patch by Aleksey Yeschenko, reviewed by brandonwilliams for CASSANDRA-4669
cqlsh: catch IOError on history loading

Patch by Aleksey Yeschenko, reviewed by brandonwilliams for CASSANDRA-4669

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7f28f30
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7f28f30
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7f28f30

Branch: refs/heads/trunk
Commit: e7f28f30320bae4faa1092567d7b9b46f8947842
Parents: 4d600f3
Author: Brandon Williams <brandonwilli...@apache.org>
Authored: Mon Sep 17 21:41:30 2012 -0500
Committer: Brandon Williams <brandonwilli...@apache.org>
Committed: Mon Sep 17 21:41:30 2012 -0500
----------------------------------------------------------------------
 bin/cqlsh | 26 ++
 1 files changed, 18 insertions(+), 8 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7f28f30/bin/cqlsh
----------------------------------------------------------------------
diff --git a/bin/cqlsh b/bin/cqlsh
index 3bef142..4e93db7 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -2687,16 +2687,28 @@ def setup_cqlruleset(cqlmodule):
         cqlruleset.completer_for(rulename, termname)(func)
     cqlruleset.commands_end_with_newline.update(my_commands_ending_with_newline)
 
-def main(options, hostname, port):
-    setup_cqlruleset(options.cqlmodule)
-
-    if os.path.exists(HISTORY) and readline is not None:
-        readline.read_history_file(HISTORY)
+def init_history():
+    if readline is not None:
+        try:
+            readline.read_history_file(HISTORY)
+        except IOError:
+            pass
         delims = readline.get_completer_delims()
         delims.replace("'", "")
         delims += '.'
         readline.set_completer_delims(delims)
 
+def save_history():
+    if readline is not None:
+        try:
+            readline.write_history_file(HISTORY)
+        except IOError:
+            pass
+
+def main(options, hostname, port):
+    setup_cqlruleset(options.cqlmodule)
+    init_history()
+
     if options.file is None:
         stdin = None
     else:
@@ -2731,9 +2743,7 @@ def main(options, hostname, port):
             shell.debug = True
 
     shell.cmdloop()
-
-    if readline is not None:
-        readline.write_history_file(HISTORY)
+    save_history()
 
 if __name__ == '__main__':
     main(*read_options(sys.argv[1:], os.environ))
[jira] [Closed] (CASSANDRA-4669) Empty .cqlsh_history file causes cqlsh to crash on startup.
[ https://issues.apache.org/jira/browse/CASSANDRA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams closed CASSANDRA-4669.
---------------------------------------

Committed.
[jira] [Commented] (CASSANDRA-2338) C* consistency level needs to be pluggable
[ https://issues.apache.org/jira/browse/CASSANDRA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457563#comment-13457563 ]

David Strauss commented on CASSANDRA-2338:
------------------------------------------

Just chiming in with another use case: first data found. We have a large CF indexed by the hash of what's in the row, so any non-empty result is guaranteed to be the same. The common case is for all replicas to be up to date, so CL.ONE is almost ideal. But we also write with CL.ONE, which means we only have eventually consistent read-after-write. A pluggable CL would allow something more ideal. The ideal would be:

1. Send the read to all replicas.
2. On the first non-NotFoundException response, we're done.
3. If all replicas return NotFoundException, then the row truly doesn't exist.

CL.ONE just goes with the NotFoundException if it's the first response; CL.ALL waits for all responses, even if the first one is all we need. This may be useful in reverse, too:

1. Send the read to all replicas.
2. On the first NotFoundException response, we're done.
3. If all replicas return data, use the one with the latest timestamp.

I also second Peter's request for controlling to what extent requests get sent to all replicas on reads. In our case, it's usually okay to wait a bit longer but only put I/O load on one out of all the replica boxes. This maps well to the algorithms above in higher-latency scenarios.

C* consistency level needs to be pluggable
------------------------------------------

                Key: CASSANDRA-2338
                URL: https://issues.apache.org/jira/browse/CASSANDRA-2338
            Project: Cassandra
         Issue Type: New Feature
           Reporter: Matthew F. Dennis
           Priority: Minor

For cases where people want to run C* across multiple DCs for disaster recovery, et cetera, where normal operations only happen in the first DC (e.g. no writes/reads happen in the remote DC under normal operation), neither LOCAL_QUORUM nor EACH_QUORUM really suffices.

Consider the case with an RF of DC1:3, DC2:2. LOCAL_QUORUM doesn't provide any guarantee that data is in the remote DC. EACH_QUORUM requires that both nodes in the remote DC are up. It would be useful in some situations to be able to specify a strategy where LOCAL_QUORUM is used for the local DC and at least one node in a remote DC (and/or at least one in *each* remote DC).
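The "first data found" read described above can be sketched outside Cassandra. This is a hedged illustration, not Cassandra code: `replica_read` and `NotFound` are hypothetical stand-ins for a per-replica read and its miss signal.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

class NotFound(Exception):
    """Stand-in for a replica's NotFoundException."""

def read_first_found(replicas, replica_read):
    """Send the read to every replica in parallel; return the first
    non-miss response. Only if *all* replicas miss is NotFound raised."""
    misses = 0
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(replica_read, r) for r in replicas]
        for fut in as_completed(futures):
            try:
                return fut.result()   # first real value wins
            except NotFound:
                misses += 1           # keep waiting for the others
    raise NotFound("all %d replicas missed" % misses)
```

Unlike CL.ONE, a miss from the fastest replica doesn't end the read; unlike CL.ALL, the first hit does.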
[jira] [Commented] (CASSANDRA-4594) COPY TO and COPY FROM don't default to consistent ordering of columns
[ https://issues.apache.org/jira/browse/CASSANDRA-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457594#comment-13457594 ]

paul cannon commented on CASSANDRA-4594:
----------------------------------------

Ah, sorry, I didn't make it clear on 4491 that the github branch had multiple commits from off of trunk. It looks like you only cherry-picked the last one. I'll make a note there.

COPY TO and COPY FROM don't default to consistent ordering of columns
---------------------------------------------------------------------

                Key: CASSANDRA-4594
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4594
            Project: Cassandra
         Issue Type: Bug
        Environment: Happens in CQLSH 2, may or may not happen in CQLSH 3
           Reporter: Tyler Patterson
           Assignee: paul cannon
           Priority: Minor
             Labels: cqlsh
            Fix For: 1.1.6

Here is the input:
{code}
CREATE KEYSPACE test WITH strategy_class = 'SimpleStrategy'
  AND strategy_options:replication_factor = 1;
USE test;
CREATE TABLE airplanes (
    name text PRIMARY KEY,
    manufacturer ascii,
    year int,
    mach float
);
INSERT INTO airplanes (name, manufacturer, year, mach) VALUES ('P38-Lightning', 'Lockheed', 1937, '.7');
COPY airplanes TO 'temp.cfg' WITH HEADER=true;
TRUNCATE airplanes;
COPY airplanes FROM 'temp.cfg' WITH HEADER=true;
SELECT * FROM airplanes;
{code}
Here is what happens when executed. Note how it tried to import the float into the int column:
{code}
cqlsh:test> DROP KEYSPACE test;
cqlsh:test> CREATE KEYSPACE test WITH strategy_class = 'SimpleStrategy'
        ...   AND strategy_options:replication_factor = 1;
cqlsh:test> USE test;
cqlsh:test>
cqlsh:test> CREATE TABLE airplanes (
        ...     name text PRIMARY KEY,
        ...     manufacturer ascii,
        ...     year int,
        ...     mach float
        ... );
cqlsh:test>
cqlsh:test> INSERT INTO airplanes (name, manufacturer, year, mach) VALUES ('P38-Lightning', 'Lockheed', 1937, '.7');
cqlsh:test>
cqlsh:test> COPY airplanes TO 'temp.cfg' WITH HEADER=true;
1 rows exported in 0.003 seconds.
cqlsh:test> TRUNCATE airplanes;
cqlsh:test>
cqlsh:test> COPY airplanes FROM 'temp.cfg' WITH HEADER=true;
Bad Request: unable to make int from '0.7'
Aborting import at record #0 (line 1).  Previously-inserted values still present.
0 rows imported in 0.002 seconds.
{code}
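The failure above happens when export and import disagree on column order, so the float lands in the int column. A hedged sketch of the underlying idea (not the cqlsh implementation): when a header row is present, drive the column mapping from it by name rather than by position.

```python
import csv
import io

def roundtrip(rows, columns):
    """Export dict rows with a header in a fixed column order, then
    re-import by mapping each value back to its column via the header
    name, so the order survives the round trip regardless of layout."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    writer.writerows(rows)
    buf.seek(0)
    # DictReader keys every value by the header name, not the position
    return list(csv.DictReader(buf))
```

With name-keyed mapping, '1937' always reaches `year` and '0.7' always reaches `mach`, whatever order the file stores them in.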
[jira] [Reopened] (CASSANDRA-4491) cqlsh needs to use system.local instead of system.Versions
[ https://issues.apache.org/jira/browse/CASSANDRA-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

paul cannon reopened CASSANDRA-4491:
------------------------------------

Sorry, my fault for not being clear: this change involved multiple commits. Three, in this case. Only the last one got in right.

cqlsh needs to use system.local instead of system.Versions
----------------------------------------------------------

                Key: CASSANDRA-4491
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4491
            Project: Cassandra
         Issue Type: Bug
         Components: Tools
   Affects Versions: 1.2.0 beta 1
           Reporter: paul cannon
           Assignee: paul cannon
           Priority: Minor
             Labels: cqlsh
            Fix For: 1.2.0 beta 1

Apparently the system.Versions table was removed as part of CASSANDRA-4018. cqlsh in 1.2 should use system.local preferentially, and fall back on system.Versions to keep backwards compatibility with older C*. Also changed in 4018: all the system.schema_* CFs now use columns named keyspace_name, columnfamily_name, and column_name instead of keyspace, columnfamily, and column. While we're at it, let's update the cql3 table structure parsing and the DESCRIBE command for the recent Cassandra changes too.
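The prefer-then-fall-back lookup the ticket asks for can be sketched in a few lines. This is an illustrative assumption, not cqlsh's code: `execute` here is a hypothetical stand-in for whatever query function the shell uses.

```python
def get_version_info(execute):
    """Prefer the 1.2+ system.local table; if the query fails (e.g. the
    table doesn't exist on an older cluster), fall back to the legacy
    system.Versions table for backwards compatibility."""
    try:
        return execute("SELECT * FROM system.local")
    except Exception:
        # pre-1.2 Cassandra: system.local isn't there yet
        return execute("SELECT * FROM system.Versions")
```

The same shape works for the renamed system.schema_* columns: try the new name first, then the old one.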
[jira] [Updated] (CASSANDRA-3841) long-test timing out
[ https://issues.apache.org/jira/browse/CASSANDRA-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Brown updated CASSANDRA-3841:
-----------------------------------

    Attachment: 0001-CASSANDRA-3841-long-test-timing-out.patch

I was able to reproduce the originally reported error - it was just the junit test timing out. The previous time limit was 300000 ms (5 minutes), and I increased it to 600000 ms (10 minutes). However, I was not able to reproduce Jonathan's reported problems, even after executing a run of all the individual tests:

for i in LongTableTest MeteredFlusherTest LongCompactionSpeedTest LongBloomFilterTest LongLegacyBloomFilterTest; do ant clean long-test -Dtest.name=$i; done

long-test timing out
--------------------

                Key: CASSANDRA-3841
                URL: https://issues.apache.org/jira/browse/CASSANDRA-3841
            Project: Cassandra
         Issue Type: Bug
         Components: Tests
   Affects Versions: 1.1.0
           Reporter: Michael Allen
           Assignee: Jason Brown
           Priority: Minor
            Fix For: 1.1.6
        Attachments: 0001-CASSANDRA-3841-long-test-timing-out.patch

[junit] -------------------------------------------------------
[junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionSpeedTest
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
[junit]
[junit] Testcase: org.apache.cassandra.db.compaction.LongCompactionSpeedTest:BeforeFirstTest: Caused an ERROR
[junit] Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
[junit] junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
[junit]
[junit] Test org.apache.cassandra.db.compaction.LongCompactionSpeedTest FAILED (timeout)
[junit] Testsuite: org.apache.cassandra.utils.LongBloomFilterTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 64.536 sec
[junit]
[junit] Testsuite: org.apache.cassandra.utils.LongLegacyBloomFilterTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 41.104 sec
[junit]

BUILD FAILED
/Users/mallen/dstax/repos/git/cassandra/build.xml:1113: The following error occurred while executing this line:
/Users/mallen/dstax/repos/git/cassandra/build.xml:1036: Some long test(s) failed.

Total time: 63 minutes 9 seconds