[jira] [Commented] (CASSANDRA-5651) Custom authentication plugin should not need to prepopulate users in system_auth.users column family
[ https://issues.apache.org/jira/browse/CASSANDRA-5651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686487#comment-13686487 ] Aleksey Yeschenko commented on CASSANDRA-5651: -- This is done the way it's done for two reasons: 1. User existence validation. We don't want someone to accidentally grant/revoke/make superuser a non-existent user, silently, then have that user created later and have these accidental permissions. We chose to keep the registry in Cassandra itself because there are cases where an authenticator itself cannot answer the question (Auth.isExistingUser()) easily (with Kerberos, for example). 2. Superuser status management. So that every implementation doesn't have to reinvent the wheel, Cassandra manages it itself. So it's not there just for authentication purposes; it bridges different authenticator/authorizer implementations, too. So it's not as simple as adding another boolean method similar to IAuthenticator.requireAuthentication() so that a custom authentication plugin can skip this isExistingUser check if needed. Custom authentication plugin should not need to prepopulate users in system_auth.users column family Key: CASSANDRA-5651 URL: https://issues.apache.org/jira/browse/CASSANDRA-5651 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.2.5 Environment: RHEL 6.3, jdk 1.7 Reporter: Bao Le The current implementation in ClientState.login makes a call to Auth.isExistingUser(user.getName()) if the AuthenticatedUser is not Anonymous. This involves querying the system_auth.users column family. Our custom authentication plugin does not need to pre-create and store users, and it worked fine under 1.1.5. On 1.2.5, however, we run into an authentication problem because of this.
I feel we should either do this isExistingUser check inside IAuthenticator.authenticate, or expose another boolean method similar to IAuthenticator.requireAuthentication() so that a custom authentication plugin can skip this isExistingUser check if needed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
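For illustration, the reporter's second option could be sketched roughly as follows. This is purely hypothetical: neither this interface name nor a requireExistingUser() method exists in Cassandra's actual IAuthenticator API, and Aleksey's comment above explains why the real check is not a simple opt-out.

```java
// Hypothetical sketch only, mirroring the reporter's proposal of an opt-out
// flag alongside requireAuthentication(). Not part of Cassandra's API.
interface PluggableAuthenticator
{
    boolean requireAuthentication();

    // If this returned false, ClientState.login could skip the
    // Auth.isExistingUser() lookup, letting an externally managed user store
    // authenticate users never registered in system_auth.users.
    default boolean requireExistingUser()
    {
        return true; // conservative default: keep today's behavior
    }

    static void main(String[] args)
    {
        // A lambda suffices: requireAuthentication() is the single abstract method.
        PluggableAuthenticator auth = () -> true;
        System.out.println("requireExistingUser=" + auth.requireExistingUser());
    }
}
```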
[jira] [Updated] (CASSANDRA-5639) Update CREATE CUSTOM INDEX syntax to match new CREATE INDEX syntax
[ https://issues.apache.org/jira/browse/CASSANDRA-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-5639: - Attachment: 5639.txt Update CREATE CUSTOM INDEX syntax to match new CREATE INDEX syntax -- Key: CASSANDRA-5639 URL: https://issues.apache.org/jira/browse/CASSANDRA-5639 Project: Cassandra Issue Type: Improvement Reporter: Aleksey Yeschenko Assignee: Aleksey Yeschenko Fix For: 1.2.6 Attachments: 5639.txt CASSANDRA-5484 introduced CREATE CUSTOM INDEX syntax for custom 2i and CASSANDRA-5576 will add CQL3 support for creating/dropping triggers (CREATE TRIGGER name ON table USING classname). For consistency's sake, CREATE CUSTOM INDEX should be updated to also use the 'USING' keyword, e.g. CREATE CUSTOM INDEX ON table(column) USING classname. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5604) Vnodes decrease Hadoop performances cause it creates too many small splits
[ https://issues.apache.org/jira/browse/CASSANDRA-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686571#comment-13686571 ] Cyril Scetbon commented on CASSANDRA-5604: -- I've run some tests with a program that took 8 minutes before [CASSANDRA-5544|https://issues.apache.org/jira/browse/CASSANDRA-5544] and 56 minutes after, to complete only 25% of the task! Vnodes decrease Hadoop performances cause it creates too many small splits -- Key: CASSANDRA-5604 URL: https://issues.apache.org/jira/browse/CASSANDRA-5604 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.4 Environment: Linux Ubuntu 12.04 LTS x86_64 Reporter: Cyril Scetbon Priority: Trivial -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5652) Suppress custom exceptions thru jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686607#comment-13686607 ] Sylvain Lebresne commented on CASSANDRA-5652: - lgtm, though provided jconsole can handle it, I'd replace the exception with something like: {noformat} new RuntimeException("Error starting native transport: " + e.getMessage(), e); {noformat} so we don't lose the original stack trace (but if it's still too hard for JMX to handle, let's just ship it as in your patch). Suppress custom exceptions thru jmx --- Key: CASSANDRA-5652 URL: https://issues.apache.org/jira/browse/CASSANDRA-5652 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.5 Reporter: Dave Brosius Assignee: Dave Brosius Priority: Trivial Fix For: 1.2.6 Attachments: 5652.txt startNativeTransport can send back org.jboss.netty.channel.ChannelException, which causes jconsole to puke with a bad message such as: Problem invoking startNativeTransport: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is: java.io.EOFException. Convert to RuntimeException so you get something like: org.jboss.netty.channel.ChannelException: Failed to bind to: localhost/127.0.0.1:9042 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
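The pattern under discussion can be sketched as below. The helper name is mine, not Cassandra's; as Dave's later comment notes, the cause is deliberately dropped, because a custom exception class (e.g. org.jboss.netty.channel.ChannelException) that is not on the JMX client's classpath breaks RMI unmarshalling even when passed only as initCause.

```java
// Illustrative sketch (helper name is an assumption): rethrow any failure as a
// plain RuntimeException carrying only the message text, so a JMX client that
// lacks the original exception class on its classpath can still unmarshal it.
final class JmxSafeExceptions
{
    static RuntimeException jmxSafe(String context, Exception e)
    {
        // Deliberately no cause: passing e would drag the custom exception
        // class across the wire and trigger ClassNotFoundException client-side.
        return new RuntimeException(context + ": " + e.getMessage());
    }

    public static void main(String[] args)
    {
        try
        {
            throw new IllegalStateException("Failed to bind to: localhost/127.0.0.1:9042");
        }
        catch (Exception e)
        {
            System.out.println(jmxSafe("Error starting native transport", e).getMessage());
        }
    }
}
```

The trade-off Sylvain raises is real: flattening to a message string loses the original stack trace, which is why his first preference was to keep the cause if jconsole could handle it.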
[jira] [Commented] (CASSANDRA-1585) Support renaming columnfamilies and keyspaces
[ https://issues.apache.org/jira/browse/CASSANDRA-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686621#comment-13686621 ] Francisco Trujillo commented on CASSANDRA-1585: --- For people who are going to try the manual method described by Robert Coli, I ran into some difficulties (my errors, but someone else could have the same problem): In 1) you have to create the schema with all the column families and indexes. In 4) remember that the files that store the sstables start with the name of the keyspace; you have to rename the files in order for them to be recognized by nodetool refresh. Support renaming columnfamilies and keyspaces - Key: CASSANDRA-1585 URL: https://issues.apache.org/jira/browse/CASSANDRA-1585 Project: Cassandra Issue Type: New Feature Components: Core Reporter: Stu Hood Priority: Minor Renames were briefly supported but were race-prone. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-5654) [patch] improve performance of JdbcDecimal.decompose
Julien Aymé created CASSANDRA-5654: -- Summary: [patch] improve performance of JdbcDecimal.decompose Key: CASSANDRA-5654 URL: https://issues.apache.org/jira/browse/CASSANDRA-5654 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.2.5, 1.1.12 Reporter: Julien Aymé Priority: Minor JdbcDecimal.decompose creates a new byte array and copies byte by byte instead of doing a bulk copy. This can lead to performance degradation when lots of calls are made. Patch will follow. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
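The bulk-copy idea can be sketched as follows. This is not the attached patch, only an assumption-laden illustration of the technique, using the layout Cassandra's decimal type serializes to (a 4-byte scale followed by the unscaled value's two's-complement bytes):

```java
import java.math.BigDecimal;
import java.nio.ByteBuffer;

// Sketch of the bulk-copy approach (not the actual patch): serialize a
// BigDecimal as a 4-byte scale followed by the unscaled value's bytes,
// replacing a byte-per-byte loop with a single bulk put() of the whole array.
final class DecimalCodec
{
    static ByteBuffer decompose(BigDecimal value)
    {
        byte[] unscaled = value.unscaledValue().toByteArray();
        ByteBuffer buf = ByteBuffer.allocate(4 + unscaled.length);
        buf.putInt(value.scale());
        buf.put(unscaled); // bulk copy: one System.arraycopy under the hood
        buf.flip();        // make the buffer readable from the start
        return buf;
    }

    public static void main(String[] args)
    {
        ByteBuffer b = decompose(new BigDecimal("12.34"));
        // 12.34 has scale 2 and unscaled value 1234 (two bytes)
        System.out.println("scale=" + b.getInt() + ", unscaled bytes=" + b.remaining());
    }
}
```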
[jira] [Updated] (CASSANDRA-5654) [patch] improve performance of JdbcDecimal.decompose
[ https://issues.apache.org/jira/browse/CASSANDRA-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julien Aymé updated CASSANDRA-5654: --- Attachment: cassandra-1.1-5654.diff The proposed patch [patch] improve performance of JdbcDecimal.decompose Key: CASSANDRA-5654 URL: https://issues.apache.org/jira/browse/CASSANDRA-5654 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.1.12, 1.2.5 Reporter: Julien Aymé Priority: Minor Attachments: cassandra-1.1-5654.diff JdbcDecimal.decompose creates a new byte array and copies byte by byte instead of doing a bulk copy. This can lead to performance degradation when lots of calls are made. Patch will follow. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (CASSANDRA-5654) [patch] improve performance of JdbcDecimal.decompose
[ https://issues.apache.org/jira/browse/CASSANDRA-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686625#comment-13686625 ] Julien Aymé edited comment on CASSANDRA-5654 at 6/18/13 12:07 PM: -- The proposed patch, made against branch cassandra-1.1 (the same changes can be applied to 1.2 or trunk) was (Author: julien.a...@gmail.com): The proposed patch [patch] improve performance of JdbcDecimal.decompose Key: CASSANDRA-5654 URL: https://issues.apache.org/jira/browse/CASSANDRA-5654 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.1.12, 1.2.5 Reporter: Julien Aymé Priority: Minor Attachments: cassandra-1.1-5654.diff JdbcDecimal.decompose creates a new byte array and copies byte by byte instead of doing a bulk copy. This can lead to performance degradation when lots of calls are made. Patch will follow. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (CASSANDRA-5654) [patch] improve performance of JdbcDecimal.decompose
[ https://issues.apache.org/jira/browse/CASSANDRA-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julien Aymé reassigned CASSANDRA-5654: -- Assignee: Dave Brosius Assigning to Dave Brosius since he reported the original issue. [patch] improve performance of JdbcDecimal.decompose Key: CASSANDRA-5654 URL: https://issues.apache.org/jira/browse/CASSANDRA-5654 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.1.12, 1.2.5 Reporter: Julien Aymé Assignee: Dave Brosius Priority: Minor Attachments: cassandra-1.1-5654.diff JdbcDecimal.decompose creates a new byte array and copies byte by byte instead of doing a bulk copy. This can lead to performance degradation when lots of calls are made. Patch will follow. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5654) [patch] improve performance of JdbcDecimal.decompose
[ https://issues.apache.org/jira/browse/CASSANDRA-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686626#comment-13686626 ] Julien Aymé commented on CASSANDRA-5654: Note: this issue has already been reported as CASSANDRA-3837, but the fix seems to have been lost since (it is not in branches cassandra-1.1, cassandra-1.2, or trunk). [patch] improve performance of JdbcDecimal.decompose Key: CASSANDRA-5654 URL: https://issues.apache.org/jira/browse/CASSANDRA-5654 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.1.12, 1.2.5 Reporter: Julien Aymé Priority: Minor Attachments: cassandra-1.1-5654.diff JdbcDecimal.decompose creates a new byte array and copies byte by byte instead of doing a bulk copy. This can lead to performance degradation when lots of calls are made. Patch will follow. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5649) Move resultset type information into prepare, not execute
[ https://issues.apache.org/jira/browse/CASSANDRA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686648#comment-13686648 ] Sylvain Lebresne commented on CASSANDRA-5649: - I have some doubts that this would provide a noticeable benefit in practice. The type information in the result set is fairly compact (though it's true we could save sending the full metadata). I'm not sure reading the message is much of a bottleneck in practice for small messages (and for big ones, the metadata size is negligible anyway). And there is compression too. On the other hand, this does complicate client drivers. Currently, you can fully decode a result message without any external information. This is a nice property implementation-wise, and is somewhat safer. And I'm not sure requiring too much state from the client driver to do basic things is ideal. I could be wrong, but my intuition is that neither MySQL nor PostgreSQL does this because they don't consider it worth the complexity, and that's my feeling too. So I'm fine doing some benchmarking to see if this can make a measurable difference in practice, but I'm -1 on going ahead with this without concrete evidence of the benefits, since there are known drawbacks. And I kind of feel it's too late for 2.0. Move resultset type information into prepare, not execute - Key: CASSANDRA-5649 URL: https://issues.apache.org/jira/browse/CASSANDRA-5649 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Sylvain Lebresne Fix For: 2.0 Native protocol 1.0 sends type information on execute. This is a minor inefficiency for large resultsets; unfortunately, single-row resultsets are common. This does represent a performance regression from Thrift; Thrift does not send type information at all. (Bad for driver complexity, but good for performance.) -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5652) Suppress custom exceptions thru jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686650#comment-13686650 ] Dave Brosius commented on CASSANDRA-5652: - Yeah, unfortunately JMX will still CNFE if you pass e as the initCause, but I will change the error message. Suppress custom exceptions thru jmx --- Key: CASSANDRA-5652 URL: https://issues.apache.org/jira/browse/CASSANDRA-5652 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.5 Reporter: Dave Brosius Assignee: Dave Brosius Priority: Trivial Fix For: 1.2.6 Attachments: 5652.txt startNativeTransport can send back org.jboss.netty.channel.ChannelException, which causes jconsole to puke with a bad message such as: Problem invoking startNativeTransport: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is: java.io.EOFException. Convert to RuntimeException so you get something like: org.jboss.netty.channel.ChannelException: Failed to bind to: localhost/127.0.0.1:9042 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (CASSANDRA-5604) Vnodes decrease Hadoop performances cause it creates too many small splits
[ https://issues.apache.org/jira/browse/CASSANDRA-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686571#comment-13686571 ] Cyril Scetbon edited comment on CASSANDRA-5604 at 6/18/13 12:39 PM: I've run some tests with a program that took 8 minutes before [CASSANDRA-5544|https://issues.apache.org/jira/browse/CASSANDRA-5544] and 1h17 with multiple mappers! was (Author: cscetbon): I've run some tests with a program that took 8 minutes before [CASSANDRA-5544|https://issues.apache.org/jira/browse/CASSANDRA-5544] and 56 minutes after, to complete only 25% of the task! Vnodes decrease Hadoop performances cause it creates too many small splits -- Key: CASSANDRA-5604 URL: https://issues.apache.org/jira/browse/CASSANDRA-5604 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.4 Environment: Linux Ubuntu 12.04 LTS x86_64 Reporter: Cyril Scetbon Priority: Trivial -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
git commit: Suppress custom exceptions thru jmx patch by dbrosius reviewed by slebresne for cassandra 5652
Updated Branches: refs/heads/cassandra-1.2 f1004e9b1 - f30015c86 Suppress custom exceptions thru jmx patch by dbrosius reviewed by slebresne for cassandra 5652 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f30015c8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f30015c8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f30015c8 Branch: refs/heads/cassandra-1.2 Commit: f30015c862eb913d1f0cf8c10d201de5698a6dda Parents: f1004e9 Author: Dave Brosius dbros...@apache.org Authored: Tue Jun 18 08:44:28 2013 -0400 Committer: Dave Brosius dbros...@apache.org Committed: Tue Jun 18 08:44:28 2013 -0400 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/StorageService.java | 10 +- 2 files changed, 10 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f30015c8/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2bba0ee..65f66de 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -26,6 +26,7 @@ * fix help text for -tspw cassandra-cli (CASSANDRA-5643) * don't throw away initial causes exceptions for internode encryption issues (CASSANDRA-5644) * Fix message spelling errors for cql select statements (CASSANDRA-5647) + * Suppress custom exceptions thru jmx (CASSANDRA-5652) Merged from 1.1: * Remove buggy thrift max message length option (CASSANDRA-5529) * Fix NPE in Pig's widerow mode (CASSANDRA-5488) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f30015c8/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index 9f22318..49db272 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -336,7 +336,15 @@ public class StorageService extends NotificationBroadcasterSupport implements IE { throw new 
IllegalStateException("No configured daemon"); } -daemon.nativeServer.start(); + +try +{ +daemon.nativeServer.start(); +} +catch (Exception e) +{ +throw new RuntimeException("Error starting native transport: " + e.getMessage()); +} } public void stopNativeTransport()
[1/2] git commit: Suppress custom exceptions thru jmx patch by dbrosius reviewed by slebresne for cassandra 5652
Updated Branches: refs/heads/trunk 2b86c9a4f -> 093e188a4 Suppress custom exceptions thru jmx patch by dbrosius reviewed by slebresne for cassandra 5652 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f30015c8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f30015c8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f30015c8 Branch: refs/heads/trunk Commit: f30015c862eb913d1f0cf8c10d201de5698a6dda Parents: f1004e9 Author: Dave Brosius dbros...@apache.org Authored: Tue Jun 18 08:44:28 2013 -0400 Committer: Dave Brosius dbros...@apache.org Committed: Tue Jun 18 08:44:28 2013 -0400 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/StorageService.java | 10 +- 2 files changed, 10 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f30015c8/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2bba0ee..65f66de 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -26,6 +26,7 @@ * fix help text for -tspw cassandra-cli (CASSANDRA-5643) * don't throw away initial causes exceptions for internode encryption issues (CASSANDRA-5644) * Fix message spelling errors for cql select statements (CASSANDRA-5647) + * Suppress custom exceptions thru jmx (CASSANDRA-5652) Merged from 1.1: * Remove buggy thrift max message length option (CASSANDRA-5529) * Fix NPE in Pig's widerow mode (CASSANDRA-5488) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f30015c8/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index 9f22318..49db272 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -336,7 +336,15 @@ public class StorageService extends NotificationBroadcasterSupport implements IE { throw new IllegalStateException("No configured
daemon"); } -daemon.nativeServer.start(); + +try +{ +daemon.nativeServer.start(); +} +catch (Exception e) +{ +throw new RuntimeException("Error starting native transport: " + e.getMessage()); +} } public void stopNativeTransport()
[2/2] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/093e188a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/093e188a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/093e188a Branch: refs/heads/trunk Commit: 093e188a42523f623639d0705608b67dc2578cc4 Parents: 2b86c9a f30015c Author: Dave Brosius dbros...@apache.org Authored: Tue Jun 18 08:46:06 2013 -0400 Committer: Dave Brosius dbros...@apache.org Committed: Tue Jun 18 08:46:06 2013 -0400 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/StorageService.java | 10 +- 2 files changed, 10 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/093e188a/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/093e188a/src/java/org/apache/cassandra/service/StorageService.java --
[jira] [Resolved] (CASSANDRA-5652) Suppress custom exceptions thru jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dave Brosius resolved CASSANDRA-5652. - Resolution: Fixed Reviewer: slebresne Committed to cassandra-1.2 as f30015c862eb913d1f0cf8c10d201de5698a6dda. Suppress custom exceptions thru jmx --- Key: CASSANDRA-5652 URL: https://issues.apache.org/jira/browse/CASSANDRA-5652 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.5 Reporter: Dave Brosius Assignee: Dave Brosius Priority: Trivial Fix For: 1.2.6 Attachments: 5652.txt startNativeTransport can send back org.jboss.netty.channel.ChannelException, which causes jconsole to puke with a bad message such as: Problem invoking startNativeTransport: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is: java.io.EOFException. Convert to RuntimeException so you get something like: org.jboss.netty.channel.ChannelException: Failed to bind to: localhost/127.0.0.1:9042 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (CASSANDRA-5648) Cassandra: Insert of null value not possible with CQL3?
[ https://issues.apache.org/jira/browse/CASSANDRA-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko resolved CASSANDRA-5648. -- Resolution: Not A Problem Technically, a column value can't be null in Thrift. It can be set to an empty byte buffer, and that's not the same as null (there is no way to distinguish between an empty blob/string and 'null', for example - all map to an empty byte buffer). bq. Inserting an empty String works fine, but this seems a bit bloated. It's not bloated, it's actually what you want. And for blob columns, inserting 0x would be the equivalent. In CQL3, however, there *is* null, and it already represents something - absence of a cell. And using null in INSERT or UPDATE maps to DELETE of the whole cell. Overloading it for COMPACT STORAGE to mean 'empty byte buffer value' would be inconsistent and confusing - we don't need two kinds of null. That said, it's true that the Thrift API allows something that CQL3 does not - namely, setting the value to an empty byte buffer for int/float/etc. values (which wasn't such a good idea, probably). You can do the same in CQL3 using prepared statements, so there is a workaround (for strings and blobs you can just use '' and 0x, respectively). Cassandra: Insert of null value not possible with CQL3? --- Key: CASSANDRA-5648 URL: https://issues.apache.org/jira/browse/CASSANDRA-5648 Project: Cassandra Issue Type: Bug Components: API Affects Versions: 1.2.0 Reporter: Tobias Schlottke Assignee: Aleksey Yeschenko Fix For: 1.2.6 Hi there, I'm trying to migrate a project from thrift to cql3/java driver and I'm experiencing a strange problem. Schema: {code} CREATE TABLE foo ( key ascii, column1 ascii, foo ascii, PRIMARY KEY (key, column1) ) WITH COMPACT STORAGE; {code} The table just consists of a primary key and a value, sometimes all the information lies in the key though, so the value is not needed. Through the thrift interface, it just works fine to leave out foo. 
Executing this query: {code} INSERT INTO foo(key, column1) VALUES ('test', 'test2'); {code} Fails with "Bad Request: Missing mandatory column foo", though. Explicitly inserting null as a value does not store the column / deletes the old one with a null value inserted through thrift. Inserting an empty String works fine, but this seems a bit bloated. Is this intended to (not) work this way? Best, Tobias -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
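The distinction Aleksey draws in the resolution above - an empty byte buffer is a real, zero-length value, while null means the cell is absent - can be illustrated with a small sketch (class and helper names are mine, not Cassandra's):

```java
import java.nio.ByteBuffer;

// Tiny illustration of the empty-value vs. null distinction: an empty byte
// buffer is what '' (text) or 0x (blob) store, i.e. a present, zero-length
// value; null models the absence of the cell altogether.
final class EmptyVsNull
{
    // The "empty value": a present cell holding zero bytes.
    static final ByteBuffer EMPTY = ByteBuffer.allocate(0);

    static boolean isEmptyValue(ByteBuffer cell)
    {
        return cell != null && cell.remaining() == 0;
    }

    static boolean isAbsent(ByteBuffer cell)
    {
        return cell == null;
    }

    public static void main(String[] args)
    {
        System.out.println("empty value present? " + isEmptyValue(EMPTY));
        System.out.println("null cell absent?    " + isAbsent(null));
    }
}
```

Conflating the two is exactly the "two kinds of null" the comment warns against: once null means DELETE in CQL3, it cannot also mean "store a zero-length value".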
[jira] [Resolved] (CASSANDRA-5651) Custom authentication plugin should not need to prepopulate users in system_auth.users column family
[ https://issues.apache.org/jira/browse/CASSANDRA-5651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko resolved CASSANDRA-5651. -- Resolution: Not A Problem It's also worth noting that Oracle, MySQL and PostgreSQL all require registering users internally via CREATE USER .. IDENTIFIED EXTERNALLY (Oracle and MySQL) and CREATE ROLE (PostgreSQL) for pluggable authenticators, and it's not optional. Custom authentication plugin should not need to prepopulate users in system_auth.users column family Key: CASSANDRA-5651 URL: https://issues.apache.org/jira/browse/CASSANDRA-5651 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.2.5 Environment: RHEL 6.3, jdk 1.7 Reporter: Bao Le The current implementation in ClientState.login makes a call to Auth.isExistingUser(user.getName()) if the AuthenticatedUser is not Anonymous. This involves querying the system_auth.users column family. Our custom authentication plugin does not need to pre-create and store users, and it worked fine under 1.1.5. On 1.2.5, however, we run into an authentication problem because of this. I feel we should either do this isExistingUser check inside IAuthenticator.authenticate, or expose another boolean method similar to IAuthenticator.requireAuthentication() so that a custom authentication plugin can skip this isExistingUser check if needed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5639) Update CREATE CUSTOM INDEX syntax to match new CREATE TRIGGER syntax
[ https://issues.apache.org/jira/browse/CASSANDRA-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-5639: - Summary: Update CREATE CUSTOM INDEX syntax to match new CREATE TRIGGER syntax (was: Update CREATE CUSTOM INDEX syntax to match new CREATE INDEX syntax) Update CREATE CUSTOM INDEX syntax to match new CREATE TRIGGER syntax Key: CASSANDRA-5639 URL: https://issues.apache.org/jira/browse/CASSANDRA-5639 Project: Cassandra Issue Type: Improvement Affects Versions: 1.2.5 Reporter: Aleksey Yeschenko Assignee: Aleksey Yeschenko Labels: cql3 Fix For: 1.2.6 Attachments: 5639.txt CASSANDRA-5484 introduced CREATE CUSTOM INDEX syntax for custom 2i and CASSANDRA-5576 will add CQL3 support for creating/dropping triggers (CREATE TRIGGER name ON table USING classname). For consistency's sake, CREATE CUSTOM INDEX should be updated to also use the 'USING' keyword, e.g. CREATE CUSTOM INDEX ON table(column) USING classname. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5638) Improve StorageProxy tracing
[ https://issues.apache.org/jira/browse/CASSANDRA-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686699#comment-13686699 ] Sylvain Lebresne commented on CASSANDRA-5638: - lgtm, +1. Nit: for consistency's sake, we could start with a capital letter in {noformat} not hinting {} which has been down {}ms {noformat} Also, it could be more user-friendly to include the time after which we start not hinting. Improve StorageProxy tracing Key: CASSANDRA-5638 URL: https://issues.apache.org/jira/browse/CASSANDRA-5638 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 1.2.6 Attachments: 5638.txt StorageProxy includes some logger.trace calls that should be Tracing.trace, and more logger.trace calls that aren't useful. Also, QueryProcessor is not consistent about logging what it is executing, and should include the ConsistencyLevel. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (CASSANDRA-5648) Cassandra: Insert of null value not possible with CQL3?
[ https://issues.apache.org/jira/browse/CASSANDRA-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reopened CASSANDRA-5648: --- Agreed that null != empty byte buffer, so the description is invalid in that respect, but it's correct in that the semantics of {{INSERT INTO foo(key, column1) VALUES ('test', 'test2');}} and {{INSERT INTO foo(key, column1, foo) VALUES ('test', 'test2', 'test3');}} are different and both are useful. Specifically, it seems quite reasonable to me to support "I want to insert just the PK columns without clobbering existing values, if such exist." Cassandra: Insert of null value not possible with CQL3? --- Key: CASSANDRA-5648 URL: https://issues.apache.org/jira/browse/CASSANDRA-5648 Project: Cassandra Issue Type: Bug Components: API Affects Versions: 1.2.0 Reporter: Tobias Schlottke Assignee: Aleksey Yeschenko Fix For: 1.2.6 Hi there, I'm trying to migrate a project from thrift to cql3/java driver and I'm experiencing a strange problem. Schema: {code} CREATE TABLE foo ( key ascii, column1 ascii, foo ascii, PRIMARY KEY (key, column1) ) WITH COMPACT STORAGE; {code} The table just consists of a primary key and a value, sometimes all the information lies in the key though, so the value is not needed. Through the thrift interface, it just works fine to leave out foo. Executing this query: {code} INSERT INTO foo(key, column1) VALUES ('test', 'test2'); {code} Fails with "Bad Request: Missing mandatory column foo", though. Explicitly inserting null as a value does not store the column / deletes the old one with a null value inserted through thrift. Inserting an empty String works fine, but this seems a bit bloated. Is this intended to (not) work this way? Best, Tobias -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5648) Cassandra: Insert of null value not possible with CQL3?
[ https://issues.apache.org/jira/browse/CASSANDRA-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686716#comment-13686716 ] Aleksey Yeschenko commented on CASSANDRA-5648: -- bq. Specifically, it seems quite reasonable to me to support I want to insert just the PK columns without clobbering existing values if such exist. And that's how it is for non-compact tables. But you just can't do the same for COMPACT (without read-before-write, at least, and r-b-w would be a deal breaker).
[jira] [Commented] (CASSANDRA-5648) Cassandra: Insert of null value not possible with CQL3?
[ https://issues.apache.org/jira/browse/CASSANDRA-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686722#comment-13686722 ] Aleksey Yeschenko commented on CASSANDRA-5648: -- (row markers are what makes it possible for non-compact)
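To illustrate the row-marker point above, here is a simplified model (illustrative only — these are not Cassandra's actual storage classes, and the cell-name encoding is hypothetical): a non-compact CQL3 table writes an extra empty "row marker" cell on every INSERT, so a row with only PK columns still has a cell recording its existence; under COMPACT STORAGE the single non-PK column *is* the cell value, so with no value there is nothing to write short of a read-before-write.

```java
import java.util.HashMap;
import java.util.Map;

public class RowMarkerSketch
{
    // Non-compact: INSERT always writes a row-marker cell for the
    // clustering prefix, so a PK-only insert leaves a durable trace.
    static Map<String, String> nonCompactInsert(String clustering)
    {
        Map<String, String> cells = new HashMap<>();
        cells.put(clustering + ":", ""); // the row marker (empty value)
        return cells;
    }

    // COMPACT STORAGE: cell name = clustering key, cell value = the one
    // non-PK column. With no value there is no cell to write at all,
    // hence the "missing mandatory column" error.
    static Map<String, String> compactInsert(String clustering, String value)
    {
        Map<String, String> cells = new HashMap<>();
        if (value == null)
            return cells; // the row is simply not representable
        cells.put(clustering, value);
        return cells;
    }

    public static void main(String[] args)
    {
        if (nonCompactInsert("test2").isEmpty())
            throw new AssertionError("non-compact row should exist via its marker");
        if (!compactInsert("test2", null).isEmpty())
            throw new AssertionError("compact row without a value has no cells");
        System.out.println("ok");
    }
}
```

This is why the behavior differs between the two table kinds rather than being an arbitrary CQL3 restriction.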
[3/6] git commit: merge from 1.2
merge from 1.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5b0d43f6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5b0d43f6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5b0d43f6 Branch: refs/heads/trunk Commit: 5b0d43f6f2485f1439eb9d6a7ad112f6c9a515f3 Parents: 093e188 0c81eae Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jun 18 08:44:17 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jun 18 08:45:57 2013 -0500
--
 .../apache/cassandra/cql3/QueryProcessor.java  |  6 ++--
 .../apache/cassandra/db/ReadVerbHandler.java   |  2 --
 .../apache/cassandra/service/StorageProxy.java | 36 +++-
 3 files changed, 6 insertions(+), 38 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b0d43f6/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --cc src/java/org/apache/cassandra/cql3/QueryProcessor.java
index ac5afbc,513c96e..1b89fe3
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@@ -109,9 -111,10 +109,10 @@@ public class QueryProcesso
  private static ResultMessage processStatement(CQLStatement statement, ConsistencyLevel cl, QueryState queryState, List<ByteBuffer> variables)
  throws RequestExecutionException, RequestValidationException
  {
+     logger.trace("Process {} @CL.{}", statement, cl);
      ClientState clientState = queryState.getClientState();
 -    statement.validate(clientState);
      statement.checkAccess(clientState);
 +    statement.validate(clientState);
      ResultMessage result = statement.execute(cl, queryState, variables);
      return result == null ? new ResultMessage.Void() : result;
  }
@@@ -119,22 -122,10 +120,21 @@@
  public static ResultMessage process(String queryString, ConsistencyLevel cl, QueryState queryState)
  throws RequestExecutionException, RequestValidationException
  {
 +    return process(queryString, Collections.<ByteBuffer>emptyList(), cl, queryState);
 +}
 +
 +public static ResultMessage process(String queryString, List<ByteBuffer> variables, ConsistencyLevel cl, QueryState queryState)
 +throws RequestExecutionException, RequestValidationException
 +{
-     logger.trace("CQL QUERY: {}", queryString);
      CQLStatement prepared = getStatement(queryString, queryState.getClientState()).statement;
 -    if (prepared.getBoundsTerms() > 0)
 -        throw new InvalidRequestException("Cannot execute query with bind variables");
 -    return processStatement(prepared, cl, queryState, Collections.<ByteBuffer>emptyList());
 +    if (prepared.getBoundsTerms() != variables.size())
 +        throw new InvalidRequestException("Invalid amount of bind variables");
 +    return processStatement(prepared, cl, queryState, variables);
 +}
 +
 +public static CQLStatement parseStatement(String queryStr, QueryState queryState) throws RequestValidationException
 +{
 +    return getStatement(queryStr, queryState.getClientState()).statement;
  }

  public static UntypedResultSet process(String query, ConsistencyLevel cl) throws RequestExecutionException
http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b0d43f6/src/java/org/apache/cassandra/db/ReadVerbHandler.java
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b0d43f6/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index 0203e4b,adb3f2d..9d095cc
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -137,10 -136,8 +137,7 @@@ public class StorageProxy implements St
                                        AbstractWriteResponseHandler responseHandler,
                                        String localDataCenter,
                                        ConsistencyLevel consistency_level)
 -throws IOException
  {
-     if (logger.isTraceEnabled())
-         logger.trace("insert writing local & replicate " + mutation.toString(true));
-
      Runnable runnable = counterWriteTask(mutation, targets, responseHandler, localDataCenter, consistency_level);
      runnable.run();
  }
@@@ -153,10 -150,8 +150,7 @@@
                         AbstractWriteResponseHandler responseHandler,
                         String localDataCenter,
                         ConsistencyLevel
[1/6] git commit: improve tracing patch by jbellis; reviewed by slebresne for CASSANDRA-5638
Updated Branches: refs/heads/cassandra-1.2 f30015c86 -> e5c34d7c2 refs/heads/trunk 093e188a4 -> 26018be22 improve tracing patch by jbellis; reviewed by slebresne for CASSANDRA-5638 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0c81eaec Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0c81eaec Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0c81eaec Branch: refs/heads/cassandra-1.2 Commit: 0c81eaecb2572d9c70e033aa2c76288611386d8f Parents: f30015c Author: Jonathan Ellis jbel...@apache.org Authored: Fri Jun 14 10:34:57 2013 -0700 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jun 18 08:40:57 2013 -0500
--
 .../apache/cassandra/cql3/QueryProcessor.java  |  6 +--
 .../apache/cassandra/db/ReadVerbHandler.java   |  2 -
 .../apache/cassandra/service/StorageProxy.java | 50 ----
 3 files changed, 10 insertions(+), 48 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0c81eaec/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index dae9cc9..513c96e 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -111,6 +111,7 @@ public class QueryProcessor
     private static ResultMessage processStatement(CQLStatement statement, ConsistencyLevel cl, QueryState queryState, List<ByteBuffer> variables)
     throws RequestExecutionException, RequestValidationException
     {
+        logger.trace("Process {} @CL.{}", statement, cl);
         ClientState clientState = queryState.getClientState();
         statement.validate(clientState);
         statement.checkAccess(clientState);
@@ -121,7 +122,6 @@ public class QueryProcessor
     public static ResultMessage process(String queryString, ConsistencyLevel cl, QueryState queryState)
     throws RequestExecutionException, RequestValidationException
     {
-        logger.trace("CQL QUERY: {}", queryString);
         CQLStatement prepared = getStatement(queryString, queryState.getClientState()).statement;
         if (prepared.getBoundsTerms() > 0)
             throw new InvalidRequestException("Cannot execute query with bind variables");
@@ -187,8 +187,6 @@ public class QueryProcessor
     public static ResultMessage.Prepared prepare(String queryString, ClientState clientState, boolean forThrift)
     throws RequestValidationException
     {
-        logger.trace("CQL QUERY: {}", queryString);
-
         ParsedStatement.Prepared prepared = getStatement(queryString, clientState);
         ResultMessage.Prepared msg = storePreparedStatement(queryString, clientState.getRawKeyspace(), prepared, forThrift);
@@ -245,7 +243,7 @@ public class QueryProcessor
     private static ParsedStatement.Prepared getStatement(String queryStr, ClientState clientState)
     throws RequestValidationException
     {
-        Tracing.trace("Parsing statement");
+        Tracing.trace("Parsing {}", queryStr);
         ParsedStatement statement = parseStatement(queryStr);
         // Set keyspace for statement that require login
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0c81eaec/src/java/org/apache/cassandra/db/ReadVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadVerbHandler.java b/src/java/org/apache/cassandra/db/ReadVerbHandler.java
index a06035a..a05f7a2 100644
--- a/src/java/org/apache/cassandra/db/ReadVerbHandler.java
+++ b/src/java/org/apache/cassandra/db/ReadVerbHandler.java
@@ -54,8 +54,6 @@ public class ReadVerbHandler implements IVerbHandler<ReadCommand>
         {
             if (command.isDigestQuery())
             {
-                if (logger.isTraceEnabled())
-                    logger.trace("digest is " + ByteBufferUtil.bytesToHex(ColumnFamily.digest(row.cf)));
                 return new ReadResponse(ColumnFamily.digest(row.cf));
             }
             else
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0c81eaec/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index 5517387..adb3f2d 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -138,9 +138,6 @@ public class StorageProxy implements StorageProxyMBean
                                        ConsistencyLevel consistency_level)
     throws IOException
     {
-        if
[5/6] git commit: cleanup logger.debug too
cleanup logger.debug too Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5c34d7c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5c34d7c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5c34d7c Branch: refs/heads/cassandra-1.2 Commit: e5c34d7c29e9ec6dc162210b90fc69ea11a4c331 Parents: 0c81eae Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jun 18 08:51:39 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jun 18 08:51:39 2013 -0500
--
 src/java/org/apache/cassandra/service/StorageProxy.java | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c34d7c/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index adb3f2d..c12cace 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -861,7 +861,7 @@ public class StorageProxy implements StorageProxyMBean
             ReadCallback<ReadResponse, Row>[] readCallbacks = new ReadCallback[commands.size()];

             if (!commandsToRetry.isEmpty())
-                logger.debug("Retrying {} commands", commandsToRetry.size());
+                Tracing.trace("Retrying {} commands", commandsToRetry.size());

             // send out read requests
             for (int i = 0; i < commands.size(); i++)
@@ -947,7 +947,7 @@ public class StorageProxy implements StorageProxyMBean
             }
             catch (DigestMismatchException ex)
             {
-                logger.debug("Digest mismatch: {}", ex.toString());
+                Tracing.trace("Digest mismatch: {}", ex.toString());

                 ReadRepairMetrics.repairedBlocking.mark();
@@ -963,9 +963,10 @@ public class StorageProxy implements StorageProxyMBean
                 repairCommands.add(command);
                 repairResponseHandlers.add(repairHandler);

+                MessageOut<ReadCommand> message = command.createMessage();
                 for (InetAddress endpoint : handler.endpoints)
                 {
-                    MessageOut<ReadCommand> message = command.createMessage();
+                    Tracing.trace("Enqueuing full data read to {}", endpoint);
                     MessagingService.instance().sendRR(message, endpoint, repairHandler);
                 }
             }
@@ -1009,7 +1010,7 @@ public class StorageProxy implements StorageProxyMBean
             ReadCommand retryCommand = command.maybeGenerateRetryCommand(resolver, row);
             if (retryCommand != null)
             {
-                logger.debug("Issuing retry for read command");
+                Tracing.trace("Issuing retry for read command");
                 if (commandsToRetry == Collections.EMPTY_LIST)
                     commandsToRetry = new ArrayList<ReadCommand>();
                 commandsToRetry.add(retryCommand);
@@ -1193,7 +1194,7 @@ public class StorageProxy implements StorageProxyMBean
             MessageOut<RangeSliceCommand> message = nodeCmd.createMessage();
             for (InetAddress endpoint : filteredEndpoints)
             {
-                logger.trace("Enqueuing request to {}", endpoint);
+                Tracing.trace("Enqueuing request to {}", endpoint);
                 MessagingService.instance().sendRR(message, endpoint, handler);
             }
         }
[2/6] git commit: improve tracing patch by jbellis; reviewed by slebresne for CASSANDRA-5638
improve tracing patch by jbellis; reviewed by slebresne for CASSANDRA-5638 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0c81eaec Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0c81eaec Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0c81eaec Branch: refs/heads/trunk Commit: 0c81eaecb2572d9c70e033aa2c76288611386d8f Parents: f30015c Author: Jonathan Ellis jbel...@apache.org Authored: Fri Jun 14 10:34:57 2013 -0700 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jun 18 08:40:57 2013 -0500
--
 .../apache/cassandra/cql3/QueryProcessor.java  |  6 +--
 .../apache/cassandra/db/ReadVerbHandler.java   |  2 -
 .../apache/cassandra/service/StorageProxy.java | 50 ----
 3 files changed, 10 insertions(+), 48 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0c81eaec/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index dae9cc9..513c96e 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -111,6 +111,7 @@ public class QueryProcessor
     private static ResultMessage processStatement(CQLStatement statement, ConsistencyLevel cl, QueryState queryState, List<ByteBuffer> variables)
     throws RequestExecutionException, RequestValidationException
     {
+        logger.trace("Process {} @CL.{}", statement, cl);
         ClientState clientState = queryState.getClientState();
         statement.validate(clientState);
         statement.checkAccess(clientState);
@@ -121,7 +122,6 @@ public class QueryProcessor
     public static ResultMessage process(String queryString, ConsistencyLevel cl, QueryState queryState)
     throws RequestExecutionException, RequestValidationException
     {
-        logger.trace("CQL QUERY: {}", queryString);
         CQLStatement prepared = getStatement(queryString, queryState.getClientState()).statement;
         if (prepared.getBoundsTerms() > 0)
             throw new InvalidRequestException("Cannot execute query with bind variables");
@@ -187,8 +187,6 @@ public class QueryProcessor
     public static ResultMessage.Prepared prepare(String queryString, ClientState clientState, boolean forThrift)
     throws RequestValidationException
     {
-        logger.trace("CQL QUERY: {}", queryString);
-
         ParsedStatement.Prepared prepared = getStatement(queryString, clientState);
         ResultMessage.Prepared msg = storePreparedStatement(queryString, clientState.getRawKeyspace(), prepared, forThrift);
@@ -245,7 +243,7 @@ public class QueryProcessor
     private static ParsedStatement.Prepared getStatement(String queryStr, ClientState clientState)
     throws RequestValidationException
     {
-        Tracing.trace("Parsing statement");
+        Tracing.trace("Parsing {}", queryStr);
         ParsedStatement statement = parseStatement(queryStr);
         // Set keyspace for statement that require login
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0c81eaec/src/java/org/apache/cassandra/db/ReadVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadVerbHandler.java b/src/java/org/apache/cassandra/db/ReadVerbHandler.java
index a06035a..a05f7a2 100644
--- a/src/java/org/apache/cassandra/db/ReadVerbHandler.java
+++ b/src/java/org/apache/cassandra/db/ReadVerbHandler.java
@@ -54,8 +54,6 @@ public class ReadVerbHandler implements IVerbHandler<ReadCommand>
         {
             if (command.isDigestQuery())
             {
-                if (logger.isTraceEnabled())
-                    logger.trace("digest is " + ByteBufferUtil.bytesToHex(ColumnFamily.digest(row.cf)));
                 return new ReadResponse(ColumnFamily.digest(row.cf));
             }
             else
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0c81eaec/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index 5517387..adb3f2d 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -138,9 +138,6 @@ public class StorageProxy implements StorageProxyMBean
                                        ConsistencyLevel consistency_level)
     throws IOException
     {
-        if (logger.isTraceEnabled())
-            logger.trace("insert writing local & replicate " + mutation.toString(true));
-
[4/6] git commit: cleanup logger.debug too
cleanup logger.debug too Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5c34d7c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5c34d7c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5c34d7c Branch: refs/heads/trunk Commit: e5c34d7c29e9ec6dc162210b90fc69ea11a4c331 Parents: 0c81eae Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jun 18 08:51:39 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jun 18 08:51:39 2013 -0500
--
 src/java/org/apache/cassandra/service/StorageProxy.java | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c34d7c/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index adb3f2d..c12cace 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -861,7 +861,7 @@ public class StorageProxy implements StorageProxyMBean
             ReadCallback<ReadResponse, Row>[] readCallbacks = new ReadCallback[commands.size()];

             if (!commandsToRetry.isEmpty())
-                logger.debug("Retrying {} commands", commandsToRetry.size());
+                Tracing.trace("Retrying {} commands", commandsToRetry.size());

             // send out read requests
             for (int i = 0; i < commands.size(); i++)
@@ -947,7 +947,7 @@ public class StorageProxy implements StorageProxyMBean
             }
             catch (DigestMismatchException ex)
             {
-                logger.debug("Digest mismatch: {}", ex.toString());
+                Tracing.trace("Digest mismatch: {}", ex.toString());

                 ReadRepairMetrics.repairedBlocking.mark();
@@ -963,9 +963,10 @@ public class StorageProxy implements StorageProxyMBean
                 repairCommands.add(command);
                 repairResponseHandlers.add(repairHandler);

+                MessageOut<ReadCommand> message = command.createMessage();
                 for (InetAddress endpoint : handler.endpoints)
                 {
-                    MessageOut<ReadCommand> message = command.createMessage();
+                    Tracing.trace("Enqueuing full data read to {}", endpoint);
                     MessagingService.instance().sendRR(message, endpoint, repairHandler);
                 }
             }
@@ -1009,7 +1010,7 @@ public class StorageProxy implements StorageProxyMBean
             ReadCommand retryCommand = command.maybeGenerateRetryCommand(resolver, row);
             if (retryCommand != null)
             {
-                logger.debug("Issuing retry for read command");
+                Tracing.trace("Issuing retry for read command");
                 if (commandsToRetry == Collections.EMPTY_LIST)
                     commandsToRetry = new ArrayList<ReadCommand>();
                 commandsToRetry.add(retryCommand);
@@ -1193,7 +1194,7 @@ public class StorageProxy implements StorageProxyMBean
             MessageOut<RangeSliceCommand> message = nodeCmd.createMessage();
             for (InetAddress endpoint : filteredEndpoints)
             {
-                logger.trace("Enqueuing request to {}", endpoint);
+                Tracing.trace("Enqueuing request to {}", endpoint);
                 MessagingService.instance().sendRR(message, endpoint, handler);
             }
         }
[6/6] git commit: merge from 1.2
merge from 1.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/26018be2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/26018be2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/26018be2 Branch: refs/heads/trunk Commit: 26018be223663989f0de87ff7a902a36be4df678 Parents: 5b0d43f e5c34d7 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jun 18 08:52:55 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jun 18 08:52:55 2013 -0500
--
 .../org/apache/cassandra/service/StorageProxy.java | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/26018be2/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index 9d095cc,c12cace..1383be7
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -1137,10 -858,10 +1137,10 @@@ public class StorageProxy implements St
  do
  {
      List<ReadCommand> commands = commandsToRetry.isEmpty() ? initialCommands : commandsToRetry;
 -    ReadCallback<ReadResponse, Row>[] readCallbacks = new ReadCallback[commands.size()];
 +    AbstractReadExecutor[] readExecutors = new AbstractReadExecutor[commands.size()];

      if (!commandsToRetry.isEmpty())
-         logger.debug("Retrying {} commands", commandsToRetry.size());
+         Tracing.trace("Retrying {} commands", commandsToRetry.size());

      // send out read requests
      for (int i = 0; i < commands.size(); i++)
@@@ -1174,25 -947,28 +1174,28 @@@
      }
      catch (DigestMismatchException ex)
      {
-         logger.trace("Digest mismatch: {}", ex);
 -        Tracing.trace("Digest mismatch: {}", ex.toString());
++        Tracing.trace("Digest mismatch: {}", ex);

          ReadRepairMetrics.repairedBlocking.mark();

          // Do a full data read to resolve the correct response (and repair node that need be)
 -        RowDataResolver resolver = new RowDataResolver(command.table, command.key, command.filter());
 -        ReadCallback<ReadResponse, Row> repairHandler = handler.withNewResolver(resolver);
 +        RowDataResolver resolver = new RowDataResolver(exec.command.table, exec.command.key, exec.command.filter());
 +        ReadCallback<ReadResponse, Row> repairHandler = exec.handler.withNewResolver(resolver);

          if (repairCommands == null)
          {
              repairCommands = new ArrayList<ReadCommand>();
              repairResponseHandlers = new ArrayList<ReadCallback<ReadResponse, Row>>();
          }
 -        repairCommands.add(command);
 +        repairCommands.add(exec.command);
          repairResponseHandlers.add(repairHandler);

 -        MessageOut<ReadCommand> message = command.createMessage();
 -        for (InetAddress endpoint : handler.endpoints)
 +        MessageOut<ReadCommand> message = exec.command.createMessage();
 +        for (InetAddress endpoint : exec.handler.endpoints)
          {
+             Tracing.trace("Enqueuing full data read to {}", endpoint);
              MessagingService.instance().sendRR(message, endpoint, repairHandler);
+         }
      }
  }
[jira] [Commented] (CASSANDRA-5149) Respect slice count even if column expire mid-request
[ https://issues.apache.org/jira/browse/CASSANDRA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686728#comment-13686728 ] Sylvain Lebresne commented on CASSANDRA-5149: - bq. In IDiskAtomFilter.collectReducedColumns() timestamp and gcbefore are sometimes unrelated Right, forgot about that. Last version lgtm, +1. A few optional minor nits for the commit: * In CounterColumn.reconcile (and CounterMutation and ... in fact), we don't support expiring columns in counter tables, so it's ok to just use, say, Long.MIN_VALUE (with a comment why). * The comment inside DeletedColumn.isMarkedForDelete is obsolete (it's more confusing than helpful now :)). Respect slice count even if column expire mid-request - Key: CASSANDRA-5149 URL: https://issues.apache.org/jira/browse/CASSANDRA-5149 Project: Cassandra Issue Type: Bug Affects Versions: 0.7.0 Reporter: Sylvain Lebresne Assignee: Aleksey Yeschenko Fix For: 2.0 This is a follow-up of CASSANDRA-5099. If a column expires just while a slice query is performed, it is possible for replicas to count said column as live but for the coordinator to see it as dead when building the final result. The effect is that the query might return strictly fewer columns than the requested slice count, even though there are some live columns matching the slice predicate that are not returned in the result.
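The race described in the ticket is easy to model with a simplified sketch (this `ExpiringCell` is illustrative, not Cassandra's actual ExpiringColumn class): a replica evaluates liveness at its own timestamp while building the slice, and the coordinator re-evaluates a moment later, so a column counted toward the slice limit by the replica can be dropped by the coordinator.

```java
public class ExpirySketch
{
    // Simplified expiring cell: live strictly before expiresAtMillis.
    static final class ExpiringCell
    {
        final long expiresAtMillis;
        ExpiringCell(long expiresAtMillis) { this.expiresAtMillis = expiresAtMillis; }
        boolean isLive(long nowMillis) { return nowMillis < expiresAtMillis; }
    }

    public static void main(String[] args)
    {
        ExpiringCell cell = new ExpiringCell(1000);

        long replicaNow = 999;      // replica counts the cell toward the slice count
        long coordinatorNow = 1001; // coordinator reconciles just after expiry

        if (!cell.isLive(replicaNow))
            throw new AssertionError("replica should see the cell as live");
        if (cell.isLive(coordinatorNow))
            throw new AssertionError("coordinator should see the cell as dead");
        // Net effect: fewer columns returned than requested, even though other
        // live columns matching the predicate were never fetched by the replica.
        System.out.println("ok");
    }
}
```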
[jira] [Commented] (CASSANDRA-5648) Cassandra: Insert of null value not possible with CQL3?
[ https://issues.apache.org/jira/browse/CASSANDRA-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686730#comment-13686730 ] Jonathan Ellis commented on CASSANDRA-5648: --- Ah, right. Can you edit the message to make that clear and call it good, then?
[jira] [Commented] (CASSANDRA-5639) Update CREATE CUSTOM INDEX syntax to match new CREATE TRIGGER syntax
[ https://issues.apache.org/jira/browse/CASSANDRA-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686731#comment-13686731 ] Sylvain Lebresne commented on CASSANDRA-5639: - Code lgtm, though let's add a mention to the news file. And for the textile file, it might be worth a minor bump of the version with a mention in the file changelog section. (Note that I don't think anyone is really using that syntax yet, but it'll be cleaner.) Update CREATE CUSTOM INDEX syntax to match new CREATE TRIGGER syntax Key: CASSANDRA-5639 URL: https://issues.apache.org/jira/browse/CASSANDRA-5639 Project: Cassandra Issue Type: Improvement Affects Versions: 1.2.5 Reporter: Aleksey Yeschenko Assignee: Aleksey Yeschenko Labels: cql3 Fix For: 1.2.6 Attachments: 5639.txt CASSANDRA-5484 introduced CREATE CUSTOM INDEX syntax for custom 2i and CASSANDRA-5576 will add CQL3 support for creating/dropping triggers (CREATE TRIGGER name ON table USING classname). For consistency's sake, CREATE CUSTOM INDEX should be updated to also use the 'USING' keyword, e.g. CREATE CUSTOM INDEX ON table(column) USING classname.
[jira] [Commented] (CASSANDRA-5149) Respect slice count even if column expire mid-request
[ https://issues.apache.org/jira/browse/CASSANDRA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686732#comment-13686732 ] Aleksey Yeschenko commented on CASSANDRA-5149: -- Thanks! bq. In CounterColumn.reconcile (and CounterMutation and ... in fact), we don't support expiring columns in counter tables so it's ok to just use say Long.MIN_VALUE (which a comment why) I know. Same with Column.getString() - it's overloaded by ExpiringColumn anyway. I was debating with myself what to use - Long.MIN_VALUE, 0, or just System.currentTimeMillis() where it doesn't matter, and went with System.currentTimeMillis(). Will change to 0 with a comment in both places.
[jira] [Comment Edited] (CASSANDRA-5149) Respect slice count even if column expire mid-request
[ https://issues.apache.org/jira/browse/CASSANDRA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686732#comment-13686732 ] Aleksey Yeschenko edited comment on CASSANDRA-5149 at 6/18/13 2:03 PM: --- Thanks! bq. In CounterColumn.reconcile (and CounterMutation and ... in fact), we don't support expiring columns in counter tables so it's ok to just use say Long.MIN_VALUE (which a comment why) I know. Same with Column.getString() - it's overloaded by ExpiringColumn anyway. I was debating with myself what to use - Long.MIN_VALUE, 0, or just System.currentTimeMillis() where it doesn't matter, and went with System.currentTimeMillis(). Will change to Long.MIN_VALUE with a comment in both places. was (Author: iamaleksey): Thanks! bq. In CounterColumn.reconcile (and CounterMutation and ... in fact), we don't support expiring columns in counter tables so it's ok to just use say Long.MIN_VALUE (which a comment why) I know. Same with Column.getString() - it's overloaded by ExpiringColumn anyway. I was debating with myself what to use - Long.MIN_VALUE, 0, or just System.currentTimeMillis() where it doesn't matter, and went with System.currentTimeMillis(). Will change to 0 with a comment in both places. Respect slice count even if column expire mid-request - Key: CASSANDRA-5149 URL: https://issues.apache.org/jira/browse/CASSANDRA-5149 Project: Cassandra Issue Type: Bug Affects Versions: 0.7.0 Reporter: Sylvain Lebresne Assignee: Aleksey Yeschenko Fix For: 2.0 This is a follow-up of CASSANDRA-5099. If a column expire just while a slice query is performed, it is possible for replicas to count said column as live but to have the coordinator seeing it as dead when building the final result. The effect that the query might return strictly less columns that the requested slice count even though there is some live columns matching the slice predicate but not returned in the result. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
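The bug above comes from judging column liveness at different moments on different nodes. A minimal, hypothetical Java sketch of the fix's idea (these are illustrative stand-in classes, not Cassandra's actual Column/ExpiringColumn types) is to pin every liveness decision to a single query-wide timestamp:

```java
import java.util.ArrayList;
import java.util.List;

public class SliceLivenessDemo {
    // Hypothetical column with an absolute expiration time in millis
    // (a stand-in, not Cassandra's real column classes).
    static final class Column {
        final String name;
        final long expiresAt;
        Column(String name, long expiresAt) { this.name = name; this.expiresAt = expiresAt; }
        boolean isLive(long now) { return now < expiresAt; }
    }

    // Judging liveness against one fixed query timestamp keeps the
    // replica's count and the coordinator's final filtering consistent,
    // even if wall-clock time advances while the request is in flight.
    static List<Column> slice(List<Column> cols, long queryTimestamp, int count) {
        List<Column> result = new ArrayList<>();
        for (Column c : cols) {
            if (result.size() == count) break;
            if (c.isLive(queryTimestamp)) result.add(c);
        }
        return result;
    }

    public static void main(String[] args) {
        long queryTs = 1000;
        List<Column> cols = List.of(new Column("a", 1001), new Column("b", 2000));
        // "a" expires 1ms after the query starts; with a fixed timestamp
        // every node that evaluates it reaches the same verdict.
        System.out.println(slice(cols, queryTs, 2).size()); // 2
    }
}
```

If each node instead called System.currentTimeMillis() independently, "a" could count as live on a replica but dead on the coordinator, shrinking the result below the requested count.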
[jira] [Commented] (CASSANDRA-5555) Allow sstableloader to handle a larger number of files
[ https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686756#comment-13686756 ] Yuki Morishita commented on CASSANDRA-: --- When you load an SSTable via SSTableLoader, it creates an index summary of 1 entry with an index interval of 1. So the loader will send an estimated number of keys of 1 (or 0) every time, regardless of the actual keys in range; this may create a BF of small size that unintentionally produces a higher false-positive rate. Allow sstableloader to handle a larger number of files -- Key: CASSANDRA- URL: https://issues.apache.org/jira/browse/CASSANDRA- Project: Cassandra Issue Type: Improvement Components: Core, Tools Reporter: Tyler Hobbs Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: -01.txt, -02.txt, CASSANDRA-.txt, CASSANDRA-.txt With the default heap size, sstableloader will OOM when there are roughly 25k files in the directory to load. It's easy to reach this number of files in a single LCS column family. By avoiding creating all SSTableReaders up front in SSTableLoader, we should be able to increase the number of files that sstableloader can handle considerably. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
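Yuki's point can be quantified with standard Bloom filter math (an illustrative sketch only, not Cassandra's actual sizing code): a filter sized for a key estimate of 1 gets only a handful of bits, so once the real keys are inserted its false-positive rate approaches 1.

```java
public class BloomSizingDemo {
    // Standard Bloom filter sizing: for a target false-positive rate p and
    // n keys, the required bits are m = -n * ln(p) / (ln 2)^2.
    static long bitsFor(long estimatedKeys, double targetFp) {
        return Math.round(-estimatedKeys * Math.log(targetFp) / (Math.log(2) * Math.log(2)));
    }

    // Realized false-positive rate with k hash functions, m bits, and
    // n actual keys: p ~= (1 - e^{-kn/m})^k.
    static double realizedFp(long bits, long actualKeys, int hashes) {
        return Math.pow(1 - Math.exp(-(double) hashes * actualKeys / bits), hashes);
    }

    public static void main(String[] args) {
        // Sized from an estimate of 1 key (the SSTableLoader case above):
        long tiny = bitsFor(1, 0.01);
        System.out.println("bits when sized for 1 key: " + tiny); // 10
        // Fill that tiny filter with 100k real keys: fp rate is ~1.0.
        System.out.println(realizedFp(tiny, 100_000, 7) > 0.99);  // true
        // Correctly sized for 100k keys, the fp rate stays near the target.
        long right = bitsFor(100_000, 0.01);
        System.out.println(realizedFp(right, 100_000, 7) < 0.02); // true
    }
}
```

In other words, a bad key-count estimate does not just waste memory in one direction; underestimating silently degrades reads on every lookup that hits the filter.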
[jira] [Commented] (CASSANDRA-4495) Don't tie client side use of AbstractType to JDBC
[ https://issues.apache.org/jira/browse/CASSANDRA-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686761#comment-13686761 ] Sylvain Lebresne commented on CASSANDRA-4495: - Haven't really looked at the details of the patch, but for what it's worth, I've somehow never been a fan of the compose/decompose terminology. I'd prefer, say, encode/decode or serialize/deserialize. And BooleanCodec or BooleanSerializer sounds better to my ear than BooleanComposer. But do feel free to discard that opinion if it's just me being French and if composer sounds perfectly fine to you guys. Don't tie client side use of AbstractType to JDBC - Key: CASSANDRA-4495 URL: https://issues.apache.org/jira/browse/CASSANDRA-4495 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Carl Yeksigian Priority: Minor Fix For: 2.0 Attachments: 4495.patch, 4495-v2.patch We currently expose the AbstractType to java clients that want to reuse them through the cql.jdbc.* classes. I think this shouldn't be tied to the JDBC standard. JDBC was made for SQL DBs, which Cassandra is not (CQL is not SQL and never will be). Typically, there is a fair amount of the JDBC standard that cannot be implemented with C*, and there are a number of C* specificities that are not in JDBC (typically the set and map collections). So I propose to extract simple type classes with just a compose and a decompose method (but without ties to jdbc, which would remove all the jdbc-specific methods those types have) for the purpose of exporting that in a separate jar for clients (we could put that in an org.apache.cassandra.type package for instance). We could then deprecate the jdbc classes on basically the same schedule as CQL2. Let me note that this is *not* saying there shouldn't be a JDBC driver for Cassandra. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5619) CAS UPDATE for a lost race: save round trip by returning column values
[ https://issues.apache.org/jira/browse/CASSANDRA-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-5619: Fix Version/s: 2.0 CAS UPDATE for a lost race: save round trip by returning column values -- Key: CASSANDRA-5619 URL: https://issues.apache.org/jira/browse/CASSANDRA-5619 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 2.0 Reporter: Blair Zajac Assignee: Sylvain Lebresne Fix For: 2.0 Looking at the new CAS CQL3 support examples [1], if one lost a race for an UPDATE, to save a round trip to get the current values to decide if you need to perform your work, could the columns that were used in the IF clause also be returned to the caller? Maybe the column values set in the SET part could also be returned. I don't know if this is generally useful though. In the case of creating a new user account with a given username which is the partition key, if one lost the race to another person creating an account with the same username, it doesn't matter to the loser what the column values are, just that they lost. I'm new to Cassandra, so maybe there are other use cases, such as doing an incremental amount of work on a row. In pure Java projects I've done while loops around AtomicReference#compareAndSet() until the work was done on the referenced object, to handle multiple threads each making forward progress in updating the referenced object. [1] https://github.com/riptano/cassandra-dtest/blob/master/cql_tests.py#L3044 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
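The AtomicReference pattern the reporter mentions looks like this in plain Java. On a failed compareAndSet the caller can re-read the current value and retry locally; the ticket asks for the analogous courtesy from Cassandra's CAS, where a re-read costs a network round trip:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasRetryDemo {
    // Classic compare-and-set retry loop: read the current value, compute
    // the new one, and retry if another thread won the race in between.
    static void append(AtomicReference<String> ref, String suffix) {
        while (true) {
            String current = ref.get();
            String updated = current + suffix;
            if (ref.compareAndSet(current, updated))
                return; // we won the race
            // Lost the race: loop and retry against the new current value.
            // In-process this re-read is free; over the network it is the
            // extra round trip this ticket wants to save.
        }
    }

    public static void main(String[] args) {
        AtomicReference<String> ref = new AtomicReference<>("a");
        append(ref, "b");
        System.out.println(ref.get()); // ab
    }
}
```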
[jira] [Assigned] (CASSANDRA-5619) CAS UPDATE for a lost race: save round trip by returning column values
[ https://issues.apache.org/jira/browse/CASSANDRA-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne reassigned CASSANDRA-5619: --- Assignee: Sylvain Lebresne CAS UPDATE for a lost race: save round trip by returning column values -- Key: CASSANDRA-5619 URL: https://issues.apache.org/jira/browse/CASSANDRA-5619 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 2.0 Reporter: Blair Zajac Assignee: Sylvain Lebresne Looking at the new CAS CQL3 support examples [1], if one lost a race for an UPDATE, to save a round trip to get the current values to decide if you need to perform your work, could the columns that were used in the IF clause also be returned to the caller? Maybe the column values set in the SET part could also be returned. I don't know if this is generally useful though. In the case of creating a new user account with a given username which is the partition key, if one lost the race to another person creating an account with the same username, it doesn't matter to the loser what the column values are, just that they lost. I'm new to Cassandra, so maybe there are other use cases, such as doing an incremental amount of work on a row. In pure Java projects I've done while loops around AtomicReference#compareAndSet() until the work was done on the referenced object, to handle multiple threads each making forward progress in updating the referenced object. [1] https://github.com/riptano/cassandra-dtest/blob/master/cql_tests.py#L3044 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5555) Allow sstableloader to handle a larger number of files
[ https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686768#comment-13686768 ] Jonathan Ellis commented on CASSANDRA-: --- Ah, right. Related to CASSANDRA-5542. I think we can simplify a bit though. Leave out the option, and have sstableloader load the summary if present, but just discard it after load instead of keeping it memory resident. May also be simpler to go through the "generate summary if it doesn't exist" path than add separate "count keys if we don't have a summary" code. Personally I'd prefer to just make the summary required, but we probably can't do that this late in 1.2. Is this messy enough that we should just revert and do this in trunk? On the bright side we could make the summary required. :) Allow sstableloader to handle a larger number of files -- Key: CASSANDRA- URL: https://issues.apache.org/jira/browse/CASSANDRA- Project: Cassandra Issue Type: Improvement Components: Core, Tools Reporter: Tyler Hobbs Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: -01.txt, -02.txt, CASSANDRA-.txt, CASSANDRA-.txt With the default heap size, sstableloader will OOM when there are roughly 25k files in the directory to load. It's easy to reach this number of files in a single LCS column family. By avoiding creating all SSTableReaders up front in SSTableLoader, we should be able to increase the number of files that sstableloader can handle considerably. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
git commit: Updated CREATE CUSTOM INDEX syntax
Updated Branches: refs/heads/cassandra-1.2 e5c34d7c2 - 2397bc8c3 Updated CREATE CUSTOM INDEX syntax patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-5639 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2397bc8c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2397bc8c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2397bc8c Branch: refs/heads/cassandra-1.2 Commit: 2397bc8c334142ddaa6ef8e34f18bbecffba4f4f Parents: e5c34d7 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 18 17:20:59 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 18 17:20:59 2013 +0300 -- CHANGES.txt | 1 + NEWS.txt| 3 + bin/cqlsh | 2 +- doc/cql3/CQL.textile| 10 ++- pylib/cqlshlib/cql3handling.py | 2 +- src/java/org/apache/cassandra/cql3/Cql.g| 8 +-- .../apache/cassandra/cql3/IndexPropDefs.java| 68 .../apache/cassandra/cql3/QueryProcessor.java | 2 +- .../cql3/statements/CreateIndexStatement.java | 23 +++ 9 files changed, 26 insertions(+), 93 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2397bc8c/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 65f66de..0d42c13 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -27,6 +27,7 @@ * don't throw away initial causes exceptions for internode encryption issues (CASSANDRA-5644) * Fix message spelling errors for cql select statements (CASSANDRA-5647) * Suppress custom exceptions thru jmx (CASSANDRA-5652) + * Update CREATE CUSTOM INDEX syntax (CASSANDRA-5639) Merged from 1.1: * Remove buggy thrift max message length option (CASSANDRA-5529) * Fix NPE in Pig's widerow mode (CASSANDRA-5488) http://git-wip-us.apache.org/repos/asf/cassandra/blob/2397bc8c/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index 099e366..5cb06da 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -17,6 +17,9 @@ Upgrading proportional to the number of nodes in the cluster (see 
https://issues.apache.org/jira/browse/CASSANDRA-5272). +- CQL3 syntax for CREATE CUSTOM INDEX has been updated. See CQL3 + documentation for details. + 1.2.5 = http://git-wip-us.apache.org/repos/asf/cassandra/blob/2397bc8c/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index dd4c00d..70b70f5 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -32,7 +32,7 @@ exit 1 from __future__ import with_statement description = CQL Shell for Apache Cassandra -version = 3.1.1 +version = 3.1.2 from StringIO import StringIO from itertools import groupby http://git-wip-us.apache.org/repos/asf/cassandra/blob/2397bc8c/doc/cql3/CQL.textile -- diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile index 13b6f70..5fa36ab 100644 --- a/doc/cql3/CQL.textile +++ b/doc/cql3/CQL.textile @@ -1,6 +1,6 @@ link rel=StyleSheet href=CQL.css type=text/css media=screen -h1. Cassandra Query Language (CQL) v3.0.3 +h1. Cassandra Query Language (CQL) v3.0.4 span id=tableOfContents @@ -392,14 +392,14 @@ h3(#createIndexStmt). CREATE INDEX __Syntax:__ bc(syntax). create-index-stmt ::= CREATE ( CUSTOM )? INDEX identifier? ON tablename '(' identifier ')' -( WITH properties )? +( USING string )? __Sample:__ bc(sample). CREATE INDEX userIndex ON NerdMovies (user); CREATE INDEX ON Mutants (abilityId); -CREATE CUSTOM INDEX ON users (email) WITH options = {'class': 'path.to.the.IndexClass'}; +CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass'; The @CREATE INDEX@ statement is used to create a new (automatic) secondary index for a given (existing) column in a given table. A name for the index itself can be specified before the @ON@ keyword, if desired. If data already exists for the column, it will be indexed during the execution of this statement. After the index is created, new data for the column is indexed automatically at insertion time. @@ -1048,6 +1048,10 @@ h2(#changes). Changes The following describes the addition/changes brought for each version of CQL. +h3. 
3.0.4 + +* Updated the syntax for custom secondary indexes:#createIndexStmt. + h3. 3.0.3 * Support for custom secondary indexes:#createIndexStmt has been added. http://git-wip-us.apache.org/repos/asf/cassandra/blob/2397bc8c/pylib/cqlshlib/cql3handling.py
[jira] [Commented] (CASSANDRA-5624) Memory leak in SerializingCache
[ https://issues.apache.org/jira/browse/CASSANDRA-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686772#comment-13686772 ] J.B. Langston commented on CASSANDRA-5624: -- I'm not sure what would qualify as weird, but here's what I know about what they're doing... In cassandra.yaml, they have 'row_cache_size_in_mb: 512', and row cache is enabled on one CF (PathInfo). The rest have key cache only. Here's the show keyspaces output: Keyspace: File: Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy Durable Writes: true Options: [CDC:1, BELL:1, RDC:1] Column Families: ColumnFamily: FileData Key Validation Class: org.apache.cassandra.db.marshal.BytesType Default column value validator: org.apache.cassandra.db.marshal.BytesType Columns sorted by: org.apache.cassandra.db.marshal.AsciiType GC grace seconds: 18000 Compaction min/max thresholds: 4/32 Read repair chance: 1.0 DC Local Read repair chance: 0.0 Replicate on write: true Caching: KEYS_ONLY Bloom Filter FP chance: default Built indexes: [] Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy Compression Options: sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor ColumnFamily: PathInfo Key Validation Class: org.apache.cassandra.db.marshal.BytesType Default column value validator: org.apache.cassandra.db.marshal.BytesType Columns sorted by: org.apache.cassandra.db.marshal.BytesType GC grace seconds: 18000 Compaction min/max thresholds: 4/32 Read repair chance: 1.0 DC Local Read repair chance: 0.0 Replicate on write: true Caching: ALL Bloom Filter FP chance: default Built indexes: [] Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy Compression Options: sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor And the nodetool cfstats output: Keyspace: File Read Count: 53204 Read Latency: 70.72740728140742 ms. 
Write Count: 57783 Write Latency: 0.20002642645760862 ms. Pending Tasks: 0 Column Family: PathInfo SSTable count: 3 Space used (live): 10946052 Space used (total): 10946052 Number of Keys (estimate): 42496 Memtable Columns Count: 2116 Memtable Data Size: 467208 Memtable Switch Count: 19 Read Count: 46871 Read Latency: 62.016 ms. Write Count: 51921 Write Latency: 0.301 ms. Pending Tasks: 0 Bloom Filter False Positives: 1 Bloom Filter False Ratio: 0.0 Bloom Filter Space Used: 83472 Compacted row minimum size: 125 Compacted row maximum size: 10090808 Compacted row mean size: 696 Column Family: FileData SSTable count: 5 Space used (live): 186053241 Space used (total): 186053241 Number of Keys (estimate): 108544 Memtable Columns Count: 114 Memtable Data Size: 117337 Memtable Switch Count: 19 Read Count: 6334 Read Latency: 0.709 ms. Write Count: 5862 Write Latency: 0.022 ms. Pending Tasks: 0 Bloom Filter False Positives: 0 Bloom Filter False Ratio: 0.0 Bloom Filter Space Used: 211056 Compacted row minimum size: 104 Compacted row maximum size: 88148 Compacted row mean size: 3793 Memory leak in SerializingCache --- Key: CASSANDRA-5624 URL: https://issues.apache.org/jira/browse/CASSANDRA-5624 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.1.1 Reporter: Jonathan Ellis Assignee: Ryan McGuire A customer reported a memory leak when off-heap row cache is enabled. I gave them a patch against 1.1.9 to troubleshoot (https://github.com/jbellis/cassandra/commits/row-cache-finalizer). This confirms that row cache is responsible. Here is a sample of the log: {noformat} DEBUG [Finalizer] 2013-06-08 06:49:58,656 FreeableMemory.java (line 69) Unreachable memory still has nonzero refcount 1 DEBUG [Finalizer] 2013-06-08 06:49:58,656 FreeableMemory.java (line 71) Unreachable memory 140337996747792 has not been freed (will free now) DEBUG [Finalizer] 2013-06-08 06:49:58,656 FreeableMemory.java (line 69) Unreachable memory still has
[jira] [Comment Edited] (CASSANDRA-5624) Memory leak in SerializingCache
[ https://issues.apache.org/jira/browse/CASSANDRA-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686772#comment-13686772 ] J.B. Langston edited comment on CASSANDRA-5624 at 6/18/13 2:27 PM: --- I'm not sure what would qualify as weird, but here's what I know about what they're doing... In cassandra.yaml, they have 'row_cache_size_in_mb: 512', and row cache is enabled on one CF (PathInfo). The rest have key cache only. Here's the show keyspaces output: Keyspace: File: Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy Durable Writes: true Options: [CDC:1, BELL:1, RDC:1] Column Families: ColumnFamily: FileData Key Validation Class: org.apache.cassandra.db.marshal.BytesType Default column value validator: org.apache.cassandra.db.marshal.BytesType Columns sorted by: org.apache.cassandra.db.marshal.AsciiType GC grace seconds: 18000 Compaction min/max thresholds: 4/32 Read repair chance: 1.0 DC Local Read repair chance: 0.0 Replicate on write: true Caching: KEYS_ONLY Bloom Filter FP chance: default Built indexes: [] Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy Compression Options: sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor ColumnFamily: PathInfo Key Validation Class: org.apache.cassandra.db.marshal.BytesType Default column value validator: org.apache.cassandra.db.marshal.BytesType Columns sorted by: org.apache.cassandra.db.marshal.BytesType GC grace seconds: 18000 Compaction min/max thresholds: 4/32 Read repair chance: 1.0 DC Local Read repair chance: 0.0 Replicate on write: true Caching: ALL Bloom Filter FP chance: default Built indexes: [] Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy Compression Options: sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor And the nodetool cfstats output: Keyspace: File Read Count: 53204 Read Latency: 70.72740728140742 ms. 
Write Count: 57783 Write Latency: 0.20002642645760862 ms. Pending Tasks: 0 Column Family: PathInfo SSTable count: 3 Space used (live): 10946052 Space used (total): 10946052 Number of Keys (estimate): 42496 Memtable Columns Count: 2116 Memtable Data Size: 467208 Memtable Switch Count: 19 Read Count: 46871 Read Latency: 62.016 ms. Write Count: 51921 Write Latency: 0.301 ms. Pending Tasks: 0 Bloom Filter False Positives: 1 Bloom Filter False Ratio: 0.0 Bloom Filter Space Used: 83472 Compacted row minimum size: 125 Compacted row maximum size: 10090808 Compacted row mean size: 696 Column Family: FileData SSTable count: 5 Space used (live): 186053241 Space used (total): 186053241 Number of Keys (estimate): 108544 Memtable Columns Count: 114 Memtable Data Size: 117337 Memtable Switch Count: 19 Read Count: 6334 Read Latency: 0.709 ms. Write Count: 5862 Write Latency: 0.022 ms. Pending Tasks: 0 Bloom Filter False Positives: 0 Bloom Filter False Ratio: 0.0 Bloom Filter Space Used: 211056 Compacted row minimum size: 104 Compacted row maximum size: 88148 Compacted row mean size: 3793 Let me know if any other information would be helpful and I will provide it. was (Author: jblangs...@datastax.com): I'm not sure what would qualify as weird, but here's what I know about what they're doing... In cassandra.yaml, they have 'row_cache_size_in_mb: 512', and row cache is enabled on one CF (PathInfo). The rest have key cache only. Here's the show keyspaces output: Keyspace: File: Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy Durable Writes: true Options: [CDC:1, BELL:1, RDC:1] Column Families: ColumnFamily: FileData Key Validation Class: org.apache.cassandra.db.marshal.BytesType Default column value validator: org.apache.cassandra.db.marshal.BytesType Columns sorted by: org.apache.cassandra.db.marshal.AsciiType GC grace seconds: 18000 Compaction min/max thresholds: 4/32 Read repair chance: 1.0 DC Local Read repair chance: 0.0 Replicate on
[2/2] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Conflicts: bin/cqlsh doc/cql3/CQL.textile src/java/org/apache/cassandra/cql3/QueryProcessor.java src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7edd0e0c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7edd0e0c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7edd0e0c Branch: refs/heads/trunk Commit: 7edd0e0c7e05e87a1a48b81a6add08a6e65d73f1 Parents: 26018be 2397bc8 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 18 17:29:23 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 18 17:29:23 2013 +0300 -- CHANGES.txt | 1 + NEWS.txt| 3 + doc/cql3/CQL.textile| 8 ++- pylib/cqlshlib/cql3handling.py | 2 +- src/java/org/apache/cassandra/cql3/Cql.g| 8 +-- .../apache/cassandra/cql3/IndexPropDefs.java| 68 .../cql3/statements/CreateIndexStatement.java | 30 - 7 files changed, 26 insertions(+), 94 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7edd0e0c/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7edd0e0c/NEWS.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7edd0e0c/doc/cql3/CQL.textile -- diff --cc doc/cql3/CQL.textile index 169cf2b,5fa36ab..1648cc8 --- a/doc/cql3/CQL.textile +++ b/doc/cql3/CQL.textile @@@ -1056,11 -1048,10 +1056,15 @@@ h2(#changes). Change The following describes the addition/changes brought for each version of CQL. +h3. 3.1.0 + +* ALTER TABLE:#alterTableStmt @DROP@ option has been reenabled for CQL3 tables and has new semantics now: the space formerly used by dropped columns will now be eventually reclaimed (post-compaction). You should not readd previously dropped columns unless you use timestamps with microsecond precision (see CASSANDRA-3919:https://issues.apache.org/jira/browse/CASSANDRA-3919 for more details). 
+* SELECT statement now supports aliases in select clause. Aliases in WHERE and ORDER BY clauses are not supported. See the section on select#selectStmt for details. + + h3. 3.0.4 + + * Updated the syntax for custom secondary indexes:#createIndexStmt. + h3. 3.0.3 * Support for custom secondary indexes:#createIndexStmt has been added. http://git-wip-us.apache.org/repos/asf/cassandra/blob/7edd0e0c/pylib/cqlshlib/cql3handling.py -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7edd0e0c/src/java/org/apache/cassandra/cql3/Cql.g -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7edd0e0c/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java -- diff --cc src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java index 4b61ab3,b79a255..12f762f --- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java @@@ -26,8 -28,11 +26,9 @@@ import org.apache.cassandra.auth.Permis import org.apache.cassandra.config.CFMetaData; import org.apache.cassandra.config.ColumnDefinition; import org.apache.cassandra.config.Schema; + import org.apache.cassandra.db.index.SecondaryIndex; import org.apache.cassandra.exceptions.*; import org.apache.cassandra.cql3.*; -import org.apache.cassandra.db.index.composites.CompositesIndex; -import org.apache.cassandra.db.marshal.CompositeType; import org.apache.cassandra.service.ClientState; import org.apache.cassandra.service.MigrationManager; import org.apache.cassandra.thrift.IndexType; @@@ -62,25 -67,32 +63,29 @@@ public class CreateIndexStatement exten public void validate(ClientState state) throws RequestValidationException { CFMetaData cfm = ThriftValidation.validateColumnFamily(keyspace(), columnFamily()); -CFDefinition.Name name = cfm.getCfDef().get(columnName); +ColumnDefinition cd = cfm.getColumnDefinition(columnName.key); -if (name == null) +if (cd == null) throw new InvalidRequestException(No column 
definition found for column + columnName); -switch (name.kind) -{ -case KEY_ALIAS: -case COLUMN_ALIAS: -
[jira] [Commented] (CASSANDRA-5624) Memory leak in SerializingCache
[ https://issues.apache.org/jira/browse/CASSANDRA-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686794#comment-13686794 ] Jonathan Ellis commented on CASSANDRA-5624: --- bq. i am looking at notifyListener() in CLHM 1.3 It looks to me like CLHM is trying to take advantage of the threads using it, to spread out the work of eviction notifications. So remove (and get!) will loop through pendingNotifications and notify listeners, but only evict adds events to the queue. Memory leak in SerializingCache --- Key: CASSANDRA-5624 URL: https://issues.apache.org/jira/browse/CASSANDRA-5624 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.1.1 Reporter: Jonathan Ellis Assignee: Ryan McGuire A customer reported a memory leak when off-heap row cache is enabled. I gave them a patch against 1.1.9 to troubleshoot (https://github.com/jbellis/cassandra/commits/row-cache-finalizer). This confirms that row cache is responsible. Here is a sample of the log: {noformat} DEBUG [Finalizer] 2013-06-08 06:49:58,656 FreeableMemory.java (line 69) Unreachable memory still has nonzero refcount 1 DEBUG [Finalizer] 2013-06-08 06:49:58,656 FreeableMemory.java (line 71) Unreachable memory 140337996747792 has not been freed (will free now) DEBUG [Finalizer] 2013-06-08 06:49:58,656 FreeableMemory.java (line 69) Unreachable memory still has nonzero refcount 1 DEBUG [Finalizer] 2013-06-08 06:49:58,656 FreeableMemory.java (line 71) Unreachable memory 140337989287984 has not been freed (will free now) {noformat} That is, memory is not being freed because we never got to zero references. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
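Jonathan's reading of CLHM can be sketched generically (hypothetical code, not CLHM's actual implementation): eviction only enqueues a notification, and the threads calling get/remove drain the queue, spreading listener work across callers instead of doing it all on the evicting path.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

public class DrainOnAccessDemo<K> {
    // Pending eviction notifications, drained opportunistically by
    // whichever thread touches the cache next.
    private final ConcurrentLinkedQueue<K> pending = new ConcurrentLinkedQueue<>();
    private final Consumer<K> listener;

    DrainOnAccessDemo(Consumer<K> listener) { this.listener = listener; }

    // Eviction is cheap: just enqueue, never call the listener directly.
    void evict(K key) { pending.add(key); }

    // Accessors drain the queue before doing their own work.
    void get(K key) { drain(); /* ... actual lookup elided ... */ }

    private void drain() {
        K k;
        while ((k = pending.poll()) != null)
            listener.accept(k);
    }

    public static void main(String[] args) {
        java.util.List<String> notified = new java.util.ArrayList<>();
        DrainOnAccessDemo<String> cache = new DrainOnAccessDemo<>(notified::add);
        cache.evict("a");                    // no listener call yet
        System.out.println(notified.size()); // 0
        cache.get("b");                      // caller drains the queue
        System.out.println(notified);        // [a]
    }
}
```

The leak-relevant consequence: if nothing ever calls get/remove again, queued notifications (and whatever off-heap memory they guard) can sit unprocessed.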
[jira] [Created] (CASSANDRA-5655) Equals method in PermissionDetails causes StackOverflowException
Sam Tunnicliffe created CASSANDRA-5655: -- Summary: Equals method in PermissionDetails causes StackOverflowException Key: CASSANDRA-5655 URL: https://issues.apache.org/jira/browse/CASSANDRA-5655 Project: Cassandra Issue Type: Bug Reporter: Sam Tunnicliffe Priority: Minor It simply delegates to Guava's Objects.equal, which itself ends up calling back to the original caller's equals after performing some basic checks. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
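The recursion is easy to reproduce: Guava's Objects.equal(a, b) falls back to a.equals(b) after reference and null checks, so an equals() that passes the whole objects back in recurses until the stack overflows. A self-contained sketch, using a stand-in for the Guava helper and hypothetical classes (not Cassandra's actual PermissionDetails):

```java
public class PermissionDetailsDemo {
    // Stand-in for Guava's Objects.equal: after the reference/null
    // checks it falls back to a.equals(b).
    static boolean objectsEqual(Object a, Object b) {
        return a == b || (a != null && a.equals(b));
    }

    static final class Broken {
        final String resource;
        Broken(String resource) { this.resource = resource; }
        @Override
        public boolean equals(Object o) {
            // BUG: compares the whole objects, so objectsEqual calls
            // this.equals(o) again -> infinite recursion.
            return o instanceof Broken && objectsEqual(this, o);
        }
    }

    static final class Fixed {
        final String resource;
        Fixed(String resource) { this.resource = resource; }
        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Fixed)) return false;
            // Fix: delegate field-by-field, never the objects themselves.
            return objectsEqual(resource, ((Fixed) o).resource);
        }
    }

    public static void main(String[] args) {
        System.out.println(new Fixed("data").equals(new Fixed("data"))); // true
        try {
            new Broken("data").equals(new Broken("data"));
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError"); // the bug in this ticket
        }
    }
}
```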
[jira] [Assigned] (CASSANDRA-5655) Equals method in PermissionDetails causes StackOverflowException
[ https://issues.apache.org/jira/browse/CASSANDRA-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe reassigned CASSANDRA-5655: -- Assignee: Sam Tunnicliffe Equals method in PermissionDetails causes StackOverflowException Key: CASSANDRA-5655 URL: https://issues.apache.org/jira/browse/CASSANDRA-5655 Project: Cassandra Issue Type: Bug Reporter: Sam Tunnicliffe Assignee: Sam Tunnicliffe Priority: Minor It simply delegates to Guava's Objects.equal, which itself ends up calling back to the original caller's equals after performing some basic checks. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5655) Equals method in PermissionDetails causes StackOverflowException
[ https://issues.apache.org/jira/browse/CASSANDRA-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-5655: --- Attachment: 5655.txt patch against 1.2 branch attached Equals method in PermissionDetails causes StackOverflowException Key: CASSANDRA-5655 URL: https://issues.apache.org/jira/browse/CASSANDRA-5655 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.5 Reporter: Sam Tunnicliffe Assignee: Sam Tunnicliffe Priority: Minor Attachments: 5655.txt It simply delegates to Guava's Objects.equal, which itself ends up calling back to the original caller's equals after performing some basic checks. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
git commit: Clarify pk-only CQL3 INSERT exception for COMPACT STORAGE tables
Updated Branches: refs/heads/cassandra-1.2 2397bc8c3 - df063449a Clarify pk-only CQL3 INSERT exception for COMPACT STORAGE tables Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df063449 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df063449 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df063449 Branch: refs/heads/cassandra-1.2 Commit: df063449a88655018a94aabf494b3e604f1e4cd9 Parents: 2397bc8 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 18 17:53:44 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 18 17:53:44 2013 +0300 -- .../org/apache/cassandra/cql3/statements/UpdateStatement.java | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/df063449/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java index 8a5595a..5f37e15 100644 --- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java @@ -244,10 +244,9 @@ public class UpdateStatement extends ModificationStatement } else { -// compact means we don't have a row marker, so don't accept to set only the PK (Note: we -// could accept it and use an empty value!?) +// compact means we don't have a row marker, so don't accept to set only the PK. See CASSANDRA-5648. if (processedColumns.isEmpty()) -throw new InvalidRequestException(String.format(Missing mandatory column %s, cfDef.value)); +throw new InvalidRequestException(String.format(Column %s is mandatory for this COMPACT STORAGE table, cfDef.value)); for (Operation op : processedColumns) op.execute(key, cf, builder.copy(), params);
[1/2] git commit: Clarify pk-only CQL3 INSERT exception for COMPACT STORAGE tables
Updated Branches: refs/heads/trunk 7edd0e0c7 -> ee0f495f5

Clarify pk-only CQL3 INSERT exception for COMPACT STORAGE tables

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df063449
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df063449
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df063449

Branch: refs/heads/trunk
Commit: df063449a88655018a94aabf494b3e604f1e4cd9
Parents: 2397bc8
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 18 17:53:44 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 18 17:53:44 2013 +0300
--
 .../org/apache/cassandra/cql3/statements/UpdateStatement.java | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/df063449/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index 8a5595a..5f37e15 100644
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@ -244,10 +244,9 @@ public class UpdateStatement extends ModificationStatement
         }
         else
         {
-            // compact means we don't have a row marker, so don't accept to set only the PK (Note: we
-            // could accept it and use an empty value!?)
+            // compact means we don't have a row marker, so don't accept to set only the PK. See CASSANDRA-5648.
             if (processedColumns.isEmpty())
-                throw new InvalidRequestException(String.format("Missing mandatory column %s", cfDef.value));
+                throw new InvalidRequestException(String.format("Column %s is mandatory for this COMPACT STORAGE table", cfDef.value));

             for (Operation op : processedColumns)
                 op.execute(key, cf, builder.copy(), params);
[2/2] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk

Conflicts:
	src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee0f495f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee0f495f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee0f495f

Branch: refs/heads/trunk
Commit: ee0f495f52e9a26d0795ff1117a54949496878e1
Parents: 7edd0e0 df06344
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 18 17:59:52 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 18 17:59:52 2013 +0300
--
 .../org/apache/cassandra/cql3/statements/UpdateStatement.java | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee0f495f/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index cff4105,5f37e15..3cb58ea
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@@ -84,13 -244,12 +84,12 @@@ public class UpdateStatement extends Mo
          }
          else
          {
-             // compact means we don't have a row marker, so don't accept to set only the PK (Note: we
-             // could accept it and use an empty value!?)
+             // compact means we don't have a row marker, so don't accept to set only the PK. See CASSANDRA-5648.
 -            if (processedColumns.isEmpty())
 +            if (updates.isEmpty())
-                 throw new InvalidRequestException(String.format("Missing mandatory column %s", cfDef.value));
+                 throw new InvalidRequestException(String.format("Column %s is mandatory for this COMPACT STORAGE table", cfDef.value));

 -            for (Operation op : processedColumns)
 -                op.execute(key, cf, builder.copy(), params);
 +            for (Operation update : updates)
 +                update.execute(key, cf, builder.copy(), params);
          }
      }
      else
[jira] [Resolved] (CASSANDRA-5648) Cassandra: Insert of null value not possible with CQL3?
[ https://issues.apache.org/jira/browse/CASSANDRA-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko resolved CASSANDRA-5648. -- Resolution: Not A Problem

Cassandra: Insert of null value not possible with CQL3? --- Key: CASSANDRA-5648 URL: https://issues.apache.org/jira/browse/CASSANDRA-5648 Project: Cassandra Issue Type: Bug Components: API Affects Versions: 1.2.0 Reporter: Tobias Schlottke Assignee: Aleksey Yeschenko Fix For: 1.2.6

Hi there, I'm trying to migrate a project from thrift to cql3/java driver and I'm experiencing a strange problem. Schema:

{code}
CREATE TABLE foo (
  key ascii,
  column1 ascii,
  foo ascii,
  PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE;
{code}

The table just consists of a primary key and a value; sometimes all the information lies in the key, though, so the value is not needed. Through the thrift interface, it works fine to leave out foo. Executing this query:

{code}
INSERT INTO foo(key, column1) VALUES ('test', 'test2');
{code}

fails with "Bad Request: Missing mandatory column foo", though. Explicitly inserting null does not store the column, and deletes any old value that was inserted through thrift. Inserting an empty String works fine, but this seems a bit bloated. Is this intended to (not) work this way? Best, Tobias
[jira] [Updated] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hayato Shimizu updated CASSANDRA-5632: -- Attachment: fix_patch_bug.log cassandra-topology.properties

Cross-DC bandwidth-saving broken Key: CASSANDRA-5632 URL: https://issues.apache.org/jira/browse/CASSANDRA-5632 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: 5632.txt, cassandra-topology.properties, fix_patch_bug.log

We group messages by destination as follows to avoid sending multiple messages to a remote datacenter:

{code}
// Multimap that holds onto all the messages and addresses meant for a specific datacenter
Map<String, Multimap<Message, InetAddress>> dcMessages
{code}

When we cleaned out the MessageProducer stuff for 2.0, this code

{code}
Multimap<Message, InetAddress> messages = dcMessages.get(dc);
...
messages.put(producer.getMessage(Gossiper.instance.getVersion(destination)), destination);
{code}

turned into

{code}
Multimap<MessageOut, InetAddress> messages = dcMessages.get(dc);
...
messages.put(rm.createMessage(), destination);
{code}

Thus, we weren't actually grouping anything anymore -- each destination replica was stored under a separate Message key, unlike under the old CachingMessageProducer.
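The grouping the ticket describes can be sketched with plain JDK maps. This is a hypothetical simplification (strings for datacenters and addresses, a bare Object for the message, a nested Map in place of Guava's Multimap), not Cassandra's actual types:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Main {
    // One shared message instance keyed into a per-DC map: all replicas in a
    // remote DC land under the same key, so one copy crosses the WAN and a
    // forwarding node fans it out locally. The regression was allocating a
    // fresh message per destination (rm.createMessage() inside the loop),
    // which gave every replica its own key and silently defeated the grouping.
    static Map<String, Map<Object, List<String>>> groupByDc(
            Map<String, List<String>> replicasByDc, Object sharedMessage) {
        Map<String, Map<Object, List<String>>> dcMessages = new HashMap<>();
        for (Map.Entry<String, List<String>> e : replicasByDc.entrySet()) {
            Map<Object, List<String>> messages =
                    dcMessages.computeIfAbsent(e.getKey(), dc -> new HashMap<>());
            messages.computeIfAbsent(sharedMessage, m -> new ArrayList<>())
                    .addAll(e.getValue());
        }
        return dcMessages;
    }

    public static void main(String[] args) {
        Object message = new Object();
        Map<String, List<String>> replicas = new HashMap<>();
        replicas.put("DC2", Arrays.asList("192.168.56.54", "192.168.56.55", "192.168.56.56"));
        Map<Object, List<String>> dc2 = groupByDc(replicas, message).get("DC2");
        // One message key for the whole remote DC, three destinations under it.
        System.out.println(dc2.size() + " message -> " + dc2.get(message).size() + " replicas");
    }
}
```

With the shared key, dc2 holds a single entry mapping the message to all three replica addresses; with a fresh key per destination it would hold three singleton entries, one WAN message each.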
[jira] [Updated] (CASSANDRA-5655) Equals method in PermissionDetails causes StackOverflowException
[ https://issues.apache.org/jira/browse/CASSANDRA-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-5655: - Reviewer: iamaleksey
[jira] [Commented] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686809#comment-13686809 ] Hayato Shimizu commented on CASSANDRA-5632: --- The patch fixes the bandwidth-saving issue. However, there seem to be two regressions introduced. 1. The secondary-DC coordinator node is always the same node, which introduces a bottleneck in the secondary DC. 2. When using cqlsh with EACH_QUORUM/ALL and tracing on, a row insert hits an RPC timeout from a node that is not identifiable in the trace output. Trace output has been attached for a 6-node cluster with a DC1:3, DC2:3 replication factor configuration; the network-topology configuration is also attached for clarity.
[jira] [Created] (CASSANDRA-5656) NPE in SSTableNamesIterator
Michael Bock created CASSANDRA-5656: --- Summary: NPE in SSTableNamesIterator Key: CASSANDRA-5656 URL: https://issues.apache.org/jira/browse/CASSANDRA-5656 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.4 Environment: SUSE Linux Enterprise Server 10, SP3. IBM Java 1.6.0 SR 13 Reporter: Michael Bock

When adding a new node to our cluster we occasionally get the following error in the cassandra system log:

{noformat}
2013-06-18T07:13:18:942|ERROR|ReadStage:30|org.apache.cassandra.service.CassandraDaemon|Exception in thread Thread[ReadStage:30,5,main]
java.lang.NullPointerException
	at java.util.TreeSet.iterator(TreeSet.java:230)
	at org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:163)
	at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:64)
	at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
	at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
	at org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:133)
	at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
	at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
	at org.apache.cassandra.db.Table.getRow(Table.java:347)
	at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
	at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:44)
	at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:908)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:931)
	at java.lang.Thread.run(Thread.java:738)
{noformat}

The same exception then recurs every few milliseconds and the node stops working; calls via the API time out.
[jira] [Comment Edited] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686809#comment-13686809 ] Hayato Shimizu edited comment on CASSANDRA-5632 at 6/18/13 3:17 PM: The patch fixes the bandwidth-saving issue. However, there seem to be two regressions introduced. 1. Secondary-DC coordinator selection by the primary-DC coordinator is not spread equally across all available nodes in the secondary DC. 2. When using cqlsh with EACH_QUORUM/ALL and tracing on, a row insert hits an RPC timeout from a node that is not identifiable in the trace output. Trace output has been attached for a 6-node cluster with a DC1:3, DC2:3 replication factor configuration; the network-topology configuration is also attached for clarity. was (Author: hayato.shimizu): The patch fixes the issue of bandwidth-saving. However, there seems to be two regressive issues being introduced. 1. Secondary DC coordinator node is always the same node. This introduces a bottleneck in the secondary DC. 2. When using cqlsh, with EACH_QUORUM/ALL, with tracing on, on a row insert, RPC timeout occurs from a node that is not verifiable in the trace output. Trace output has been attached for a 6 node cluster, DC1:3, DC2:3 replication factor configuration. network-topology configuration is also attached for clarity.
[jira] [Created] (CASSANDRA-5657) remove deprecated metrics
Jonathan Ellis created CASSANDRA-5657: - Summary: remove deprecated metrics Key: CASSANDRA-5657 URL: https://issues.apache.org/jira/browse/CASSANDRA-5657 Project: Cassandra Issue Type: Task Components: Tools Reporter: Jonathan Ellis Assignee: Yuki Morishita Fix For: 2.0
[jira] [Commented] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686828#comment-13686828 ] Jonathan Ellis commented on CASSANDRA-5632: --- .55 is the forwarding node in DC2. It logs that it applies the mutation and acks it: {noformat} Enqueuing response to /192.168.56.50 | 05:57:33,825 | 192.168.56.55 | 14785 {noformat} But there is no "Processing response from /192.168.56.55" line logged by .50. Hmm.
[jira] [Comment Edited] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686809#comment-13686809 ] Hayato Shimizu edited comment on CASSANDRA-5632 at 6/18/13 3:18 PM: The patch fixes the bandwidth-saving issue. However, there seem to be two regressions introduced. 1. DC2 coordinator selection by the DC1 coordinator is not spread equally across all available nodes in DC2; some nodes in DC2 are never used as coordinators. 2. When using cqlsh with EACH_QUORUM/ALL and tracing on, a row insert hits an RPC timeout from a node that is not identifiable in the trace output. Trace output has been attached for a 6-node cluster with a DC1:3, DC2:3 replication factor configuration; the network-topology configuration is also attached for clarity. was (Author: hayato.shimizu): The patch fixes the issue of bandwidth-saving. However, there seems to be two regressive issues being introduced. 1. Secondary DC coordinator selection by the primary DC coordinator is not equal across all available nodes in secondary DC. 2. When using cqlsh, with EACH_QUORUM/ALL, with tracing on, on a row insert, RPC timeout occurs from a node that is not verifiable in the trace output. Trace output has been attached for a 6 node cluster, DC1:3, DC2:3 replication factor configuration. network-topology configuration is also attached for clarity.
[jira] [Commented] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686817#comment-13686817 ] Jonathan Ellis commented on CASSANDRA-5632: --- bq. Secondary DC coordinator node is always the same node. This introduces a bottleneck in the secondary DC. It's the same node for a given token range; when all token ranges are considered, the load is evenly spread. bq. RPC timeout occurs from a node that is not verifiable in the trace output. Well, that's not a very useful error message, is it. :)
[2/2] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b908c0a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b908c0a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b908c0a Branch: refs/heads/trunk Commit: 8b908c0acf62cb7c49203ffef54738634b920077 Parents: ee0f495 08878e9 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 18 18:19:01 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 18 18:19:01 2013 +0300 -- CHANGES.txt | 1 + .../org/apache/cassandra/auth/PermissionDetails.java | 11 ++- 2 files changed, 11 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b908c0a/CHANGES.txt --
git commit: Fix PermissionDetails.equals() method
Updated Branches: refs/heads/cassandra-1.2 df063449a -> 08878e90a

Fix PermissionDetails.equals() method

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for CASSANDRA-5655

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08878e90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08878e90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08878e90

Branch: refs/heads/cassandra-1.2
Commit: 08878e90abdf774315fe8580d87611f2eaa416c2
Parents: df06344
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 18 18:17:58 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 18 18:17:58 2013 +0300
--
 CHANGES.txt | 1 +
 .../org/apache/cassandra/auth/PermissionDetails.java | 11 ++-
 2 files changed, 11 insertions(+), 1 deletion(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/08878e90/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0d42c13..c48eb7d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -28,6 +28,7 @@
  * Fix message spelling errors for cql select statements (CASSANDRA-5647)
  * Suppress custom exceptions thru jmx (CASSANDRA-5652)
  * Update CREATE CUSTOM INDEX syntax (CASSANDRA-5639)
+ * Fix PermissionDetails.equals() method (CASSANDRA-5655)
 Merged from 1.1:
  * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/08878e90/src/java/org/apache/cassandra/auth/PermissionDetails.java
--
diff --git a/src/java/org/apache/cassandra/auth/PermissionDetails.java b/src/java/org/apache/cassandra/auth/PermissionDetails.java
index 52a8712..c13ec4b 100644
--- a/src/java/org/apache/cassandra/auth/PermissionDetails.java
+++ b/src/java/org/apache/cassandra/auth/PermissionDetails.java
@@ -59,7 +59,16 @@ public class PermissionDetails implements Comparable<PermissionDetails>
     @Override
     public boolean equals(Object o)
     {
-        return Objects.equal(this, o);
+        if (this == o)
+            return true;
+
+        if (!(o instanceof PermissionDetails))
+            return false;
+
+        PermissionDetails pd = (PermissionDetails) o;
+        return Objects.equal(this.username, pd.username)
+            && Objects.equal(this.resource, pd.resource)
+            && Objects.equal(this.permission, pd.permission);
     }

     @Override
[1/2] git commit: Fix PermissionDetails.equals() method
Updated Branches: refs/heads/trunk ee0f495f5 -> 8b908c0ac

Fix PermissionDetails.equals() method

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for CASSANDRA-5655

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08878e90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08878e90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08878e90

Branch: refs/heads/trunk
Commit: 08878e90abdf774315fe8580d87611f2eaa416c2
Parents: df06344
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 18 18:17:58 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 18 18:17:58 2013 +0300
--
 CHANGES.txt | 1 +
 .../org/apache/cassandra/auth/PermissionDetails.java | 11 ++-
 2 files changed, 11 insertions(+), 1 deletion(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/08878e90/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0d42c13..c48eb7d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -28,6 +28,7 @@
  * Fix message spelling errors for cql select statements (CASSANDRA-5647)
  * Suppress custom exceptions thru jmx (CASSANDRA-5652)
  * Update CREATE CUSTOM INDEX syntax (CASSANDRA-5639)
+ * Fix PermissionDetails.equals() method (CASSANDRA-5655)
 Merged from 1.1:
  * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/08878e90/src/java/org/apache/cassandra/auth/PermissionDetails.java
--
diff --git a/src/java/org/apache/cassandra/auth/PermissionDetails.java b/src/java/org/apache/cassandra/auth/PermissionDetails.java
index 52a8712..c13ec4b 100644
--- a/src/java/org/apache/cassandra/auth/PermissionDetails.java
+++ b/src/java/org/apache/cassandra/auth/PermissionDetails.java
@@ -59,7 +59,16 @@ public class PermissionDetails implements Comparable<PermissionDetails>
     @Override
     public boolean equals(Object o)
     {
-        return Objects.equal(this, o);
+        if (this == o)
+            return true;
+
+        if (!(o instanceof PermissionDetails))
+            return false;
+
+        PermissionDetails pd = (PermissionDetails) o;
+        return Objects.equal(this.username, pd.username)
+            && Objects.equal(this.resource, pd.resource)
+            && Objects.equal(this.permission, pd.permission);
     }

     @Override
git commit: another try at fixing the broken testMutateLevel test
Updated Branches: refs/heads/trunk 8b908c0ac -> 62295f68c

another try at fixing the broken testMutateLevel test

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/62295f68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/62295f68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/62295f68

Branch: refs/heads/trunk
Commit: 62295f68c7b7b10cc7d41d72f0817e17133a9b36
Parents: 8b908c0
Author: Marcus Eriksson <marc...@spotify.com>
Authored: Tue Jun 18 17:57:22 2013 +0200
Committer: Marcus Eriksson <marc...@spotify.com>
Committed: Tue Jun 18 17:57:22 2013 +0200
--
 .../cassandra/db/compaction/LeveledCompactionStrategyTest.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/62295f68/test/unit/org/apache/cassandra/db/compaction/LeveledCompactionStrategyTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/LeveledCompactionStrategyTest.java b/test/unit/org/apache/cassandra/db/compaction/LeveledCompactionStrategyTest.java
index b33defc..d332ec3 100644
--- a/test/unit/org/apache/cassandra/db/compaction/LeveledCompactionStrategyTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/LeveledCompactionStrategyTest.java
@@ -18,6 +18,7 @@ package org.apache.cassandra.db.compaction;

 import java.nio.ByteBuffer;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.HashSet;
 import java.util.List;
@@ -173,10 +174,11 @@ public class LeveledCompactionStrategyTest extends SchemaLoader
             cfs.forceBlockingFlush();
         }
         waitForLeveling(cfs);
+        cfs.forceBlockingFlush();
         LeveledCompactionStrategy strategy = (LeveledCompactionStrategy) cfs.getCompactionStrategy();
         cfs.disableAutoCompaction();

-        while(CompactionManager.instance.getActiveCompactions() > 0)
+        while(CompactionManager.instance.isCompacting(Arrays.asList(cfs)))
             Thread.sleep(100);

         for (SSTableReader s : cfs.getSSTables())
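The commit switches the test from polling a global active-compaction counter (which can momentarily read zero between back-to-back compactions) to polling a table-specific isCompacting check. A standalone sketch of that waiting pattern follows; the names (CompactionGauge, awaitQuiescence) are illustrative stand-ins, not Cassandra's API:

```java
public class Main {
    // Stand-in for "is this specific table still compacting?" -- the kind of
    // targeted predicate the commit polls instead of a global counter.
    interface CompactionGauge {
        boolean isCompacting();
    }

    // Poll until the gauge reports quiescence, or fail after timeoutMs.
    // Returns how many times the gauge reported ongoing work before settling.
    static int awaitQuiescence(CompactionGauge gauge, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        int busyPolls = 0;
        while (gauge.isCompacting()) {
            busyPolls++;
            if (System.currentTimeMillis() > deadline)
                throw new IllegalStateException("compactions did not finish in time");
            Thread.sleep(pollMs);
        }
        return busyPolls;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated table that stops compacting after two busy polls.
        final int[] remaining = {2};
        int polls = awaitQuiescence(() -> remaining[0]-- > 0, 5000, 1);
        System.out.println("waited through " + polls + " busy polls");
    }
}
```

Polling a per-table predicate avoids the race where a shared counter dips to zero between compactions and the test proceeds while the table still has work queued.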
[jira] [Updated] (CASSANDRA-5234) Table created through CQL3 are not accessble to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Liu updated CASSANDRA-5234: Attachment: 5234-1-1.2-patch.txt

Table created through CQL3 are not accessble to Pig 0.10 Key: CASSANDRA-5234 URL: https://issues.apache.org/jira/browse/CASSANDRA-5234 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.1 Environment: Red hat linux 5 Reporter: Shamim Ahmed Fix For: 1.2.6 Attachments: 5234-1-1.2-patch.txt, 5234-1.2-patch.txt, 5234.tx

Hi, I have faced a bug when creating a table through CQL3 and then trying to load data through Pig 0.10, as follows:

{code}
java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ'
	at org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112)
	at org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615)
{code}

This affects everything from simple tables to tables with compound keys.
[jira] [Updated] (CASSANDRA-5234) Table created through CQL3 are not accessble to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Liu updated CASSANDRA-5234: Attachment: (was: 5234-1-1.2-patch.txt)
[jira] [Updated] (CASSANDRA-5234) Tables created through CQL3 are not accessible to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Liu updated CASSANDRA-5234: Attachment: 5234-1-1.2-patch.txt Tables created through CQL3 are not accessible to Pig 0.10 Key: CASSANDRA-5234 URL: https://issues.apache.org/jira/browse/CASSANDRA-5234 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.1 Environment: Red Hat Linux 5 Reporter: Shamim Ahmed Fix For: 1.2.6 Attachments: 5234-1-1.2-patch.txt, 5234-1.2-patch.txt, 5234.tx Hi, I have faced a bug when creating a table through CQL3 and trying to load data through Pig 0.10, as follows: java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ' at org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112) at org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615). This affects everything from a simple table to a table with a compound key. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5234) Tables created through CQL3 are not accessible to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Liu updated CASSANDRA-5234: Attachment: (was: 5234-1-1.2-patch.txt) Tables created through CQL3 are not accessible to Pig 0.10 Key: CASSANDRA-5234 URL: https://issues.apache.org/jira/browse/CASSANDRA-5234 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.1 Environment: Red Hat Linux 5 Reporter: Shamim Ahmed Fix For: 1.2.6 Attachments: 5234-1-1.2-patch.txt, 5234-1.2-patch.txt, 5234.tx Hi, I have faced a bug when creating a table through CQL3 and trying to load data through Pig 0.10, as follows: java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ' at org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112) at org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615). This affects everything from a simple table to a table with a compound key. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686886#comment-13686886 ] Jonathan Ellis commented on CASSANDRA-5632: --- You're not running with cross_node_timeout enabled, are you? Because some of these clocks are minutes apart. {noformat} # Enable operation timeout information exchange between nodes to accurately # measure request timeouts. If disabled, Cassandra will assume the request # was forwarded to the replica instantly by the coordinator. # # Warning: before enabling this property make sure NTP is installed # and the times are synchronized between the nodes. cross_node_timeout: false {noformat} Cross-DC bandwidth-saving broken Key: CASSANDRA-5632 URL: https://issues.apache.org/jira/browse/CASSANDRA-5632 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: 5632.txt, cassandra-topology.properties, fix_patch_bug.log We group messages by destination as follows to avoid sending multiple messages to a remote datacenter: {code} // Multimap that holds onto all the messages and addresses meant for a specific datacenter Map<String, Multimap<Message, InetAddress>> dcMessages {code} When we cleaned out the MessageProducer stuff for 2.0, this code {code} Multimap<Message, InetAddress> messages = dcMessages.get(dc); ... messages.put(producer.getMessage(Gossiper.instance.getVersion(destination)), destination); {code} turned into {code} Multimap<MessageOut, InetAddress> messages = dcMessages.get(dc); ... messages.put(rm.createMessage(), destination); {code} Thus, we weren't actually grouping anything anymore -- each destination replica was stored under a separate Message key, unlike under the old CachingMessageProducer. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5632: Tester: enigmacurry Cross-DC bandwidth-saving broken Key: CASSANDRA-5632 URL: https://issues.apache.org/jira/browse/CASSANDRA-5632 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: 5632.txt, cassandra-topology.properties, fix_patch_bug.log We group messages by destination as follows to avoid sending multiple messages to a remote datacenter: {code} // Multimap that holds onto all the messages and addresses meant for a specific datacenter Map<String, Multimap<Message, InetAddress>> dcMessages {code} When we cleaned out the MessageProducer stuff for 2.0, this code {code} Multimap<Message, InetAddress> messages = dcMessages.get(dc); ... messages.put(producer.getMessage(Gossiper.instance.getVersion(destination)), destination); {code} turned into {code} Multimap<MessageOut, InetAddress> messages = dcMessages.get(dc); ... messages.put(rm.createMessage(), destination); {code} Thus, we weren't actually grouping anything anymore -- each destination replica was stored under a separate Message key, unlike under the old CachingMessageProducer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
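The grouping regression described above comes down to Multimap key identity: the old code reused one cached message object per datacenter, while the new code created a fresh message per replica, so every entry landed under its own key. A minimal sketch with plain java.util stand-ins (the addresses and the Object placeholders for messages are illustrative, not the real MessageOut type):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DcGroupingSketch {
    public static void main(String[] args) {
        List<String> replicas = Arrays.asList("10.1.0.1", "10.1.0.2", "10.1.0.3");

        // Old behavior (CachingMessageProducer): one message instance per DC,
        // so all replicas group under the same key and one message crosses the WAN.
        Map<Object, List<String>> grouped = new HashMap<>();
        Object sharedMessage = new Object(); // stand-in for the cached serialized mutation
        for (String replica : replicas)
            grouped.computeIfAbsent(sharedMessage, k -> new ArrayList<>()).add(replica);

        // Broken behavior: a fresh message object per replica is a fresh map key
        // per replica, so nothing groups and each replica gets its own send.
        Map<Object, List<String>> ungrouped = new HashMap<>();
        for (String replica : replicas)
            ungrouped.computeIfAbsent(new Object(), k -> new ArrayList<>()).add(replica);

        System.out.println(grouped.size());   // 1: a single cross-DC message
        System.out.println(ungrouped.size()); // 3: one send per destination replica
    }
}
```

With three replicas in the remote DC, the grouped map has one key and the ungrouped map has three, which is exactly the bandwidth difference the ticket describes.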
[jira] [Updated] (CASSANDRA-5619) CAS UPDATE for a lost race: save round trip by returning column values
[ https://issues.apache.org/jira/browse/CASSANDRA-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-5619: Attachment: 5619.txt Attaching a patch that implements the idea above. I do note that while coding this I realized that 'IF NOT EXISTS' wasn't always working correctly in CQL3, because in the underlying SP.cas() method we were fetching the first column of the partition (internal row), but such a column might not be part of the CQL3 row we are interested in at all. The patch provides a fix for this (we could create a specific ticket for the bug, but if we're fine fixing it on this ticket, I'm not sure it's worth bothering). Talking of 'IF NOT EXISTS', there was the question of what to return. For CQL3, I've made it so that we return the full CQL3 row, as that felt like it made the most sense. For Thrift however, since we don't want to return the full partition, it only returns the first live column of the partition. Note: the patch includes the change to the generated Thrift files. CAS UPDATE for a lost race: save round trip by returning column values -- Key: CASSANDRA-5619 URL: https://issues.apache.org/jira/browse/CASSANDRA-5619 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 2.0 Reporter: Blair Zajac Assignee: Sylvain Lebresne Fix For: 2.0 Attachments: 5619.txt Looking at the new CAS CQL3 support examples [1], if one lost a race for an UPDATE, could the columns that were used in the IF clause also be returned to the caller, saving a round trip to fetch the current values and decide whether the work still needs doing? Maybe the column values from the SET part could also be returned. I don't know if this is generally useful though. 
In the case of creating a new user account with a given username which is the partition key, if one lost the race to another person creating an account with the same username, it doesn't matter to the loser what the column values are, just that they lost. I'm new to Cassandra, so maybe there are other use cases, such as doing an incremental amount of work on a row. In pure Java projects I've done while loops around AtomicReference.compareAndSet() until the work was done, to handle multiple threads each making forward progress in updating the referenced object. [1] https://github.com/riptano/cassandra-dtest/blob/master/cql_tests.py#L3044 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
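The AtomicReference pattern the reporter mentions is the classic compare-and-set retry loop, which is the same shape a client would use around a CAS UPDATE: read the current value, attempt the conditional update, and on a lost race re-read and retry. A minimal in-process sketch:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasRetrySketch {
    public static void main(String[] args) throws InterruptedException {
        AtomicReference<Integer> ref = new AtomicReference<>(0);
        Runnable increment = () -> {
            // Lost-race handling: re-read the current value and retry
            // until our compareAndSet wins against competing threads.
            while (true) {
                Integer current = ref.get();
                if (ref.compareAndSet(current, current + 1))
                    break; // we won; a loser loops and recomputes from the fresh value
            }
        };
        Thread a = new Thread(increment), b = new Thread(increment);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(ref.get()); // 2: both threads made progress despite races
    }
}
```

The ticket's point is that when the conditional write returns the current column values on failure, the re-read step of this loop comes for free instead of costing another round trip.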
git commit: cli: ninja column-cell
Updated Branches: refs/heads/cassandra-1.2 08878e90a -> 26c426223 cli: ninja column-cell Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/26c42622 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/26c42622 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/26c42622 Branch: refs/heads/cassandra-1.2 Commit: 26c4262233c0fb1b4593683bf7829d42ca3e12b8 Parents: 08878e9 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 18 20:22:28 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 18 20:22:28 2013 +0300 -- .../org/apache/cassandra/cli/CliClient.java | 26 ++-- test/unit/org/apache/cassandra/cli/CliTest.java | 2 +- 2 files changed, 14 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/26c42622/src/java/org/apache/cassandra/cli/CliClient.java -- diff --git a/src/java/org/apache/cassandra/cli/CliClient.java b/src/java/org/apache/cassandra/cli/CliClient.java index fe7f02b..9209b87 100644 --- a/src/java/org/apache/cassandra/cli/CliClient.java +++ b/src/java/org/apache/cassandra/cli/CliClient.java @@ -440,7 +440,7 @@ public class CliClient SlicePredicate predicate = new SlicePredicate().setColumn_names(null).setSlice_range(range); int count = thriftClient.get_count(getKeyAsBytes(columnFamily, columnFamilySpec.getChild(1)), colParent, predicate, consistencyLevel); -sessionState.out.printf("%d columns%n", count); +sessionState.out.printf("%d cells%n", count); } private Iterable<CfDef> currentCfDefs() @@ -526,7 +526,7 @@ public class CliClient { thriftClient.remove(key, path, FBUtilities.timestampMicros(), consistencyLevel); } -sessionState.out.println(String.format("%s removed.", (columnSpecCnt == 0) ? "row" : "column")); +sessionState.out.println(String.format("%s removed.", (columnSpecCnt == 0) ? 
"row" : "cell")); elapsedTime(startTime); } @@ -559,7 +559,7 @@ public class CliClient for (Column col : superColumn.getColumns()) { validator = getValidatorForValue(cfDef, col.getName()); -sessionState.out.printf("%n (column=%s, value=%s, timestamp=%d%s)", formatSubcolumnName(keyspace, columnFamily, col.name), +sessionState.out.printf("%n (name=%s, value=%s, timestamp=%d%s)", formatSubcolumnName(keyspace, columnFamily, col.name), validator.getString(col.value), col.timestamp, col.isSetTtl() ? String.format(", ttl=%d", col.getTtl()) : ""); } @@ -575,7 +575,7 @@ public class CliClient ? formatSubcolumnName(keyspace, columnFamily, column.name) : formatColumnName(keyspace, columnFamily, column.name); -sessionState.out.printf("= (column=%s, value=%s, timestamp=%d%s)%n", +sessionState.out.printf("= (name=%s, value=%s, timestamp=%d%s)%n", formattedName, validator.getString(column.value), column.timestamp, @@ -763,7 +763,7 @@ public class CliClient : formatColumnName(keySpace, columnFamily, column.name); // print results -sessionState.out.printf("= (column=%s, value=%s, timestamp=%d%s)%n", +sessionState.out.printf("= (name=%s, value=%s, timestamp=%d%s)%n", formattedColumnName, valueAsString, column.timestamp, @@ -918,7 +918,7 @@ public class CliClient // table.cf['key'] if (columnSpecCnt == 0) { -sessionState.err.println("No column name specified, (type 'help;' or '?' for help on syntax)."); +sessionState.err.println("No cell name specified, (type 'help;' or '?' for help on syntax)."); return; } // table.cf['key']['column'] = 'value' @@ -1436,7 +1436,7 @@ public class CliClient { if ((child.getChildCount() < 1) || (child.getChildCount() > 2)) { -sessionState.err.println("Invalid columns clause."); +sessionState.err.println("Invalid cells clause."); return; } @@ -1447,7 +1447,7 @@ public class CliClient columnCount = Integer.parseInt(columns); if (columnCount < 0) { -
[2/2] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/89e792fc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/89e792fc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/89e792fc Branch: refs/heads/trunk Commit: 89e792fc5930dec6d81776d1aa6c09bbc36e3b28 Parents: 1f7628c 26c4262 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 18 20:23:36 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 18 20:23:36 2013 +0300 -- .../org/apache/cassandra/cli/CliClient.java | 26 ++-- test/unit/org/apache/cassandra/cli/CliTest.java | 2 +- 2 files changed, 14 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/89e792fc/src/java/org/apache/cassandra/cli/CliClient.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/89e792fc/test/unit/org/apache/cassandra/cli/CliTest.java --
[1/2] git commit: cli: ninja column-cell
Updated Branches: refs/heads/trunk 1f7628ce7 -> 89e792fc5 cli: ninja column-cell Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/26c42622 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/26c42622 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/26c42622 Branch: refs/heads/trunk Commit: 26c4262233c0fb1b4593683bf7829d42ca3e12b8 Parents: 08878e9 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 18 20:22:28 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 18 20:22:28 2013 +0300 -- .../org/apache/cassandra/cli/CliClient.java | 26 ++-- test/unit/org/apache/cassandra/cli/CliTest.java | 2 +- 2 files changed, 14 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/26c42622/src/java/org/apache/cassandra/cli/CliClient.java -- diff --git a/src/java/org/apache/cassandra/cli/CliClient.java b/src/java/org/apache/cassandra/cli/CliClient.java index fe7f02b..9209b87 100644 --- a/src/java/org/apache/cassandra/cli/CliClient.java +++ b/src/java/org/apache/cassandra/cli/CliClient.java @@ -440,7 +440,7 @@ public class CliClient SlicePredicate predicate = new SlicePredicate().setColumn_names(null).setSlice_range(range); int count = thriftClient.get_count(getKeyAsBytes(columnFamily, columnFamilySpec.getChild(1)), colParent, predicate, consistencyLevel); -sessionState.out.printf("%d columns%n", count); +sessionState.out.printf("%d cells%n", count); } private Iterable<CfDef> currentCfDefs() @@ -526,7 +526,7 @@ public class CliClient { thriftClient.remove(key, path, FBUtilities.timestampMicros(), consistencyLevel); } -sessionState.out.println(String.format("%s removed.", (columnSpecCnt == 0) ? "row" : "column")); +sessionState.out.println(String.format("%s removed.", (columnSpecCnt == 0) ? 
"row" : "cell")); elapsedTime(startTime); } @@ -559,7 +559,7 @@ public class CliClient for (Column col : superColumn.getColumns()) { validator = getValidatorForValue(cfDef, col.getName()); -sessionState.out.printf("%n (column=%s, value=%s, timestamp=%d%s)", formatSubcolumnName(keyspace, columnFamily, col.name), +sessionState.out.printf("%n (name=%s, value=%s, timestamp=%d%s)", formatSubcolumnName(keyspace, columnFamily, col.name), validator.getString(col.value), col.timestamp, col.isSetTtl() ? String.format(", ttl=%d", col.getTtl()) : ""); } @@ -575,7 +575,7 @@ public class CliClient ? formatSubcolumnName(keyspace, columnFamily, column.name) : formatColumnName(keyspace, columnFamily, column.name); -sessionState.out.printf("= (column=%s, value=%s, timestamp=%d%s)%n", +sessionState.out.printf("= (name=%s, value=%s, timestamp=%d%s)%n", formattedName, validator.getString(column.value), column.timestamp, @@ -763,7 +763,7 @@ public class CliClient : formatColumnName(keySpace, columnFamily, column.name); // print results -sessionState.out.printf("= (column=%s, value=%s, timestamp=%d%s)%n", +sessionState.out.printf("= (name=%s, value=%s, timestamp=%d%s)%n", formattedColumnName, valueAsString, column.timestamp, @@ -918,7 +918,7 @@ public class CliClient // table.cf['key'] if (columnSpecCnt == 0) { -sessionState.err.println("No column name specified, (type 'help;' or '?' for help on syntax)."); +sessionState.err.println("No cell name specified, (type 'help;' or '?' for help on syntax)."); return; } // table.cf['key']['column'] = 'value' @@ -1436,7 +1436,7 @@ public class CliClient { if ((child.getChildCount() < 1) || (child.getChildCount() > 2)) { -sessionState.err.println("Invalid columns clause."); +sessionState.err.println("Invalid cells clause."); return; } @@ -1447,7 +1447,7 @@ public class CliClient columnCount = Integer.parseInt(columns); if (columnCount < 0) { -
[jira] [Comment Edited] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686986#comment-13686986 ] Jonathan Ellis edited comment on CASSANDRA-5632 at 6/18/13 5:25 PM: I note that .55 doesn't ever log Sending message to .50 either. So the message is getting dropped somewhere inside .55's MessagingService. cross_node_timeout is my best guess. Next-best guess is that there's a reconnect somehow dropping the message a la CASSANDRA-5393. was (Author: jbellis): I note that .55 doesn't ever log Sending message to .50 either. So the message gets dropped somewhere inside .55's MessagingService. cross-node_timeout is my best guess. Next-best guess is that there's a reconnect somehow dropping the message a la CASSANDRA-5393. Cross-DC bandwidth-saving broken Key: CASSANDRA-5632 URL: https://issues.apache.org/jira/browse/CASSANDRA-5632 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: 5632.txt, cassandra-topology.properties, fix_patch_bug.log We group messages by destination as follows to avoid sending multiple messages to a remote datacenter: {code} // Multimap that holds onto all the messages and addresses meant for a specific datacenter Map<String, Multimap<Message, InetAddress>> dcMessages {code} When we cleaned out the MessageProducer stuff for 2.0, this code {code} Multimap<Message, InetAddress> messages = dcMessages.get(dc); ... messages.put(producer.getMessage(Gossiper.instance.getVersion(destination)), destination); {code} turned into {code} Multimap<MessageOut, InetAddress> messages = dcMessages.get(dc); ... messages.put(rm.createMessage(), destination); {code} Thus, we weren't actually grouping anything anymore -- each destination replica was stored under a separate Message key, unlike under the old CachingMessageProducer. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686986#comment-13686986 ] Jonathan Ellis commented on CASSANDRA-5632: --- I note that .55 doesn't ever log Sending message to .50 either. So the message gets dropped somewhere inside .55's MessagingService. cross-node_timeout is my best guess. Next-best guess is that there's a reconnect somehow dropping the message a la CASSANDRA-5393. Cross-DC bandwidth-saving broken Key: CASSANDRA-5632 URL: https://issues.apache.org/jira/browse/CASSANDRA-5632 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: 5632.txt, cassandra-topology.properties, fix_patch_bug.log We group messages by destination as follows to avoid sending multiple messages to a remote datacenter: {code} // Multimap that holds onto all the messages and addresses meant for a specific datacenter Map<String, Multimap<Message, InetAddress>> dcMessages {code} When we cleaned out the MessageProducer stuff for 2.0, this code {code} Multimap<Message, InetAddress> messages = dcMessages.get(dc); ... messages.put(producer.getMessage(Gossiper.instance.getVersion(destination)), destination); {code} turned into {code} Multimap<MessageOut, InetAddress> messages = dcMessages.get(dc); ... messages.put(rm.createMessage(), destination); {code} Thus, we weren't actually grouping anything anymore -- each destination replica was stored under a separate Message key, unlike under the old CachingMessageProducer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5632: -- Attachment: 5632-v2.txt v2 attached that rebases and does some further cleanup to improve trace messages. Cross-DC bandwidth-saving broken Key: CASSANDRA-5632 URL: https://issues.apache.org/jira/browse/CASSANDRA-5632 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: 5632.txt, 5632-v2.txt, cassandra-topology.properties, fix_patch_bug.log We group messages by destination as follows to avoid sending multiple messages to a remote datacenter: {code} // Multimap that holds onto all the messages and addresses meant for a specific datacenter Map<String, Multimap<Message, InetAddress>> dcMessages {code} When we cleaned out the MessageProducer stuff for 2.0, this code {code} Multimap<Message, InetAddress> messages = dcMessages.get(dc); ... messages.put(producer.getMessage(Gossiper.instance.getVersion(destination)), destination); {code} turned into {code} Multimap<MessageOut, InetAddress> messages = dcMessages.get(dc); ... messages.put(rm.createMessage(), destination); {code} Thus, we weren't actually grouping anything anymore -- each destination replica was stored under a separate Message key, unlike under the old CachingMessageProducer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5234) Tables created through CQL3 are not accessible to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-5234: Reviewer: brandon.williams Assignee: Alex Liu Tables created through CQL3 are not accessible to Pig 0.10 Key: CASSANDRA-5234 URL: https://issues.apache.org/jira/browse/CASSANDRA-5234 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.1 Environment: Red Hat Linux 5 Reporter: Shamim Ahmed Assignee: Alex Liu Fix For: 1.2.6 Attachments: 5234-1-1.2-patch.txt, 5234-1.2-patch.txt, 5234.tx Hi, I have faced a bug when creating a table through CQL3 and trying to load data through Pig 0.10, as follows: java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ' at org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112) at org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615). This affects everything from a simple table to a table with a compound key. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5649) Move resultset type information into prepare, not execute
[ https://issues.apache.org/jira/browse/CASSANDRA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5649: -- Fix Version/s: (was: 2.0) 2.1 This doesn't look negligible at all to me. Here's the metadata encode: {code} public ChannelBuffer encode(Metadata m) { boolean globalTablesSpec = m.flags.contains(Flag.GLOBAL_TABLES_SPEC); int stringCount = globalTablesSpec ? 2 + m.names.size() : 3* m.names.size(); CBUtil.BufferBuilder builder = new CBUtil.BufferBuilder(1 + m.names.size(), stringCount, 0); ChannelBuffer header = ChannelBuffers.buffer(8); header.writeInt(Flag.serialize(m.flags)); header.writeInt(m.names.size()); builder.add(header); if (globalTablesSpec) { builder.addString(m.names.get(0).ksName); builder.addString(m.names.get(0).cfName); } for (ColumnSpecification name : m.names) { if (!globalTablesSpec) { builder.addString(name.ksName); builder.addString(name.cfName); } builder.addString(name.toString()); builder.add(DataType.codec.encodeOne(DataType.fromType(name.type))); } return builder.build(); } {code} Here's the (per-row) ResultSet encode: {code} for (ByteBuffer bb : row) builder.addValue(bb); {code} Hmm. :) Seriously, it's trivial to see how you will more often than not have more metadata than row data for typical single-row resultsets. I can put together a wrapper to prove it but it's kind of a waste of time. You're right that it's not reasonable to try for 2.0, though. Moved to 2.1. Move resultset type information into prepare, not execute - Key: CASSANDRA-5649 URL: https://issues.apache.org/jira/browse/CASSANDRA-5649 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Sylvain Lebresne Fix For: 2.1 Native protocol 1.0 sends type information on execute. This is a minor inefficiency for large resultsets; unfortunately, single-row resultsets are common. This does represent a performance regression from Thrift; Thrift does not send type information at all. 
(Bad for driver complexity, but good for performance.) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
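Ellis's point that metadata often outweighs the row data for a single-row resultset can be ballparked from the native protocol's framing: a protocol [string] is a 2-byte length prefix plus UTF-8 bytes, a [bytes] value is a 4-byte length prefix plus payload, and each column carries a type id. The table and column names below are illustrative assumptions, not taken from the ticket:

```java
import java.nio.charset.StandardCharsets;

public class MetadataOverheadSketch {
    // Native-protocol [string]: unsigned-short length prefix + UTF-8 bytes.
    static int str(String s) {
        return 2 + s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        String ks = "system_auth", cf = "users"; // illustrative table
        String[] columns = {"name", "super"};    // illustrative column names

        // GLOBAL_TABLES_SPEC metadata: 4-byte flags + 4-byte column count
        // + one shared keyspace/table pair + per-column name and 2-byte type id.
        int metadata = 4 + 4 + str(ks) + str(cf);
        for (String c : columns)
            metadata += str(c) + 2;

        // One row with two small values (5 and 1 bytes),
        // each framed as a 4-byte length prefix + payload.
        int row = (4 + 5) + (4 + 1);

        System.out.println(metadata); // 45 bytes of per-response metadata
        System.out.println(row);      // 14 bytes of actual row data
    }
}
```

Even with short names, the metadata is roughly triple the row payload here, which is why moving type information to prepare time pays off for the common single-row case.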
[jira] [Commented] (CASSANDRA-5649) Move resultset type information into prepare, not execute
[ https://issues.apache.org/jira/browse/CASSANDRA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687088#comment-13687088 ] Jonathan Ellis commented on CASSANDRA-5649: --- P.S. When is globalTablesSpec false? Move resultset type information into prepare, not execute - Key: CASSANDRA-5649 URL: https://issues.apache.org/jira/browse/CASSANDRA-5649 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Sylvain Lebresne Fix For: 2.1 Native protocol 1.0 sends type information on execute. This is a minor inefficiency for large resultsets; unfortunately, single-row resultsets are common. This does represent a performance regression from Thrift; Thrift does not send type information at all. (Bad for driver complexity, but good for performance.) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5645) Display PK values along the header when using EXPAND in cqlsh
[ https://issues.apache.org/jira/browse/CASSANDRA-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687091#comment-13687091 ] Jonathan Ellis commented on CASSANDRA-5645: --- We definitely return KS/CF as part of the native protocol; is cqlsh still using Thrift? Display PK values along the header when using EXPAND in cqlsh - Key: CASSANDRA-5645 URL: https://issues.apache.org/jira/browse/CASSANDRA-5645 Project: Cassandra Issue Type: Improvement Reporter: Michał Michalski Assignee: Michał Michalski Priority: Minor Follow-up to CASSANDRA-5597 proposed by [~jjordan]. Currently cqlsh run in vertical mode prints a header like this: {noformat}cqlsh> EXPAND on; Now printing expanded output cqlsh> SELECT * FROM system.schema_columnfamilies limit 1; @ Row 1 -+- keyspace_name | system_auth columnfamily_name | users bloom_filter_fp_chance | 0.01 caching | KEYS_ONLY column_aliases | [] (...){noformat} The idea is to make it print the header this way: {noformat}cqlsh> EXPAND on; Now printing expanded output cqlsh> SELECT * FROM system.schema_columnfamilies limit 1; @ Row 1: system_auth, users -+- keyspace_name | system_auth columnfamily_name | users bloom_filter_fp_chance | 0.01 caching | KEYS_ONLY column_aliases | [] (...){noformat} [~jjordan], please verify that this is what you requested. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (CASSANDRA-5649) Move resultset type information into prepare, not execute
[ https://issues.apache.org/jira/browse/CASSANDRA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687088#comment-13687088 ] Jonathan Ellis edited comment on CASSANDRA-5649 at 6/18/13 7:19 PM: P.S. When is globalTablesSpec false? That is, how do we end up w/ a resultset containing data from more than one CF? was (Author: jbellis): P.S. When is globalTablesSpec false? Move resultset type information into prepare, not execute - Key: CASSANDRA-5649 URL: https://issues.apache.org/jira/browse/CASSANDRA-5649 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Sylvain Lebresne Fix For: 2.1 Native protocol 1.0 sends type information on execute. This is a minor inefficiency for large resultsets; unfortunately, single-row resultsets are common. This does represent a performance regression from Thrift; Thrift does not send type information at all. (Bad for driver complexity, but good for performance.) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5422) Native protocol sanity check
[ https://issues.apache.org/jira/browse/CASSANDRA-5422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687093#comment-13687093 ] Jonathan Ellis commented on CASSANDRA-5422: --- /throws up the [~slebresne] signal Native protocol sanity check Key: CASSANDRA-5422 URL: https://issues.apache.org/jira/browse/CASSANDRA-5422 Project: Cassandra Issue Type: Bug Components: API Reporter: Jonathan Ellis Assignee: Daniel Norberg Attachments: 5422-test.txt, ExecuteMessage Profiling - Call Tree.png, ExecuteMessage Profiling - Hot Spots.png With MutationStatement.execute turned into a no-op, I only get about 33k insert_prepared ops/s on my laptop. That is: this is an upper bound for our performance if Cassandra were infinitely fast, limited by netty handling the protocol + connections. This is up from about 13k/s with MS.execute running normally. ~40% overhead from netty seems awfully high to me, especially for insert_prepared where the return value is tiny. (I also used 4-byte column values to minimize that part as well.) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5574) Add trigger examples
[ https://issues.apache.org/jira/browse/CASSANDRA-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687094#comment-13687094 ] Jonathan Ellis commented on CASSANDRA-5574: --- I think this needs a refresh after CASSANDRA-5576? Add trigger examples - Key: CASSANDRA-5574 URL: https://issues.apache.org/jira/browse/CASSANDRA-5574 Project: Cassandra Issue Type: Test Reporter: Vijay Assignee: Vijay Priority: Trivial Attachments: 0001-CASSANDRA-5574.patch Since CASSANDRA-1311 is committed, we need some example code to show the power and usage of triggers, similar to the ones in the examples directory.
[jira] [Resolved] (CASSANDRA-5609) Create a dtest for CASSANDRA-5225
[ https://issues.apache.org/jira/browse/CASSANDRA-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Meyer resolved CASSANDRA-5609. - Resolution: Fixed New dtest added to cover this scenario: https://github.com/riptano/cassandra-dtest/blob/75bffeba0af410a41eb97b269ae1c94f4227c312/wide_rows_test.py Create a dtest for CASSANDRA-5225 - Key: CASSANDRA-5609 URL: https://issues.apache.org/jira/browse/CASSANDRA-5609 Project: Cassandra Issue Type: Test Reporter: Brandon Williams Assignee: Daniel Meyer Priority: Minor As the title suggests. A small complication is that the test will need to ensure it reduces column_index_size_in_kb and then writes more columns than that.
[jira] [Commented] (CASSANDRA-5225) Missing columns, errors when requesting specific columns from wide rows
[ https://issues.apache.org/jira/browse/CASSANDRA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687111#comment-13687111 ] Daniel Meyer commented on CASSANDRA-5225: - Added a dtest to cover this scenario: https://github.com/riptano/cassandra-dtest/blob/75bffeba0af410a41eb97b269ae1c94f4227c312/wide_rows_test.py Missing columns, errors when requesting specific columns from wide rows --- Key: CASSANDRA-5225 URL: https://issues.apache.org/jira/browse/CASSANDRA-5225 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.1 Reporter: Tyler Hobbs Assignee: Sylvain Lebresne Priority: Critical Fix For: 1.2.2 Attachments: 5225.txt, pycassa-repro.py With Cassandra 1.2.1 (and probably 1.2.0), I'm seeing some problems with Thrift queries that request a set of specific column names when the row is very wide. To reproduce, I'm inserting 10 million columns into a single row and then randomly requesting three columns by name in a loop. It's common for only one or two of the three columns to be returned. 
I'm also seeing stack traces like the following in the Cassandra log:
{noformat}
ERROR 13:12:01,017 Exception in thread Thread[ReadStage:76,5,main]
java.lang.RuntimeException: org.apache.cassandra.io.sstable.CorruptSSTableException: org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid column name length 0 (/var/lib/cassandra/data/Keyspace1/CF1/Keyspace1-CF1-ib-5-Data.db, 14035168 bytes remaining)
	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1576)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid column name length 0 (/var/lib/cassandra/data/Keyspace1/CF1/Keyspace1-CF1-ib-5-Data.db, 14035168 bytes remaining)
	at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:69)
	at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
	at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
	at org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:133)
	at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
	at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1358)
	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1215)
	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1127)
	at org.apache.cassandra.db.Table.getRow(Table.java:355)
	at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
	at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1572)
	... 3 more
{noformat}
This doesn't seem to happen when the row is smaller, so it might have something to do with incremental large row compaction.
[jira] [Commented] (CASSANDRA-5652) Suppress custom exceptions thru jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687175#comment-13687175 ] Dave Brosius commented on CASSANDRA-5652: - Really, IMO, MBeans should be separate objects that implement the interface and just call into the real objects, so that these MBean objects can do the exception sanitization (and other cleanup) outside of the real code. But I'm probably just being pedantic. Suppress custom exceptions thru jmx --- Key: CASSANDRA-5652 URL: https://issues.apache.org/jira/browse/CASSANDRA-5652 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.5 Reporter: Dave Brosius Assignee: Dave Brosius Priority: Trivial Fix For: 1.2.6 Attachments: 5652.txt startNativeTransport can send back org.jboss.netty.channel.ChannelException, which causes jconsole to fail with an unhelpful message such as: Problem invoking startNativeTransport: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is: java.io.EOFException. Converting to RuntimeException instead yields something like: org.jboss.netty.channel.ChannelException: Failed to bind to: localhost/127.0.0.1:9042
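The facade idea suggested above could look roughly like the following minimal, self-contained Java sketch. The class and interface names here are hypothetical (not Cassandra's actual MBean types); the point is that the facade rethrows any implementation exception as a plain RuntimeException carrying only String state, which every remote JMX/RMI client can unmarshal.

```java
// Hypothetical sketch of an MBean facade that sanitizes exceptions outside
// the "real" server code. A plain RuntimeException whose state is just a
// String is always deserializable by a JMX client, unlike e.g. netty's
// ChannelException, which the client may not have on its classpath.
final class SanitizingServerMBean {

    // Hypothetical stand-in for the real server interface.
    interface Server {
        void startNativeTransport();
    }

    private final Server delegate;

    SanitizingServerMBean(Server delegate) {
        this.delegate = delegate;
    }

    public void startNativeTransport() {
        try {
            delegate.startNativeTransport();
        } catch (Exception e) {
            // Keep only the original class name and message; both are Strings.
            throw new RuntimeException(e.getClass().getName() + ": " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        Server failing = () -> {
            throw new IllegalStateException("Failed to bind to: localhost/127.0.0.1:9042");
        };
        try {
            new SanitizingServerMBean(failing).startNativeTransport();
        } catch (RuntimeException e) {
            // The remote client sees a readable message instead of an UnmarshalException.
            System.out.println(e.getMessage());
        }
    }
}
```

The attached 5652.txt instead converts the exception at the call site; the facade variant above just moves that conversion to a single boundary layer, which is the separation Dave is describing.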
[jira] [Updated] (CASSANDRA-5225) Missing columns, errors when requesting specific columns from wide rows
[ https://issues.apache.org/jira/browse/CASSANDRA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Meyer updated CASSANDRA-5225: Attachment: corrected-pycassa-repro.py Fixed a small bug in script.
[jira] [Updated] (CASSANDRA-5658) TracingStage frequently times out
[ https://issues.apache.org/jira/browse/CASSANDRA-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5658: Attachment: trace_bug.py system.log TracingStage frequently times out - Key: CASSANDRA-5658 URL: https://issues.apache.org/jira/browse/CASSANDRA-5658 Project: Cassandra Issue Type: Bug Affects Versions: 2.0 Reporter: Ryan McGuire Attachments: system.log, trace_bug.py I am seeing frequent timeout errors when doing programmatic traces via trace_next_query() {code} ERROR [TracingStage:1] 2013-06-18 19:10:20,669 CassandraDaemon.java (line 196) Exception in thread Thread[TracingStage:1,5,main] java.lang.RuntimeException: org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - received only 0 responses. at com.google.common.base.Throwables.propagate(Throwables.java:160) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) Caused by: org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - received only 0 responses. at org.apache.cassandra.service.AbstractWriteResponseHandler.get(AbstractWriteResponseHandler.java:81) at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:454) at org.apache.cassandra.tracing.TraceState$1.runMayThrow(TraceState.java:100) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ... 3 more {code} Attached is the sample code which produced this error and the logs. The error occurs directly after the INSERT statement. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (CASSANDRA-5225) Missing columns, errors when requesting specific columns from wide rows
[ https://issues.apache.org/jira/browse/CASSANDRA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687334#comment-13687334 ] Daniel Meyer edited comment on CASSANDRA-5225 at 6/18/13 11:12 PM: --- Fixed a small bug in the repro script. Please use the 'corrected' version.
[jira] [Created] (CASSANDRA-5658) TracingStage frequently times out
Ryan McGuire created CASSANDRA-5658: --- Summary: TracingStage frequently times out Key: CASSANDRA-5658 URL: https://issues.apache.org/jira/browse/CASSANDRA-5658 Project: Cassandra Issue Type: Bug Affects Versions: 2.0 Reporter: Ryan McGuire Attachments: system.log, trace_bug.py
[jira] [Commented] (CASSANDRA-5658) TracingStage frequently times out
[ https://issues.apache.org/jira/browse/CASSANDRA-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687368#comment-13687368 ] Ryan McGuire commented on CASSANDRA-5658: - I see the following in the system_traces keyspace:
{code}
cqlsh:system_traces> select * FROM sessions ... ;

 session_id                           | coordinator | duration | parameters                                                   | request            | started_at
--------------------------------------+-------------+----------+--------------------------------------------------------------+--------------------+--------------------------
 396cb7b0-d86c-11e2-8be4-35db2404c433 | 127.0.0.1   | 10062609 | {query: INSERT INTO test.test (id, value) VALUES (1, 'one')} | execute_cql3_query | 2013-06-18 19:10:10-0400

cqlsh:system_traces> select * from events WHERE session_id = 396cb7b0-d86c-11e2-8be4-35db2404c433;

 session_id                           | event_id                             | activity                                                    | source    | source_elapsed | thread
--------------------------------------+--------------------------------------+-------------------------------------------------------------+-----------+----------------+------------------
 396cb7b0-d86c-11e2-8be4-35db2404c433 | 396e1740-d86c-11e2-8be4-35db2404c433 | Parsing INSERT INTO test.test (id, value) VALUES (1, 'one') | 127.0.0.1 |           5622 | Thrift:1
 396cb7b0-d86c-11e2-8be4-35db2404c433 | 396e3e50-d86c-11e2-8be4-35db2404c433 | Peparing statement                                          | 127.0.0.1 |           6636 | Thrift:1
 396cb7b0-d86c-11e2-8be4-35db2404c433 | 39760680-d86c-11e2-8be4-35db2404c433 | Sending message to /127.0.0.2                               | 127.0.0.1 |          57681 | WRITE-/127.0.0.2
 396cb7b0-d86c-11e2-8be4-35db2404c433 | 3f6c35a0-d86c-11e2-8be4-35db2404c433 | Write timeout                                               | 127.0.0.1 |       10059608 | Thrift:1
{code}
[jira] [Commented] (CASSANDRA-5658) TracingStage frequently times out
[ https://issues.apache.org/jira/browse/CASSANDRA-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687379#comment-13687379 ] Ryan McGuire commented on CASSANDRA-5658: - I'm confused by the last entry in the events table. Is that saying the INSERT timed out or the TRACE timed out? The write was successful; I see no errors in node2, where the INSERT got written to.
[jira] [Created] (CASSANDRA-5659) Hintedhandoff is too slow
Boole Guo created CASSANDRA-5659: Summary: Hintedhandoff is too slow Key: CASSANDRA-5659 URL: https://issues.apache.org/jira/browse/CASSANDRA-5659 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.1.9 Environment: cassandra 1.1.9 Reporter: Boole Guo Hinted handoff is too slow when there are many rows. How can we improve this? Sometimes I need the consistency.
[jira] [Resolved] (CASSANDRA-5659) Hintedhandoff is too slow
[ https://issues.apache.org/jira/browse/CASSANDRA-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-5659. --- Resolution: Invalid This is not a support forum; please ask on the users list.
[jira] [Commented] (CASSANDRA-5658) TracingStage frequently times out
[ https://issues.apache.org/jira/browse/CASSANDRA-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687488#comment-13687488 ] Jonathan Ellis commented on CASSANDRA-5658: --- Is it possible one or more ccm nodes is still starting up? Tracing does RF=1 distributed writes, so that could cause a problem like this. Hitting the nodes hard enough that they start dropping mutations could also do it.
[jira] [Commented] (CASSANDRA-5632) Cross-DC bandwidth-saving broken
[ https://issues.apache.org/jira/browse/CASSANDRA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687511#comment-13687511 ] Dave Brosius commented on CASSANDRA-5632: - Other than a simple FF, +1 LGTM:
{code}
-import org.apache.cassandra.tracing.Tracing;
+import org.apache.cassandra.tracing.4Tracing;
{code}
Cross-DC bandwidth-saving broken Key: CASSANDRA-5632 URL: https://issues.apache.org/jira/browse/CASSANDRA-5632 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Attachments: 5632.txt, 5632-v2.txt, cassandra-topology.properties, fix_patch_bug.log We group messages by destination as follows to avoid sending multiple messages to a remote datacenter:
{code}
// Multimap that holds onto all the messages and addresses meant for a specific datacenter
Map<String, Multimap<Message, InetAddress>> dcMessages
{code}
When we cleaned out the MessageProducer stuff for 2.0, this code
{code}
Multimap<Message, InetAddress> messages = dcMessages.get(dc);
...
messages.put(producer.getMessage(Gossiper.instance.getVersion(destination)), destination);
{code}
turned into
{code}
Multimap<MessageOut, InetAddress> messages = dcMessages.get(dc);
...
messages.put(rm.createMessage(), destination);
{code}
Thus, we weren't actually grouping anything anymore -- each destination replica was stored under a separate Message key, unlike under the old CachingMessageProducer.
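The grouping invariant the ticket describes can be illustrated with a minimal, self-contained sketch. The types here are simplified stand-ins (the real code uses Guava's Multimap and MessageOut); the point is that all replicas in a remote DC must be registered under one shared message key, so the mutation crosses the WAN once, whereas creating a fresh message per replica silently yields one entry per replica.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of per-datacenter message grouping (simplified types).
final class DcMessageGrouping {
    // Stand-in for a serialized mutation (identity-keyed, like MessageOut).
    static final class Message {}

    // One shared Message key per DC: every replica in that DC is grouped
    // under it, so the payload is sent across the WAN a single time.
    static Map<Message, List<String>> group(Message shared, List<String> replicas) {
        Map<Message, List<String>> messages = new HashMap<>();
        for (String replica : replicas) {
            // Reusing the same key is the whole point. Calling `new Message()`
            // here per replica (as the regression effectively did via
            // rm.createMessage()) would produce one map entry per replica
            // and defeat the bandwidth saving.
            messages.computeIfAbsent(shared, k -> new ArrayList<>()).add(replica);
        }
        return messages;
    }

    public static void main(String[] args) {
        Message m = new Message();
        Map<Message, List<String>> grouped =
            group(m, List.of("10.0.1.1", "10.0.1.2", "10.0.1.3"));
        System.out.println(grouped.size() + " message(s) for "
            + grouped.get(m).size() + " replicas");  // prints "1 message(s) for 3 replicas"
    }
}
```

Since Message has no equals/hashCode override, grouping relies on key identity, which is exactly why the old CachingMessageProducer worked and per-replica createMessage() calls did not.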
[jira] [Updated] (CASSANDRA-5658) TracingStage frequently times out
[ https://issues.apache.org/jira/browse/CASSANDRA-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5658: Attachment: (was: system.log)
[jira] [Updated] (CASSANDRA-5658) TracingStage frequently times out
[ https://issues.apache.org/jira/browse/CASSANDRA-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5658: Attachment: 5658-logs.tar.gz