[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Kołaczkowski updated CASSANDRA-4803:
------------------------------------------
    Attachment:     (was: 0001-Wide-row-iterator-counts-rows-not-columns.patch)

CFRR wide row iterators improvements
------------------------------------
                Key: CASSANDRA-4803
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4803
            Project: Cassandra
         Issue Type: Bug
         Components: Hadoop
   Affects Versions: 1.1.0
           Reporter: Piotr Kołaczkowski
           Assignee: Piotr Kołaczkowski
            Fix For: 1.2.0 rc1

{code}
public float getProgress()
{
    // TODO this is totally broken for wide rows
    // the progress is likely to be reported slightly off the actual, but close enough
    float progress = ((float) iter.rowsRead() / totalRowCount);
    return progress > 1.0F ? 1.0F : progress;
}
{code}

The problem is that iter.rowsRead() does not return the number of rows read from the wide row iterator, but the number of *columns* (every row is counted multiple times).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
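The miscount is easy to see in isolation: a wide-row iterator yields one element per (row, column) pair, so a counter that increments per element returned overcounts rows. Counting only changes of row key gives the figure getProgress() actually needs. A minimal sketch (all names are illustrative, not the actual CFRR code):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RowCountingExample {
    // Counts distinct consecutive row keys in iteration order, i.e. rows,
    // rather than counting every (key, column) element the iterator yields.
    static int distinctRowsRead(Iterator<String> rowKeyPerColumn) {
        int rows = 0;
        String last = null;
        while (rowKeyPerColumn.hasNext()) {
            String key = rowKeyPerColumn.next();
            if (!key.equals(last)) { // new row starts when the key changes
                rows++;
                last = key;
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        // Three columns of row "a" and two of row "b": 5 columns, 2 rows.
        List<String> keys = Arrays.asList("a", "a", "a", "b", "b");
        System.out.println(distinctRowsRead(keys.iterator())); // prints 2
    }
}
```

A naive per-column count would report 5 here, which is why progress against totalRowCount drifts past 1.0 for wide rows.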
[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Kołaczkowski updated CASSANDRA-4803:
------------------------------------------
    Attachment:     (was: 0003-Fixed-get_paged_slice-memtable-and-sstable-column-it.patch)
[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Kołaczkowski updated CASSANDRA-4803:
------------------------------------------
    Attachment:     (was: 0002-Fixed-bugs-in-describe_splits.-CFRR-uses-row-counts-.patch)
[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Kołaczkowski updated CASSANDRA-4803:
------------------------------------------
    Attachment:     (was: 0004-Better-token-range-wrap-around-handling-in-CFIF-CFRR.patch)
[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Kołaczkowski updated CASSANDRA-4803:
------------------------------------------
    Attachment:     (was: 0005-Fixed-handling-of-start_key-end_token-in-get_range_s.patch)
[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Kołaczkowski updated CASSANDRA-4803:
------------------------------------------
    Attachment:     (was: 0006-Code-cleanup-refactoring-in-CFRR.-Fixed-bug-with-mis.patch)
[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Kołaczkowski updated CASSANDRA-4803:
------------------------------------------
    Attachment:     (was: 0007-Fallback-to-describe_splits-in-case-describe_splits_.patch)
[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Kołaczkowski updated CASSANDRA-4803:
------------------------------------------
    Attachment: 0007-Fallback-to-describe_splits-in-case-describe_splits_.patch
                0006-Code-cleanup-refactoring-in-CFRR.-Fixed-bug-with-mis.patch
                0004-Better-token-range-wrap-around-handling-in-CFIF-CFRR.patch

Rebased patches against recent 1.1. Some patches have already been applied, so I removed them from the list.
[jira] [Commented] (CASSANDRA-1311) Triggers
[ https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495196#comment-13495196 ]

Vijay commented on CASSANDRA-1311:
----------------------------------

I pushed the initial version of triggers to https://github.com/Vijay2win/cassandra/tree/1311 for review...

1) Users can implement ITrigger and drop the jar into $CASSANDRA_HOME/triggers.
2) The patch implements a custom class loader that loads classes in order: it first looks for trigger classes in the triggers directory, and if it cannot find the classes needed to complete the operation (ITrigger.augment), it looks in the parent class loader.
   * This buys us two things: users can drop all their dependencies in the directory (kind of sandboxed).
3) Every time we want to load a new jar, a new custom class loader is created and the old one is left for GC (so classes associated with the old loader can be freed up).
   * This should help a bit in avoiding OOM in the perm gen.
4) Batches containing both RowMutations and Counters will throw an exception, because MutateAtomic is not allowed on counters anyway...
5) Currently there is a JMX operation to load new jars, and we also watch the triggers directory every minute looking for new JARs. I am inclined to remove the watch part for safety and let the user call the JMX operation.

TODO: Need to write more test cases. Working on it.

Triggers
--------
                Key: CASSANDRA-1311
                URL: https://issues.apache.org/jira/browse/CASSANDRA-1311
            Project: Cassandra
         Issue Type: New Feature
           Reporter: Maxim Grinev
           Assignee: Vijay
            Fix For: 1.3
        Attachments: HOWTO-PatchAndRunTriggerExample.txt, HOWTO-PatchAndRunTriggerExample-update1.txt, ImplementationDetails.pdf, ImplementationDetails-update1.pdf, trunk-967053.txt, trunk-984391-update1.txt, trunk-984391-update2.txt

Asynchronous triggers are a basic mechanism for implementing various use cases of asynchronous execution of application code at the database side, for example to support indexes and materialized views, online analytics, or push-based data propagation. Please find the motivation, trigger description, and list of applications at: http://maxgrinev.com/2010/07/23/extending-cassandra-with-asynchronous-triggers/ An example of using triggers for indexing: http://maxgrinev.com/2010/07/23/managing-indexes-in-cassandra-using-async-triggers/ Implementation details are attached.
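The lookup order described in point 2 can be sketched as a "child-first" URLClassLoader (class and variable names here are illustrative, not the actual patch): try the trigger jars first, and fall back to the parent loader only when the class is not found there.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ChildFirstClassLoader extends URLClassLoader {
    public ChildFirstClassLoader(URL[] triggerJars, ClassLoader parent) {
        super(triggerJars, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // Look in the triggers directory jars first...
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // ...and only then fall back to the parent class loader.
                    c = super.loadClass(name, resolve);
                }
            }
            if (resolve)
                resolveClass(c);
            return c;
        }
    }
}
```

Because each reload creates a fresh loader instance (point 3), nothing retains the old one once its classes fall out of use, so the old classes become collectible.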
[jira] [Comment Edited] (CASSANDRA-1311) Triggers
[ https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495196#comment-13495196 ]

Vijay edited comment on CASSANDRA-1311 at 11/12/12 10:27 AM:
-------------------------------------------------------------

I pushed the initial version of triggers to https://github.com/Vijay2win/cassandra/tree/1311 for review...

* Users can implement ITrigger and drop the jar into $CASSANDRA_HOME/triggers.
* The patch implements a custom class loader that loads classes in order: it first looks for trigger classes in the triggers directory, and if it cannot find the classes needed to complete the operation (ITrigger.augment), it looks in the parent class loader.
** This buys us two things: users can drop all their dependencies in the directory (kind of sandboxed).
** Every time we want to load a new jar, a new custom class loader is created and the old one is left for GC (so classes associated with the old loader can be freed up).
** This should help a bit in avoiding OOM in the perm gen.
* Batches containing both RowMutations and Counters will throw an exception, because MutateAtomic is not allowed on counters anyway...
* Currently there is a JMX operation to load new jars, and we also watch the triggers directory every minute looking for new JARs. I am inclined to remove the watch part for safety and let the user call the JMX operation.

TODO: Need to write more test cases. Working on it.
[jira] [Comment Edited] (CASSANDRA-1311) Triggers
[ https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495196#comment-13495196 ]

Vijay edited comment on CASSANDRA-1311 at 11/12/12 10:31 AM:
-------------------------------------------------------------

I pushed the initial version of triggers to https://github.com/Vijay2win/cassandra/tree/1311 for review...

* Users can implement ITrigger and drop the jar into $CASSANDRA_HOME/triggers.
* The patch implements a custom class loader that loads classes in order: it first looks for trigger classes in the triggers directory, and if it cannot find the classes needed to complete the operation (ITrigger.augment), it looks in the parent class loader.
** This buys us two things: users can drop all their dependencies in the directory (kind of sandboxed).
** Every time we want to load a new jar, a new custom class loader is created and the old one is left for GC (so classes associated with the old loader can be freed up).
** This should help a bit in avoiding OOM in the perm gen.
* Batches with both RowMutations and Counters will throw an exception, because MutateAtomic is not allowed on counters anyway...
* Currently there is a JMX operation to load new jars, and we also watch the triggers directory every minute looking for new JARs. I am inclined to remove the watch part for safety and let the user call the JMX operation to reload the jars.

TODO: Need to write more test cases. Working on it.
[jira] [Updated] (CASSANDRA-4945) CQL3 does not handle List append or prepend with a Prepared list
[ https://issues.apache.org/jira/browse/CASSANDRA-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-4945:
----------------------------------------
    Attachment: 4945.txt

Hum, apparently CASSANDRA-4739 hasn't been generic enough. Attaching patch to generalize the approach to support all of this.

CQL3 does not handle List append or prepend with a Prepared list
----------------------------------------------------------------
                Key: CASSANDRA-4945
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4945
            Project: Cassandra
         Issue Type: Bug
   Affects Versions: 1.2.0 beta 2
        Environment: CQL3 Thrift methods (new)
           Reporter: Rick Shaw
           Priority: Minor
        Attachments: 4945.txt

I can successfully update a List using the literal syntax:
{code}
UPDATE testcollection SET L = [98,99,100] + L WHERE k = 1;
{code}
And I can successfully upsert a List using the prepared syntax:
{code}
UPDATE testcollection SET L = ? WHERE k = 1
{code}
by providing a decoded List<Integer> in the bind values. But using the prepared syntax for a prepend like:
{code}
UPDATE testcollection SET L = ? + L WHERE k = 1
{code}
fails with the following message:
{code}
java.sql.SQLSyntaxErrorException: InvalidRequestException(why:line 1:33 mismatched input '+' expecting K_WHERE)
    at org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.init(CassandraPreparedStatement.java:92)
    ...
{code}
and an append with the prepared syntax like:
{code}
UPDATE testcollection SET L = L + ? WHERE k = 1
{code}
fails as follows:
{code}
java.sql.SQLSyntaxErrorException: InvalidRequestException(why:invalid operation for non commutative columnfamily testcollection)
    at org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.init(CassandraPreparedStatement.java:92)
    ...
{code}
[jira] [Updated] (CASSANDRA-4945) CQL3 does not handle List append or prepend with a Prepared list
[ https://issues.apache.org/jira/browse/CASSANDRA-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-4945:
----------------------------------------
    Attachment:     (was: 4945.txt)
[jira] [Updated] (CASSANDRA-4945) CQL3 does not handle List append or prepend with a Prepared list
[ https://issues.apache.org/jira/browse/CASSANDRA-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-4945:
----------------------------------------
    Attachment: 4945.txt
git commit: Don't modify thrift list directly (as this doesn't work)
Updated Branches: refs/heads/cassandra-1.2.0 020a83770 -> 91187643b

Don't modify thrift list directly (as this doesn't work)

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91187643
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91187643
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91187643

Branch: refs/heads/cassandra-1.2.0
Commit: 91187643babfa9d52ba64161cd109765a1a3a3ad
Parents: 020a837
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Nov 12 12:18:48 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Nov 12 12:18:48 2012 +0100

----------------------------------------------------------------------
 src/java/org/apache/cassandra/cli/CliClient.java |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/91187643/src/java/org/apache/cassandra/cli/CliClient.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/cli/CliClient.java b/src/java/org/apache/cassandra/cli/CliClient.java
index 3060359..a4b4483 100644
--- a/src/java/org/apache/cassandra/cli/CliClient.java
+++ b/src/java/org/apache/cassandra/cli/CliClient.java
@@ -2796,7 +2796,6 @@ public class CliClient
      */
     private void updateColumnMetaData(CfDef columnFamily, ByteBuffer columnName, String validationClass)
     {
-        List<ColumnDef> columnMetaData = columnFamily.getColumn_metadata();
         ColumnDef column = getColumnDefByName(columnFamily, columnName);

         if (column != null)
@@ -2810,7 +2809,9 @@ public class CliClient
         }
         else
         {
+            List<ColumnDef> columnMetaData = new ArrayList<ColumnDef>(columnFamily.getColumn_metadata());
             columnMetaData.add(new ColumnDef(columnName, validationClass));
+            columnFamily.setColumn_metadata(columnMetaData);
         }
     }
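The pattern the commit adopts, copy the list returned by the getter, modify the copy, then set it back, matters whenever a getter hands out a list that is unmodifiable or not wired back into the owning struct. A minimal illustration with hypothetical names (not Thrift's actual generated API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CopyThenSet {
    // Simulates a Thrift-style struct whose getter may return an
    // unmodifiable (or lazily created) internal list.
    private List<String> columnMetadata = Collections.emptyList();

    public List<String> getColumnMetadata() { return columnMetadata; }
    public void setColumnMetadata(List<String> m) { columnMetadata = m; }

    // Mutating getColumnMetadata()'s result directly would throw
    // UnsupportedOperationException here; instead, copy, modify, set back.
    public void addColumn(String name) {
        List<String> copy = new ArrayList<>(getColumnMetadata());
        copy.add(name);
        setColumnMetadata(copy);
    }
}
```

Calling `getColumnMetadata().add(...)` on a fresh instance throws, while `addColumn(...)` succeeds, which is exactly the failure mode the one-line fix in CliClient avoids.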
[jira] [Commented] (CASSANDRA-4939) Add debug logging to list filenames processed by o.a.c.db.Directories.migrateFile method
[ https://issues.apache.org/jira/browse/CASSANDRA-4939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495210#comment-13495210 ]

Sylvain Lebresne commented on CASSANDRA-4939:
---------------------------------------------

+1

Add debug logging to list filenames processed by o.a.c.db.Directories.migrateFile method
----------------------------------------------------------------------------------------
                Key: CASSANDRA-4939
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4939
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 1.1.6
           Reporter: J.B. Langston
           Assignee: Dave Brosius
           Priority: Minor
            Fix For: 1.2.1
        Attachments: 4939.txt

Customer is getting the following error when starting Cassandra:

ERROR 20:20:06,635 Exception encountered during startup
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
    at java.lang.String.substring(Unknown Source)
    at org.apache.cassandra.db.Directories.migrateFile(Directories.java:556)
    at org.apache.cassandra.db.Directories.migrateSSTables(Directories.java:493)

It looks like this is caused by a file with an unexpected name in one of his keyspace directories. However, because we don't log the name of the file as it is processed, it is hard to tell which file is causing it to choke. It would be good to add a logger.debug statement at the beginning of the method to list the file currently being processed.
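A sketch of the kind of change the ticket asks for (method and parsing logic here are hypothetical simplifications, not the actual migrateFile code): logging the filename before parsing it means that when a stray file trips the substring call, the offending name is in the debug log instead of only a bare StringIndexOutOfBoundsException.

```java
import java.util.logging.Logger;

public class MigrateFileLogging {
    private static final Logger logger = Logger.getLogger("Directories");

    // Simplified stand-in for migrateFile's name parsing: take the part
    // before the first '-'. The debug line identifies the file up front,
    // and the explicit check turns a cryptic index error into a message
    // that names the file.
    static String keyspaceFromFilename(String filename) {
        logger.fine("Migrating file " + filename);
        int dash = filename.indexOf('-');
        if (dash < 0)
            throw new IllegalArgumentException("Unexpected file name: " + filename);
        return filename.substring(0, dash);
    }
}
```

With the original unguarded `filename.substring(0, filename.indexOf('-'))`, a file like `.DS_Store` yields `String index out of range: -1` with no hint of which file was at fault.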
[1/2] git commit: Don't modify thrift list directly (as this doesn't work)
Updated Branches: refs/heads/cassandra-1.2 de6260ddc -> 91187643b

Don't modify thrift list directly (as this doesn't work)

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91187643
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91187643
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91187643

Branch: refs/heads/cassandra-1.2
Commit: 91187643babfa9d52ba64161cd109765a1a3a3ad
Parents: 020a837
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Nov 12 12:18:48 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Nov 12 12:18:48 2012 +0100

----------------------------------------------------------------------
 src/java/org/apache/cassandra/cli/CliClient.java |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
----------------------------------------------------------------------
[2/2] git commit: Update CQL example in readme file
Update CQL example in readme file

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/020a8377
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/020a8377
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/020a8377

Branch: refs/heads/cassandra-1.2
Commit: 020a83770f65a2d7f4c5739bcc578e973923f4bb
Parents: de6260d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Nov 12 08:25:27 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Nov 12 08:25:27 2012 +0100

----------------------------------------------------------------------
 README.txt | 11 +--
 1 files changed, 5 insertions(+), 6 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/020a8377/README.txt
----------------------------------------------------------------------
diff --git a/README.txt b/README.txt
index 3391f64..1356c9c 100644
--- a/README.txt
+++ b/README.txt
@@ -52,24 +52,23 @@ Similarly, uninstall will remove the service.
 Now let's try to read and write some data using the Cassandra Query
 Language:

-  * bin/cqlsh --cql3
+  * bin/cqlsh

 The command line client is interactive so if everything worked you should
 be sitting in front of a prompt...

   Connected to Test Cluster at localhost:9160.
-  [cqlsh 2.2.0 | Cassandra 1.1.3 | CQL spec 3.0.0 | Thrift protocol 19.32.0]
+  [cqlsh 2.2.0 | Cassandra 1.2.0 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
   Use HELP for help.
   cqlsh

 As the banner says, you can use 'help;' or '?' to see what CQL has to
 offer, and 'quit;' or 'exit;' when you've had enough fun. But lets try
 something slightly more interesting:

   cqlsh CREATE SCHEMA schema1
-        WITH strategy_class = 'SimpleStrategy'
-        AND strategy_options:replication_factor='1';
+        WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
   cqlsh USE schema1;
   cqlsh:Schema1 CREATE TABLE users (
          user_id varchar PRIMARY KEY,
@@ -89,7 +88,7 @@ something slightly more interesting:
 If your session looks similar to what's above, congrats, your single node
 cluster is operational!

-For more on what commands are supported by CQL, see
+For more on what commands are supported by CQL, see
 https://github.com/apache/cassandra/blob/trunk/doc/cql3/CQL.textile. A
 reasonable way to think of it is as, SQL minus joins and subqueries.
[1/2] git commit: Merge branch 'cassandra-1.2' into trunk
Updated Branches: refs/heads/trunk 563417451 - a007e1b5f Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a007e1b5 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a007e1b5 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a007e1b5 Branch: refs/heads/trunk Commit: a007e1b5f8fd71d572041fdff5abb2d70d66363b Parents: 5634174 9118764 Author: Sylvain Lebresne sylv...@datastax.com Authored: Mon Nov 12 12:20:06 2012 +0100 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Mon Nov 12 12:20:06 2012 +0100 -- src/java/org/apache/cassandra/cli/CliClient.java |3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) --
[2/2] git commit: Don't modify thrift list directly (as this doesn't work)
Don't modify thrift list directly (as this doesn't work) Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91187643 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91187643 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91187643 Branch: refs/heads/trunk Commit: 91187643babfa9d52ba64161cd109765a1a3a3ad Parents: 020a837 Author: Sylvain Lebresne sylv...@datastax.com Authored: Mon Nov 12 12:18:48 2012 +0100 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Mon Nov 12 12:18:48 2012 +0100 -- src/java/org/apache/cassandra/cli/CliClient.java |3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/91187643/src/java/org/apache/cassandra/cli/CliClient.java -- diff --git a/src/java/org/apache/cassandra/cli/CliClient.java b/src/java/org/apache/cassandra/cli/CliClient.java index 3060359..a4b4483 100644 --- a/src/java/org/apache/cassandra/cli/CliClient.java +++ b/src/java/org/apache/cassandra/cli/CliClient.java @@ -2796,7 +2796,6 @@ public class CliClient */ private void updateColumnMetaData(CfDef columnFamily, ByteBuffer columnName, String validationClass) { -List<ColumnDef> columnMetaData = columnFamily.getColumn_metadata(); ColumnDef column = getColumnDefByName(columnFamily, columnName); if (column != null) @@ -2810,7 +2809,9 @@ public class CliClient } else { +List<ColumnDef> columnMetaData = new ArrayList<ColumnDef>(columnFamily.getColumn_metadata()); columnMetaData.add(new ColumnDef(columnName, validationClass)); +columnFamily.setColumn_metadata(columnMetaData); } }
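The commit above avoids mutating the list returned by the Thrift getter and instead copies it, mutates the copy, and sets it back on the struct. A minimal sketch of that copy-then-set pattern, using a plain stand-in class rather than the real Thrift-generated CfDef (all names here are illustrative, not Cassandra's):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CopyOnWriteListSketch {
    // Stand-in for a Thrift-generated struct; the getter may hand back a
    // list that is unmodifiable (or otherwise unsafe to mutate in place).
    static class CfDefStandIn {
        private List<String> columnMetadata = Collections.emptyList();
        List<String> getColumnMetadata() { return columnMetadata; }
        void setColumnMetadata(List<String> m) { columnMetadata = m; }
    }

    // Mirrors the patch: copy, mutate the copy, then set it back.
    static void addColumn(CfDefStandIn cf, String def) {
        List<String> copy = new ArrayList<>(cf.getColumnMetadata());
        copy.add(def);
        cf.setColumnMetadata(copy);
    }

    public static void main(String[] args) {
        CfDefStandIn cf = new CfDefStandIn();
        // Mutating cf.getColumnMetadata() directly here would throw
        // UnsupportedOperationException -- "this doesn't work".
        addColumn(cf, "col1");
        System.out.println(cf.getColumnMetadata());
    }
}
```

Calling the setter also keeps the struct's internal bookkeeping consistent, which direct mutation of the returned list can bypass.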
[jira] [Created] (CASSANDRA-4948) inserting data into CF with cassandra compound key from PIG
Shamim Ahmed created CASSANDRA-4948: --- Summary: inserting data into CF with cassandra compound key from PIG Key: CASSANDRA-4948 URL: https://issues.apache.org/jira/browse/CASSANDRA-4948 Project: Cassandra Issue Type: Improvement Components: Core, Hadoop Affects Versions: 1.1.5 Reporter: Shamim Ahmed Fix For: 1.1.7 Support inserting data into a CF with a Cassandra compound key. For example, we have a CF like this: CREATE TABLE clicks ( user_id varchar, time timestamp, url varchar, PRIMARY KEY (user_id, time) ) WITH COMPACT STORAGE; We want to insert data into the above CF from Pig. Currently Cassandra returns the following exception: java.io.IOException: InvalidRequestException(why:Expected 8 or 0 byte long for date (4)) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
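The exception in CASSANDRA-4948 says the date validator received 4 bytes where it expects 8 (or 0): the timestamp column was serialized as a 4-byte int rather than the 8-byte long Cassandra's date type expects. A hedged sketch of that size check and of the correct encoding (the validator method below is a stand-in for illustration, not the actual Cassandra class):

```java
import java.nio.ByteBuffer;

public class TimestampEncodingSketch {
    // Encode epoch millis as an 8-byte big-endian long, the layout the
    // date/long validator expects.
    static ByteBuffer encodeTimestamp(long millis) {
        return (ByteBuffer) ByteBuffer.allocate(8).putLong(millis).flip();
    }

    // Stand-in for the validation implied by the error message:
    // "Expected 8 or 0 byte long for date (4)".
    static boolean isValidDate(ByteBuffer b) {
        int n = b.remaining();
        return n == 8 || n == 0;
    }

    public static void main(String[] args) {
        // An 8-byte buffer passes validation...
        System.out.println(isValidDate(encodeTimestamp(System.currentTimeMillis())));
        // ...while a 4-byte int buffer is the case the exception reports.
        System.out.println(isValidDate(ByteBuffer.allocate(4)));
    }
}
```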
[jira] [Commented] (CASSANDRA-3719) Upgrade thrift to latest release version (0.9.x)
[ https://issues.apache.org/jira/browse/CASSANDRA-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495242#comment-13495242 ] T Jake Luciani commented on CASSANDRA-3719: --- My only concern about this is that the rest of the Apache projects (hadoop, hive, etc.) are built on 0.7.0. Is there a compelling reason to upgrade this? Upgrade thrift to latest release version (0.9.x) Key: CASSANDRA-3719 URL: https://issues.apache.org/jira/browse/CASSANDRA-3719 Project: Cassandra Issue Type: Task Reporter: Jake Farrell Assignee: Jake Farrell Priority: Minor Attachments: Cassandra-3719-v1-001-updated-source.patch, Cassandra-3719-v1-002-thrift-jar-and-license.patch In CASSANDRA-3213 thrift was upgraded to Thrift 0.7 and not the latest 0.8 release. This is due to THRIFT-1167, where the TNonblockingTransport in TNonblockingServer.FrameBuffer was moved into AbstractNonblockingServer.FrameBuffer and was changed from public to private. This causes the transport to not be available for SocketSessionManagementService as noted above. There is no short-term workaround for this. I have everything ready for patching, but with the above mentioned issue it will be impossible to use Thrift 0.8.0. The fix for this is committed (THRIFT-1464) and will be available in the next Thrift release, 0.9. Adding this to keep track of, and will update with patches for the current version of Thrift when pushing out the next release.
[jira] [Created] (CASSANDRA-4949) Expose reload trigger functionality to nodetool to orchestrate trigger (re)loads across the cluster
Nate McCall created CASSANDRA-4949: -- Summary: Expose reload trigger functionality to nodetool to orchestrate trigger (re)loads across the cluster Key: CASSANDRA-4949 URL: https://issues.apache.org/jira/browse/CASSANDRA-4949 Project: Cassandra Issue Type: New Feature Affects Versions: 1.3 Reporter: Nate McCall Pending completion of CASSANDRA-1311, this would expose the load/reload trigger library functionality to nodetool.
[jira] [Comment Edited] (CASSANDRA-1311) Triggers
[ https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495247#comment-13495247 ] Nate McCall edited comment on CASSANDRA-1311 at 11/12/12 1:31 PM: -- bq. Currently there is a JMX to load the new jars and we also watch triggers directory every minute to looking for new JAR's, I am inclined to removing the watch part for safety and let the user call the JMX to reload the jar's. In the use cases I see for this, a timer would not give me enough control in orchestrating an update to the trigger code. I would much prefer JMX - could be more easily hooked into nodetool for all at once as well. (Edit) thought about this update orchestration for a minute, created CASSANDRA-4949 for using nodetool for such. was (Author: zznate): bq. Currently there is a JMX to load the new jars and we also watch triggers directory every minute to looking for new JAR's, I am inclined to removing the watch part for safety and let the user call the JMX to reload the jar's. In the use cases I see for this, a timer would not give me enough control in orchestrating an update to the trigger code. I would much prefer JMX - could be more easily hooked into nodetool for all at once as well. Triggers Key: CASSANDRA-1311 URL: https://issues.apache.org/jira/browse/CASSANDRA-1311 Project: Cassandra Issue Type: New Feature Reporter: Maxim Grinev Assignee: Vijay Fix For: 1.3 Attachments: HOWTO-PatchAndRunTriggerExample.txt, HOWTO-PatchAndRunTriggerExample-update1.txt, ImplementationDetails.pdf, ImplementationDetails-update1.pdf, trunk-967053.txt, trunk-984391-update1.txt, trunk-984391-update2.txt Asynchronous triggers is a basic mechanism to implement various use cases of asynchronous execution of application code at database side. For example to support indexes and materialized views, online analytics, push-based data propagation. 
Please find the motivation, triggers description and list of applications: http://maxgrinev.com/2010/07/23/extending-cassandra-with-asynchronous-triggers/ An example of using triggers for indexing: http://maxgrinev.com/2010/07/23/managing-indexes-in-cassandra-using-async-triggers/ Implementation details are attached.
[jira] [Commented] (CASSANDRA-4940) Truncate doesn't clear row cache
[ https://issues.apache.org/jira/browse/CASSANDRA-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495271#comment-13495271 ] Sylvain Lebresne commented on CASSANDRA-4940: - +1 (I've pushed a test in dtest for this) Truncate doesn't clear row cache Key: CASSANDRA-4940 URL: https://issues.apache.org/jira/browse/CASSANDRA-4940 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremiah Jordan Assignee: Jonathan Ellis Priority: Minor Fix For: 1.1.7 Attachments: 4940.txt Truncate doesn't clear the row cache. A select * from the table, which skips the row cache, returns no data, but selecting by key does. cqlsh:temp select v1..v3 from temp2 where k in (3,2,1); v1 | v2 | v3 ++ 16 | 17 | 18 12 | 13 | 14 8 | 9 | 10 cqlsh:temp truncate temp2; cqlsh:temp select v1..v3 from temp2 where k in (3,2,1); v1 | v2 | v3 ++ 16 | 17 | 18 12 | 13 | 14 8 | 9 | 10 cqlsh:temp select * from temp2; cqlsh:temp select v1..v3 from temp2 where k in (3,2,1); v1 | v2 | v3 ++ 16 | 17 | 18 12 | 13 | 14 8 | 9 | 10
[5/8] git commit: expunge row cache post-truncate patch by jbellis; reviewed by slebresne for CASSANDRA-4940
expunge row cache post-truncate patch by jbellis; reviewed by slebresne for CASSANDRA-4940 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a05f6766 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a05f6766 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a05f6766 Branch: refs/heads/trunk Commit: a05f6766e292769fde3b7536e5586cf324a1cb17 Parents: f32110c Author: Jonathan Ellis jbel...@apache.org Authored: Mon Nov 12 08:27:16 2012 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Mon Nov 12 08:27:16 2012 -0600 -- CHANGES.txt|1 + .../cassandra/db/compaction/CompactionManager.java |8 2 files changed, 9 insertions(+), 0 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a05f6766/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2f945bb..088daa7 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 1.1.7 + * expunge row cache post-truncate (CASSANDRA-4940) * remove IAuthority2 (CASSANDRA-4875) * add get[Row|Key]CacheEntries to CacheServiceMBean (CASSANDRA-4859) * fix get_paged_slice to wrap to next row correctly (CASSANDRA-4816) http://git-wip-us.apache.org/repos/asf/cassandra/blob/a05f6766/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java index fcdf45b..edfea0a 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java @@ -32,6 +32,7 @@ import javax.management.MBeanServer; import javax.management.ObjectName; import org.apache.cassandra.cache.AutoSavingCache; +import org.apache.cassandra.cache.RowCacheKey; import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor; import org.apache.cassandra.concurrent.NamedThreadFactory; import 
org.apache.cassandra.config.CFMetaData; @@ -48,6 +49,7 @@ import org.apache.cassandra.io.sstable.*; import org.apache.cassandra.io.util.FileUtils; import org.apache.cassandra.io.util.RandomAccessReader; import org.apache.cassandra.service.AntiEntropyService; +import org.apache.cassandra.service.CacheService; import org.apache.cassandra.service.StorageService; import org.apache.cassandra.utils.*; import org.slf4j.Logger; @@ -813,6 +815,12 @@ public class CompactionManager implements CompactionManagerMBean for (SecondaryIndex index : main.indexManager.getIndexes()) index.truncate(truncatedAt); + +for (RowCacheKey key : CacheService.instance.rowCache.getKeySet()) +{ +if (key.cfId == main.metadata.cfId) +CacheService.instance.rowCache.remove(key); +} } finally {
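The patch above fixes CASSANDRA-4940 by scanning the row-cache key set and removing every entry whose cfId matches the truncated column family. A simplified stand-in of that scan-and-remove loop (RowKey and the plain map below are illustrative, not Cassandra's RowCacheKey/AutoSavingCache):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RowCacheExpungeSketch {
    // Stand-in for RowCacheKey: a (cfId, row key) pair.
    static class RowKey {
        final int cfId;
        final String key;
        RowKey(int cfId, String key) { this.cfId = cfId; this.key = key; }
    }

    // Remove every cached row belonging to the truncated column family.
    static void expunge(Map<RowKey, byte[]> rowCache, int cfId) {
        // A ConcurrentHashMap key-set iterator is weakly consistent and
        // tolerates removal during iteration, so removing while scanning
        // is safe here.
        for (RowKey k : rowCache.keySet()) {
            if (k.cfId == cfId)
                rowCache.remove(k);
        }
    }

    public static void main(String[] args) {
        Map<RowKey, byte[]> cache = new ConcurrentHashMap<>();
        cache.put(new RowKey(1, "a"), new byte[0]);
        cache.put(new RowKey(1, "b"), new byte[0]);
        cache.put(new RowKey(2, "c"), new byte[0]);
        expunge(cache, 1);
        System.out.println(cache.size()); // only the cfId=2 entry remains
    }
}
```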
[3/8] git commit: merge from 1.1
merge from 1.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93f8fec9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93f8fec9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93f8fec9 Branch: refs/heads/cassandra-1.2 Commit: 93f8fec9d21e4fbe48241fb096892bd24080207b Parents: 9118764 a05f676 Author: Jonathan Ellis jbel...@apache.org Authored: Mon Nov 12 08:28:27 2012 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Mon Nov 12 08:28:27 2012 -0600 -- CHANGES.txt|2 ++ .../cassandra/db/compaction/CompactionManager.java |8 2 files changed, 10 insertions(+), 0 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/93f8fec9/CHANGES.txt -- diff --cc CHANGES.txt index 43fc53a,088daa7..ff79b9a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,71 -1,6 +1,73 @@@ -1.1.7 +1.2-rc1 + * Move CompressionMetadata off-heap (CASSANDRA-4937) + * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924) + * Fix rare race condition in getExpireTimeForEndpoint (CASSANDRA-4402) + * acquire references to overlapping sstables during compaction so bloom filter + doesn't get free'd prematurely (CASSANDRA-4934) + * Don't share slice query filter in CQL3 SelectStatement (CASSANDRA-4928) + * Separate tracing from Log4J (CASSANDRA-4861) + * Exclude gcable tombstones from merkle-tree computation (CASSANDRA-4905) + * Better printing of AbstractBounds for tracing (CASSANDRA-4931) ++Merged from 1.1: + * expunge row cache post-truncate (CASSANDRA-4940) - * remove IAuthority2 (CASSANDRA-4875) + + +1.2-beta2 + * fp rate of 1.0 disables BF entirely; LCS defaults to 1.0 (CASSANDRA-4876) + * off-heap bloom filters for row keys (CASSANDRA_4865) + * add extension point for sstable components (CASSANDRA-4049) + * improve tracing output (CASSANDRA-4852, 4862) + * make TRACE verb droppable (CASSANDRA-4672) + * fix BulkLoader recognition of CQL3 columnfamilies (CASSANDRA-4755) + * Sort 
commitlog segments for replay by id instead of mtime (CASSANDRA-4793) + * Make hint delivery asynchronous (CASSANDRA-4761) + * Pluggable Thrift transport factories for CLI and cqlsh (CASSANDRA-4609, 4610) + * cassandra-cli: allow Double value type to be inserted to a column (CASSANDRA-4661) + * Add ability to use custom TServerFactory implementations (CASSANDRA-4608) + * optimize batchlog flushing to skip successful batches (CASSANDRA-4667) + * include metadata for system keyspace itself in schema tables (CASSANDRA-4416) + * add check to PropertyFileSnitch to verify presence of location for + local node (CASSANDRA-4728) + * add PBSPredictor consistency modeler (CASSANDRA-4261) + * remove vestiges of Thrift unframed mode (CASSANDRA-4729) + * optimize single-row PK lookups (CASSANDRA-4710) + * adjust blockFor calculation to account for pending ranges due to node + movement (CASSANDRA-833) + * Change CQL version to 3.0.0 and stop accepting 3.0.0-beta1 (CASSANDRA-4649) + * (CQL3) Make prepared statement global instead of per connection + (CASSANDRA-4449) + * Fix scrubbing of CQL3 created tables (CASSANDRA-4685) + * (CQL3) Fix validation when using counter and regular columns in the same + table (CASSANDRA-4706) + * Fix bug starting Cassandra with simple authentication (CASSANDRA-4648) + * Add support for batchlog in CQL3 (CASSANDRA-4545, 4738) + * Add support for multiple column family outputs in CFOF (CASSANDRA-4208) + * Support repairing only the local DC nodes (CASSANDRA-4747) + * Use rpc_address for binary protocol and change default port (CASSANRA-4751) + * Fix use of collections in prepared statements (CASSANDRA-4739) + * Store more information into peers table (CASSANDRA-4351, 4814) + * Configurable bucket size for size tiered compaction (CASSANDRA-4704) + * Run leveled compaction in parallel (CASSANDRA-4310) + * Fix potential NPE during CFS reload (CASSANDRA-4786) + * Composite indexes may miss results (CASSANDRA-4796) + * Move consistency level to the protocol 
level (CASSANDRA-4734, 4824) + * Fix Subcolumn slice ends not respected (CASSANDRA-4826) + * Fix Assertion error in cql3 select (CASSANDRA-4783) + * Fix list prepend logic (CQL3) (CASSANDRA-4835) + * Add booleans as literals in CQL3 (CASSANDRA-4776) + * Allow renaming PK columns in CQL3 (CASSANDRA-4822) + * Fix binary protocol NEW_NODE event (CASSANDRA-4679) + * Fix potential infinite loop in tombstone compaction (CASSANDRA-4781) + * Remove system tables accounting from schema (CASSANDRA-4850) + * Force provided columns in clustering key order in 'CLUSTERING ORDER BY' (CASSANDRA-4881) + * Fix composite index bug (CASSANDRA-4884) + * Fix short
[1/8] git commit: Merge branch 'cassandra-1.2' into trunk
Updated Branches: refs/heads/cassandra-1.1 f32110c6c - a05f6766e refs/heads/cassandra-1.2 91187643b - 93f8fec9d refs/heads/cassandra-1.2.0 91187643b - 93f8fec9d refs/heads/trunk a007e1b5f - d104ca636 Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d104ca63 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d104ca63 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d104ca63 Branch: refs/heads/trunk Commit: d104ca636c48433d8d435fa932ff92b6afd409b2 Parents: a007e1b 93f8fec Author: Jonathan Ellis jbel...@apache.org Authored: Mon Nov 12 08:28:50 2012 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Mon Nov 12 08:28:50 2012 -0600 -- CHANGES.txt|2 ++ .../cassandra/db/compaction/CompactionManager.java |8 2 files changed, 10 insertions(+), 0 deletions(-) --
[8/8] git commit: expunge row cache post-truncate patch by jbellis; reviewed by slebresne for CASSANDRA-4940
expunge row cache post-truncate patch by jbellis; reviewed by slebresne for CASSANDRA-4940

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a05f6766
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a05f6766
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a05f6766
Branch: refs/heads/cassandra-1.1
Commit: a05f6766e292769fde3b7536e5586cf324a1cb17
Parents: f32110c
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Mon Nov 12 08:27:16 2012 -0600
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Mon Nov 12 08:27:16 2012 -0600
--
 CHANGES.txt                                        |    1 +
 .../cassandra/db/compaction/CompactionManager.java |    8 ++++++++
 2 files changed, 9 insertions(+), 0 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a05f6766/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2f945bb..088daa7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.1.7
+ * expunge row cache post-truncate (CASSANDRA-4940)
 * remove IAuthority2 (CASSANDRA-4875)
 * add get[Row|Key]CacheEntries to CacheServiceMBean (CASSANDRA-4859)
 * fix get_paged_slice to wrap to next row correctly (CASSANDRA-4816)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a05f6766/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index fcdf45b..edfea0a 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -32,6 +32,7 @@ import javax.management.MBeanServer;
 import javax.management.ObjectName;
 import org.apache.cassandra.cache.AutoSavingCache;
+import org.apache.cassandra.cache.RowCacheKey;
 import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
 import org.apache.cassandra.concurrent.NamedThreadFactory;
 import org.apache.cassandra.config.CFMetaData;
@@ -48,6 +49,7 @@ import org.apache.cassandra.io.sstable.*;
 import org.apache.cassandra.io.util.FileUtils;
 import org.apache.cassandra.io.util.RandomAccessReader;
 import org.apache.cassandra.service.AntiEntropyService;
+import org.apache.cassandra.service.CacheService;
 import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.utils.*;
 import org.slf4j.Logger;
@@ -813,6 +815,12 @@ public class CompactionManager implements CompactionManagerMBean
             for (SecondaryIndex index : main.indexManager.getIndexes())
                 index.truncate(truncatedAt);
+
+            for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
+            {
+                if (key.cfId == main.metadata.cfId)
+                    CacheService.instance.rowCache.remove(key);
+            }
         }
         finally
         {
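The loop added by this patch scans the entire row cache key set and evicts every entry belonging to the truncated column family. A self-contained sketch of that eviction logic, using a plain HashMap in place of Cassandra's CacheService and a hypothetical CacheKey record in place of RowCacheKey (these names are illustrative, not Cassandra APIs):

```java
import java.util.ArrayList;
import java.util.Map;
import java.util.UUID;

public class RowCacheExpunge {
    // Hypothetical stand-in for Cassandra's RowCacheKey: (column family id, row key).
    record CacheKey(UUID cfId, String rowKey) {}

    // Remove every cached row belonging to the truncated column family,
    // mirroring the scan-and-remove loop added to CompactionManager.
    static void expunge(Map<CacheKey, byte[]> rowCache, UUID truncatedCfId) {
        // Copy the key set first so removal is safe while iterating.
        for (CacheKey key : new ArrayList<>(rowCache.keySet())) {
            if (key.cfId().equals(truncatedCfId))
                rowCache.remove(key);
        }
    }
}
```

Note that the scan is O(total cached rows), the trade-off the patch accepts rather than maintaining a per-CF index of cached keys.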
[jira] [Resolved] (CASSANDRA-4940) Truncate doesn't clear row cache
[ https://issues.apache.org/jira/browse/CASSANDRA-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-4940. Resolution: Fixed. Fix Version/s: 1.2.0 rc1.

committed

Truncate doesn't clear row cache
Key: CASSANDRA-4940
URL: https://issues.apache.org/jira/browse/CASSANDRA-4940
Project: Cassandra
Issue Type: Bug
Components: Core
Reporter: Jeremiah Jordan
Assignee: Jonathan Ellis
Priority: Minor
Fix For: 1.1.7, 1.2.0 rc1
Attachments: 4940.txt

Truncate doesn't clear the row cache. A select * from the table (which skips the row cache) returns no data, but selecting by key does:

cqlsh:temp> select v1..v3 from temp2 where k in (3,2,1);
 v1 | v2 | v3
----+----+----
 16 | 17 | 18
 12 | 13 | 14
  8 |  9 | 10

cqlsh:temp> truncate temp2;
cqlsh:temp> select v1..v3 from temp2 where k in (3,2,1);
 v1 | v2 | v3
----+----+----
 16 | 17 | 18
 12 | 13 | 14
  8 |  9 | 10

cqlsh:temp> select * from temp2;
cqlsh:temp> select v1..v3 from temp2 where k in (3,2,1);
 v1 | v2 | v3
----+----+----
 16 | 17 | 18
 12 | 13 | 14
  8 |  9 | 10

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3974) Per-CF TTL
[ https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495293#comment-13495293 ] Sylvain Lebresne commented on CASSANDRA-3974: - bq. Should we make the constructors private and expose a factory method that requires the metadata? I like that idea. Per-CF TTL -- Key: CASSANDRA-3974 URL: https://issues.apache.org/jira/browse/CASSANDRA-3974 Project: Cassandra Issue Type: New Feature Affects Versions: 1.2.0 beta 1 Reporter: Jonathan Ellis Assignee: Kirk True Priority: Minor Fix For: 1.2.0 rc1 Attachments: trunk-3974.txt, trunk-3974v2.txt, trunk-3974v3.txt, trunk-3974v4.txt, trunk-3974v5.txt, trunk-3974v6.txt Per-CF TTL would allow compaction optimizations (drop an entire sstable's worth of expired data) that we can't do with per-column. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (CASSANDRA-4947) Disallow client-provided timestamps in cql3
[ https://issues.apache.org/jira/browse/CASSANDRA-4947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-4947. --- Resolution: Won't Fix Assignee: (was: Sylvain Lebresne) Ben Coverston: Client-provided timestamps are helpful if a write fails and you want to replay the writes later. Sylvain: I remember once using a client timestamp in a hand-made 'repair' job, where I repaired some denormalized table from the data of another one, and setting the timestamp manually was useful to make sure I wasn't breaking stuff due to a concurrent/newer update. Disallow client-provided timestamps in cql3 --- Key: CASSANDRA-4947 URL: https://issues.apache.org/jira/browse/CASSANDRA-4947 Project: Cassandra Issue Type: New Feature Components: API Reporter: Jonathan Ellis Priority: Blocker Fix For: 1.2.0 Client-provided timestamps cause a lot of pain since we can't make any assumptions as to client:server clock agreement. Is this worth the pain? If not we should rip out {{WITH TIMESTAMP}} before 1.2.0. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4788) streaming can put files in the wrong location
[ https://issues.apache.org/jira/browse/CASSANDRA-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-4788: Affects Version/s: (was: 1.2.0 beta 1) 1.1.0 streaming can put files in the wrong location - Key: CASSANDRA-4788 URL: https://issues.apache.org/jira/browse/CASSANDRA-4788 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brandon Williams Assignee: Yuki Morishita Fix For: 1.2.0 beta 2 Attachments: 4788.txt Some, but not all streaming incorrectly puts files in the top level data directory. Easiest way to repro that I've seen is bootstrap where it happens 100% of the time, but other operations like move and repair seem to do the right thing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4788) streaming can put files in the wrong location
[ https://issues.apache.org/jira/browse/CASSANDRA-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-4788: Affects Version/s: (was: 1.1.0) 1.2.0 beta 1 streaming can put files in the wrong location - Key: CASSANDRA-4788 URL: https://issues.apache.org/jira/browse/CASSANDRA-4788 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.0 beta 1 Reporter: Brandon Williams Assignee: Yuki Morishita Fix For: 1.2.0 beta 2 Attachments: 4788.txt Some, but not all streaming incorrectly puts files in the top level data directory. Easiest way to repro that I've seen is bootstrap where it happens 100% of the time, but other operations like move and repair seem to do the right thing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4788) streaming can put files in the wrong location
[ https://issues.apache.org/jira/browse/CASSANDRA-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495319#comment-13495319 ] Jonathan Ellis commented on CASSANDRA-4788: --- What would have caused this that is 1.2-specific? Superficially this sounds like it is related to CASSANDRA-2749. streaming can put files in the wrong location - Key: CASSANDRA-4788 URL: https://issues.apache.org/jira/browse/CASSANDRA-4788 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.0 beta 1 Reporter: Brandon Williams Assignee: Yuki Morishita Fix For: 1.2.0 beta 2 Attachments: 4788.txt Some, but not all streaming incorrectly puts files in the top level data directory. Easiest way to repro that I've seen is bootstrap where it happens 100% of the time, but other operations like move and repair seem to do the right thing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4788) streaming can put files in the wrong location
[ https://issues.apache.org/jira/browse/CASSANDRA-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495328#comment-13495328 ] Yuki Morishita commented on CASSANDRA-4788: --- This came from CASSANDRA-4292, so it only affects 1.2. streaming can put files in the wrong location - Key: CASSANDRA-4788 URL: https://issues.apache.org/jira/browse/CASSANDRA-4788 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.0 beta 1 Reporter: Brandon Williams Assignee: Yuki Morishita Fix For: 1.2.0 beta 2 Attachments: 4788.txt Some, but not all, streaming incorrectly puts files in the top level data directory. Easiest way to repro that I've seen is bootstrap, where it happens 100% of the time, but other operations like move and repair seem to do the right thing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3719) Upgrade thrift to latest release version (0.9.x)
[ https://issues.apache.org/jira/browse/CASSANDRA-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495330#comment-13495330 ] Jake Farrell commented on CASSANDRA-3719: - Have started converting over other projects as well; Cassandra and HBase were just the first projects I did. HBase upgraded to Thrift 0.9.0 in ticket HBASE-7005 Upgrade thrift to latest release version (0.9.x) Key: CASSANDRA-3719 URL: https://issues.apache.org/jira/browse/CASSANDRA-3719 Project: Cassandra Issue Type: Task Reporter: Jake Farrell Assignee: Jake Farrell Priority: Minor Attachments: Cassandra-3719-v1-001-updated-source.patch, Cassandra-3719-v1-002-thrift-jar-and-license.patch In Cassandra-3213 thrift was upgraded to thrift 0.7 and not the latest 0.8 release. This is due to THRIFT-1167 where the TNonblockingTransport in TNonblockingServer.FrameBuffer was moved into AbstractNonblockingServer.FrameBuffer and was changed from public to private. This causes the transport to not be available for SocketSessionManagementService as noted above. There is no short term workaround for this. I have everything ready for patching but with the above mentioned issue it will be impossible to use Thrift 0.8.0. The fix for this is committed (THRIFT-1464) and will be available in the next Thrift release 0.9. Adding this to keep track of and will update with patches for the current version of Thrift when pushing out the next release -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3719) Upgrade thrift to latest release version (0.9.x)
[ https://issues.apache.org/jira/browse/CASSANDRA-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-3719: -- Fix Version/s: 1.3 We've branched 1.2 so it should be safe to commit this to trunk for 1.3. Upgrade thrift to latest release version (0.9.x) Key: CASSANDRA-3719 URL: https://issues.apache.org/jira/browse/CASSANDRA-3719 Project: Cassandra Issue Type: Task Reporter: Jake Farrell Assignee: Jake Farrell Priority: Minor Fix For: 1.3 Attachments: Cassandra-3719-v1-001-updated-source.patch, Cassandra-3719-v1-002-thrift-jar-and-license.patch In Cassandra-3213 thrift was upgraded to thrift 0.7 and not the latest 0.8 release. This is due to THRIFT-1167 where the TNonblockingTransport in TNonblockingServer.FrameBuffer was moved into AbstractNonblockingServer.FrameBuffer and was changed from public to private. This causes the transport to not be available for SocketSessionManagementService as noted above. There is no short term workaround for this. I have everything ready for patching but with the above mentioned issue it will be impossible to use Thrift 0.8.0. The fix for this is committed (THRIFT-1464) and will be available in the next Thrift release 0.9. Adding this to keep track of and will update with patches for the current version of Thrift when pushing out the next release -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-1311) Triggers
[ https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495359#comment-13495359 ] Edward Capriolo commented on CASSANDRA-1311: What are the semantics for triggers when the commit log is being skipped? Triggers Key: CASSANDRA-1311 URL: https://issues.apache.org/jira/browse/CASSANDRA-1311 Project: Cassandra Issue Type: New Feature Reporter: Maxim Grinev Assignee: Vijay Fix For: 1.3 Attachments: HOWTO-PatchAndRunTriggerExample.txt, HOWTO-PatchAndRunTriggerExample-update1.txt, ImplementationDetails.pdf, ImplementationDetails-update1.pdf, trunk-967053.txt, trunk-984391-update1.txt, trunk-984391-update2.txt Asynchronous triggers is a basic mechanism to implement various use cases of asynchronous execution of application code at database side. For example to support indexes and materialized views, online analytics, push-based data propagation. Please find the motivation, triggers description and list of applications: http://maxgrinev.com/2010/07/23/extending-cassandra-with-asynchronous-triggers/ An example of using triggers for indexing: http://maxgrinev.com/2010/07/23/managing-indexes-in-cassandra-using-async-triggers/ Implementation details are attached. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-1311) Triggers
[ https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495364#comment-13495364 ] Jonathan Ellis commented on CASSANDRA-1311: --- The same as any other atomic batch when you've disabled commitlog for any of the CFs involved. (Atomic batch guarantees that the writes are sent to the replicas, but if you've disabled commitlog then it cannot guarantee that it is durable.) Triggers Key: CASSANDRA-1311 URL: https://issues.apache.org/jira/browse/CASSANDRA-1311 Project: Cassandra Issue Type: New Feature Reporter: Maxim Grinev Assignee: Vijay Fix For: 1.3 Attachments: HOWTO-PatchAndRunTriggerExample.txt, HOWTO-PatchAndRunTriggerExample-update1.txt, ImplementationDetails.pdf, ImplementationDetails-update1.pdf, trunk-967053.txt, trunk-984391-update1.txt, trunk-984391-update2.txt Asynchronous triggers is a basic mechanism to implement various use cases of asynchronous execution of application code at database side. For example to support indexes and materialized views, online analytics, push-based data propagation. Please find the motivation, triggers description and list of applications: http://maxgrinev.com/2010/07/23/extending-cassandra-with-asynchronous-triggers/ An example of using triggers for indexing: http://maxgrinev.com/2010/07/23/managing-indexes-in-cassandra-using-async-triggers/ Implementation details are attached. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4948) inserting data into CF with cassandra compound key from PIG
[ https://issues.apache.org/jira/browse/CASSANDRA-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-4948: Fix Version/s: (was: 1.1.7)

inserting data into CF with cassandra compound key from PIG
Key: CASSANDRA-4948
URL: https://issues.apache.org/jira/browse/CASSANDRA-4948
Project: Cassandra
Issue Type: Improvement
Components: Core, Hadoop
Affects Versions: 1.1.5
Reporter: Shamim Ahmed

Support inserting data into a CF with a Cassandra compound key. For example, we have a CF like this:

CREATE TABLE clicks (
    user_id varchar,
    time timestamp,
    url varchar,
    PRIMARY KEY (user_id, time)
) WITH COMPACT STORAGE;

We want to insert data into the above CF from Pig. Currently Cassandra returns the following exception:

java.io.IOException: InvalidRequestException(why:Expected 8 or 0 byte long for date (4))

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4948) inserting data into CF with cassandra compound key from PIG
[ https://issues.apache.org/jira/browse/CASSANDRA-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495373#comment-13495373 ] Brandon Williams commented on CASSANDRA-4948: - If you want to write composites you'll have to serialize them yourself in a UDF (see AbstractCompositeType); otherwise, waiting for a CQL IF/OF is probably your best bet. inserting data into CF with cassandra compound key from PIG Key: CASSANDRA-4948 URL: https://issues.apache.org/jira/browse/CASSANDRA-4948 Project: Cassandra Issue Type: Improvement Components: Core, Hadoop Affects Versions: 1.1.5 Reporter: Shamim Ahmed Support inserting data into a CF with a Cassandra compound key. For example, we have a CF like this: CREATE TABLE clicks ( user_id varchar, time timestamp, url varchar, PRIMARY KEY (user_id, time) ) WITH COMPACT STORAGE; We want to insert data into the above CF from Pig. Currently Cassandra returns the following exception: java.io.IOException: InvalidRequestException(why:Expected 8 or 0 byte long for date (4)) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
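Brandon's suggestion is to serialize the composite yourself inside a Pig UDF (see AbstractCompositeType). As a rough sketch of that serialization — each component of a Cassandra composite is, as I understand the layout, written as a 2-byte big-endian length, the raw value bytes, and a single 0x00 end-of-component byte — here is a standalone encoder. The class and helper names are hypothetical, not Cassandra APIs; in a real UDF you would use the marshal classes directly.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompositeEncoder {
    // Concatenate components using the composite layout:
    // per component: 2-byte big-endian length, raw bytes, 0x00 end-of-component byte.
    static byte[] encode(byte[]... components) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] c : components) {
            out.write((c.length >> 8) & 0xFF); // high byte of length
            out.write(c.length & 0xFF);        // low byte of length
            out.write(c, 0, c.length);         // component value
            out.write(0);                      // end-of-component marker
        }
        return out.toByteArray();
    }

    // A varchar component is just its UTF-8 bytes.
    static byte[] utf8(String s) { return s.getBytes(StandardCharsets.UTF_8); }

    // A timestamp component is an 8-byte big-endian long, which is exactly
    // what the "Expected 8 or 0 byte long for date" error above is asking for.
    static byte[] longBytes(long v) { return ByteBuffer.allocate(8).putLong(v).array(); }
}
```

For a compound key like (user_id, time), the encoded components would simply be concatenated in declaration order, e.g. encode(utf8("user1"), longBytes(ts)).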
[jira] [Resolved] (CASSANDRA-4923) cqlsh COPY FROM command requires primary key in first column of CSV
[ https://issues.apache.org/jira/browse/CASSANDRA-4923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko resolved CASSANDRA-4923. -- Resolution: Won't Fix The issue is caused by a CQL2 limitation (the primary key must come first in the INSERT query) and doesn't affect cqlsh in CQL3 mode. Making this work with CQL2 will require some nasty special-casing. It's not worth the extra complexity. Users affected by this will have to rearrange the csv file before importing it. cqlsh COPY FROM command requires primary key in first column of CSV --- Key: CASSANDRA-4923 URL: https://issues.apache.org/jira/browse/CASSANDRA-4923 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.1.5, 1.2.0 beta 1 Reporter: J.B. Langston Assignee: Aleksey Yeschenko Labels: cqlsh The cqlsh COPY FROM command requires the primary key to be in the first column of the CSV, even if the field list shows that the primary key is in a different position. Test data available from ftp://ftp.census.gov/Econ2001_And_Earlier/CBP_CSV/cbp01us.txt CREATE TABLE cbp01us ( naics text PRIMARY KEY ) WITH comment='' AND comparator=text AND read_repair_chance=0.10 AND gc_grace_seconds=864000 AND default_validation=text AND min_compaction_threshold=4 AND max_compaction_threshold=32 AND replicate_on_write='true' AND compaction_strategy_class='SizeTieredCompactionStrategy' AND compression_parameters:sstable_compression='SnappyCompressor'; copy cbp01us (uscode,naics,empflag,emp,qp1,ap,est,f1_4,e1_4,q1_4,a1_4,n1_4,f5_9,e5_9,q5_9,a5_9,n5_9,f10_19,e10_19,q10_19,a10_19,n10_19,f20_49,e20_49,q20_49,a20_49,n20_49,f50_99,e50_99,q50_99,a50_99,n50_99,f100_249,e100_249,q100_249,a100_249,n100_249,f250_499,e250_499,q250_499,a250_499,n250_499,f500_999,e500_999,q500_999,a500_999,n500_999,f1000,e1000,q1000,a1000,n1000) from 'cbp01us.txt' with header=true; Bad Request: Expected key 'NAICS' to be present in WHERE clause for 'cbp01us' Aborting import at record #0 (line 1). 
Previously-inserted values still present. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
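The resolution above says affected users must rearrange the CSV so the primary key column comes first before running COPY FROM. That rearrangement can be sketched as follows; this assumes rows are already split into fields (a real CSV file needs a quote-aware parser), and the class and method names are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

public class CsvReorder {
    // Move the primary key column (at keyIndex) to the front of each row,
    // preserving the relative order of the remaining columns.
    static List<String[]> moveKeyFirst(List<String[]> rows, int keyIndex) {
        List<String[]> out = new ArrayList<>();
        for (String[] row : rows) {
            String[] reordered = new String[row.length];
            reordered[0] = row[keyIndex];
            int j = 1;
            for (int i = 0; i < row.length; i++)
                if (i != keyIndex)
                    reordered[j++] = row[i];
            out.add(reordered);
        }
        return out;
    }
}
```

The same transformation must be applied to the header row so the field list handed to COPY FROM matches the new column order.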
[jira] [Commented] (CASSANDRA-4813) Problem using BulkOutputFormat while streaming several SSTables simultaneously from a given node.
[ https://issues.apache.org/jira/browse/CASSANDRA-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495461#comment-13495461 ] Yuki Morishita commented on CASSANDRA-4813: --- [~mkjellman] Hmm, I tried standalone and pseudo-cluster hadoop on my machine and haven't seen that error. I will try in fully distributed mode. By the way, what kind of error did you see on the cassandra side? Can you post a stacktrace? Problem using BulkOutputFormat while streaming several SSTables simultaneously from a given node. - Key: CASSANDRA-4813 URL: https://issues.apache.org/jira/browse/CASSANDRA-4813 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.0 Environment: I am using SLES 10 SP3, Java 6, 4 Cassandra + Hadoop nodes, 3 Hadoop only nodes (datanodes/tasktrackers), 1 namenode/jobtracker. The machines used are Six-Core AMD Opteron(tm) Processor 8431, 24 cores and 33 GB of RAM. I get the issue on both cassandra 1.1.3, 1.1.5 and I am using Hadoop 0.20.2. Reporter: Ralph Romanos Assignee: Yuki Morishita Priority: Minor Labels: Bulkoutputformat, Hadoop, SSTables Fix For: 1.2.0 rc1 Attachments: 4813.txt The issue occurs when streaming simultaneously SSTables from the same node to a cassandra cluster using SSTableloader. It seems to me that Cassandra cannot handle receiving simultaneously SSTables from the same node. However, when it receives simultaneously SSTables from two different nodes, everything works fine. As a consequence, when using BulkOutputFormat to generate SSTables and stream them to a cassandra cluster, I cannot use more than one reducer per node otherwise I get a java.io.EOFException in the tasktracker's logs and a java.io.IOException: Broken pipe in the Cassandra logs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4906) Avoid flushing other columnfamilies on truncate
[ https://issues.apache.org/jira/browse/CASSANDRA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495513#comment-13495513 ] Yuki Morishita commented on CASSANDRA-4906: --- +1 Avoid flushing other columnfamilies on truncate --- Key: CASSANDRA-4906 URL: https://issues.apache.org/jira/browse/CASSANDRA-4906 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 1.2.0 Attachments: 4906.txt, 4906-v2.txt Currently truncate flushes *all* columnfamilies so it can get rid of the commitlog segments containing truncated data. Otherwise, it could be replayed on restart since the replay position is contained in the sstables we're trying to delete. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4813) Problem using BulkOutputFormat while streaming several SSTables simultaneously from a given node.
[ https://issues.apache.org/jira/browse/CASSANDRA-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495517#comment-13495517 ] Michael Kjellman commented on CASSANDRA-4813: - [~yukim] I can produce it in local mode (standalone) and distributed mode with the current revision of the patch. I haven't run it in pseudo-cluster mode. Problem using BulkOutputFormat while streaming several SSTables simultaneously from a given node. - Key: CASSANDRA-4813 URL: https://issues.apache.org/jira/browse/CASSANDRA-4813 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.0 Environment: I am using SLES 10 SP3, Java 6, 4 Cassandra + Hadoop nodes, 3 Hadoop only nodes (datanodes/tasktrackers), 1 namenode/jobtracker. The machines used are Six-Core AMD Opteron(tm) Processor 8431, 24 cores and 33 GB of RAM. I get the issue on both cassandra 1.1.3, 1.1.5 and I am using Hadoop 0.20.2. Reporter: Ralph Romanos Assignee: Yuki Morishita Priority: Minor Labels: Bulkoutputformat, Hadoop, SSTables Fix For: 1.2.0 rc1 Attachments: 4813.txt The issue occurs when streaming simultaneously SSTables from the same node to a cassandra cluster using SSTableloader. It seems to me that Cassandra cannot handle receiving simultaneously SSTables from the same node. However, when it receives simultaneously SSTables from two different nodes, everything works fine. As a consequence, when using BulkOutputFormat to generate SSTables and stream them to a cassandra cluster, I cannot use more than one reducer per node otherwise I get a java.io.EOFException in the tasktracker's logs and a java.io.IOException: Broken pipe in the Cassandra logs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4906) Avoid flushing other columnfamilies on truncate
[ https://issues.apache.org/jira/browse/CASSANDRA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495524#comment-13495524 ] Chris Herron commented on CASSANDRA-4906: - Would it be possible to backport this to Cassandra 1.1? Avoid flushing other columnfamilies on truncate --- Key: CASSANDRA-4906 URL: https://issues.apache.org/jira/browse/CASSANDRA-4906 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 1.2.0 Attachments: 4906.txt, 4906-v2.txt Currently truncate flushes *all* columnfamilies so it can get rid of the commitlog segments containing truncated data. Otherwise, it could be replayed on restart since the replay position is contained in the sstables we're trying to delete. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (CASSANDRA-4813) Problem using BulkOutputFormat while streaming several SSTables simultaneously from a given node.
[ https://issues.apache.org/jira/browse/CASSANDRA-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495517#comment-13495517 ] Michael Kjellman edited comment on CASSANDRA-4813 at 11/12/12 7:35 PM: --- [~yukim] I can produce it in local mode (standalone) and distributed mode with the current revision of the patch. I haven't run it in pseudo-cluster mode. I should also mention that I have reproduced it even when I limit to 1 reducer. was (Author: mkjellman): [~yukim] I can produce it in local mode (standalone) and distributed mode with the current revision of the patch. I haven't run it in pseudo-cluster mode. Problem using BulkOutputFormat while streaming several SSTables simultaneously from a given node. - Key: CASSANDRA-4813 URL: https://issues.apache.org/jira/browse/CASSANDRA-4813 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.0 Environment: I am using SLES 10 SP3, Java 6, 4 Cassandra + Hadoop nodes, 3 Hadoop only nodes (datanodes/tasktrackers), 1 namenode/jobtracker. The machines used are Six-Core AMD Opteron(tm) Processor 8431, 24 cores and 33 GB of RAM. I get the issue on both cassandra 1.1.3, 1.1.5 and I am using Hadoop 0.20.2. Reporter: Ralph Romanos Assignee: Yuki Morishita Priority: Minor Labels: Bulkoutputformat, Hadoop, SSTables Fix For: 1.2.0 rc1 Attachments: 4813.txt The issue occurs when streaming simultaneously SSTables from the same node to a cassandra cluster using SSTableloader. It seems to me that Cassandra cannot handle receiving simultaneously SSTables from the same node. However, when it receives simultaneously SSTables from two different nodes, everything works fine. As a consequence, when using BulkOutputFormat to generate SSTables and stream them to a cassandra cluster, I cannot use more than one reducer per node otherwise I get a java.io.EOFException in the tasktracker's logs and a java.io.IOException: Broken pipe in the Cassandra logs. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4912) BulkOutputFormat should support Hadoop MultipleOutput
[ https://issues.apache.org/jira/browse/CASSANDRA-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Kjellman updated CASSANDRA-4912: Attachment: 4912.txt BulkOutputFormat should support Hadoop MultipleOutput - Key: CASSANDRA-4912 URL: https://issues.apache.org/jira/browse/CASSANDRA-4912 Project: Cassandra Issue Type: New Feature Components: Hadoop Affects Versions: 1.2.0 beta 1 Reporter: Michael Kjellman Attachments: 4912.txt, Example.java Much like CASSANDRA-4208, BOF should support outputting to Multiple Column Families. The current approach taken in the patch for COF results in only one stream being sent. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4912) BulkOutputFormat should support Hadoop MultipleOutput
[ https://issues.apache.org/jira/browse/CASSANDRA-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495591#comment-13495591 ] Brandon Williams commented on CASSANDRA-4912: - There is no particular reason that I recall; it was just a convenient place at the time. BulkOutputFormat should support Hadoop MultipleOutput - Key: CASSANDRA-4912 URL: https://issues.apache.org/jira/browse/CASSANDRA-4912 Project: Cassandra Issue Type: New Feature Components: Hadoop Affects Versions: 1.2.0 beta 1, 1.2.0 beta 2 Reporter: Michael Kjellman Attachments: 4912.txt, Example.java Much like CASSANDRA-4208, BOF should support outputting to multiple column families. The current approach taken in the patch for COF results in only one stream being sent.
[jira] [Commented] (CASSANDRA-4913) DESC KEYSPACE ks from cqlsh won't show cql3 cfs
[ https://issues.apache.org/jira/browse/CASSANDRA-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495617#comment-13495617 ] Brandon Williams commented on CASSANDRA-4913: - lgtm, +1 DESC KEYSPACE ks from cqlsh won't show cql3 cfs - Key: CASSANDRA-4913 URL: https://issues.apache.org/jira/browse/CASSANDRA-4913 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.0 beta 1 Reporter: Nick Bailey Assignee: Aleksey Yeschenko Fix For: 1.2.0 Attachments: 4913.txt I'm assuming because we made 'describe_keyspaces' from thrift not return cql3 cfs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[1/4] git commit: Merge branch 'cassandra-1.2' into trunk
Updated Branches: refs/heads/cassandra-1.2 93f8fec9d - 38bfc6dca refs/heads/cassandra-1.2.0 93f8fec9d - 38bfc6dca refs/heads/trunk d104ca636 - 0353330ab Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0353330a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0353330a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0353330a Branch: refs/heads/trunk Commit: 0353330ab0d3ce1cf0f982aa2da21a6775031c41 Parents: d104ca6 38bfc6d Author: Jonathan Ellis jbel...@apache.org Authored: Mon Nov 12 15:06:21 2012 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Mon Nov 12 15:06:21 2012 -0600 -- CHANGES.txt|1 + .../org/apache/cassandra/config/CFMetaData.java|3 +- .../apache/cassandra/cql3/UntypedResultSet.java|9 ++- .../org/apache/cassandra/db/ColumnFamilyStore.java | 97 --- src/java/org/apache/cassandra/db/SystemTable.java | 64 ++ .../cassandra/db/commitlog/CommitLogReplayer.java | 21 ++-- .../cassandra/db/compaction/CompactionManager.java |5 +- 7 files changed, 118 insertions(+), 82 deletions(-) --
[3/4] git commit: avoid flushing everyone on truncate; save truncation position in system table instead patch by jbellis; reviewed by yukim for CASSANDRA-4906
avoid flushing everyone on truncate; save truncation position in system table instead patch by jbellis; reviewed by yukim for CASSANDRA-4906 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/38bfc6dc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/38bfc6dc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/38bfc6dc Branch: refs/heads/cassandra-1.2.0 Commit: 38bfc6dca06bd0192167ae5e9bd51d593542f03e Parents: 93f8fec Author: Jonathan Ellis jbel...@apache.org Authored: Sat Oct 27 11:18:31 2012 -0700 Committer: Jonathan Ellis jbel...@apache.org Committed: Mon Nov 12 15:05:36 2012 -0600 -- CHANGES.txt | 1 + .../org/apache/cassandra/config/CFMetaData.java | 3 +- .../apache/cassandra/cql3/UntypedResultSet.java | 9 ++- .../org/apache/cassandra/db/ColumnFamilyStore.java | 97 --- src/java/org/apache/cassandra/db/SystemTable.java | 64 ++ .../cassandra/db/commitlog/CommitLogReplayer.java | 21 ++-- .../cassandra/db/compaction/CompactionManager.java | 5 +- 7 files changed, 118 insertions(+), 82 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/38bfc6dc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index ff79b9a..9ac6227 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 1.2-rc1 + * save truncation position in system table (CASSANDRA-4906) * Move CompressionMetadata off-heap (CASSANDRA-4937) * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924) * Fix rare race condition in getExpireTimeForEndpoint (CASSANDRA-4402) http://git-wip-us.apache.org/repos/asf/cassandra/blob/38bfc6dc/src/java/org/apache/cassandra/config/CFMetaData.java -- diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java b/src/java/org/apache/cassandra/config/CFMetaData.java index 921242a..5f0e93a 100644 --- a/src/java/org/apache/cassandra/config/CFMetaData.java +++ b/src/java/org/apache/cassandra/config/CFMetaData.java @@ -181,7 +181,8 @@ public final class CFMetaData + thrift_version text, + cql_version text, + data_center text, - + rack text + + rack text, + + truncated_at map<uuid, blob> + ) WITH COMMENT='information about the local node'); public static final CFMetaData TraceSessionsCf = compile(14, CREATE TABLE + Tracing.SESSIONS_CF + ( http://git-wip-us.apache.org/repos/asf/cassandra/blob/38bfc6dc/src/java/org/apache/cassandra/cql3/UntypedResultSet.java -- diff --git a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java index ca3acf5..b6fcb55 100644 --- a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java +++ b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java @@ -133,7 +133,14 @@ public class UntypedResultSet implements Iterable<UntypedResultSet.Row> public <T> Set<T> getSet(String column, AbstractType<T> type) { -return SetType.getInstance(type).compose(data.get(column)); +ByteBuffer raw = data.get(column); +return raw == null ? null : SetType.getInstance(type).compose(raw); +} + +public <K, V> Map<K, V> getMap(String column, AbstractType<K> keyType, AbstractType<V> valueType) +{ +ByteBuffer raw = data.get(column); +return raw == null ? null : MapType.getInstance(keyType, valueType).compose(raw); } @Override http://git-wip-us.apache.org/repos/asf/cassandra/blob/38bfc6dc/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index a91af8c..439ef5f 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -1720,38 +1720,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean } /** - * Waits for flushes started BEFORE THIS METHOD IS CALLED to finish. - * Does NOT guarantee that no flush is active when it returns. - */ -private void waitForActiveFlushes() -{
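The null-guard added to `getSet`/`getMap` in the diff above follows one pattern: look up the raw bytes first, and only decode when the column is actually present. A minimal self-contained sketch of that pattern, with a hypothetical `NullSafeRow` class and plain JDK decoding standing in for Cassandra's `AbstractType`/`compose` machinery:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch (not Cassandra code) of the null-safe accessor pattern from the
// UntypedResultSet patch: an absent column yields null instead of an NPE
// thrown from inside the decoder.
public class NullSafeRow
{
    private final Map<String, ByteBuffer> data = new HashMap<>();

    public void put(String column, ByteBuffer raw)
    {
        data.put(column, raw);
    }

    public String getText(String column)
    {
        ByteBuffer raw = data.get(column);
        // guard first, decode second -- mirrors the fixed getSet/getMap
        return raw == null ? null : StandardCharsets.UTF_8.decode(raw.duplicate()).toString();
    }

    public static void main(String[] args)
    {
        NullSafeRow row = new NullSafeRow();
        row.put("rack", ByteBuffer.wrap("r1".getBytes(StandardCharsets.UTF_8)));
        System.out.println(row.getText("rack"));         // prints r1
        System.out.println(row.getText("truncated_at")); // prints null
    }
}
```

The design point is that the lookup and the decode are separated, so "column missing" is distinguishable from "column present but empty".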
[jira] [Resolved] (CASSANDRA-4906) Avoid flushing other columnfamilies on truncate
[ https://issues.apache.org/jira/browse/CASSANDRA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-4906. --- Resolution: Fixed Yes, backporting is probably straightforward. But we shouldn't risk 1.1 stability with a new approach at this point. Avoid flushing other columnfamilies on truncate --- Key: CASSANDRA-4906 URL: https://issues.apache.org/jira/browse/CASSANDRA-4906 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 1.2.0 Attachments: 4906.txt, 4906-v2.txt Currently truncate flushes *all* columnfamilies so it can get rid of the commitlog segments containing truncated data. Otherwise, it could be replayed on restart since the replay position is contained in the sstables we're trying to delete. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
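The idea described in the ticket can be sketched independently of Cassandra's types: record the commitlog position at which each columnfamily was truncated, then skip replaying any logged mutation at or before that position on restart, so no flush of other columnfamilies is needed. A simplified sketch with invented names (`TruncationFilter`, a plain `long` standing in for `ReplayPosition`), not the committed patch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of replay filtering for truncate: instead of flushing everything so
// the truncated data's commitlog segments can be discarded, remember where
// each cf was truncated and ignore earlier mutations during replay.
public class TruncationFilter
{
    private final Map<UUID, Long> truncatedAt = new HashMap<>();

    // persisted to the system table in the real fix (the truncated_at map)
    public void recordTruncation(UUID cfId, long replayPosition)
    {
        truncatedAt.put(cfId, replayPosition);
    }

    // true if a logged mutation for cfId at `position` should be replayed
    public boolean shouldReplay(UUID cfId, long position)
    {
        Long truncated = truncatedAt.get(cfId);
        return truncated == null || position > truncated;
    }
}
```

A never-truncated columnfamily replays everything; a truncated one replays only mutations logged after the truncation point.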
[jira] [Updated] (CASSANDRA-4906) Avoid flushing other columnfamilies on truncate
[ https://issues.apache.org/jira/browse/CASSANDRA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-4906: -- Fix Version/s: (was: 1.2.0) 1.2.0 rc1 (committed to 1.2.0.) Avoid flushing other columnfamilies on truncate --- Key: CASSANDRA-4906 URL: https://issues.apache.org/jira/browse/CASSANDRA-4906 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 1.2.0 rc1 Attachments: 4906.txt, 4906-v2.txt Currently truncate flushes *all* columnfamilies so it can get rid of the commitlog segments containing truncated data. Otherwise, it could be replayed on restart since the replay position is contained in the sstables we're trying to delete. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3719) Upgrade thrift to latest release version (0.9.x)
[ https://issues.apache.org/jira/browse/CASSANDRA-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495658#comment-13495658 ] T Jake Luciani commented on CASSANDRA-3719: --- Jake can you update your patches and I'll add it to trunk, thx! Upgrade thrift to latest release version (0.9.x) Key: CASSANDRA-3719 URL: https://issues.apache.org/jira/browse/CASSANDRA-3719 Project: Cassandra Issue Type: Task Reporter: Jake Farrell Assignee: Jake Farrell Priority: Minor Fix For: 1.3 Attachments: Cassandra-3719-v1-001-updated-source.patch, Cassandra-3719-v1-002-thrift-jar-and-license.patch In Cassandra-3213 thrift was upgraded to thrift 0.7 and not the latest 0.8 release. This is due to THRIFT-1167 where the TNonblockingTransport in TNonblockingServer.FrameBuffer was moved into AbstractNonblockingServer.FrameBuffer and was changed from public to private. This causes the transport to not be available for SocketSessionManagementService as noted above. There is no short term workaround for this. I have everything ready for patching but with the above mentioned issue it will be impossible to use Thrift 0.8.0. The fix for this is committed (THRIFT-1464) and will be available in the next Thrift release 0.9. Adding this to keep track of and will update with patches for the current version of Thrift when pushing out the next release -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4021) CFS.scrubDataDirectories tries to delete nonexistent orphans
[ https://issues.apache.org/jira/browse/CASSANDRA-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495662#comment-13495662 ] Jonathan Ellis commented on CASSANDRA-4021: --- That's odd, because here's the code causing that assertion:
{code}
File dataFile = new File(desc.filenameFor(Component.DATA));
if (components.contains(Component.DATA) && dataFile.length() > 0)
    // everything appears to be in order... moving on.
    continue;

// missing the DATA file! all components are orphaned
logger.warn("Removing orphans for {}: {}", desc, components);
for (Component component : components)
{
    FileUtils.deleteWithConfirm(desc.filenameFor(component));
}
{code}
I must be missing something, because these are the possibilities I see: # .db exists and is non-empty: we don't try to delete it. # .db exists and is empty: we delete it, and do not get a "file does not exist" failure. # .db does not exist (is not part of components), so we do not try to delete it. CFS.scrubDataDirectories tries to delete nonexistent orphans Key: CASSANDRA-4021 URL: https://issues.apache.org/jira/browse/CASSANDRA-4021 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7 beta 2 Reporter: Brandon Williams Assignee: Brandon Williams Priority: Minor Labels: datastax_qa Attachments: 4021.txt, node1.log The check only looks for a missing data file, then deletes all other components; however, it's possible for the data file and another component to be missing, causing an error:
{noformat}
WARN 17:19:28,765 Removing orphans for /var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-24492: [Index.db, Filter.db, Digest.sha1, Statistics.db, Data.db]
ERROR 17:19:28,766 Exception encountered during startup
java.lang.AssertionError: attempted to delete non-existing file system-HintsColumnFamily-hd-24492-Index.db
at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:49)
at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:44)
at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:357)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:167)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:352)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:105)
java.lang.AssertionError: attempted to delete non-existing file system-HintsColumnFamily-hd-24492-Index.db
at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:49)
at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:44)
at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:357)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:167)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:352)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:105)
Exception encountered during startup: attempted to delete non-existing file system-HintsColumnFamily-hd-24492-Index.db
{noformat}
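One defensive variant consistent with the report is to delete only the component files that actually exist, so a second missing component (e.g. Index.db alongside the missing Data.db) cannot trip the non-existing-file assertion. A self-contained sketch with an invented `OrphanCleanup` class and plain file names in place of Cassandra's `Descriptor`/`Component` types, not the actual fix:

```java
import java.io.File;
import java.util.List;

// Sketch (not Cassandra code) of orphan cleanup that tolerates multiple
// missing components: if the Data.db component is absent or empty, remove
// only the sibling components that are actually present on disk.
public class OrphanCleanup
{
    public static int removeOrphans(File dir, String descriptor, List<String> components)
    {
        File dataFile = new File(dir, descriptor + "-Data.db");
        if (dataFile.exists() && dataFile.length() > 0)
            return 0; // data file present and non-empty: nothing is orphaned

        int deleted = 0;
        for (String component : components)
        {
            File f = new File(dir, descriptor + "-" + component);
            if (f.exists() && f.delete()) // skip components already gone
                deleted++;
        }
        return deleted;
    }
}
```

The existence check before each delete is what distinguishes this from the quoted code, which asserts inside `deleteWithConfirm` when a listed component has vanished.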
[jira] [Created] (CASSANDRA-4950) DEB package missing dependency
John Manero created CASSANDRA-4950: -- Summary: DEB package missing dependency Key: CASSANDRA-4950 URL: https://issues.apache.org/jira/browse/CASSANDRA-4950 Project: Cassandra Issue Type: Bug Components: Packaging Environment: * Ubuntu 12.10 * 32 Bit, virtualized * Linux ## 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:32:08 UTC 2012 i686 i686 i686 GNU/Linux Reporter: John Manero The DEB installer at [http://www.apache.org/dist/cassandra/debian 10x main|http://www.apache.org/dist/cassandra/debian] should include libcap2 as a dependency. Without libcap installed, the daemon fails to start, generating the following error: {noformat} failed loading capabilities library -- libcap.so.1: cannot open shared object file: No such file or directory. Cannot set group id for user 'cassandra' set_user_group failed for user 'cassandra' Service exit with a return value of 4 {noformat}
[1/8] git commit: Merge branch 'cassandra-1.2' into trunk
Updated Branches: refs/heads/cassandra-1.1 a05f6766e - a2ca30e79 refs/heads/cassandra-1.2 38bfc6dca - 3e7708289 refs/heads/cassandra-1.2.0 38bfc6dca - 3e7708289 refs/heads/trunk 0353330ab - bce1f2b8e Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bce1f2b8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bce1f2b8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bce1f2b8 Branch: refs/heads/trunk Commit: bce1f2b8ed9c574d0323b9cdb8fa827244b11df6 Parents: 0353330 3e77082 Author: Jonathan Ellis jbel...@apache.org Authored: Mon Nov 12 15:59:43 2012 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Mon Nov 12 15:59:43 2012 -0600 -- CHANGES.txt|2 + .../org/apache/cassandra/service/StorageProxy.java | 17 ++- 2 files changed, 18 insertions(+), 1 deletions(-) --
[2/8] git commit: merge from 1.1
merge from 1.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3e770828 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3e770828 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3e770828 Branch: refs/heads/cassandra-1.2.0 Commit: 3e7708289be12d6bacf0ea4c1adc822a63f75f15 Parents: 38bfc6d a2ca30e Author: Jonathan Ellis jbel...@apache.org Authored: Mon Nov 12 15:59:28 2012 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Mon Nov 12 15:59:28 2012 -0600 -- CHANGES.txt|2 + .../org/apache/cassandra/service/StorageProxy.java | 17 ++- 2 files changed, 18 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3e770828/CHANGES.txt -- diff --cc CHANGES.txt index 9ac6227,b80c60f..0ac5b66 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,74 -1,8 +1,76 @@@ -1.1.7 +1.2-rc1 + * save truncation position in system table (CASSANDRA-4906) + * Move CompressionMetadata off-heap (CASSANDRA-4937) + * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924) + * Fix rare race condition in getExpireTimeForEndpoint (CASSANDRA-4402) + * acquire references to overlapping sstables during compaction so bloom filter + doesn't get free'd prematurely (CASSANDRA-4934) + * Don't share slice query filter in CQL3 SelectStatement (CASSANDRA-4928) + * Separate tracing from Log4J (CASSANDRA-4861) + * Exclude gcable tombstones from merkle-tree computation (CASSANDRA-4905) + * Better printing of AbstractBounds for tracing (CASSANDRA-4931) +Merged from 1.1: + * reset getRangeSlice filter after finishing a row for get_paged_slice +(CASSANDRA-4919) * expunge row cache post-truncate (CASSANDRA-4940) - * remove IAuthority2 (CASSANDRA-4875) + + +1.2-beta2 + * fp rate of 1.0 disables BF entirely; LCS defaults to 1.0 (CASSANDRA-4876) + * off-heap bloom filters for row keys (CASSANDRA_4865) + * add extension point for sstable components (CASSANDRA-4049) + * improve tracing output 
(CASSANDRA-4852, 4862) + * make TRACE verb droppable (CASSANDRA-4672) + * fix BulkLoader recognition of CQL3 columnfamilies (CASSANDRA-4755) + * Sort commitlog segments for replay by id instead of mtime (CASSANDRA-4793) + * Make hint delivery asynchronous (CASSANDRA-4761) + * Pluggable Thrift transport factories for CLI and cqlsh (CASSANDRA-4609, 4610) + * cassandra-cli: allow Double value type to be inserted to a column (CASSANDRA-4661) + * Add ability to use custom TServerFactory implementations (CASSANDRA-4608) + * optimize batchlog flushing to skip successful batches (CASSANDRA-4667) + * include metadata for system keyspace itself in schema tables (CASSANDRA-4416) + * add check to PropertyFileSnitch to verify presence of location for + local node (CASSANDRA-4728) + * add PBSPredictor consistency modeler (CASSANDRA-4261) + * remove vestiges of Thrift unframed mode (CASSANDRA-4729) + * optimize single-row PK lookups (CASSANDRA-4710) + * adjust blockFor calculation to account for pending ranges due to node + movement (CASSANDRA-833) + * Change CQL version to 3.0.0 and stop accepting 3.0.0-beta1 (CASSANDRA-4649) + * (CQL3) Make prepared statement global instead of per connection + (CASSANDRA-4449) + * Fix scrubbing of CQL3 created tables (CASSANDRA-4685) + * (CQL3) Fix validation when using counter and regular columns in the same + table (CASSANDRA-4706) + * Fix bug starting Cassandra with simple authentication (CASSANDRA-4648) + * Add support for batchlog in CQL3 (CASSANDRA-4545, 4738) + * Add support for multiple column family outputs in CFOF (CASSANDRA-4208) + * Support repairing only the local DC nodes (CASSANDRA-4747) + * Use rpc_address for binary protocol and change default port (CASSANRA-4751) + * Fix use of collections in prepared statements (CASSANDRA-4739) + * Store more information into peers table (CASSANDRA-4351, 4814) + * Configurable bucket size for size tiered compaction (CASSANDRA-4704) + * Run leveled compaction in parallel (CASSANDRA-4310) + * 
Fix potential NPE during CFS reload (CASSANDRA-4786) + * Composite indexes may miss results (CASSANDRA-4796) + * Move consistency level to the protocol level (CASSANDRA-4734, 4824) + * Fix Subcolumn slice ends not respected (CASSANDRA-4826) + * Fix Assertion error in cql3 select (CASSANDRA-4783) + * Fix list prepend logic (CQL3) (CASSANDRA-4835) + * Add booleans as literals in CQL3 (CASSANDRA-4776) + * Allow renaming PK columns in CQL3 (CASSANDRA-4822) + * Fix binary protocol NEW_NODE event (CASSANDRA-4679) + * Fix potential infinite loop in tombstone compaction (CASSANDRA-4781) + * Remove system tables accounting from schema
[6/8] git commit: reset getRangeSlice filter after finishing a row for get_paged_slice patch by Piotr Kołaczkowski; reviewed by jbellis for CASSANDRA-4919
reset getRangeSlice filter after finishing a row for get_paged_slice patch by Piotr Kołaczkowski; reviewed by jbellis for CASSANDRA-4919 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2ca30e7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2ca30e7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2ca30e7 Branch: refs/heads/cassandra-1.2.0 Commit: a2ca30e79510a04ca1f6b9c5a342ee9467176a9f Parents: a05f676 Author: Jonathan Ellis jbel...@apache.org Authored: Mon Nov 12 15:57:15 2012 -0600 Committer: Jonathan Ellis jbel...@apache.org Committed: Mon Nov 12 15:57:15 2012 -0600 -- CHANGES.txt | 2 + .../org/apache/cassandra/service/StorageProxy.java | 17 ++- 2 files changed, 18 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2ca30e7/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 088daa7..b80c60f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 1.1.7 + * reset getRangeSlice filter after finishing a row for get_paged_slice + (CASSANDRA-4919) * expunge row cache post-truncate (CASSANDRA-4940) * remove IAuthority2 (CASSANDRA-4875) * add get[Row|Key]CacheEntries to CacheServiceMBean (CASSANDRA-4859) http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2ca30e7/src/java/org/apache/cassandra/service/StorageProxy.java -- diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java index 14466ee..453e2b2 100644 --- a/src/java/org/apache/cassandra/service/StorageProxy.java +++ b/src/java/org/apache/cassandra/service/StorageProxy.java @@ -845,6 +845,9 @@ public class StorageProxy implements StorageProxyMBean // now scan until we have enough results try { +final SlicePredicate emptyPredicate = getEmptySlicePredicate(); +SlicePredicate commandPredicate = command.predicate; + int columnsCount = 0; rows = new ArrayList<Row>(); List<AbstractBounds<RowPosition>> ranges = getRestrictedRanges(command.range); @@ -853,7 +856,7 @@ public class StorageProxy implements StorageProxyMBean RangeSliceCommand nodeCmd = new RangeSliceCommand(command.keyspace, command.column_family, command.super_column, - command.predicate, + commandPredicate, range, command.row_filter, command.maxResults, @@ -923,6 +926,11 @@ public class StorageProxy implements StorageProxyMBean int count = nodeCmd.maxIsColumns ? columnsCount : rows.size(); if (count >= nodeCmd.maxResults) break; + +// if we are paging and already got some rows, reset the column filter predicate, +// so we start iterating the next row from the first column +if (!rows.isEmpty() && command.isPaging) +commandPredicate = emptyPredicate; } } finally { @@ -932,6 +940,13 @@ public class StorageProxy implements StorageProxyMBean return trim(command, rows); } +private static SlicePredicate getEmptySlicePredicate() +{ +final SliceRange emptySliceRange = +new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER, ByteBufferUtil.EMPTY_BYTE_BUFFER, false, -1); +return new SlicePredicate().setSlice_range(emptySliceRange); +} + private static List<Row> trim(RangeSliceCommand command, List<Row> rows) { // When maxIsColumns, we let the caller trim the result.
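Stripped of the Thrift types, the control flow of the patch is: scan each range with the command's original predicate, and switch to the unrestricted ("empty") predicate as soon as any row has been produced while paging, so the next row starts from its first column. A minimal sketch with an invented `PagedSliceSketch` class and strings standing in for `SlicePredicate`:

```java
import java.util.ArrayList;
import java.util.List;

// Control-flow sketch of the CASSANDRA-4919 fix (simplified, no Thrift
// types): once the first row is consumed during a paged scan, the column
// filter is reset so subsequent rows are read from their first column.
public class PagedSliceSketch
{
    static final String EMPTY_PREDICATE = "[first..last]";

    // Returns the predicate used for each range scan, given how many rows
    // each range yields; isPaging mirrors command.isPaging in the patch.
    public static List<String> predicatesPerRange(String initial, int[] rowsPerRange, boolean isPaging)
    {
        List<String> used = new ArrayList<>();
        String predicate = initial;
        int rows = 0;
        for (int produced : rowsPerRange)
        {
            used.add(predicate);
            rows += produced;
            if (rows > 0 && isPaging)        // !rows.isEmpty() && command.isPaging
                predicate = EMPTY_PREDICATE; // restart next row at its first column
        }
        return used;
    }
}
```

Without the reset, every range would keep the original slice start and silently skip the leading columns of each subsequent row.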
[5/8] git commit: reset getRangeSlice filter after finishing a row for get_paged_slice patch by Piotr Kołaczkowski; reviewed by jbellis for CASSANDRA-4919
reset getRangeSlice filter after finishing a row for get_paged_slice
patch by Piotr Kołaczkowski; reviewed by jbellis for CASSANDRA-4919

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2ca30e7
Branch: refs/heads/trunk
Commit: a2ca30e79510a04ca1f6b9c5a342ee9467176a9f
Parents: a05f676
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 15:57:15 2012 -0600
[8/8] git commit: reset getRangeSlice filter after finishing a row for get_paged_slice patch by Piotr Kołaczkowski; reviewed by jbellis for CASSANDRA-4919
reset getRangeSlice filter after finishing a row for get_paged_slice
patch by Piotr Kołaczkowski; reviewed by jbellis for CASSANDRA-4919

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2ca30e7
Branch: refs/heads/cassandra-1.1
Commit: a2ca30e79510a04ca1f6b9c5a342ee9467176a9f
Parents: a05f676
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 15:57:15 2012 -0600
[3/8] git commit: merge from 1.1
merge from 1.1

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3e770828
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3e770828
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3e770828
Branch: refs/heads/cassandra-1.2
Commit: 3e7708289be12d6bacf0ea4c1adc822a63f75f15
Parents: 38bfc6d a2ca30e
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 15:59:28 2012 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Mon Nov 12 15:59:28 2012 -0600
--
 CHANGES.txt                                        |    2 +
 .../org/apache/cassandra/service/StorageProxy.java |   17 ++-
 2 files changed, 18 insertions(+), 1 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3e770828/CHANGES.txt
--
diff --cc CHANGES.txt
index 9ac6227,b80c60f..0ac5b66
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,74 -1,8 +1,76 @@@
-1.1.7
+1.2-rc1
+ * save truncation position in system table (CASSANDRA-4906)
+ * Move CompressionMetadata off-heap (CASSANDRA-4937)
+ * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924)
+ * Fix rare race condition in getExpireTimeForEndpoint (CASSANDRA-4402)
+ * acquire references to overlapping sstables during compaction so bloom filter
+   doesn't get free'd prematurely (CASSANDRA-4934)
+ * Don't share slice query filter in CQL3 SelectStatement (CASSANDRA-4928)
+ * Separate tracing from Log4J (CASSANDRA-4861)
+ * Exclude gcable tombstones from merkle-tree computation (CASSANDRA-4905)
+ * Better printing of AbstractBounds for tracing (CASSANDRA-4931)
+Merged from 1.1:
+ * reset getRangeSlice filter after finishing a row for get_paged_slice
+   (CASSANDRA-4919)
 * expunge row cache post-truncate (CASSANDRA-4940)
- * remove IAuthority2 (CASSANDRA-4875)
+
+
+1.2-beta2
+ * fp rate of 1.0 disables BF entirely; LCS defaults to 1.0 (CASSANDRA-4876)
+ * off-heap bloom filters for row keys (CASSANDRA_4865)
+ * add extension point for sstable components (CASSANDRA-4049)
+ * improve tracing output (CASSANDRA-4852, 4862)
+ * make TRACE verb droppable (CASSANDRA-4672)
+ * fix BulkLoader recognition of CQL3 columnfamilies (CASSANDRA-4755)
+ * Sort commitlog segments for replay by id instead of mtime (CASSANDRA-4793)
+ * Make hint delivery asynchronous (CASSANDRA-4761)
+ * Pluggable Thrift transport factories for CLI and cqlsh (CASSANDRA-4609, 4610)
+ * cassandra-cli: allow Double value type to be inserted to a column (CASSANDRA-4661)
+ * Add ability to use custom TServerFactory implementations (CASSANDRA-4608)
+ * optimize batchlog flushing to skip successful batches (CASSANDRA-4667)
+ * include metadata for system keyspace itself in schema tables (CASSANDRA-4416)
+ * add check to PropertyFileSnitch to verify presence of location for
+   local node (CASSANDRA-4728)
+ * add PBSPredictor consistency modeler (CASSANDRA-4261)
+ * remove vestiges of Thrift unframed mode (CASSANDRA-4729)
+ * optimize single-row PK lookups (CASSANDRA-4710)
+ * adjust blockFor calculation to account for pending ranges due to node
+   movement (CASSANDRA-833)
+ * Change CQL version to 3.0.0 and stop accepting 3.0.0-beta1 (CASSANDRA-4649)
+ * (CQL3) Make prepared statement global instead of per connection
+   (CASSANDRA-4449)
+ * Fix scrubbing of CQL3 created tables (CASSANDRA-4685)
+ * (CQL3) Fix validation when using counter and regular columns in the same
+   table (CASSANDRA-4706)
+ * Fix bug starting Cassandra with simple authentication (CASSANDRA-4648)
+ * Add support for batchlog in CQL3 (CASSANDRA-4545, 4738)
+ * Add support for multiple column family outputs in CFOF (CASSANDRA-4208)
+ * Support repairing only the local DC nodes (CASSANDRA-4747)
+ * Use rpc_address for binary protocol and change default port (CASSANRA-4751)
+ * Fix use of collections in prepared statements (CASSANDRA-4739)
+ * Store more information into peers table (CASSANDRA-4351, 4814)
+ * Configurable bucket size for size tiered compaction (CASSANDRA-4704)
+ * Run leveled compaction in parallel (CASSANDRA-4310)
+ * Fix potential NPE during CFS reload (CASSANDRA-4786)
+ * Composite indexes may miss results (CASSANDRA-4796)
+ * Move consistency level to the protocol level (CASSANDRA-4734, 4824)
+ * Fix Subcolumn slice ends not respected (CASSANDRA-4826)
+ * Fix Assertion error in cql3 select (CASSANDRA-4783)
+ * Fix list prepend logic (CQL3) (CASSANDRA-4835)
+ * Add booleans as literals in CQL3 (CASSANDRA-4776)
+ * Allow renaming PK columns in CQL3 (CASSANDRA-4822)
+ * Fix binary protocol NEW_NODE event (CASSANDRA-4679)
+ * Fix potential infinite loop in tombstone compaction (CASSANDRA-4781)
+ * Remove system tables accounting from schema (CASSANDRA-4850)
[4/8] git commit: merge from 1.1
merge from 1.1

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3e770828
Branch: refs/heads/trunk
Commit: 3e7708289be12d6bacf0ea4c1adc822a63f75f15
Parents: 38bfc6d a2ca30e
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 15:59:28 2012 -0600
[7/8] git commit: reset getRangeSlice filter after finishing a row for get_paged_slice patch by Piotr Kołaczkowski; reviewed by jbellis for CASSANDRA-4919
reset getRangeSlice filter after finishing a row for get_paged_slice
patch by Piotr Kołaczkowski; reviewed by jbellis for CASSANDRA-4919

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2ca30e7
Branch: refs/heads/cassandra-1.2
Commit: a2ca30e79510a04ca1f6b9c5a342ee9467176a9f
Parents: a05f676
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 15:57:15 2012 -0600
buildbot failure in ASF Buildbot on cassandra-trunk
The Buildbot has detected a new failure on builder cassandra-trunk while building cassandra.
Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/2076
Buildbot URL: http://ci.apache.org/
Buildslave for this Build: portunus_ubuntu
Build Reason: scheduler
Build Source Stamp: [branch trunk] bce1f2b8ed9c574d0323b9cdb8fa827244b11df6
Blamelist: Jonathan Ellis jbel...@apache.org

BUILD FAILED: failed compile

sincerely,
 -The Buildbot
[1/4] git commit: Merge branch 'cassandra-1.2' into trunk
Updated Branches:
  refs/heads/cassandra-1.2 3e7708289 -> 660016658
  refs/heads/cassandra-1.2.0 3e7708289 -> 660016658
  refs/heads/trunk bce1f2b8e -> 805e5cdd5

Merge branch 'cassandra-1.2' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/805e5cdd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/805e5cdd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/805e5cdd
Branch: refs/heads/trunk
Commit: 805e5cdd53d8cb5c43cd8b686f99b99299239879
Parents: bce1f2b 6600166
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 16:14:56 2012 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Mon Nov 12 16:14:56 2012 -0600
--
 .../org/apache/cassandra/service/StorageProxy.java |   18 ++-
 1 files changed, 12 insertions(+), 6 deletions(-)
--
[2/4] git commit: fix merge from 1.1
fix merge from 1.1

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66001665
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66001665
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66001665
Branch: refs/heads/trunk
Commit: 660016658f033875f4a442b5f9fdd5875f610cb9
Parents: 3e77082
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 16:14:44 2012 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Mon Nov 12 16:14:44 2012 -0600
--
 .../org/apache/cassandra/service/StorageProxy.java |   18 ++-
 1 files changed, 12 insertions(+), 6 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/66001665/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index 33266c7..b747075 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -41,7 +41,9 @@ import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.Table;
+import org.apache.cassandra.db.filter.IDiskAtomFilter;
 import org.apache.cassandra.db.filter.QueryPath;
+import org.apache.cassandra.db.filter.SliceQueryFilter;
 import org.apache.cassandra.db.marshal.UUIDType;
 import org.apache.cassandra.dht.AbstractBounds;
 import org.apache.cassandra.dht.Bounds;
@@ -1091,8 +1093,11 @@ public class StorageProxy implements StorageProxyMBean
         // now scan until we have enough results
         try
         {
-            final SlicePredicate emptyPredicate = getEmptySlicePredicate();
-            SlicePredicate commandPredicate = command.predicate;
+            final IDiskAtomFilter emptyPredicate = new SliceQueryFilter(ByteBufferUtil.EMPTY_BYTE_BUFFER,
+                                                                        ByteBufferUtil.EMPTY_BYTE_BUFFER,
+                                                                        false,
+                                                                        -1);
+            IDiskAtomFilter commandPredicate = command.predicate;

             int columnsCount = 0;
             rows = new ArrayList<Row>();
@@ -1174,11 +1179,12 @@ public class StorageProxy implements StorageProxyMBean
         return trim(command, rows);
     }

-    private static SlicePredicate getEmptySlicePredicate()
+    private static IDiskAtomFilter getEmptySlicePredicate()
     {
-        final SliceRange emptySliceRange =
-            new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER, ByteBufferUtil.EMPTY_BYTE_BUFFER, false, -1);
-        return new SlicePredicate().setSlice_range(emptySliceRange);
+        return new SliceQueryFilter(ByteBufferUtil.EMPTY_BYTE_BUFFER,
+                                    ByteBufferUtil.EMPTY_BYTE_BUFFER,
+                                    false,
+                                    -1);
     }

    private static List<Row> trim(RangeSliceCommand command, List<Row> rows)
[3/4] git commit: fix merge from 1.1
fix merge from 1.1

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66001665
Branch: refs/heads/cassandra-1.2.0
Commit: 660016658f033875f4a442b5f9fdd5875f610cb9
Parents: 3e77082
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 16:14:44 2012 -0600
[4/4] git commit: fix merge from 1.1
fix merge from 1.1

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66001665
Branch: refs/heads/cassandra-1.2
Commit: 660016658f033875f4a442b5f9fdd5875f610cb9
Parents: 3e77082
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Nov 12 16:14:44 2012 -0600
buildbot success in ASF Buildbot on cassandra-trunk
The Buildbot has detected a restored build on builder cassandra-trunk while building cassandra.
Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/2077
Buildbot URL: http://ci.apache.org/
Buildslave for this Build: portunus_ubuntu
Build Reason: scheduler
Build Source Stamp: [branch trunk] 805e5cdd53d8cb5c43cd8b686f99b99299239879
Blamelist: Jonathan Ellis jbel...@apache.org

Build succeeded!

sincerely,
 -The Buildbot
git commit: cqlsh: fix DESCRIBE command patch by Aleksey Yeschenko; reviewed by brandonwilliams for CASSANDRA-4913
Updated Branches:
  refs/heads/cassandra-1.2.0 660016658 -> f56ea8b0d

cqlsh: fix DESCRIBE command
patch by Aleksey Yeschenko; reviewed by brandonwilliams for CASSANDRA-4913

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f56ea8b0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f56ea8b0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f56ea8b0
Branch: refs/heads/cassandra-1.2.0
Commit: f56ea8b0da4ec5f26a540363b683b2a2f8221101
Parents: 6600166
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Nov 13 01:06:57 2012 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Nov 13 01:29:26 2012 +0300
--
 CHANGES.txt |    1 +
 bin/cqlsh   |   25 +++--
 2 files changed, 12 insertions(+), 14 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f56ea8b0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0ac5b66..7d4882d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2-rc1
+ * fix cqlsh DESCRIBE command (CASSANDRA-4913)
 * save truncation position in system table (CASSANDRA-4906)
 * Move CompressionMetadata off-heap (CASSANDRA-4937)
 * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f56ea8b0/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 27eef7b..53eef8a 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -627,10 +627,7 @@ class Shell(cmd.Cmd):
         raise ColumnFamilyNotFound("Unconfigured column family %r" % (cfname,))

     def get_columnfamily_names(self, ksname=None):
-        if self.cqlver_atleast(3) and ksname not in SYSTEM_KEYSPACES:
-            # since cql3 tables may be left out of thrift results, but
-            # info on tables in system keyspaces still aren't included
-            # in system.schema_*
+        if self.cqlver_atleast(3):
             return self.get_columnfamily_names_cql3(ksname=ksname)
         return [c.name for c in self.get_columnfamilies(ksname)]

@@ -730,7 +727,7 @@ class Shell(cmd.Cmd):
         cf_q = "select columnfamily from system.schema_columnfamilies where keyspace=:ks"
         self.cursor.execute(cf_q, {'ks': ksname})
-        return [row[0] for row in self.cursor.fetchall()]
+        return [str(row[0]) for row in self.cursor.fetchall()]

     def get_columnfamily_layout(self, ksname, cfname):
         if ksname is None:
@@ -1156,12 +1153,13 @@ class Shell(cmd.Cmd):
             out.write("\n  AND strategy_options:%s = %s" % (opname, self.cql_protect_value(opval)))
         out.write(';\n')
-        if ksdef.cf_defs:
+        cfs = self.get_columnfamily_names(ksname)
+        if cfs:
             out.write('\nUSE %s;\n' % ksname)
-            for cf in ksdef.cf_defs:
+            for cf in cfs:
                 out.write('\n')
                 # yes, cf might be looked up again. oh well.
-                self.print_recreate_columnfamily(ksdef.name, cf.name, out)
+                self.print_recreate_columnfamily(ksdef.name, cf, out)

     def print_recreate_columnfamily(self, ksname, cfname, out):

@@ -1327,20 +1325,19 @@ class Shell(cmd.Cmd):
         print

     def describe_columnfamilies(self, ksname):
+        print
         if ksname is None:
             for k in self.get_keyspaces():
                 print 'Keyspace %s' % (k.name,)
-                print '-%s\n' % ('-' * len(k.name))
-                cmd.Cmd.columnize(self, [c.name for c in k.cf_defs])
+                print '-%s' % ('-' * len(k.name))
+                cmd.Cmd.columnize(self, self.get_columnfamily_names(k.name))
                 print
         else:
-            names = self.get_columnfamily_names(ksname)
-            print
-            cmd.Cmd.columnize(self, names)
+            cmd.Cmd.columnize(self, self.get_columnfamily_names(ksname))
             print

     def describe_cluster(self):
-        print 'Cluster: %s' % self.get_cluster_name()
+        print '\nCluster: %s' % self.get_cluster_name()
         p = trim_if_present(self.get_partitioner(), 'org.apache.cassandra.dht.')
         print 'Partitioner: %s' % p
         snitch = trim_if_present(self.get_snitch(), 'org.apache.cassandra.locator.')
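The DESCRIBE fix above boils down to two things: always derive table names from the CQL3 `system.schema_columnfamilies` table, and coerce each fetched name to a plain `str` before handing it to `cmd.Cmd.columnize`. A minimal sketch of that post-fetch logic (hypothetical helper names, no live cluster or cursor involved):

```python
# Sketch of the CASSANDRA-4913 name handling: cursor rows arrive as 1-tuples
# of (possibly unicode) values; coercing to str keeps downstream formatting
# code working with plain strings.
def columnfamily_names(cursor_rows):
    return [str(row[0]) for row in cursor_rows]

def describe_keyspace(ksname, cursor_rows):
    # mirrors the DESCRIBE output shape: keyspace header, underline, then names
    lines = ["Keyspace %s" % ksname, "-%s" % ("-" * len(ksname))]
    lines.extend(columnfamily_names(cursor_rows))
    return "\n".join(lines)
```

This is only a model of the data flow; the real cqlsh additionally falls back to the Thrift keyspace definitions when CQL3 is not in use.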
[jira] [Created] (CASSANDRA-4951) Leveled compaction manifest sometimes references nonexistent sstables in a snapshot
Oleg Kibirev created CASSANDRA-4951: --- Summary: Leveled compaction manifest sometimes references nonexistent sstables in a snapshot Key: CASSANDRA-4951 URL: https://issues.apache.org/jira/browse/CASSANDRA-4951 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.1.4 Reporter: Oleg Kibirev After nodetool snapshot on a node under load, we sometimes see sstables not referenced in the leveled compaction json manifest, or sstables in the manifest which are not found on disk. There are two concerns with this: 1. What would happen to leveled compaction and to reads if the snapshot is restored with missing or extra sstables? 2. Is this a sign of a snapshot not having a complete copy of the data? To support automated restore, the manifest and/or a list of links should be made correct at snapshot time. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
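The mismatch Oleg describes can be detected at snapshot time by diffing the manifest against the directory listing. A minimal sketch, assuming a leveled-manifest-style JSON layout (the exact format and the helper names here are illustrative, not Cassandra's actual serialization):

```python
import json

def manifest_sstables(manifest_text):
    """Collect every sstable name referenced by a leveled-compaction
    style manifest: {"generations": [{"generation": N, "members": [...]}]}."""
    manifest = json.loads(manifest_text)
    names = set()
    for generation in manifest.get("generations", []):
        names.update(generation.get("members", []))
    return names

def check_snapshot(manifest_text, files_on_disk):
    """Return (referenced but missing from disk, on disk but unreferenced)."""
    referenced = manifest_sstables(manifest_text)
    on_disk = set(files_on_disk)
    return referenced - on_disk, on_disk - referenced

manifest = json.dumps({"generations": [
    {"generation": 0, "members": ["cf-hc-1", "cf-hc-2"]},
    {"generation": 1, "members": ["cf-hc-3"]},
]})
missing, extra = check_snapshot(manifest, ["cf-hc-1", "cf-hc-3", "cf-hc-4"])
print(sorted(missing), sorted(extra))  # ['cf-hc-2'] ['cf-hc-4']
```

Either non-empty set would indicate the snapshot cannot be restored verbatim for a leveled column family.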
git commit: cqlsh: fix DESCRIBE command patch by Aleksey Yeschenko; reviewed by brandonwilliams for CASSANDRA-4913
Updated Branches: refs/heads/cassandra-1.2 660016658 -> c57fc3bd8 cqlsh: fix DESCRIBE command patch by Aleksey Yeschenko; reviewed by brandonwilliams for CASSANDRA-4913 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c57fc3bd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c57fc3bd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c57fc3bd Branch: refs/heads/cassandra-1.2 Commit: c57fc3bd86707a723b62492a077261b39f99c67d Parents: 6600166 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Nov 13 01:06:57 2012 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Nov 13 01:31:01 2012 +0300 -- CHANGES.txt |1 + bin/cqlsh | 25 +++-- 2 files changed, 12 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c57fc3bd/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 0ac5b66..7d4882d 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 1.2-rc1 + * fix cqlsh DESCRIBE command (CASSANDRA-4913) * save truncation position in system table (CASSANDRA-4906) * Move CompressionMetadata off-heap (CASSANDRA-4937) * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924) http://git-wip-us.apache.org/repos/asf/cassandra/blob/c57fc3bd/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 27eef7b..53eef8a 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -627,10 +627,7 @@ class Shell(cmd.Cmd): raise ColumnFamilyNotFound("Unconfigured column family %r" % (cfname,)) def get_columnfamily_names(self, ksname=None): -if self.cqlver_atleast(3) and ksname not in SYSTEM_KEYSPACES: -# since cql3 tables may be left out of thrift results, but -# info on tables in system keyspaces still aren't included -# in system.schema_* +if self.cqlver_atleast(3): return self.get_columnfamily_names_cql3(ksname=ksname) return [c.name for c in self.get_columnfamilies(ksname)] @@ -730,7 +727,7 @@ class Shell(cmd.Cmd): cf_q = "select columnfamily from 
system.schema_columnfamilies where keyspace=:ks" self.cursor.execute(cf_q, {'ks': ksname}) -return [row[0] for row in self.cursor.fetchall()] +return [str(row[0]) for row in self.cursor.fetchall()] def get_columnfamily_layout(self, ksname, cfname): if ksname is None: @@ -1156,12 +1153,13 @@ class Shell(cmd.Cmd): out.write("\n AND strategy_options:%s = %s" % (opname, self.cql_protect_value(opval))) out.write(';\n') -if ksdef.cf_defs: +cfs = self.get_columnfamily_names(ksname) +if cfs: out.write('\nUSE %s;\n' % ksname) -for cf in ksdef.cf_defs: +for cf in cfs: out.write('\n') # yes, cf might be looked up again. oh well. -self.print_recreate_columnfamily(ksdef.name, cf.name, out) +self.print_recreate_columnfamily(ksdef.name, cf, out) def print_recreate_columnfamily(self, ksname, cfname, out): @@ -1327,20 +1325,19 @@ class Shell(cmd.Cmd): print def describe_columnfamilies(self, ksname): +print if ksname is None: for k in self.get_keyspaces(): print 'Keyspace %s' % (k.name,) -print '-%s\n' % ('-' * len(k.name)) -cmd.Cmd.columnize(self, [c.name for c in k.cf_defs]) +print '-%s' % ('-' * len(k.name)) +cmd.Cmd.columnize(self, self.get_columnfamily_names(k.name)) print else: -names = self.get_columnfamily_names(ksname) -print -cmd.Cmd.columnize(self, names) +cmd.Cmd.columnize(self, self.get_columnfamily_names(ksname)) print def describe_cluster(self): -print 'Cluster: %s' % self.get_cluster_name() +print '\nCluster: %s' % self.get_cluster_name() p = trim_if_present(self.get_partitioner(), 'org.apache.cassandra.dht.') print 'Partitioner: %s' % p snitch = trim_if_present(self.get_snitch(), 'org.apache.cassandra.locator.')
git commit: cqlsh: fix DESCRIBE command patch by Aleksey Yeschenko; reviewed by brandonwilliams for CASSANDRA-4913
Updated Branches: refs/heads/trunk 805e5cdd5 -> aeef6497c cqlsh: fix DESCRIBE command patch by Aleksey Yeschenko; reviewed by brandonwilliams for CASSANDRA-4913 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aeef6497 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aeef6497 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aeef6497 Branch: refs/heads/trunk Commit: aeef6497ced0745ff1f266188a862d62326f087a Parents: 805e5cd Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Nov 13 01:06:57 2012 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Nov 13 01:32:32 2012 +0300 -- CHANGES.txt |1 + bin/cqlsh | 25 +++-- 2 files changed, 12 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/aeef6497/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 0ac5b66..7d4882d 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 1.2-rc1 + * fix cqlsh DESCRIBE command (CASSANDRA-4913) * save truncation position in system table (CASSANDRA-4906) * Move CompressionMetadata off-heap (CASSANDRA-4937) * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924) http://git-wip-us.apache.org/repos/asf/cassandra/blob/aeef6497/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 27eef7b..53eef8a 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -627,10 +627,7 @@ class Shell(cmd.Cmd): raise ColumnFamilyNotFound("Unconfigured column family %r" % (cfname,)) def get_columnfamily_names(self, ksname=None): -if self.cqlver_atleast(3) and ksname not in SYSTEM_KEYSPACES: -# since cql3 tables may be left out of thrift results, but -# info on tables in system keyspaces still aren't included -# in system.schema_* +if self.cqlver_atleast(3): return self.get_columnfamily_names_cql3(ksname=ksname) return [c.name for c in self.get_columnfamilies(ksname)] @@ -730,7 +727,7 @@ class Shell(cmd.Cmd): cf_q = "select columnfamily from 
system.schema_columnfamilies where keyspace=:ks" self.cursor.execute(cf_q, {'ks': ksname}) -return [row[0] for row in self.cursor.fetchall()] +return [str(row[0]) for row in self.cursor.fetchall()] def get_columnfamily_layout(self, ksname, cfname): if ksname is None: @@ -1156,12 +1153,13 @@ class Shell(cmd.Cmd): out.write("\n AND strategy_options:%s = %s" % (opname, self.cql_protect_value(opval))) out.write(';\n') -if ksdef.cf_defs: +cfs = self.get_columnfamily_names(ksname) +if cfs: out.write('\nUSE %s;\n' % ksname) -for cf in ksdef.cf_defs: +for cf in cfs: out.write('\n') # yes, cf might be looked up again. oh well. -self.print_recreate_columnfamily(ksdef.name, cf.name, out) +self.print_recreate_columnfamily(ksdef.name, cf, out) def print_recreate_columnfamily(self, ksname, cfname, out): @@ -1327,20 +1325,19 @@ class Shell(cmd.Cmd): print def describe_columnfamilies(self, ksname): +print if ksname is None: for k in self.get_keyspaces(): print 'Keyspace %s' % (k.name,) -print '-%s\n' % ('-' * len(k.name)) -cmd.Cmd.columnize(self, [c.name for c in k.cf_defs]) +print '-%s' % ('-' * len(k.name)) +cmd.Cmd.columnize(self, self.get_columnfamily_names(k.name)) print else: -names = self.get_columnfamily_names(ksname) -print -cmd.Cmd.columnize(self, names) +cmd.Cmd.columnize(self, self.get_columnfamily_names(ksname)) print def describe_cluster(self): -print 'Cluster: %s' % self.get_cluster_name() +print '\nCluster: %s' % self.get_cluster_name() p = trim_if_present(self.get_partitioner(), 'org.apache.cassandra.dht.') print 'Partitioner: %s' % p snitch = trim_if_present(self.get_snitch(), 'org.apache.cassandra.locator.')
[jira] [Updated] (CASSANDRA-4916) Starting Cassandra throws EOF while reading saved cache
[ https://issues.apache.org/jira/browse/CASSANDRA-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-4916: -- Priority: Minor (was: Major) Affects Version/s: (was: 1.2.0 beta 1) Fix Version/s: 1.2.1 Assignee: Dave Brosius Wouldn't be the first time I've seen InputStream.available lie. But we don't want to write the number of items in the cache at the start of the file (the approach we usually take) because that would require making a copy of the cache's keySet which might be more memory than we can afford. Suggested workarounds: write some kind of EOF value when we're done instead of just closing the file, and check for that on read. Alternatively, just catch the EOF and log it at debug; a partially-read cache is harmless. Starting Cassandra throws EOF while reading saved cache --- Key: CASSANDRA-4916 URL: https://issues.apache.org/jira/browse/CASSANDRA-4916 Project: Cassandra Issue Type: Bug Components: Core Reporter: Michael Kjellman Assignee: Dave Brosius Priority: Minor Fix For: 1.2.1 Currently seeing nodes throw an EOF while reading a saved cache on the system schema when starting cassandra WARN 14:25:54,896 error reading saved cache /ssd/saved_caches/system-schema_columns-KeyCache-b.db java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349) at org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378) at org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144) at org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:278) at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:393) at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:365) at org.apache.cassandra.db.Table.initCf(Table.java:334) at org.apache.cassandra.db.Table.init(Table.java:272) at 
org.apache.cassandra.db.Table.open(Table.java:102) at org.apache.cassandra.db.Table.open(Table.java:80) at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:320) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:203) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:395) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:438) To reproduce: delete all data files, start a cluster, leave the cluster up long enough to build a cache, run nodetool drain, and kill the Cassandra process. Start the Cassandra process in the foreground and note the EOF thrown (see above for the stack trace). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
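Jonathan's second suggested workaround, catching the EOF and logging at debug since a partially-read cache is harmless, can be sketched for a length-prefixed entry format (the binary layout below is a stand-in, not the real key-cache serialization):

```python
import io
import logging
import struct

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("AutoSavingCache")

def load_saved_cache(stream):
    """Read length-prefixed keys until the stream ends; a partially
    written file just yields fewer entries instead of failing startup."""
    keys = []
    while True:
        header = stream.read(4)
        if not header:
            break                      # clean end of file
        try:
            if len(header) < 4:
                raise EOFError("truncated length header")
            (length,) = struct.unpack(">i", header)
            key = stream.read(length)
            if len(key) < length:
                raise EOFError("truncated key body")
            keys.append(key)
        except EOFError as e:
            log.debug("error reading saved cache: %s", e)
            break                      # keep whatever was read so far
    return keys

# One valid entry followed by a truncated one, as after a hard kill.
buf = struct.pack(">i", 3) + b"abc" + struct.pack(">i", 100) + b"cut"
print(load_saved_cache(io.BytesIO(buf)))  # [b'abc']
```

The alternative (writing an explicit EOF sentinel) avoids relying on exceptions but requires a format change; the tolerant reader works on existing cache files.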
[jira] [Commented] (CASSANDRA-4914) Aggregate functions in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495743#comment-13495743 ] Jonathan Ellis commented on CASSANDRA-4914: --- bq. aggregation might lose a bit of its usefulness without proper support for DISTINCT How's that? Aggregation kind of distinct-ifies as part of what it does anyway. Did you mean GROUP BY? Aggregate functions in CQL -- Key: CASSANDRA-4914 URL: https://issues.apache.org/jira/browse/CASSANDRA-4914 Project: Cassandra Issue Type: Bug Reporter: Vijay Assignee: Vijay Fix For: 1.2.1 The requirement is to do aggregation of data in Cassandra (wide rows of column values of int, double, float, etc.), with some basic aggregate functions like AVG, SUM, Mean, Min, Max, etc. (for the columns within a row). Example: SELECT * FROM emp WHERE empID IN (130) ORDER BY deptID DESC; empid | deptid | first_name | last_name | salary -------+--------+------------+-----------+-------- 130 | 3 | joe | doe | 10.1 130 | 2 | joe | doe | 100 130 | 1 | joe | doe | 1e+03 SELECT sum(salary), empid FROM emp WHERE empID IN (130); sum(salary) | empid -------------+------- 1110.1 | 130 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3017) add a Message size limit
[ https://issues.apache.org/jira/browse/CASSANDRA-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495750#comment-13495750 ] Jonathan Ellis commented on CASSANDRA-3017: --- IMO the goal here is to keep a malicious or foolish peer on the network from breaking things by forcing us to allocate huge buffers. So MessageOut should check and log an error if it actually hits a legitimate Message that is too large -- this is a sign that legitimate requests are getting dropped (see below). But MessageIn should only log a warning. The reason I wanted to investigate the difference between Thrift frame size and our Message size is, if a Thrift limit of (say) 100K turns into a Message of 99K that is fine, we can use the Thrift limit here. But if our Messages are larger than the corresponding Thrift frame then we could reject messages that Thrift said were fine which is bad. add a Message size limit Key: CASSANDRA-3017 URL: https://issues.apache.org/jira/browse/CASSANDRA-3017 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Assignee: Kirk True Priority: Minor Labels: lhf Attachments: 0001-use-the-thrift-max-message-size-for-inter-node-messa.patch, trunk-3017.txt We protect the server from allocating huge buffers for malformed message with the Thrift frame size (CASSANDRA-475). But we don't have similar protection for the inter-node Message objects. Adding this would be good to deal with malicious adversaries as well as a malfunctioning cluster participant. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
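The asymmetry Jonathan describes — hard error on the sending side, warning only on the receiving side — might be sketched like this (the limit value and logger names are placeholders, not Cassandra's actual MessagingService API):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("MessagingService")

MAX_MESSAGE_SIZE = 100 * 1024  # e.g. mirror the Thrift frame size limit

def check_outgoing(size):
    """A legitimate message we built ourselves should never exceed the
    limit; if it does, requests are silently being dropped -> error."""
    if size > MAX_MESSAGE_SIZE:
        log.error("dropping outgoing message of %d bytes (limit %d)",
                  size, MAX_MESSAGE_SIZE)
        return False
    return True

def check_incoming(size):
    """An oversized incoming frame may just be a malicious or buggy
    peer; refuse to allocate the buffer, but only warn."""
    if size > MAX_MESSAGE_SIZE:
        log.warning("ignoring incoming message of %d bytes (limit %d)",
                    size, MAX_MESSAGE_SIZE)
        return False
    return True

print(check_outgoing(4096), check_incoming(10 * 1024 * 1024))  # True False
```

The key point is that the incoming check reads only the size header before deciding, so a hostile peer never forces a huge buffer allocation.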
[jira] [Commented] (CASSANDRA-4883) Optimize mostRecentTomstone vs maxTimestamp check in CollationController.collectAllData
[ https://issues.apache.org/jira/browse/CASSANDRA-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495757#comment-13495757 ] Jonathan Ellis commented on CASSANDRA-4883: --- Any reason to not use ImmutableSet in DataTracker? +1 otherwise. Optimize mostRecentTomstone vs maxTimestamp check in CollationController.collectAllData --- Key: CASSANDRA-4883 URL: https://issues.apache.org/jira/browse/CASSANDRA-4883 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Sylvain Lebresne Priority: Minor Fix For: 1.2.0 rc1 Attachments: 4883.txt CollationController.collectAllData eliminates an sstable if we've already read a row tombstone more recent than its maxTimestamp. This is however done in two passes and can be inefficient (or rather, it's not as efficient as it could be). More precisely, say we have 10 sstables s0, ..., s9, where s0 is the most recent and s9 the oldest (and their maxTimestamps reflect that), and s0 has a row tombstone that is more recent than all of s1-s9's maxTimestamps. Now in collectAllData(), we first iterate over sstables in a random order (because DataTracker keeps sstables in a more or less random order), meaning that we may iterate in the order s9, s8, ..., s0. In that case, we will end up reading the row header from all the sstables (hitting disk each time). Then, and only then, the second pass of collectAllData will eliminate s1 to s9. However, if we were to iterate sstables in maxTimestamp order (as we do in collectTimeOrdered), we would only need one pass, but more importantly we would minimize the number of row headers we read to perform that sstable elimination. In my example, we would only ever read the row tombstone from s0 and eliminate all other sstables directly, simply based on their maxTimestamp. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
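The single-pass elimination the ticket proposes can be sketched with plain dicts standing in for sstables (rowTombstone and maxTimestamp here are simplified stand-ins for the real per-sstable metadata):

```python
def collect_all_data(sstables):
    """Visit sstables newest-first by maxTimestamp; once we have seen a
    row tombstone newer than a table's maxTimestamp, that table and every
    remaining (older) one can be skipped without reading its row header."""
    most_recent_tombstone = -1
    read = []
    for table in sorted(sstables, key=lambda t: t["maxTimestamp"], reverse=True):
        if table["maxTimestamp"] < most_recent_tombstone:
            break  # this and all remaining tables are shadowed
        read.append(table["name"])  # the only row-header disk hits we pay for
        tombstone = table.get("rowTombstone")
        if tombstone is not None:
            most_recent_tombstone = max(most_recent_tombstone, tombstone)
    return read

# Sylvain's example: s0 newest ... s9 oldest, tombstone in s0 shadows the rest.
sstables = [{"name": "s%d" % i, "maxTimestamp": 9 - i} for i in range(10)]
sstables[0]["rowTombstone"] = 100
print(collect_all_data(sstables))  # ['s0']
```

With random iteration order the same input could require all ten row-header reads before the second pass discards nine of them; timestamp order makes one read sufficient.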
[jira] [Updated] (CASSANDRA-4848) Expose black-listed directories via JMX
[ https://issues.apache.org/jira/browse/CASSANDRA-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirk True updated CASSANDRA-4848: - Attachment: trunk-4848.v2.txt v2 addresses the lack of constructor logging and makes the singleton instance private, per request. Expose black-listed directories via JMX --- Key: CASSANDRA-4848 URL: https://issues.apache.org/jira/browse/CASSANDRA-4848 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.2.0 beta 1 Reporter: Kirk True Assignee: Kirk True Priority: Minor Labels: lhf Fix For: 1.2.0 rc1 Attachments: trunk-4848.txt, trunk-4848.v2.txt JBOD support included failure modes (CASSANDRA-2118) that insert directories on bad disks into the {{BlacklistedDirectories}} class. However, it doesn't appear that the list of directories is exposed to administrators via JMX or other means. So unless they're scouring the logs or being notified at the OS level, they will remain unaware of the issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-4952) Add blocking force compaction (and anything else) calls to NodeProbe
Michael Harris created CASSANDRA-4952: - Summary: Add blocking force compaction (and anything else) calls to NodeProbe Key: CASSANDRA-4952 URL: https://issues.apache.org/jira/browse/CASSANDRA-4952 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.1.6 Reporter: Michael Harris There are times when I'd like to get feedback about when compactions complete. For example, if I'm deleting data from cassandra and want to know when it is 100% removed from cassandra (tombstones collected and all). This is completely trivial to implement based on the existing code (the method called by the non-blocking version returns a future, so you could just wait on that, potentially with a timeout). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
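Since the underlying call already returns a future, the blocking variant Michael asks for is just a wait with an optional timeout. A sketch using Python's stdlib executor in place of Cassandra's compaction executor (all names here are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)

def submit_major_compaction(cf):
    """Non-blocking flavor: kick off the work, hand back the future."""
    def compact():
        time.sleep(0.05)               # stand-in for the real compaction work
        return "compacted %s" % cf
    return executor.submit(compact)

def force_compaction_blocking(cf, timeout=None):
    """Blocking flavor: same submission, but wait for completion.
    Raises concurrent.futures.TimeoutError if the deadline passes first."""
    return submit_major_compaction(cf).result(timeout=timeout)

print(force_compaction_blocking("emp", timeout=5))  # compacted emp
```

Exactly as the ticket notes, the blocking API costs nothing extra: it reuses the non-blocking submission and only adds the `result()` wait.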
git commit: Add debug logging to list filenames processed by o.a.c.db.Directories.migrateFile method patch by dbrosius reviewed by slebresne for CASSANDRA-4939
Updated Branches: refs/heads/cassandra-1.2 c57fc3bd8 - 6677d075e Add debug logging to list filenames processed by o.a.c.db.Directories.migrateFile method patch by dbrosius reviewed by slebresne for CASSANDRA-4939 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6677d075 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6677d075 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6677d075 Branch: refs/heads/cassandra-1.2 Commit: 6677d075e9cadc073633fd84f810b9fc5174db45 Parents: c57fc3b Author: Dave Brosius dbros...@apache.org Authored: Mon Nov 12 19:57:31 2012 -0500 Committer: Dave Brosius dbros...@apache.org Committed: Mon Nov 12 19:57:31 2012 -0500 -- CHANGES.txt|4 ++ src/java/org/apache/cassandra/db/Directories.java | 42 ++- .../org/apache/cassandra/db/DirectoriesTest.java | 40 -- 3 files changed, 67 insertions(+), 19 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6677d075/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 7d4882d..d7855af 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,3 +1,7 @@ +1.2.1 + * Add debug logging to list filenames processed by Directories.migrateFile method (CASSANDRA-4939) + + 1.2-rc1 * fix cqlsh DESCRIBE command (CASSANDRA-4913) * save truncation position in system table (CASSANDRA-4906) http://git-wip-us.apache.org/repos/asf/cassandra/blob/6677d075/src/java/org/apache/cassandra/db/Directories.java -- diff --git a/src/java/org/apache/cassandra/db/Directories.java b/src/java/org/apache/cassandra/db/Directories.java index 877b5ed..9e53ab9 100644 --- a/src/java/org/apache/cassandra/db/Directories.java +++ b/src/java/org/apache/cassandra/db/Directories.java @@ -620,19 +620,35 @@ public class Directories if (file.isDirectory()) return; -String name = file.getName(); -boolean isManifest = name.endsWith(LeveledManifest.EXTENSION); -String cfname = isManifest - ? 
name.substring(0, name.length() - LeveledManifest.EXTENSION.length()) - : name.substring(0, name.indexOf(Component.separator)); - -int idx = cfname.indexOf(SECONDARY_INDEX_NAME_SEPARATOR); // idx > 0 == secondary index -String dirname = idx > 0 ? cfname.substring(0, idx) : cfname; -File destDir = getOrCreate(ksDir, dirname, additionalPath); - -File destFile = new File(destDir, isManifest ? name : ksDir.getName() + Component.separator + name); -logger.debug(String.format("[upgrade to 1.1] Moving %s to %s", file, destFile)); -FileUtils.renameWithConfirm(file, destFile); +try +{ +String name = file.getName(); +boolean isManifest = name.endsWith(LeveledManifest.EXTENSION); +int separatorIndex = name.indexOf(Component.separator); + +if (isManifest || (separatorIndex >= 0)) +{ +String cfname = isManifest + ? name.substring(0, name.length() - LeveledManifest.EXTENSION.length()) + : name.substring(0, separatorIndex); + +int idx = cfname.indexOf(SECONDARY_INDEX_NAME_SEPARATOR); // idx > 0 == secondary index +String dirname = idx > 0 ? cfname.substring(0, idx) : cfname; +File destDir = getOrCreate(ksDir, dirname, additionalPath); + +File destFile = new File(destDir, isManifest ? 
name : ksDir.getName() + Component.separator + name); +logger.debug(String.format("[upgrade to 1.1] Moving %s to %s", file, destFile)); +FileUtils.renameWithConfirm(file, destFile); +} +else +{ +logger.warn("Found unrecognized file {} while migrating sstables from pre 1.1 format, ignoring.", file); +} +} +catch (Exception e) +{ +throw new RuntimeException(String.format("Failed migrating file %s from pre 1.1 format.", file.getPath()), e); +} } // Hack for tests, don't use otherwise http://git-wip-us.apache.org/repos/asf/cassandra/blob/6677d075/test/unit/org/apache/cassandra/db/DirectoriesTest.java -- diff --git a/test/unit/org/apache/cassandra/db/DirectoriesTest.java b/test/unit/org/apache/cassandra/db/DirectoriesTest.java index d1a44fd..ba6576d 100644 --- a/test/unit/org/apache/cassandra/db/DirectoriesTest.java +++ b/test/unit/org/apache/cassandra/db/DirectoriesTest.java @@ -22,13 +22,14 @@ import
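The filename handling that patch guards can be sketched in isolation: a manifest is recognized by its extension, any other name must contain a component separator, and a secondary-index suffix is split off for the destination directory name. The separator characters below follow the pre-1.1 layout as described in the diff and are assumptions, not constants taken from the codebase:

```python
MANIFEST_EXTENSION = ".json"   # stand-in for LeveledManifest.EXTENSION
COMPONENT_SEPARATOR = "-"      # stand-in for Component.separator
INDEX_SEPARATOR = "."          # stand-in for SECONDARY_INDEX_NAME_SEPARATOR

def dest_directory(filename):
    """Return the per-cf directory a pre-1.1 file migrates into, or None
    for unrecognized names (the case the patch now skips with a warning
    instead of throwing StringIndexOutOfBounds from substring)."""
    is_manifest = filename.endswith(MANIFEST_EXTENSION)
    sep = filename.find(COMPONENT_SEPARATOR)
    if not is_manifest and sep < 0:
        return None
    cfname = (filename[:-len(MANIFEST_EXTENSION)] if is_manifest
              else filename[:sep])
    idx = cfname.find(INDEX_SEPARATOR)   # idx > 0 == secondary index
    return cfname[:idx] if idx > 0 else cfname

print(dest_directory("Standard1-hc-1-Data.db"))  # Standard1
print(dest_directory("Standard1.json"))          # Standard1
print(dest_directory("stray_file"))              # None
```

The `None` branch is the behavioral change: before the patch, a stray file without a separator made the whole migration throw.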
git commit: Add debug logging to list filenames processed by o.a.c.db.Directories.migrateFile method patch by dbrosius reviewed by slebresne for CASSANDRA-4939
Updated Branches: refs/heads/trunk aeef6497c - 2964951e0 Add debug logging to list filenames processed by o.a.c.db.Directories.migrateFile method patch by dbrosius reviewed by slebresne for CASSANDRA-4939 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2964951e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2964951e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2964951e Branch: refs/heads/trunk Commit: 2964951e07a5123a9d95d29223fc0539f4baa3ed Parents: aeef649 Author: Dave Brosius dbros...@apache.org Authored: Mon Nov 12 19:57:31 2012 -0500 Committer: Dave Brosius dbros...@apache.org Committed: Mon Nov 12 20:02:27 2012 -0500 -- CHANGES.txt|4 ++ src/java/org/apache/cassandra/db/Directories.java | 42 ++- .../org/apache/cassandra/db/DirectoriesTest.java | 40 -- 3 files changed, 67 insertions(+), 19 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2964951e/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 7d4882d..d7855af 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,3 +1,7 @@ +1.2.1 + * Add debug logging to list filenames processed by Directories.migrateFile method (CASSANDRA-4939) + + 1.2-rc1 * fix cqlsh DESCRIBE command (CASSANDRA-4913) * save truncation position in system table (CASSANDRA-4906) http://git-wip-us.apache.org/repos/asf/cassandra/blob/2964951e/src/java/org/apache/cassandra/db/Directories.java -- diff --git a/src/java/org/apache/cassandra/db/Directories.java b/src/java/org/apache/cassandra/db/Directories.java index 877b5ed..9e53ab9 100644 --- a/src/java/org/apache/cassandra/db/Directories.java +++ b/src/java/org/apache/cassandra/db/Directories.java @@ -620,19 +620,35 @@ public class Directories if (file.isDirectory()) return; -String name = file.getName(); -boolean isManifest = name.endsWith(LeveledManifest.EXTENSION); -String cfname = isManifest - ? 
name.substring(0, name.length() - LeveledManifest.EXTENSION.length()) - : name.substring(0, name.indexOf(Component.separator)); - -int idx = cfname.indexOf(SECONDARY_INDEX_NAME_SEPARATOR); // idx > 0 == secondary index -String dirname = idx > 0 ? cfname.substring(0, idx) : cfname; -File destDir = getOrCreate(ksDir, dirname, additionalPath); - -File destFile = new File(destDir, isManifest ? name : ksDir.getName() + Component.separator + name); -logger.debug(String.format("[upgrade to 1.1] Moving %s to %s", file, destFile)); -FileUtils.renameWithConfirm(file, destFile); +try +{ +String name = file.getName(); +boolean isManifest = name.endsWith(LeveledManifest.EXTENSION); +int separatorIndex = name.indexOf(Component.separator); + +if (isManifest || (separatorIndex >= 0)) +{ +String cfname = isManifest + ? name.substring(0, name.length() - LeveledManifest.EXTENSION.length()) + : name.substring(0, separatorIndex); + +int idx = cfname.indexOf(SECONDARY_INDEX_NAME_SEPARATOR); // idx > 0 == secondary index +String dirname = idx > 0 ? cfname.substring(0, idx) : cfname; +File destDir = getOrCreate(ksDir, dirname, additionalPath); + +File destFile = new File(destDir, isManifest ? 
name : ksDir.getName() + Component.separator + name); +logger.debug(String.format("[upgrade to 1.1] Moving %s to %s", file, destFile)); +FileUtils.renameWithConfirm(file, destFile); +} +else +{ +logger.warn("Found unrecognized file {} while migrating sstables from pre 1.1 format, ignoring.", file); +} +} +catch (Exception e) +{ +throw new RuntimeException(String.format("Failed migrating file %s from pre 1.1 format.", file.getPath()), e); +} } // Hack for tests, don't use otherwise http://git-wip-us.apache.org/repos/asf/cassandra/blob/2964951e/test/unit/org/apache/cassandra/db/DirectoriesTest.java -- diff --git a/test/unit/org/apache/cassandra/db/DirectoriesTest.java b/test/unit/org/apache/cassandra/db/DirectoriesTest.java index d1a44fd..ba6576d 100644 --- a/test/unit/org/apache/cassandra/db/DirectoriesTest.java +++ b/test/unit/org/apache/cassandra/db/DirectoriesTest.java @@ -22,13 +22,14 @@ import java.io.IOException; import
[jira] [Updated] (CASSANDRA-3974) Per-CF TTL
[ https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirk True updated CASSANDRA-3974: - Attachment: trunk-3974v7.txt Rebased against trunk. Privatizing the constructors causes a lot of collateral changes and forces the creation of a factory method that is IMO not very intuitive to the caller. Per-CF TTL -- Key: CASSANDRA-3974 URL: https://issues.apache.org/jira/browse/CASSANDRA-3974 Project: Cassandra Issue Type: New Feature Affects Versions: 1.2.0 beta 1 Reporter: Jonathan Ellis Assignee: Kirk True Priority: Minor Fix For: 1.2.0 rc1 Attachments: trunk-3974.txt, trunk-3974v2.txt, trunk-3974v3.txt, trunk-3974v4.txt, trunk-3974v5.txt, trunk-3974v6.txt, trunk-3974v7.txt Per-CF TTL would allow compaction optimizations (drop an entire sstable's worth of expired data) that we can't do with per-column TTL. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3017) add a Message size limit
[ https://issues.apache.org/jira/browse/CASSANDRA-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495852#comment-13495852 ] Kirk True commented on CASSANDRA-3017: -- I'd love to research the relationship between Thrift and intra-cluster messages. Can you provide a sentence or two about exactly what that entails? * Do I simply measure this via a series of get requests of varying column slices and column data size, or does it need to be exhaustive for every message type? * Can the request sizes be gleaned with more logging or do I need to bust out Wireshark? * Is the data size for inter-node messages constant with respect to the initial Thrift message or is some kind of compression in the mix? add a Message size limit Key: CASSANDRA-3017 URL: https://issues.apache.org/jira/browse/CASSANDRA-3017 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Assignee: Kirk True Priority: Minor Labels: lhf Attachments: 0001-use-the-thrift-max-message-size-for-inter-node-messa.patch, trunk-3017.txt We protect the server from allocating huge buffers for malformed message with the Thrift frame size (CASSANDRA-475). But we don't have similar protection for the inter-node Message objects. Adding this would be good to deal with malicious adversaries as well as a malfunctioning cluster participant. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3017) add a Message size limit
[ https://issues.apache.org/jira/browse/CASSANDRA-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13495866#comment-13495866 ] Jonathan Ellis commented on CASSANDRA-3017: --- We're mainly concerned about the insertion path here. So I was thinking more along the lines of just comparing (by inspection) what MessageOut sends for a row of data, to the minimum for that with Thrift. (No compression is involved.) I think we only really have two scenarios to worry about: # for an insert call of a single column, is MessageOut's overhead less than Thrift's? # as we add more columns, does our per-column overhead grow faster? We don't need to worry about batch_mutate or atomic_batch_mutate since those will be strictly higher-overhead than the equivalent insert calls. add a Message size limit Key: CASSANDRA-3017 URL: https://issues.apache.org/jira/browse/CASSANDRA-3017 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Assignee: Kirk True Priority: Minor Labels: lhf Attachments: 0001-use-the-thrift-max-message-size-for-inter-node-messa.patch, trunk-3017.txt We protect the server from allocating huge buffers for malformed message with the Thrift frame size (CASSANDRA-475). But we don't have similar protection for the inter-node Message objects. Adding this would be good to deal with malicious adversaries as well as a malfunctioning cluster participant. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3017) add a Message size limit
[ https://issues.apache.org/jira/browse/CASSANDRA-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495868#comment-13495868 ] Jonathan Ellis commented on CASSANDRA-3017:
---
You bring up a good point, though -- what about the read path, where the user makes a small request ({{SELECT * FROM foo}}) that results in a large dataset? Do we just fail that?
[jira] [Updated] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)
[ https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vijay updated CASSANDRA-4860:
--
Attachment: 0001-CASSANDRA-4860.patch
            0001-CASSANDRA-4860-for-11.patch

Simple patch to fix the issue. Looks like the problem is that we call measure() instead of measureDeep(), which would actually measure the byte[] attached to the RowKey. *-for-11.patch is for 1.1 and *.patch is for 1.2.

Estimated Row Cache Entry size incorrect (always 24?)
-
Key: CASSANDRA-4860
URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.1.0
Reporter: Chris Burroughs
Assignee: Vijay
Fix For: 1.1.7, 1.2.1
Attachments: 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch

After running for several hours the RowCacheSize was suspiciously low (i.e. 70-something MB). I used CASSANDRA-4859 to measure the size and number of entries on a node:

In [3]: 1560504./65021
Out[3]: 24.0
In [4]: 2149464./89561
Out[4]: 24.0
In [6]: 7216096./300785
Out[6]: 23.990877204647838

That's RowCacheSize/RowCacheNumEntries. Just to prove I don't have crazy small rows, the mean size of the row *keys* in the saved cache is 67 and the compacted row mean size is 355. No jamm errors in the log.

Config notes:
row_cache_provider: ConcurrentLinkedHashCacheProvider
row_cache_size_in_mb: 2048

Version info:
* C*: 1.1.6
* centos 2.6.32-220.13.1.el6.x86_64
* java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
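The measure()-vs-measureDeep() distinction is shallow versus deep object sizing: a shallow measurement sees only the entry object itself (a near-constant, which matches the 24 bytes reported above), while a deep measurement follows references down to the byte[] payload. A rough Python analogy of the concept — sys.getsizeof is not jamm, so this illustrates the idea, not the patch itself:

```python
import sys

def measure_shallow(obj):
    # Shallow size: the object itself, ignoring anything it references
    # (roughly what jamm's MemoryMeter.measure() reports).
    return sys.getsizeof(obj)

def measure_deep(obj):
    # Deep size: the object plus everything reachable from it
    # (roughly what MemoryMeter.measureDeep() reports).
    size = sys.getsizeof(obj)
    if isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(measure_deep(x) for x in obj)
    elif isinstance(obj, dict):
        size += sum(measure_deep(k) + measure_deep(v) for k, v in obj.items())
    return size

key = b"k" * 1024   # stand-in for the byte[] row key
entry = [key]       # stand-in for a cache entry holding a reference to the key

shallow = measure_shallow(entry)  # small and near-constant, like the 24 above
deep = measure_deep(entry)        # includes the ~1 KB key payload
```

The shallow number barely changes no matter how large the key is, which is exactly why the row cache size estimate came out as a constant per entry.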
git commit: Expose black-listed directories via JMX patch by Kirk True reviewed by dbrosius for cassandra-4848
Updated Branches: refs/heads/cassandra-1.2 6677d075e -> de7aed52c

Expose black-listed directories via JMX
patch by Kirk True reviewed by dbrosius for cassandra-4848

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de7aed52
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de7aed52
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de7aed52

Branch: refs/heads/cassandra-1.2
Commit: de7aed52cf5c4f07839d953d498c6722c08bed92
Parents: 6677d07
Author: Dave Brosius dbros...@apache.org
Authored: Mon Nov 12 23:04:20 2012 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Mon Nov 12 23:04:20 2012 -0500
--
 CHANGES.txt                                        |  2 +-
 .../cassandra/db/BlacklistedDirectories.java       | 46 ++++++++++++++---
 .../cassandra/db/BlacklistedDirectoriesMBean.java  | 29 ++++++++++++
 3 files changed, 69 insertions(+), 8 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de7aed52/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d7855af..4a1ba94 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,6 @@
 1.2.1
  * Add debug logging to list filenames processed by Directories.migrateFile method (CASSANDRA-4939)
-
+ * Expose black-listed directories via JMX (CASSANDRA-4848)
 1.2-rc1
  * fix cqlsh DESCRIBE command (CASSANDRA-4913)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de7aed52/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
--
diff --git a/src/java/org/apache/cassandra/db/BlacklistedDirectories.java b/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
index 5ca7d1f..5e873c7 100644
--- a/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
+++ b/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
@@ -21,15 +21,47 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.File;
+import java.lang.management.ManagementFactory;
+import java.util.Collections;
 import java.util.Set;
 import java.util.concurrent.CopyOnWriteArraySet;
 
-public class BlacklistedDirectories
+import javax.management.MBeanServer;
+import javax.management.ObjectName;
+
+public class BlacklistedDirectories implements BlacklistedDirectoriesMBean
 {
+    public static final String MBEAN_NAME = "org.apache.cassandra.db:type=BlacklistedDirectories";
+    private static final BlacklistedDirectories instance = new BlacklistedDirectories();
     private static final Logger logger = LoggerFactory.getLogger(BlacklistedDirectories.class);
-    private static Set<File> unreadableDirectories = new CopyOnWriteArraySet<File>();
-    private static Set<File> unwritableDirectories = new CopyOnWriteArraySet<File>();
+    private final Set<File> unreadableDirectories = new CopyOnWriteArraySet<File>();
+    private final Set<File> unwritableDirectories = new CopyOnWriteArraySet<File>();
+
+    private BlacklistedDirectories()
+    {
+        // Register this instance with JMX
+        try
+        {
+            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
+            mbs.registerMBean(this, new ObjectName(MBEAN_NAME));
+        }
+        catch (Exception e)
+        {
+            logger.error("error registering MBean " + MBEAN_NAME, e);
+            // Allow the server to start even if the bean can't be registered
+        }
+    }
+
+    public Set<File> getUnreadableDirectories()
+    {
+        return Collections.unmodifiableSet(unreadableDirectories);
+    }
+
+    public Set<File> getUnwritableDirectories()
+    {
+        return Collections.unmodifiableSet(unwritableDirectories);
+    }
 
     /**
      * Adds parent directory of the file (or the file itself, if it is a directory)
@@ -40,7 +72,7 @@ public class BlacklistedDirectories
     public static File maybeMarkUnreadable(File path)
     {
         File directory = getDirectory(path);
-        if (unreadableDirectories.add(directory))
+        if (instance.unreadableDirectories.add(directory))
         {
             logger.warn("Blacklisting {} for reads", directory);
             return directory;
@@ -57,7 +89,7 @@ public class BlacklistedDirectories
     public static File maybeMarkUnwritable(File path)
     {
         File directory = getDirectory(path);
-        if (unwritableDirectories.add(directory))
+        if (instance.unwritableDirectories.add(directory))
         {
             logger.warn("Blacklisting {} for writes", directory);
             return directory;
@@ -71,7 +103,7 @@ public class BlacklistedDirectories
      */
     public static boolean isUnreadable(File directory)
     {
-        return unreadableDirectories.contains(directory);
+        return
[1/4] git commit: merge from cassandra-1.2
Updated Branches: refs/heads/trunk 2964951e0 -> 46689d88d

merge from cassandra-1.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/46689d88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/46689d88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/46689d88

Branch: refs/heads/trunk
Commit: 46689d88d54ca4edbf8c324eeb720a522a697b70
Parents: 2964951 de7aed5
Author: Dave Brosius dbros...@apache.org
Authored: Mon Nov 12 23:13:56 2012 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Mon Nov 12 23:13:56 2012 -0500
--
 CHANGES.txt                                        |  2 +-
 .../cassandra/db/BlacklistedDirectories.java       | 46 ++++++++++++++---
 .../cassandra/db/BlacklistedDirectoriesMBean.java  | 29 ++++++++++++
 3 files changed, 69 insertions(+), 8 deletions(-)
--
[2/4] git commit: Expose black-listed directories via JMX patch by Kirk True reviewed by dbrosius for cassandra-4848
Expose black-listed directories via JMX
patch by Kirk True reviewed by dbrosius for cassandra-4848

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de7aed52
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de7aed52
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de7aed52

Branch: refs/heads/trunk
Commit: de7aed52cf5c4f07839d953d498c6722c08bed92
Parents: 6677d07
Author: Dave Brosius dbros...@apache.org
Authored: Mon Nov 12 23:04:20 2012 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Mon Nov 12 23:04:20 2012 -0500
--
 CHANGES.txt                                        |  2 +-
 .../cassandra/db/BlacklistedDirectories.java       | 46 ++++++++++++++---
 .../cassandra/db/BlacklistedDirectoriesMBean.java  | 29 ++++++++++++
 3 files changed, 69 insertions(+), 8 deletions(-)
--
[3/4] git commit: Add debug logging to list filenames processed by o.a.c.db.Directories.migrateFile method patch by dbrosius reviewed by slebresne for CASSANDRA-4939
Add debug logging to list filenames processed by o.a.c.db.Directories.migrateFile method
patch by dbrosius reviewed by slebresne for CASSANDRA-4939

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6677d075
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6677d075
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6677d075

Branch: refs/heads/trunk
Commit: 6677d075e9cadc073633fd84f810b9fc5174db45
Parents: c57fc3b
Author: Dave Brosius dbros...@apache.org
Authored: Mon Nov 12 19:57:31 2012 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Mon Nov 12 19:57:31 2012 -0500
--
 CHANGES.txt                                        |  4 ++
 src/java/org/apache/cassandra/db/Directories.java  | 42 ++++++++++++++-----
 .../org/apache/cassandra/db/DirectoriesTest.java   | 40 ++++++++++-----
 3 files changed, 67 insertions(+), 19 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6677d075/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7d4882d..d7855af 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+1.2.1
+ * Add debug logging to list filenames processed by Directories.migrateFile method (CASSANDRA-4939)
+
+
 1.2-rc1
  * fix cqlsh DESCRIBE command (CASSANDRA-4913)
  * save truncation position in system table (CASSANDRA-4906)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6677d075/src/java/org/apache/cassandra/db/Directories.java
--
diff --git a/src/java/org/apache/cassandra/db/Directories.java b/src/java/org/apache/cassandra/db/Directories.java
index 877b5ed..9e53ab9 100644
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@ -620,19 +620,35 @@ public class Directories
         if (file.isDirectory())
             return;
 
-        String name = file.getName();
-        boolean isManifest = name.endsWith(LeveledManifest.EXTENSION);
-        String cfname = isManifest
-                      ? name.substring(0, name.length() - LeveledManifest.EXTENSION.length())
-                      : name.substring(0, name.indexOf(Component.separator));
-
-        int idx = cfname.indexOf(SECONDARY_INDEX_NAME_SEPARATOR); // idx > 0 == secondary index
-        String dirname = idx > 0 ? cfname.substring(0, idx) : cfname;
-        File destDir = getOrCreate(ksDir, dirname, additionalPath);
-
-        File destFile = new File(destDir, isManifest ? name : ksDir.getName() + Component.separator + name);
-        logger.debug(String.format("[upgrade to 1.1] Moving %s to %s", file, destFile));
-        FileUtils.renameWithConfirm(file, destFile);
+        try
+        {
+            String name = file.getName();
+            boolean isManifest = name.endsWith(LeveledManifest.EXTENSION);
+            int separatorIndex = name.indexOf(Component.separator);
+
+            if (isManifest || (separatorIndex >= 0))
+            {
+                String cfname = isManifest
+                              ? name.substring(0, name.length() - LeveledManifest.EXTENSION.length())
+                              : name.substring(0, separatorIndex);
+
+                int idx = cfname.indexOf(SECONDARY_INDEX_NAME_SEPARATOR); // idx > 0 == secondary index
+                String dirname = idx > 0 ? cfname.substring(0, idx) : cfname;
+                File destDir = getOrCreate(ksDir, dirname, additionalPath);
+
+                File destFile = new File(destDir, isManifest ? name : ksDir.getName() + Component.separator + name);
+                logger.debug(String.format("[upgrade to 1.1] Moving %s to %s", file, destFile));
+                FileUtils.renameWithConfirm(file, destFile);
+            }
+            else
+            {
+                logger.warn("Found unrecognized file {} while migrating sstables from pre 1.1 format, ignoring.", file);
+            }
+        }
+        catch (Exception e)
+        {
+            throw new RuntimeException(String.format("Failed migrating file %s from pre 1.1 format.", file.getPath()), e);
+        }
     }
 
     // Hack for tests, don't use otherwise

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6677d075/test/unit/org/apache/cassandra/db/DirectoriesTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/DirectoriesTest.java b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
index d1a44fd..ba6576d 100644
--- a/test/unit/org/apache/cassandra/db/DirectoriesTest.java
+++ b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
@@ -22,13 +22,14 @@ import java.io.IOException;
 import java.util.*;
 
 import org.junit.AfterClass;
+import
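The control flow this patch adds to migrateFile is essentially a small classifier: manifest files are detected by extension, sstable components by the presence of the component separator, and anything else is now logged and skipped rather than crashing (the old code called substring with the result of indexOf even when the separator was absent). A Python sketch of that decision — the MANIFEST_EXTENSION and COMPONENT_SEPARATOR values are assumptions standing in for LeveledManifest.EXTENSION and Component.separator:

```python
MANIFEST_EXTENSION = ".json"   # assumption: stands in for LeveledManifest.EXTENSION
COMPONENT_SEPARATOR = "-"      # assumption: stands in for Component.separator

def classify(name):
    # Mirror the patched decision: manifest, sstable component, or unrecognized.
    if name.endswith(MANIFEST_EXTENSION):
        return ("manifest", name[:-len(MANIFEST_EXTENSION)])
    sep = name.find(COMPONENT_SEPARATOR)
    if sep >= 0:                     # the pre-patch code skipped this check
        return ("sstable", name[:sep])
    return ("unrecognized", None)    # patched code logs a warning and moves on
```

The key behavioral change is the third branch: before the patch, an unrecognized file made the whole migration throw from inside String.substring; after it, the file is reported and ignored.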
[4/4] git commit: cqlsh: fix DESCRIBE command patch by Aleksey Yeschenko; reviewed by brandonwilliams for CASSANDRA-4913
cqlsh: fix DESCRIBE command
patch by Aleksey Yeschenko; reviewed by brandonwilliams for CASSANDRA-4913

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c57fc3bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c57fc3bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c57fc3bd

Branch: refs/heads/trunk
Commit: c57fc3bd86707a723b62492a077261b39f99c67d
Parents: 6600166
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Nov 13 01:06:57 2012 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Nov 13 01:31:01 2012 +0300
--
 CHANGES.txt |  1 +
 bin/cqlsh   | 25 +++++++++++-------------
 2 files changed, 12 insertions(+), 14 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c57fc3bd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0ac5b66..7d4882d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2-rc1
+ * fix cqlsh DESCRIBE command (CASSANDRA-4913)
  * save truncation position in system table (CASSANDRA-4906)
  * Move CompressionMetadata off-heap (CASSANDRA-4937)
  * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c57fc3bd/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 27eef7b..53eef8a 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -627,10 +627,7 @@ class Shell(cmd.Cmd):
             raise ColumnFamilyNotFound("Unconfigured column family %r" % (cfname,))
 
     def get_columnfamily_names(self, ksname=None):
-        if self.cqlver_atleast(3) and ksname not in SYSTEM_KEYSPACES:
-            # since cql3 tables may be left out of thrift results, but
-            # info on tables in system keyspaces still aren't included
-            # in system.schema_*
+        if self.cqlver_atleast(3):
             return self.get_columnfamily_names_cql3(ksname=ksname)
         return [c.name for c in self.get_columnfamilies(ksname)]
 
@@ -730,7 +727,7 @@ class Shell(cmd.Cmd):
         cf_q = "select columnfamily from system.schema_columnfamilies where keyspace=:ks"
         self.cursor.execute(cf_q, {'ks': ksname})
-        return [row[0] for row in self.cursor.fetchall()]
+        return [str(row[0]) for row in self.cursor.fetchall()]
 
     def get_columnfamily_layout(self, ksname, cfname):
         if ksname is None:
@@ -1156,12 +1153,13 @@ class Shell(cmd.Cmd):
                 out.write("\n  AND strategy_options:%s = %s" % (opname, self.cql_protect_value(opval)))
         out.write(';\n')
 
-        if ksdef.cf_defs:
+        cfs = self.get_columnfamily_names(ksname)
+        if cfs:
             out.write('\nUSE %s;\n' % ksname)
-            for cf in ksdef.cf_defs:
+            for cf in cfs:
                 out.write('\n')
                 # yes, cf might be looked up again. oh well.
-                self.print_recreate_columnfamily(ksdef.name, cf.name, out)
+                self.print_recreate_columnfamily(ksdef.name, cf, out)
 
     def print_recreate_columnfamily(self, ksname, cfname, out):
@@ -1327,20 +1325,19 @@ class Shell(cmd.Cmd):
         print
 
     def describe_columnfamilies(self, ksname):
+        print
         if ksname is None:
             for k in self.get_keyspaces():
                 print 'Keyspace %s' % (k.name,)
-                print '-%s\n' % ('-' * len(k.name))
-                cmd.Cmd.columnize(self, [c.name for c in k.cf_defs])
+                print '-%s' % ('-' * len(k.name))
+                cmd.Cmd.columnize(self, self.get_columnfamily_names(k.name))
                 print
         else:
-            names = self.get_columnfamily_names(ksname)
-            print
-            cmd.Cmd.columnize(self, names)
+            cmd.Cmd.columnize(self, self.get_columnfamily_names(ksname))
             print
 
     def describe_cluster(self):
-        print 'Cluster: %s' % self.get_cluster_name()
+        print '\nCluster: %s' % self.get_cluster_name()
         p = trim_if_present(self.get_partitioner(), 'org.apache.cassandra.dht.')
         print 'Partitioner: %s' % p
         snitch = trim_if_present(self.get_snitch(), 'org.apache.cassandra.locator.')