[jira] [Updated] (CASSANDRA-5996) Remove leveled manifest json migration code
[ https://issues.apache.org/jira/browse/CASSANDRA-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcus Eriksson updated CASSANDRA-5996:
---------------------------------------
    Attachment: 0001-remove-old-json-manifest-migration-code-v2.patch

Yup, that's wrong, v2 attached. And looking at that again, StandaloneScrubber is broken in 2.0; will create a ticket.

> Remove leveled manifest json migration code
> -------------------------------------------
>                  Key: CASSANDRA-5996
>                  URL: https://issues.apache.org/jira/browse/CASSANDRA-5996
>              Project: Cassandra
>           Issue Type: Improvement
>             Reporter: Marcus Eriksson
>             Assignee: Marcus Eriksson
>             Priority: Minor
>              Fix For: 2.1
>          Attachments: 0001-remove-old-json-manifest-migration-code.patch, 0001-remove-old-json-manifest-migration-code-v2.patch
>
> We should remove the json leveled manifest migration code from 2.1. This will require users to at least start 2.0 before upgrading to 2.1 (the manifest is migrated on startup).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-6005) StandaloneScrubber assumes old-style json leveled manifest
Marcus Eriksson created CASSANDRA-6005:
---------------------------------------

             Summary: StandaloneScrubber assumes old-style json leveled manifest
                 Key: CASSANDRA-6005
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6005
             Project: Cassandra
          Issue Type: Bug
            Reporter: Marcus Eriksson
            Assignee: Marcus Eriksson
            Priority: Trivial
             Fix For: 2.0.1
         Attachments: 0001-Make-StandaloneScrubber-handle-new-leveled-manifest.patch

With the standalone scrubber in 2.0 we can encounter both the old-style json manifest and the new format; StandaloneScrubber needs to handle both.
[jira] [Updated] (CASSANDRA-6005) StandaloneScrubber assumes old-style json leveled manifest
[ https://issues.apache.org/jira/browse/CASSANDRA-6005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcus Eriksson updated CASSANDRA-6005:
---------------------------------------
    Attachment: 0001-Make-StandaloneScrubber-handle-new-leveled-manifest.patch

> StandaloneScrubber assumes old-style json leveled manifest
> -----------------------------------------------------------
>                  Key: CASSANDRA-6005
>                  URL: https://issues.apache.org/jira/browse/CASSANDRA-6005
>              Project: Cassandra
>           Issue Type: Bug
>             Reporter: Marcus Eriksson
>             Assignee: Marcus Eriksson
>             Priority: Trivial
>              Fix For: 2.0.1
>          Attachments: 0001-Make-StandaloneScrubber-handle-new-leveled-manifest.patch
>
> With the standalone scrubber in 2.0 we can encounter both the old-style json manifest and the new format; StandaloneScrubber needs to handle both.
[2/2] git commit: Support null in CQL3 functions
Support null in CQL3 functions

patch by slebresne; reviewed by iamaleksey for CASSANDRA-5910

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8bedb572
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8bedb572
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8bedb572

Branch: refs/heads/cassandra-1.2
Commit: 8bedb57207d54c4e88a762221d20d814fc351b1f
Parents: 9b545ca
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Wed Sep 11 08:31:05 2013 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Wed Sep 11 08:31:05 2013 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 .../cassandra/cql3/functions/TimeuuidFcts.java  | 24 ++++++++++++++++----
 .../cassandra/cql3/functions/TokenFct.java      |  7 +-
 3 files changed, 27 insertions(+), 5 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bedb572/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index cfcb364..e420a7b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Pig: handle CQL collections (CASSANDRA-5867)
  * Pass the updated cf to the PRSI index() method (CASSANDRA-5999)
  * Allow empty CQL3 batches (as no-op) (CASSANDRA-5994)
+ * Support null in CQL3 functions (CASSANDRA-5910)


 1.2.9

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bedb572/src/java/org/apache/cassandra/cql3/functions/TimeuuidFcts.java

diff --git a/src/java/org/apache/cassandra/cql3/functions/TimeuuidFcts.java b/src/java/org/apache/cassandra/cql3/functions/TimeuuidFcts.java
index e325e8f..52eca54 100644
--- a/src/java/org/apache/cassandra/cql3/functions/TimeuuidFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/TimeuuidFcts.java
@@ -47,7 +47,11 @@ public abstract class TimeuuidFcts
         public ByteBuffer execute(List<ByteBuffer> parameters)
         {
-            return ByteBuffer.wrap(UUIDGen.decompose(UUIDGen.minTimeUUID(DateType.instance.compose(parameters.get(0)).getTime())));
+            ByteBuffer bb = parameters.get(0);
+            if (bb == null)
+                return null;
+
+            return ByteBuffer.wrap(UUIDGen.decompose(UUIDGen.minTimeUUID(DateType.instance.compose(bb).getTime())));
         }
     };
@@ -55,7 +59,11 @@
         public ByteBuffer execute(List<ByteBuffer> parameters)
         {
-            return ByteBuffer.wrap(UUIDGen.decompose(UUIDGen.maxTimeUUID(DateType.instance.compose(parameters.get(0)).getTime())));
+            ByteBuffer bb = parameters.get(0);
+            if (bb == null)
+                return null;
+
+            return ByteBuffer.wrap(UUIDGen.decompose(UUIDGen.maxTimeUUID(DateType.instance.compose(bb).getTime())));
         }
     };
@@ -63,7 +71,11 @@
         public ByteBuffer execute(List<ByteBuffer> parameters)
         {
-            return DateType.instance.decompose(new Date(UUIDGen.unixTimestamp(UUIDGen.getUUID(parameters.get(0)))));
+            ByteBuffer bb = parameters.get(0);
+            if (bb == null)
+                return null;
+
+            return DateType.instance.decompose(new Date(UUIDGen.unixTimestamp(UUIDGen.getUUID(bb))));
         }
     };
@@ -71,7 +83,11 @@
         public ByteBuffer execute(List<ByteBuffer> parameters)
         {
-            return ByteBufferUtil.bytes(UUIDGen.unixTimestamp(UUIDGen.getUUID(parameters.get(0))));
+            ByteBuffer bb = parameters.get(0);
+            if (bb == null)
+                return null;
+
+            return ByteBufferUtil.bytes(UUIDGen.unixTimestamp(UUIDGen.getUUID(bb)));
         }
     };
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bedb572/src/java/org/apache/cassandra/cql3/functions/TokenFct.java

diff --git a/src/java/org/apache/cassandra/cql3/functions/TokenFct.java b/src/java/org/apache/cassandra/cql3/functions/TokenFct.java
index 21695ca..28da87a 100644
--- a/src/java/org/apache/cassandra/cql3/functions/TokenFct.java
+++ b/src/java/org/apache/cassandra/cql3/functions/TokenFct.java
@@ -62,8 +62,13 @@ public class TokenFct extends AbstractFunction
     public ByteBuffer execute(List<ByteBuffer> parameters) throws InvalidRequestException
     {
         ColumnNameBuilder builder = cfDef.getKeyNameBuilder();
-        for (ByteBuffer bb : parameters)
+        for (int i = 0; i < parameters.size(); i++)
+        {
+            ByteBuffer bb = parameters.get(i);
+            if (bb == null)
+                return null;
             builder.add(bb);
+        }
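The convention this patch introduces — a function evaluates to null when handed a null argument, instead of throwing a NullPointerException — can be sketched standalone as below. This is a simplified illustration, not Cassandra's actual UUIDGen/DateType code; the stand-in body that just reads a long from the buffer is an assumption for demonstration only.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

public class NullPropagation
{
    // Mirrors the patch's pattern: fetch the argument first, and if it is
    // null, make the whole function evaluate to null (CASSANDRA-5910).
    static ByteBuffer unixTimestampOf(List<ByteBuffer> parameters)
    {
        ByteBuffer bb = parameters.get(0);
        if (bb == null)
            return null; // null in -> null out, no NPE

        // Stand-in for UUIDGen.unixTimestamp(UUIDGen.getUUID(bb)):
        // read the first long from the buffer and re-encode it.
        long ts = bb.duplicate().getLong();
        ByteBuffer out = ByteBuffer.allocate(8);
        out.putLong(ts);
        out.flip();
        return out;
    }

    public static void main(String[] args)
    {
        ByteBuffer in = ByteBuffer.allocate(8);
        in.putLong(42L);
        in.flip();
        System.out.println(unixTimestampOf(Arrays.asList(in)).duplicate().getLong()); // prints 42
        System.out.println(unixTimestampOf(Arrays.asList((ByteBuffer) null)));        // prints null
    }
}
```

The TokenFct hunk applies the same rule inside a loop: any null component of a composite key short-circuits the whole token() call to null.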
[1/2] git commit: Allow empty CQL3 batches as no-op
Updated Branches:
  refs/heads/cassandra-1.2 fd129664c -> 8bedb5720

Allow empty CQL3 batches as no-op

patch by slebresne; reviewed by iamaleksey for CASSANDRA-5994

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9b545caa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9b545caa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9b545caa

Branch: refs/heads/cassandra-1.2
Commit: 9b545caab5376e7f6e43aba49888df662a5d6e0d
Parents: fd12966
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Wed Sep 11 08:29:25 2013 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Wed Sep 11 08:29:25 2013 +0200

----------------------------------------------------------------------
 CHANGES.txt                              | 1 +
 src/java/org/apache/cassandra/cql3/Cql.g | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b545caa/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index 12e2017..cfcb364 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -11,6 +11,7 @@
  * Correctly handle limits in CompositesSearcher (CASSANDRA-5975)
  * Pig: handle CQL collections (CASSANDRA-5867)
  * Pass the updated cf to the PRSI index() method (CASSANDRA-5999)
+ * Allow empty CQL3 batches (as no-op) (CASSANDRA-5994)


 1.2.9

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b545caa/src/java/org/apache/cassandra/cql3/Cql.g

diff --git a/src/java/org/apache/cassandra/cql3/Cql.g b/src/java/org/apache/cassandra/cql3/Cql.g
index f59be51..2445bf2 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -386,7 +386,7 @@ batchStatement returns [BatchStatement expr]
     : K_BEGIN
       ( K_UNLOGGED { type = BatchStatement.Type.UNLOGGED; } | K_COUNTER { type = BatchStatement.Type.COUNTER; } )?
       K_BATCH ( usingClause[attrs] )?
-          ( s=batchStatementObjective ';'? { statements.add(s); } )+
+          ( s=batchStatementObjective ';'? { statements.add(s); } )*
       K_APPLY K_BATCH
       {
           return new BatchStatement(type, statements, attrs);
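The one-character grammar change (`)+` to `)*` on the batch objective) is what makes a statement-less batch parse. Illustratively, once a node carries this patch, the following CQL — previously a syntax error — is accepted and executes as a no-op:

```cql
-- Accepted as a no-op with CASSANDRA-5994; rejected by the parser before it:
BEGIN BATCH
APPLY BATCH;
```

This matters mostly for client code that builds batches programmatically and may legitimately end up with zero statements.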
[2/3] git commit: Support null in CQL3 functions
Support null in CQL3 functions

patch by slebresne; reviewed by iamaleksey for CASSANDRA-5910

Branch: refs/heads/cassandra-2.0
Commit: 8bedb57207d54c4e88a762221d20d814fc351b1f
Parents: 9b545ca

(Commit message and diff identical to the cassandra-1.2 copy of 8bedb572 above.)
[1/3] git commit: Allow empty CQL3 batches as no-op
Updated Branches:
  refs/heads/cassandra-2.0 f6fda9c69 -> 2c84b1403

Allow empty CQL3 batches as no-op

patch by slebresne; reviewed by iamaleksey for CASSANDRA-5994

Branch: refs/heads/cassandra-2.0
Commit: 9b545caab5376e7f6e43aba49888df662a5d6e0d
Parents: fd12966

(Commit message and diff identical to the cassandra-1.2 copy of 9b545caa above.)
[3/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
	src/java/org/apache/cassandra/cql3/functions/TimeuuidFcts.java

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2c84b140
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2c84b140
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2c84b140

Branch: refs/heads/cassandra-2.0
Commit: 2c84b1403f9bfdaf563f62c6d1751f80627e9915
Parents: f6fda9c 8bedb57
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Wed Sep 11 08:35:24 2013 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Wed Sep 11 08:35:24 2013 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |  2 ++
 src/java/org/apache/cassandra/cql3/Cql.g        |  2 +-
 .../cassandra/cql3/functions/TimeuuidFcts.java  | 24 ++++++++++++++++----
 .../cassandra/cql3/functions/TokenFct.java      |  7 +-
 4 files changed, 29 insertions(+), 6 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c84b140/CHANGES.txt

diff --cc CHANGES.txt
index 808c558,e420a7b..68829d8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -27,54 -11,15 +27,56 @@@ Merged from 1.2
   * Correctly handle limits in CompositesSearcher (CASSANDRA-5975)
   * Pig: handle CQL collections (CASSANDRA-5867)
   * Pass the updated cf to the PRSI index() method (CASSANDRA-5999)
 + * Allow empty CQL3 batches (as no-op) (CASSANDRA-5994)
 + * Support null in CQL3 functions (CASSANDRA-5910)

 -1.2.9
 +2.0.0
 + * Fix thrift validation when inserting into CQL3 tables (CASSANDRA-5138)
 + * Fix periodic memtable flushing behavior with clean memtables (CASSANDRA-5931)
 + * Fix dateOf() function for pre-2.0 timestamp columns (CASSANDRA-5928)
 + * Fix SSTable unintentionally loads BF when opened for batch (CASSANDRA-5938)
 + * Add stream session progress to JMX (CASSANDRA-4757)
 + * Fix NPE during CAS operation (CASSANDRA-5925)
 +Merged from 1.2:
   * Fix getBloomFilterDiskSpaceUsed for AlwaysPresentFilter (CASSANDRA-5900)
 - * migrate 1.1 schema_columnfamilies.key_alias column to key_aliases
 -   (CASSANDRA-5800)
 - * add --migrate option to sstableupgrade and sstablescrub (CASSANDRA-5831)
 + * Don't announce schema version until we've loaded the changes locally
 +   (CASSANDRA-5904)
 + * Fix to support off heap bloom filters size greater than 2 GB (CASSANDRA-5903)
 + * Properly handle parsing huge map and set literals (CASSANDRA-5893)


 +2.0.0-rc2
 + * enable vnodes by default (CASSANDRA-5869)
 + * fix CAS contention timeout (CASSANDRA-5830)
 + * fix HsHa to respect max frame size (CASSANDRA-4573)
 + * Fix (some) 2i on composite components omissions (CASSANDRA-5851)
 + * cqlsh: add DESCRIBE FULL SCHEMA variant (CASSANDRA-5880)
 +Merged from 1.2:
 + * Correctly validate sparse composite cells in scrub (CASSANDRA-5855)
 + * Add KeyCacheHitRate metric to CF metrics (CASSANDRA-5868)
 + * cqlsh: add support for multiline comments (CASSANDRA-5798)
 + * Handle CQL3 SELECT duplicate IN restrictions on clustering columns
 +   (CASSANDRA-5856)


 +2.0.0-rc1
 + * improve DecimalSerializer performance (CASSANDRA-5837)
 + * fix potential spurious wakeup in AsyncOneResponse (CASSANDRA-5690)
 + * fix schema-related trigger issues (CASSANDRA-5774)
 + * Better validation when accessing CQL3 table from thrift (CASSANDRA-5138)
 + * Fix assertion error during repair (CASSANDRA-5801)
 + * Fix range tombstone bug (CASSANDRA-5805)
 + * DC-local CAS (CASSANDRA-5797)
 + * Add a native_protocol_version column to the system.local table (CASSANRDA-5819)
 + * Use index_interval from cassandra.yaml when upgraded (CASSANDRA-5822)
 + * Fix buffer underflow on socket close (CASSANDRA-5792)
 +Merged from 1.2:
 + * Fix reading DeletionTime from 1.1-format sstables (CASSANDRA-5814)
 + * cqlsh: add collections support to COPY (CASSANDRA-5698)
 + * retry important messages for any IOException (CASSANDRA-5804)
 + * Allow empty IN relations in SELECT/UPDATE/DELETE statements (CASSANDRA-5626)
 + * cqlsh: fix crashing on Windows due to libedit detection (CASSANDRA-5812)
   * fix bulk-loading compressed sstables (CASSANDRA-5820)
   * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c84b140/src/java/org/apache/cassandra/cql3/Cql.g

diff --cc src/java/org/apache/cassandra/cql3/Cql.g
index 0e1be34,2445bf2..61bf3c8
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@@ -419,10 -386,10 +419,10 @@@ batchStatement returns [BatchStatement
     : K_BEGIN
       ( K_UNLOGGED { type = BatchStatement.Type.UNLOGGED; } | K_COUNTER { type =
[3/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
	src/java/org/apache/cassandra/cql3/functions/TimeuuidFcts.java

Branch: refs/heads/trunk
Commit: 2c84b1403f9bfdaf563f62c6d1751f80627e9915
Parents: f6fda9c 8bedb57

(Commit message and diff identical to the cassandra-2.0 copy of 2c84b140 above.)
[4/4] git commit: Merge branch 'cassandra-2.0' into trunk
Merge branch 'cassandra-2.0' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/972184bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/972184bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/972184bf

Branch: refs/heads/trunk
Commit: 972184bf750c0340f6ca2a13ee39347d173b5e79
Parents: 0132b71 2c84b14
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Wed Sep 11 08:36:21 2013 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Wed Sep 11 08:36:21 2013 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |  2 ++
 src/java/org/apache/cassandra/cql3/Cql.g        |  2 +-
 .../cassandra/cql3/functions/TimeuuidFcts.java  | 24 ++++++++++++++++----
 .../cassandra/cql3/functions/TokenFct.java      |  7 +-
 4 files changed, 29 insertions(+), 6 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/972184bf/CHANGES.txt
[1/4] git commit: Allow empty CQL3 batches as no-op
Updated Branches:
  refs/heads/trunk 0132b71a0 -> 972184bf7

Allow empty CQL3 batches as no-op

patch by slebresne; reviewed by iamaleksey for CASSANDRA-5994

Branch: refs/heads/trunk
Commit: 9b545caab5376e7f6e43aba49888df662a5d6e0d
Parents: fd12966

(Commit message and diff identical to the cassandra-1.2 copy of 9b545caa above.)
[2/4] git commit: Support null in CQL3 functions
Support null in CQL3 functions

patch by slebresne; reviewed by iamaleksey for CASSANDRA-5910

Branch: refs/heads/trunk
Commit: 8bedb57207d54c4e88a762221d20d814fc351b1f
Parents: 9b545ca

(Commit message and diff identical to the cassandra-1.2 copy of 8bedb572 above.)
[jira] [Commented] (CASSANDRA-5986) Cassandra hangs while reading saved cache
[ https://issues.apache.org/jira/browse/CASSANDRA-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13764066#comment-13764066 ]

cuser commented on CASSANDRA-5986:
----------------------------------

After 13 hours of cache loading, Cassandra failed with java.lang.OutOfMemoryError: Java heap space, even though MAX_HEAP_SIZE is set to 25G. We can't increase the heap size; it's already much more than recommended.

If we remove the cache files, Cassandra starts successfully, but that is NOT a solution. A production cluster MUST start with a warm key cache.

Any ideas? Workarounds?

> Cassandra hangs while reading saved cache
> -----------------------------------------
>                  Key: CASSANDRA-5986
>                  URL: https://issues.apache.org/jira/browse/CASSANDRA-5986
>              Project: Cassandra
>           Issue Type: Bug
>           Components: Core
>             Reporter: cuser
>              Fix For: 2.0
>
> We've got a cluster ~6Tb in size running on 5 nodes with vnodes (256) enabled. After some cache heat-up, one of the nodes was restarted and Cassandra just hangs with no error. Last messages:
> cassandra.log:
> ...
> INFO 11:12:27,649 Opening /cassandra/data/usertable/data/usertable-data-ja-1360 (6273771 bytes)
> INFO 11:12:27,649 Opening /cassandra/data/usertable/data/usertable-data-ic-1339 (522628274110 bytes)
> INFO 11:12:49,224 reading saved cache /cassandra/saved_caches/usertable-data-KeyCache-b.db
> system.log:
> ...
> INFO [SSTableBatchOpen:6] 2013-09-09 11:12:27,649 SSTableReader.java (line 213) Opening /cassandra/data/usertable/data/usertable-data-ja-1360 (6273771 bytes)
> INFO [SSTableBatchOpen:7] 2013-09-09 11:12:27,649 SSTableReader.java (line 213) Opening /cassandra/data/usertable/data/usertable-data-ic-1339 (522628274110 bytes)
> INFO [CompactionExecutor:1] 2013-09-09 11:12:49,224 AutoSavingCache.java (line 142) reading saved cache /cassandra/saved_caches/usertable-data-KeyCache-b.db
> usertable-data-KeyCache-b.db is about 3.5G in size. All other nodes restart successfully.
[jira] [Commented] (CASSANDRA-5995) Cassandra-shuffle causes NumberFormatException
[ https://issues.apache.org/jira/browse/CASSANDRA-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764095#comment-13764095 ] William Montaz commented on CASSANDRA-5995:
---
It seems that the same method is involved in that issue. In this particular case, I had no problem on startup; it is really the shuffle operation that led to the exception.

Cassandra-shuffle causes NumberFormatException
--
Key: CASSANDRA-5995 URL: https://issues.apache.org/jira/browse/CASSANDRA-5995 Project: Cassandra Issue Type: Bug Components: Tools Environment: Amazon EC2 Reporter: William Montaz

Using Cassandra-shuffle create, then Cassandra-shuffle en, causes a NumberFormatException. Extract from output.log:

INFO 15:01:28,935 Enabling scheduled transfers of token ranges
INFO 15:01:28,957 Initiating transfer of 3059156119944164299 (scheduled at Tue Sep 10 15:01:19 UTC 2013)
WARN 15:01:28,962 Token 3059156119944164299 changing ownership from /10.36.194.173 to /10.39.67.29
WARN 15:01:28,967 Token 3059156119944164299 changing ownership from /10.36.194.173 to /10.39.67.29
INFO 15:01:28,968 RELOCATING: relocating [3059156119944164299] to 10.39.67.29
INFO 15:01:28,968 RELOCATING: Sleeping 3 ms before start streaming/fetching ranges
ERROR 15:01:29,331 Exception in thread Thread[GossipStage:8,5,main]
java.lang.NumberFormatException: For input string: 
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:453)
at java.lang.Long.valueOf(Long.java:540)
at org.apache.cassandra.dht.Murmur3Partitioner$1.fromString(Murmur3Partitioner.java:183)
at org.apache.cassandra.service.StorageService.handleStateRelocating(StorageService.java:1490)
at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1180)
at org.apache.cassandra.gms.Gossiper.doNotifications(Gossiper.java:956)
at org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:947)
at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:905)
at org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:57)
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
ERROR 15:01:30,098 Exception in thread Thread[GossipStage:9,5,main]
java.lang.NumberFormatException: For input string: 
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:453)
at java.lang.Long.valueOf(Long.java:540)
at org.apache.cassandra.dht.Murmur3Partitioner$1.fromString(Murmur3Partitioner.java:183)
at org.apache.cassandra.service.StorageService.handleStateRelocating(StorageService.java:1490)
at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1180)
at org.apache.cassandra.gms.Gossiper.doNotifications(Gossiper.java:956)
at org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:947)
at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:905)
at org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:49)
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
[jira] [Commented] (CASSANDRA-5998) Filters
[ https://issues.apache.org/jira/browse/CASSANDRA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764161#comment-13764161 ] Sylvain Lebresne commented on CASSANDRA-5998:
-
I think that's a duplicate of CASSANDRA-4914 (even though the title of that latter ticket talks only about aggregates, the clear goal is that this is a generic mechanism).

Filters
---
Key: CASSANDRA-5998 URL: https://issues.apache.org/jira/browse/CASSANDRA-5998 Project: Cassandra Issue Type: New Feature Reporter: Matt Stump Original Estimate: 120h Remaining Estimate: 120h

As a counterpart to the functionality provided by triggers, it is desirable to filter/mutate query results using user-specified classes. Some example use cases include:
* Row- or cell-level permissions
* Ad-hoc aggregation or filtering of column values

Proposed changes include the following:
* Alter CF metadata to allow the specification of filter classes and options.
* Modify the CQL(3) grammar and add create/drop filter statements to allow the specification of filter classes.
* Move cassandra/triggers/CustomClassLoader.java to cassandra/CustomClassLoader.java and use the class loader for both triggers and filters.
* Add the following classes: FilterDefinition, FilterExecuter, and IFilter.
* Modify StorageProxy::fetchRows such that, if the CF has a filter specified, rows returned by the result set are first run through the filter, which has the authority to either drop or modify rows in the result set. If no filter is specified, no filter is invoked.
* To enable the row/cell-level authorization use case, StorageProxy::fetchRows must also be modified to require the ClientState.

Risks: My main concern with the current proposal is that the dependency on ClientState to satisfy the cell/row-level permissions use case will introduce tight coupling between StorageProxy and ClientState. Another possibility, yet to be investigated, is to pass ClientState to the filter when the filter class is instantiated.
[jira] [Commented] (CASSANDRA-5986) Cassandra hangs while reading saved cache
[ https://issues.apache.org/jira/browse/CASSANDRA-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764229#comment-13764229 ] Chris Burroughs commented on CASSANDRA-5986:
You can control how many keys are saved to disk. So if you have $HUGE_NUMBER of keys, you could choose to save only 10% * $HUGE_NUMBER, to balance how warm the cache is against how long it takes to warm. However, if you are OOMing on cache load you would probably OOM during normal operations anyway. As jbellis said, you will have better luck starting small (512 MiB) and increasing Cassandra's integrated caching only as it proves more effective than memory spent on the OS page cache.
To clarify this ticket: it was originally about Cassandra hanging, but based on your follow-up comments it sounds like it was just taking a (very) long time to load the cache. Is that correct? During cache load the JMX metrics are still updated, so you can see how many keys have been loaded and also calculate the rate.
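Chris's first suggestion (persisting only a fraction of the keys) corresponds to a cassandra.yaml knob. A minimal sketch, assuming key_cache_keys_to_save is available in your Cassandra version; it edits a throwaway copy of the file here, so point CONF at your real cassandra.yaml in practice:

```shell
# Sketch: cap how many key-cache entries Cassandra writes to the saved-caches
# directory, so a restart only has a bounded amount to reload.
# We use a temp file as a stand-in for /etc/cassandra/cassandra.yaml.
CONF="$(mktemp)"
echo 'key_cache_save_period: 14400' > "$CONF"   # pre-existing content (example)

# Append the cap if it is not already set; the 1000000 figure is an
# illustrative assumption -- tune it to your key count and load-time budget.
grep -q '^key_cache_keys_to_save:' "$CONF" || \
  echo 'key_cache_keys_to_save: 1000000' >> "$CONF"

grep '^key_cache_keys_to_save:' "$CONF"
```

A restart after this change would reload at most the configured number of keys, trading some cache warmth for a bounded startup time.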
[jira] [Commented] (CASSANDRA-5986) Cassandra hangs while reading saved cache
[ https://issues.apache.org/jira/browse/CASSANDRA-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764254#comment-13764254 ] cuser commented on CASSANDRA-5986:
--
Our production environment is time-critical, so we have to load all the available keys (on each node) into the cache; otherwise read latencies are not acceptable. Moreover, we've written a small tool that heats up all the nodes and loads 100% of keys into the cache. (It would be nice to have such a tool in the default Cassandra package.) So this is the only way. The key cache is very efficient and eliminates additional disk seeks per operation; we've done a bunch of tests that confirm it. We have no problems at all during normal operation with such a large cache size. A node stores gigabytes of cache and operates very fast. The problem occurs on node restart only. Can you give us some more advice/suggestions, or even fix some code?
As for the ticket name, I thought it really hung, and only noticed the OOM when it occurred, 13 hours later.
"it sounds like it was just taking a (very) long time to load the cache. Is that correct?" So, yes, that's correct. Should I rename the ticket?
[jira] [Created] (CASSANDRA-6006) Value of JVM_OPTS is partially lost when enabling JEMallocAllocator in cassandra-env.sh
Nikolai Grigoriev created CASSANDRA-6006:
Summary: Value of JVM_OPTS is partially lost when enabling JEMallocAllocator in cassandra-env.sh
Key: CASSANDRA-6006 URL: https://issues.apache.org/jira/browse/CASSANDRA-6006 Project: Cassandra Issue Type: Bug Components: Config Environment: Linux, cassandra 2.0.0 Reporter: Nikolai Grigoriev Priority: Minor

In conf/cassandra-env.sh I see this:
{code}
# Configure the following for JEMallocAllocator and if jemalloc is not available in the system
# library path (Example: /usr/local/lib/). Usually make install will do the right thing.
# export LD_LIBRARY_PATH=JEMALLOC_HOME/lib/
# JVM_OPTS=-Djava.library.path=JEMALLOC_HOME/lib/
{code}
When I enabled JEMalloc, I noticed that Cassandra complained about the JAMM agent not being configured. Then I realized that a bunch of JVM settings, like the heap size, were not being passed to the JVM. This is because the new argument replaces the previous value of JVM_OPTS instead of being appended to it. Here is the diff:
{code}
*** cassandra-env.sh.orig 2013-08-28 13:07:53.0 +
--- cassandra-env.sh 2013-09-11 13:25:12.904640141 +
***
*** 227,233
# Configure the following for JEMallocAllocator and if jemalloc is not available in the system
# library path (Example: /usr/local/lib/). Usually make install will do the right thing.
# export LD_LIBRARY_PATH=JEMALLOC_HOME/lib/
! # JVM_OPTS=-Djava.library.path=JEMALLOC_HOME/lib/
# uncomment to have Cassandra JVM listen for remote debuggers/profilers on port 1414
# JVM_OPTS=$JVM_OPTS -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1414
--- 227,233
# Configure the following for JEMallocAllocator and if jemalloc is not available in the system
# library path (Example: /usr/local/lib/). Usually make install will do the right thing.
# export LD_LIBRARY_PATH=JEMALLOC_HOME/lib/
! # JVM_OPTS=$JVM_OPTS -Djava.library.path=JEMALLOC_HOME/lib/
# uncomment to have Cassandra JVM listen for remote debuggers/profilers on port 1414
# JVM_OPTS=$JVM_OPTS -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1414
{code}
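The shadowing described in the ticket is easy to demonstrate in isolation. A minimal sketch (the option values below are made up for illustration, not the real cassandra-env.sh contents):

```shell
# Options accumulated earlier in cassandra-env.sh (illustrative values).
JVM_OPTS="-Xms8G -Xmx8G -javaagent:lib/jamm.jar"

# Shipped (buggy) form: a plain assignment discards everything accumulated
# so far -- the heap flags and the JAMM agent flag are lost.
BUGGY_OPTS="-Djava.library.path=JEMALLOC_HOME/lib/"

# Fixed form from the patch: prepend the existing value so nothing is lost.
FIXED_OPTS="$JVM_OPTS -Djava.library.path=JEMALLOC_HOME/lib/"

echo "buggy: $BUGGY_OPTS"
echo "fixed: $FIXED_OPTS"
```

This is why the one-character class of fix, `JVM_OPTS="$JVM_OPTS ..."`, is the idiom used everywhere else in the same file.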
[jira] [Commented] (CASSANDRA-5998) Filters
[ https://issues.apache.org/jira/browse/CASSANDRA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764283#comment-13764283 ] Matt Stump commented on CASSANDRA-5998:
---
Yes, CASSANDRA-4914 is a specific use case of the more general mechanism outlined by this ticket. But this ticket doesn't go as far as to provide the map/reduce functionality proposed in CASSANDRA-5184. What would be necessary is the addition of union/reduce and the execution of user-defined functions specified at a per-query level. The functionality envisioned by this ticket would allow the user to specify one or more classes through which all results are processed at the CF level. Nothing, however, prevents the functionality outlined in CASSANDRA-5184 from being added in the future; it could be the next logical step in extensibility. I've already begun work on the ticket and expect to have a first draft ready for review within a week.
[jira] [Commented] (CASSANDRA-5982) OutOfMemoryError when writing text blobs to a very large numbers of tables
[ https://issues.apache.org/jira/browse/CASSANDRA-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764350#comment-13764350 ] Ryan McGuire commented on CASSANDRA-5982:
-
This seems to be the solution:
* Use [~jbellis]' [cfs10k patch|https://github.com/jbellis/cassandra/tree/cfs10k]
* Set concurrent_writes: 8 and memtable_total_space_in_mb: 1024 in the yaml.
* Requires an 8GB heap.
With these additional settings, I no longer see any OOM errors on any EC2 instance I've tested (m1.xlarge, m2.2xlarge, hs1.8xlarge).

OutOfMemoryError when writing text blobs to a very large numbers of tables
--
Key: CASSANDRA-5982 URL: https://issues.apache.org/jira/browse/CASSANDRA-5982 Project: Cassandra Issue Type: Bug Reporter: Ryan McGuire Attachments: 2000CF_memtable_mem_usage.png, system.log.gz

This test goes outside the norm for Cassandra, creating ~2000 column families and writing large text blobs to them. The process goes like this:
Bring up a 6-node m2.2xlarge cluster on EC2. This instance type has enough memory (34.2GB) that Cassandra will allocate a full 8GB heap without tuning cassandra-env.sh. However, this instance type only has a single drive, so data and commitlog are commingled. (This test has also been run on m1.xlarge instances, which have four drives but less memory, and has exhibited similar results when assigning one drive to the commitlog and three to datafile_directories.)
Use the 'memtable_allocator: HeapAllocator' setting from CASSANDRA-5935.
Create 2000 CFs:
{code}
CREATE KEYSPACE cf_stress WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
CREATE COLUMNFAMILY cf_stress.tbl_0 (id timeuuid PRIMARY KEY, val1 text, val2 text, val3 text ) ;
# repeat for tbl_1, tbl_2 ... tbl_02000
{code}
This process of creating tables takes a long time, about 5 hours, but anyone wanting to create that many tables presumably only needs to do it once, so this may be acceptable.
Write data: the test dataset consists of writing 100K, 1M, and 10M documents to these tables:
{code}
INSERT INTO {table_name} (id, val1, val2, val3) VALUES (?, ?, ?, ?)
{code}
With 5 threads doing these inserts across the cluster, indefinitely, randomly choosing a table number 1-2000, the cluster eventually topples over with 'OutOfMemoryError: Java heap space'. A heap dump analysis indicates that it's mostly memtables: !2000CF_memtable_mem_usage.png!
Best current theory is that this is commitlog-bound and the memtables cannot flush fast enough due to locking issues. But I'll let [~jbellis] comment more on that.
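The "# repeat for tbl_1, tbl_2 ..." step in the schema above can be scripted rather than typed by hand. A hedged sketch (the table count and naming follow the ticket; the idea of piping the output into cqlsh is an assumption, not part of the original test harness):

```shell
# Hypothetical generator for the repeated CREATE COLUMNFAMILY statements.
# Usage sketch: gen_tables | cqlsh <host>
gen_tables() {
  for i in $(seq 0 1999); do
    printf 'CREATE COLUMNFAMILY cf_stress.tbl_%d (id timeuuid PRIMARY KEY, val1 text, val2 text, val3 text);\n' "$i"
  done
}

gen_tables | head -2    # preview the first two statements
gen_tables | wc -l      # total number of statements emitted
```

Note that generating the statements is instant; the ~5 hours reported in the ticket is spent by the cluster agreeing on each schema change, which scripting does not avoid.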
[jira] [Updated] (CASSANDRA-5982) OutOfMemoryError when writing text blobs to a very large number of tables
[ https://issues.apache.org/jira/browse/CASSANDRA-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5982:
Summary: OutOfMemoryError when writing text blobs to a very large number of tables (was: OutOfMemoryError when writing text blobs to a very large numbers of tables)
[jira] [Resolved] (CASSANDRA-6006) Value of JVM_OPTS is partially lost when enabling JEMallocAllocator in cassandra-env.sh
[ https://issues.apache.org/jira/browse/CASSANDRA-6006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams resolved CASSANDRA-6006.
-
Resolution: Fixed
Fix Version/s: 2.0.1
Thanks, done in 1ae996d38259ad6d18fef7344b745eba8af56a4d
git commit: fix missing imports
Updated Branches: refs/heads/trunk 4677e940f -> 4f119341e

fix missing imports

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4f119341
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4f119341
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4f119341
Branch: refs/heads/trunk
Commit: 4f119341eabcbe276d4409e8ec314ea337f10822
Parents: 4677e94
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Sep 11 10:34:20 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Sep 11 10:34:20 2013 -0500
--
.../cassandra/hadoop/pig/CassandraStorage.java | 17 +
1 file changed, 1 insertion(+), 16 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f119341/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index 577fd38..645f723 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@ -27,22 +27,7 @@ import org.apache.cassandra.db.Column;
 import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.hadoop.*;
-import org.apache.cassandra.thrift.Cassandra;
-import org.apache.cassandra.thrift.CfDef;
-import org.apache.cassandra.thrift.ColumnDef;
-import org.apache.cassandra.thrift.ColumnOrSuperColumn;
-import org.apache.cassandra.thrift.Deletion;
-import org.apache.cassandra.thrift.IndexClause;
-import org.apache.cassandra.thrift.IndexExpression;
-import org.apache.cassandra.thrift.IndexOperator;
-import org.apache.cassandra.thrift.InvalidRequestException;
-import org.apache.cassandra.thrift.Mutation;
-import org.apache.cassandra.thrift.SchemaDisagreementException;
-import org.apache.cassandra.thrift.SlicePredicate;
-import org.apache.cassandra.thrift.SliceRange; -import org.apache.cassandra.thrift.SuperColumn; -import org.apache.cassandra.thrift.TimedOutException; -import org.apache.cassandra.thrift.UnavailableException; +import org.apache.cassandra.thrift.*; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; import org.apache.cassandra.utils.Hex;
[9/9] git commit: Merge branch 'cassandra-2.0' into trunk
Merge branch 'cassandra-2.0' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4677e940
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4677e940
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4677e940
Branch: refs/heads/trunk
Commit: 4677e940f18d677b6bc6253ab906a3d09c9aa681
Parents: fc0cc0e 7f117da
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Sep 11 10:18:51 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Sep 11 10:18:51 2013 -0500
--
.../hadoop/pig/AbstractCassandraStorage.java| 170 +++
.../cassandra/hadoop/pig/CassandraStorage.java | 8 +-
.../apache/cassandra/hadoop/pig/CqlStorage.java | 10 +-
3 files changed, 147 insertions(+), 41 deletions(-)
--
[5/9] git commit: Support thrift tables in Pig CqlStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-5847
Support thrift tables in Pig CqlStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-5847 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5618e36 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5618e36 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5618e36 Branch: refs/heads/trunk Commit: f5618e36dcec78c0fb791327defad14b4488b235 Parents: 8bedb57 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed Sep 11 10:16:19 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed Sep 11 10:16:19 2013 -0500 -- .../hadoop/pig/AbstractCassandraStorage.java| 182 ++- .../cassandra/hadoop/pig/CassandraStorage.java | 8 +- .../apache/cassandra/hadoop/pig/CqlStorage.java | 10 +- 3 files changed, 147 insertions(+), 53 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5618e36/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java -- diff --git a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java index 03805d2..68e18c8 100644 --- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java @@ -29,6 +29,9 @@ import java.util.*; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.exceptions.SyntaxException; import org.apache.cassandra.auth.IAuthenticator; +import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.cql3.CFDefinition; +import org.apache.cassandra.cql3.ColumnIdentifier; import org.apache.cassandra.db.Column; import org.apache.cassandra.db.IColumn; import org.apache.cassandra.db.marshal.*; @@ -205,6 +208,8 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store try { validator = TypeParser.parse(cd.getValidation_class()); 
+if (validator instanceof CounterColumnType) +validator = LongType.instance; validators.put(cd.name, validator); } catch (ConfigurationException e) @@ -515,27 +520,7 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store column_family, keyspace)); } -catch (TException e) -{ -throw new RuntimeException(e); -} -catch (InvalidRequestException e) -{ -throw new RuntimeException(e); -} -catch (IOException e) -{ -throw new RuntimeException(e); -} -catch (UnavailableException e) -{ -throw new RuntimeException(e); -} -catch (TimedOutException e) -{ -throw new RuntimeException(e); -} -catch (SchemaDisagreementException e) +catch (Exception e) { throw new RuntimeException(e); } @@ -582,15 +567,19 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store TimedOutException, SchemaDisagreementException, TException, - CharacterCodingException + CharacterCodingException, + NotFoundException, + org.apache.cassandra.exceptions.InvalidRequestException, + ConfigurationException { // get CF meta data -String query = SELECT type, + +String query = SELECT type, + comparator, + subcomparator, + - default_validator, + + default_validator, + key_validator, + - key_aliases + + key_aliases, + + key_alias + FROM system.schema_columnfamilies + WHERE keyspace_name = '%s' + AND columnfamily_name = '%s' ; @@ -624,10 +613,27 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store { String keyAliases = ByteBufferUtil.string(cqlRow.columns.get(5).value); keys = FBUtilities.fromJsonList(keyAliases); +
[6/9] git commit: Support thrift tables in Pig CqlStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-5847
Support thrift tables in Pig CqlStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-5847 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5618e36 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5618e36 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5618e36 Branch: refs/heads/cassandra-1.2 Commit: f5618e36dcec78c0fb791327defad14b4488b235 Parents: 8bedb57 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed Sep 11 10:16:19 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed Sep 11 10:16:19 2013 -0500 -- .../hadoop/pig/AbstractCassandraStorage.java| 182 ++- .../cassandra/hadoop/pig/CassandraStorage.java | 8 +- .../apache/cassandra/hadoop/pig/CqlStorage.java | 10 +- 3 files changed, 147 insertions(+), 53 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5618e36/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java -- diff --git a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java index 03805d2..68e18c8 100644 --- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java @@ -29,6 +29,9 @@ import java.util.*; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.exceptions.SyntaxException; import org.apache.cassandra.auth.IAuthenticator; +import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.cql3.CFDefinition; +import org.apache.cassandra.cql3.ColumnIdentifier; import org.apache.cassandra.db.Column; import org.apache.cassandra.db.IColumn; import org.apache.cassandra.db.marshal.*; @@ -205,6 +208,8 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store try { validator = 
TypeParser.parse(cd.getValidation_class()); +if (validator instanceof CounterColumnType) +validator = LongType.instance; validators.put(cd.name, validator); } catch (ConfigurationException e) @@ -515,27 +520,7 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store column_family, keyspace)); } -catch (TException e) -{ -throw new RuntimeException(e); -} -catch (InvalidRequestException e) -{ -throw new RuntimeException(e); -} -catch (IOException e) -{ -throw new RuntimeException(e); -} -catch (UnavailableException e) -{ -throw new RuntimeException(e); -} -catch (TimedOutException e) -{ -throw new RuntimeException(e); -} -catch (SchemaDisagreementException e) +catch (Exception e) { throw new RuntimeException(e); } @@ -582,15 +567,19 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store TimedOutException, SchemaDisagreementException, TException, - CharacterCodingException + CharacterCodingException, + NotFoundException, + org.apache.cassandra.exceptions.InvalidRequestException, + ConfigurationException { // get CF meta data -String query = SELECT type, + +String query = SELECT type, + comparator, + subcomparator, + - default_validator, + + default_validator, + key_validator, + - key_aliases + + key_aliases, + + key_alias + FROM system.schema_columnfamilies + WHERE keyspace_name = '%s' + AND columnfamily_name = '%s' ; @@ -624,10 +613,27 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store { String keyAliases = ByteBufferUtil.string(cqlRow.columns.get(5).value); keys = FBUtilities.fromJsonList(keyAliases); +
[2/9] git commit: preserve jvm_opts w/jemalloc
preserve jvm_opts w/jemalloc Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1ae996d3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1ae996d3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1ae996d3 Branch: refs/heads/trunk Commit: 1ae996d38259ad6d18fef7344b745eba8af56a4d Parents: 2c84b14 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed Sep 11 09:36:09 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed Sep 11 09:36:09 2013 -0500 -- conf/cassandra-env.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ae996d3/conf/cassandra-env.sh -- diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh index 12cef7e..1ba1a46 100644 --- a/conf/cassandra-env.sh +++ b/conf/cassandra-env.sh @@ -222,7 +222,7 @@ fi # Configure the following for JEMallocAllocator and if jemalloc is not available in the system # library path (Example: /usr/local/lib/). Usually make install will do the right thing. # export LD_LIBRARY_PATH=JEMALLOC_HOME/lib/ -# JVM_OPTS=-Djava.library.path=JEMALLOC_HOME/lib/ +# JVM_OPTS=$JVM_OPTS -Djava.library.path=JEMALLOC_HOME/lib/ # uncomment to have Cassandra JVM listen for remote debuggers/profilers on port 1414 # JVM_OPTS=$JVM_OPTS -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1414
[4/9] git commit: Support thrift tables in Pig CqlStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-5847
Support thrift tables in Pig CqlStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-5847 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5618e36 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5618e36 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5618e36 Branch: refs/heads/cassandra-2.0 Commit: f5618e36dcec78c0fb791327defad14b4488b235 Parents: 8bedb57 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed Sep 11 10:16:19 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed Sep 11 10:16:19 2013 -0500 -- .../hadoop/pig/AbstractCassandraStorage.java| 182 ++- .../cassandra/hadoop/pig/CassandraStorage.java | 8 +- .../apache/cassandra/hadoop/pig/CqlStorage.java | 10 +- 3 files changed, 147 insertions(+), 53 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5618e36/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java -- diff --git a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java index 03805d2..68e18c8 100644 --- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java @@ -29,6 +29,9 @@ import java.util.*; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.exceptions.SyntaxException; import org.apache.cassandra.auth.IAuthenticator; +import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.cql3.CFDefinition; +import org.apache.cassandra.cql3.ColumnIdentifier; import org.apache.cassandra.db.Column; import org.apache.cassandra.db.IColumn; import org.apache.cassandra.db.marshal.*; @@ -205,6 +208,8 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store try { validator = 
TypeParser.parse(cd.getValidation_class()); +if (validator instanceof CounterColumnType) +validator = LongType.instance; validators.put(cd.name, validator); } catch (ConfigurationException e) @@ -515,27 +520,7 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store column_family, keyspace)); } -catch (TException e) -{ -throw new RuntimeException(e); -} -catch (InvalidRequestException e) -{ -throw new RuntimeException(e); -} -catch (IOException e) -{ -throw new RuntimeException(e); -} -catch (UnavailableException e) -{ -throw new RuntimeException(e); -} -catch (TimedOutException e) -{ -throw new RuntimeException(e); -} -catch (SchemaDisagreementException e) +catch (Exception e) { throw new RuntimeException(e); } @@ -582,15 +567,19 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store TimedOutException, SchemaDisagreementException, TException, - CharacterCodingException + CharacterCodingException, + NotFoundException, + org.apache.cassandra.exceptions.InvalidRequestException, + ConfigurationException { // get CF meta data -String query = SELECT type, + +String query = SELECT type, + comparator, + subcomparator, + - default_validator, + + default_validator, + key_validator, + - key_aliases + + key_aliases, + + key_alias + FROM system.schema_columnfamilies + WHERE keyspace_name = '%s' + AND columnfamily_name = '%s' ; @@ -624,10 +613,27 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store { String keyAliases = ByteBufferUtil.string(cqlRow.columns.get(5).value); keys = FBUtilities.fromJsonList(keyAliases); +
[1/9] git commit: preserve jvm_opts w/jemalloc
Updated Branches: refs/heads/cassandra-1.2 8bedb5720 - f5618e36d refs/heads/cassandra-2.0 2c84b1403 - 7f117da0c refs/heads/trunk 972184bf7 - 4677e940f preserve jvm_opts w/jemalloc Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1ae996d3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1ae996d3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1ae996d3 Branch: refs/heads/cassandra-2.0 Commit: 1ae996d38259ad6d18fef7344b745eba8af56a4d Parents: 2c84b14 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed Sep 11 09:36:09 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed Sep 11 09:36:09 2013 -0500 -- conf/cassandra-env.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ae996d3/conf/cassandra-env.sh -- diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh index 12cef7e..1ba1a46 100644 --- a/conf/cassandra-env.sh +++ b/conf/cassandra-env.sh @@ -222,7 +222,7 @@ fi # Configure the following for JEMallocAllocator and if jemalloc is not available in the system # library path (Example: /usr/local/lib/). Usually make install will do the right thing. # export LD_LIBRARY_PATH=JEMALLOC_HOME/lib/ -# JVM_OPTS=-Djava.library.path=JEMALLOC_HOME/lib/ +# JVM_OPTS=$JVM_OPTS -Djava.library.path=JEMALLOC_HOME/lib/ # uncomment to have Cassandra JVM listen for remote debuggers/profilers on port 1414 # JVM_OPTS=$JVM_OPTS -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1414
[8/9] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0 Conflicts: src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f117da0 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f117da0 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f117da0 Branch: refs/heads/cassandra-2.0 Commit: 7f117da0caf66715a82417b3f7e3a2b30d0f279e Parents: 1ae996d f5618e3 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed Sep 11 10:18:40 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed Sep 11 10:18:40 2013 -0500 -- .../hadoop/pig/AbstractCassandraStorage.java| 170 +++ .../cassandra/hadoop/pig/CassandraStorage.java | 8 +- .../apache/cassandra/hadoop/pig/CqlStorage.java | 10 +- 3 files changed, 147 insertions(+), 41 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f117da0/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java -- diff --cc src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java index 19361e4,68e18c8..b770ed6 --- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java @@@ -29,7 -29,11 +29,10 @@@ import java.util.* import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.exceptions.SyntaxException; import org.apache.cassandra.auth.IAuthenticator; + import org.apache.cassandra.config.CFMetaData; + import org.apache.cassandra.cql3.CFDefinition; + import org.apache.cassandra.cql3.ColumnIdentifier; import org.apache.cassandra.db.Column; -import org.apache.cassandra.db.IColumn; import org.apache.cassandra.db.marshal.*; import org.apache.cassandra.db.marshal.AbstractCompositeType.CompositeComponent; import org.apache.cassandra.hadoop.*; 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f117da0/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java -- diff --cc src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java index e3c8a67,dbdd5e9..577fd38 --- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java @@@ -23,25 -23,11 +23,26 @@@ import java.nio.charset.CharacterCoding import java.util.*; -import org.apache.cassandra.db.IColumn; +import org.apache.cassandra.db.Column; import org.apache.cassandra.db.marshal.*; + import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.hadoop.*; -import org.apache.cassandra.thrift.*; +import org.apache.cassandra.thrift.Cassandra; +import org.apache.cassandra.thrift.CfDef; +import org.apache.cassandra.thrift.ColumnDef; +import org.apache.cassandra.thrift.ColumnOrSuperColumn; +import org.apache.cassandra.thrift.Deletion; +import org.apache.cassandra.thrift.IndexClause; +import org.apache.cassandra.thrift.IndexExpression; +import org.apache.cassandra.thrift.IndexOperator; +import org.apache.cassandra.thrift.InvalidRequestException; +import org.apache.cassandra.thrift.Mutation; +import org.apache.cassandra.thrift.SchemaDisagreementException; +import org.apache.cassandra.thrift.SlicePredicate; +import org.apache.cassandra.thrift.SliceRange; +import org.apache.cassandra.thrift.SuperColumn; +import org.apache.cassandra.thrift.TimedOutException; +import org.apache.cassandra.thrift.UnavailableException; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; import org.apache.cassandra.utils.Hex; http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f117da0/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java -- diff --cc src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java index 2b76b83,b35e13a..1ef69b7 --- a/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java +++ 
b/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java @@@ -23,8 -23,10 +23,9 @@@ import java.nio.charset.CharacterCoding import java.util.*; -import org.apache.cassandra.db.IColumn; import org.apache.cassandra.db.Column; import org.apache.cassandra.db.marshal.*; + import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.hadoop.*; import org.apache.cassandra.hadoop.cql3.CqlConfigHelper; import org.apache.cassandra.thrift.*;
[jira] [Resolved] (CASSANDRA-5877) Unclear FileNotFoundException stacktrace when system.log can't be created
[ https://issues.apache.org/jira/browse/CASSANDRA-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams resolved CASSANDRA-5877. - Resolution: Won't Fix Totally agree that log4j should be throwing a better exception, rather than us trying to do strange gymnastics to throw one ourselves. Unclear FileNotFoundException stacktrace when system.log can't be created - Key: CASSANDRA-5877 URL: https://issues.apache.org/jira/browse/CASSANDRA-5877 Project: Cassandra Issue Type: Bug Environment: Cassandra 2.0rc1 Reporter: Michaël Figuière Priority: Trivial When you start Cassandra with default settings without the appropriate permissions to write in {{/var/log/cassandra}} you end up with the following stacktrace: {code} log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: /var/log/cassandra/system.log (No such file or directory) at java.io.FileOutputStream.open(Native Method) at java.io.FileOutputStream.init(FileOutputStream.java:206) at java.io.FileOutputStream.init(FileOutputStream.java:127) at org.apache.log4j.FileAppender.setFile(FileAppender.java:294) at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:395) at 
org.apache.log4j.PropertyWatchdog.doOnChange(PropertyConfigurator.java:922) at org.apache.log4j.helpers.FileWatchdog.checkAndConfigure(FileWatchdog.java:89) at org.apache.log4j.helpers.FileWatchdog.init(FileWatchdog.java:58) at org.apache.log4j.PropertyWatchdog.init(PropertyConfigurator.java:914) at org.apache.log4j.PropertyConfigurator.configureAndWatch(PropertyConfigurator.java:461) at org.apache.cassandra.service.CassandraDaemon.initLog4j(CassandraDaemon.java:121) at org.apache.cassandra.service.CassandraDaemon.clinit(CassandraDaemon.java:69) {code} While a stacktrace at startup may not be the most elegant mean of communication with the user - though at least it's visible - in this situation it doesn't make it clear that Cassandra couldn't create a file in the specified log directory. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-6007) guava dependency mismatch between datastax/driver 2.0 and cassandra 2.0
Max Penet created CASSANDRA-6007: Summary: guava dependency mismatch between datastax/driver 2.0 and cassandra 2.0 Key: CASSANDRA-6007 URL: https://issues.apache.org/jira/browse/CASSANDRA-6007 Project: Cassandra Issue Type: Bug Components: Core, Drivers Reporter: Max Penet Attempting to load datastax/java-driver 2.0 beta1 and cassandra-all 2.0 in the same jvm causes some issues mainly because of clashes between guava versions (15.0 in the driver vs 13.0.1 in c*). This makes automated testing using EmbeddedCassandraService problematic for instance. Stacktrace from https://github.com/mpenet/alia/tree/2.0 running lein test Upgrading c* 2.0 to guava 15+ should help fix this issue. {code:java} java.lang.IllegalAccessError: tried to access method com.google.common.collect.MapMaker.makeComputingMap(Lcom/google/common/base/Function;)Ljava/util/concurrent/ConcurrentMap; from class org.apache.cassandra.service.StorageProxy at org.apache.cassandra.service.StorageProxy.clinit(StorageProxy.java:87) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:190) at org.apache.cassandra.service.StorageService.initServer(StorageService.java:447) at org.apache.cassandra.service.StorageService.initServer(StorageService.java:426) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:344) at org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:377) at org.apache.cassandra.service.EmbeddedCassandraService.start(EmbeddedCassandraService.java:52) at qbits.alia.test.embedded$start_service_BANG_.invoke(embedded.clj:20) at qbits.alia.test.embedded$eval10911.invoke(embedded.clj:24) at clojure.lang.Compiler.eval(Compiler.java:6619) at clojure.lang.Compiler.load(Compiler.java:7064) at clojure.lang.RT.loadResourceScript(RT.java:370) at clojure.lang.RT.loadResourceScript(RT.java:361) at clojure.lang.RT.load(RT.java:440) at clojure.lang.RT.load(RT.java:411) at clojure.core$load$fn__5018.invoke(core.clj:5530) at 
clojure.core$load.doInvoke(core.clj:5529) at clojure.lang.RestFn.invoke(RestFn.java:408) at clojure.core$load_one.invoke(core.clj:5336) at clojure.core$load_lib$fn__4967.invoke(core.clj:5375) at clojure.core$load_lib.doInvoke(core.clj:5374) at clojure.lang.RestFn.applyTo(RestFn.java:142) at clojure.core$apply.invoke(core.clj:619) at clojure.core$load_libs.doInvoke(core.clj:5413) at clojure.lang.RestFn.applyTo(RestFn.java:137) at clojure.core$apply.invoke(core.clj:621) at clojure.core$use.doInvoke(core.clj:5507) at clojure.lang.RestFn.invoke(RestFn.java:703) at qbits.alia.test.core$eval161$loading__4910__auto162.invoke(core.clj:1) at qbits.alia.test.core$eval161.invoke(core.clj:1) at clojure.lang.Compiler.eval(Compiler.java:6619) at clojure.lang.Compiler.eval(Compiler.java:6608) at clojure.lang.Compiler.load(Compiler.java:7064) at clojure.lang.RT.loadResourceScript(RT.java:370) at clojure.lang.RT.loadResourceScript(RT.java:361) at clojure.lang.RT.load(RT.java:440) at clojure.lang.RT.load(RT.java:411) at clojure.core$load$fn__5018.invoke(core.clj:5530) at clojure.core$load.doInvoke(core.clj:5529) at clojure.lang.RestFn.invoke(RestFn.java:408) at clojure.core$load_one.invoke(core.clj:5336) at clojure.core$load_lib$fn__4967.invoke(core.clj:5375) at clojure.core$load_lib.doInvoke(core.clj:5374) at clojure.lang.RestFn.applyTo(RestFn.java:142) at clojure.core$apply.invoke(core.clj:619) at clojure.core$load_libs.doInvoke(core.clj:5413) at clojure.lang.RestFn.applyTo(RestFn.java:137) at clojure.core$apply.invoke(core.clj:619) at clojure.core$require.doInvoke(core.clj:5496) at clojure.lang.RestFn.applyTo(RestFn.java:137) at clojure.core$apply.invoke(core.clj:619) at user$eval85.invoke(NO_SOURCE_FILE:1) at clojure.lang.Compiler.eval(Compiler.java:6619) at clojure.lang.Compiler.eval(Compiler.java:6609) at clojure.lang.Compiler.eval(Compiler.java:6582) at clojure.core$eval.invoke(core.clj:2852) at clojure.main$eval_opt.invoke(main.clj:308) at 
clojure.main$initialize.invoke(main.clj:327) at clojure.main$null_opt.invoke(main.clj:362) at clojure.main$main.doInvoke(main.clj:440) at clojure.lang.RestFn.invoke(RestFn.java:421) at clojure.lang.Var.invoke(Var.java:419) at clojure.lang.AFn.applyToHelper(AFn.java:163) at clojure.lang.Var.applyTo(Var.java:532) at clojure.main.main(main.java:37) {code}
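The IllegalAccessError in the trace above comes from StorageProxy's static initializer calling Guava's MapMaker.makeComputingMap, which is no longer publicly accessible as of Guava 15.0. A hedged, reflection-only diagnostic sketch (the class GuavaProbe and its messages are illustrative, not part of either project) that reports which side of the clash a given classpath is on, without needing Guava at compile time:

```java
// Probe whether the Guava on the classpath still exposes a public
// MapMaker.makeComputingMap(Function) -- the method cassandra-all 2.0's
// StorageProxy calls in its static initializer. Uses reflection only, so
// this compiles and runs even when Guava is absent entirely.
public final class GuavaProbe {
    public static String probe() {
        try {
            Class<?> mapMaker = Class.forName("com.google.common.collect.MapMaker");
            // getMethod finds public members only, so a Guava 15+ jar (where the
            // method was withdrawn from the public API) falls through to the
            // NoSuchMethodException branch.
            mapMaker.getMethod("makeComputingMap", Class.forName("com.google.common.base.Function"));
            return "makeComputingMap public: Guava <= 14.x, compatible with cassandra-all 2.0";
        } catch (ClassNotFoundException e) {
            return "Guava not on classpath";
        } catch (NoSuchMethodException e) {
            return "makeComputingMap not public: Guava 15+, StorageProxy will fail to initialize";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

Running this inside the test JVM before starting EmbeddedCassandraService would make the version clash visible up front instead of surfacing as an IllegalAccessError deep in StorageService.initServer.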
[jira] [Commented] (CASSANDRA-5078) save compaction merge counts in a system table
[ https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13764620#comment-13764620 ] Yuki Morishita commented on CASSANDRA-5078: --- bq. two threads are working on the same column family with the same timestamp? Compactions can run in parallel. See CompactionExecutor in CompactionManager. (Concurrency is set to the number of CPU cores by default.) save compaction merge counts in a system table -- Key: CASSANDRA-5078 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078 Project: Cassandra Issue Type: Improvement Reporter: Matthew F. Dennis Assignee: lantao yan Priority: Minor Labels: lhf Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch we should save the compaction merge stats from CASSANDRA-4894 in the system table and probably expose them via JMX (and nodetool)
[7/9] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0 Conflicts: src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f117da0 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f117da0 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f117da0 Branch: refs/heads/trunk Commit: 7f117da0caf66715a82417b3f7e3a2b30d0f279e Parents: 1ae996d f5618e3 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed Sep 11 10:18:40 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed Sep 11 10:18:40 2013 -0500 -- .../hadoop/pig/AbstractCassandraStorage.java| 170 +++ .../cassandra/hadoop/pig/CassandraStorage.java | 8 +- .../apache/cassandra/hadoop/pig/CqlStorage.java | 10 +- 3 files changed, 147 insertions(+), 41 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f117da0/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java -- diff --cc src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java index 19361e4,68e18c8..b770ed6 --- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java @@@ -29,7 -29,11 +29,10 @@@ import java.util.* import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.exceptions.SyntaxException; import org.apache.cassandra.auth.IAuthenticator; + import org.apache.cassandra.config.CFMetaData; + import org.apache.cassandra.cql3.CFDefinition; + import org.apache.cassandra.cql3.ColumnIdentifier; import org.apache.cassandra.db.Column; -import org.apache.cassandra.db.IColumn; import org.apache.cassandra.db.marshal.*; import org.apache.cassandra.db.marshal.AbstractCompositeType.CompositeComponent; import org.apache.cassandra.hadoop.*; 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f117da0/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java -- diff --cc src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java index e3c8a67,dbdd5e9..577fd38 --- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java @@@ -23,25 -23,11 +23,26 @@@ import java.nio.charset.CharacterCoding import java.util.*; -import org.apache.cassandra.db.IColumn; +import org.apache.cassandra.db.Column; import org.apache.cassandra.db.marshal.*; + import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.hadoop.*; -import org.apache.cassandra.thrift.*; +import org.apache.cassandra.thrift.Cassandra; +import org.apache.cassandra.thrift.CfDef; +import org.apache.cassandra.thrift.ColumnDef; +import org.apache.cassandra.thrift.ColumnOrSuperColumn; +import org.apache.cassandra.thrift.Deletion; +import org.apache.cassandra.thrift.IndexClause; +import org.apache.cassandra.thrift.IndexExpression; +import org.apache.cassandra.thrift.IndexOperator; +import org.apache.cassandra.thrift.InvalidRequestException; +import org.apache.cassandra.thrift.Mutation; +import org.apache.cassandra.thrift.SchemaDisagreementException; +import org.apache.cassandra.thrift.SlicePredicate; +import org.apache.cassandra.thrift.SliceRange; +import org.apache.cassandra.thrift.SuperColumn; +import org.apache.cassandra.thrift.TimedOutException; +import org.apache.cassandra.thrift.UnavailableException; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; import org.apache.cassandra.utils.Hex; http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f117da0/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java -- diff --cc src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java index 2b76b83,b35e13a..1ef69b7 --- a/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java +++ 
b/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java @@@ -23,8 -23,10 +23,9 @@@ import java.nio.charset.CharacterCoding import java.util.*; -import org.apache.cassandra.db.IColumn; import org.apache.cassandra.db.Column; import org.apache.cassandra.db.marshal.*; + import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.hadoop.*; import org.apache.cassandra.hadoop.cql3.CqlConfigHelper; import org.apache.cassandra.thrift.*;
[3/9] git commit: Merge branch 'cassandra-2.0' into trunk
Merge branch 'cassandra-2.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc0cc0ec Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc0cc0ec Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc0cc0ec Branch: refs/heads/trunk Commit: fc0cc0ec21edeb843c62020d7704f1675f737848 Parents: 972184b 1ae996d Author: Brandon Williams brandonwilli...@apache.org Authored: Wed Sep 11 09:36:25 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed Sep 11 09:36:25 2013 -0500 -- conf/cassandra-env.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --
[jira] [Commented] (CASSANDRA-6004) Performing a Select count(*) when replication factor < node count causes assertion error and timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-6004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764552#comment-13764552 ] Brandon Williams commented on CASSANDRA-6004: - Reproduces well. Looks like something related to CASSANDRA-4415, and IDAF is receiving a type of -1 it wasn't expecting. Performing a Select count(*) when replication factor node count causes assertion error and timeout -- Key: CASSANDRA-6004 URL: https://issues.apache.org/jira/browse/CASSANDRA-6004 Project: Cassandra Issue Type: Bug Components: API Environment: Two node setup Ubuntu Server 12.04 Tested on JDK 1.6 and 1.7 Reporter: James P When performing a Select Count() query on a table belonging to a keyspace with a replication factor less than the total node count, the following error is encountered which ultimately results in an rpc_timeout for the request: ERROR 18:47:54,660 Exception in thread Thread[Thread-5,5,main] java.lang.AssertionError at org.apache.cassandra.db.filter.IDiskAtomFilter$Serializer.deserialize(IDiskAtomFilter.java:116) at org.apache.cassandra.db.RangeSliceCommandSerializer.deserialize(RangeSliceCommand.java:247) at org.apache.cassandra.db.RangeSliceCommandSerializer.deserialize(RangeSliceCommand.java:156) at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:148) at org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:125) at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:73) The issue is not encountered when the replication factor is = node count To replicate the issue: 1) Create the keyspace: CREATE KEYSPACE demodb WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor': 1}; 2) Create the table CREATE TABLE users ( user_name varchar, password varchar, gender varchar, session_token varchar, state varchar, birth_year bigint, PRIMARY KEY 
(user_name)); 3) Do a CQL query: SELECT count( * ) FROM demodb.users ; The issue is reproducible even if the table is empty. Both CQLSH and client (Astyanax) API calls are affected. Tested on two different clusters (2-node and 8-node)
[jira] [Assigned] (CASSANDRA-6004) Performing a Select count(*) when replication factor node count causes assertion error and timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-6004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams reassigned CASSANDRA-6004: --- Assignee: Sylvain Lebresne Performing a Select count(*) when replication factor node count causes assertion error and timeout -- Key: CASSANDRA-6004 URL: https://issues.apache.org/jira/browse/CASSANDRA-6004 Project: Cassandra Issue Type: Bug Components: API Environment: Two node setup Ubuntu Server 12.04 Tested on JDK 1.6 and 1.7 Reporter: James P Assignee: Sylvain Lebresne When performing a Select Count() query on a table belonging to a keyspace with a replication factor less than the total node count, the following error is encountered which ultimately results in an rpc_timeout for the request: ERROR 18:47:54,660 Exception in thread Thread[Thread-5,5,main] java.lang.AssertionError at org.apache.cassandra.db.filter.IDiskAtomFilter$Serializer.deserialize(IDiskAtomFilter.java:116) at org.apache.cassandra.db.RangeSliceCommandSerializer.deserialize(RangeSliceCommand.java:247) at org.apache.cassandra.db.RangeSliceCommandSerializer.deserialize(RangeSliceCommand.java:156) at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:148) at org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:125) at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:73) The issue is not encountered when the replication factor is = node count To replicate the issue: 1) Create the keyspace: CREATE KEYSPACE demodb WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor': 1}; 2) Create the table CREATE TABLE users ( user_name varchar, password varchar, gender varchar, session_token varchar, state varchar, birth_year bigint, PRIMARY KEY (user_name)); 3) Do a CQL query: SELECT count( * ) FROM demodb.users ; The issue is reproducible even if the table is 
empty. Both cqlsh and client (Astyanax) API calls are affected. Tested on two different clusters (2-node and 8-node). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5078) save compaction merge counts in a system table
[ https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764669#comment-13764669 ] lantao yan commented on CASSANDRA-5078: --- yeah, you are right. I thought it was a single thread executor. save compaction merge counts in a system table -- Key: CASSANDRA-5078 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078 Project: Cassandra Issue Type: Improvement Reporter: Matthew F. Dennis Assignee: lantao yan Priority: Minor Labels: lhf Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch we should save the compaction merge stats from CASSANDRA-4894 in the system table and probably expose them via JMX (and nodetool) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing
John Carrino created CASSANDRA-6008: --- Summary: Getting 'This should never happen' error at startup due to sstables missing Key: CASSANDRA-6008 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008 Project: Cassandra Issue Type: Bug Components: Core Reporter: John Carrino Fix For: 2.0.1 Exception encountered during startup: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables This happens when sstables that have been compacted away are removed, but they still have entries in the system.compactions_in_progress table. Normally this should not happen because the entries in system.compactions_in_progress are deleted before the old sstables are deleted. However at startup recovery time, old sstables are deleted (NOT BEFORE they are removed from the compactions_in_progress table) and then after that is done it does a truncate using SystemKeyspace.discardCompactionsInProgress We ran into a case where the disk filled up and the node died and was bounced and then failed to truncate this table on startup, and then got stuck hitting this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers. Maybe on startup we can delete from this table incrementally as we clean stuff up in the same way that compactions delete from this table before they delete old sstables. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
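The safe ordering described in the ticket (delete the entry in system.compactions_in_progress before deleting the old sstables it references) can be sketched in isolation. This is a minimal, hypothetical model; the class and field names below are illustrative and are not Cassandra's actual code:

```java
import java.util.*;

// Hypothetical model of the invariant from this ticket: never delete an
// sstable while an unfinished-compaction entry still references it.
public class CompactionCleanupSketch {
    static Set<String> liveSSTables = new HashSet<>();
    static Map<Long, Set<String>> compactionsInProgress = new HashMap<>();

    // Safe order: clear the log entry first, then delete the inputs.
    static void finishCompaction(long taskId) {
        Set<String> inputs = compactionsInProgress.remove(taskId); // step 1
        if (inputs != null)
            liveSSTables.removeAll(inputs);                        // step 2
    }

    // Startup check analogous to removeUnfinishedCompactionLeftovers:
    // every sstable referenced by an in-progress entry must still exist.
    static boolean leftoversConsistent() {
        for (Set<String> inputs : compactionsInProgress.values())
            if (!liveSSTables.containsAll(inputs))
                return false;
        return true;
    }

    public static void main(String[] args) {
        liveSSTables.addAll(Arrays.asList("a", "b", "c"));
        compactionsInProgress.put(1L, new HashSet<>(Arrays.asList("a", "b")));
        // Simulate a crash after step 1 but before step 2: the log no longer
        // references "a" and "b", so missing files cannot trip the startup check.
        compactionsInProgress.remove(1L);
        System.out.println(leftoversConsistent()); // prints true
    }
}
```

The bug report is exactly the reverse ordering at startup recovery: files are deleted while their log entries remain, so the consistency check fails.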
[jira] [Commented] (CASSANDRA-4705) Speculative execution for reads / eager retries
[ https://issues.apache.org/jira/browse/CASSANDRA-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764863#comment-13764863 ] Li Zou commented on CASSANDRA-4705: --- Hello [~vijay2...@yahoo.com] and [~jbellis], My company is adopting the Cassandra db technology and we are very much interested in this new feature of Speculative Execution for Reads, as it could potentially help reduce the overall outage window to the sub-second range upon the failure of any one of the Cassandra nodes in a data center. I have recently tested Speculative Execution for Reads using Cassandra 2.0.0-rc2 and I have not seen the expected results. In my tests, I have already excluded possible effects from the client (connection pool) side. Not sure what is still missing in my tests. Here is my setup for the tests. One data center of four Cassandra nodes is configured on machines A, B, C and D, with each physical machine having one Cassandra node running on it. My testing app (Cassandra client) is running on machine A and it is configured to connect to Cassandra nodes A, B, and C only. In other words, my testing app will never connect to Cassandra node D. * Replication Factor (RF) is set to 3 * Client requested Consistency Level (CL) is set to CL_TWO For Cassandra 1.2.4, upon the failure of Cassandra node D (via kill -9 pid or kill -s SIGKILL pid), there is an outage window of around 20 seconds with zero transactions. The outage ends as soon as the gossip protocol detects the failure of Cassandra node D and marks it down. For Cassandra 2.0.0-rc2, database tables are configured with proper speculative_retry values (such as 'ALWAYS', '10 ms', '100 ms', '80 percentile', 'NONE'). The expected results are not observed in the tests. The testing results are quite similar to those observed for Cassandra 1.2.4. That is, upon the failure of Cassandra node D, there is an outage window of around 20 seconds with zero transactions.
The outage ends as soon as the gossip protocol detects the failure of Cassandra node D and marks it down. The tested values for _speculative_retry_ against the database tables are listed below: * ALWAYS * 10 ms * 100 ms * 80 percentile * NONE In my tests, I also checked the JMX stats of Cassandra node A for the speculative_retry of each table. What I observed really surprised me. For instance, with the speculative_retry (for each table) set to '20 ms': during normal operations with all four Cassandra nodes up, the SpeculativeRetry count went up occasionally; however, during the 20-second outage window, the SpeculativeRetry count somehow never went up, for three tests in a row. On the contrary, I would expect lots of speculative retries to be executed (say on Cassandra node A) during the 20-second outage window. Some notes on the kill signals: if Cassandra node D is killed using SIGTERM (for Cassandra 1.2.4) or SIGINT / SIGTERM (for Cassandra 2.0.0-rc2), there is no observed outage in the seconds range, as Cassandra node D does an orderly shutdown and the gossip announces the node down. From my testing app's point of view, everything goes very well. But if Cassandra node D is killed using SIGKILL, there is an outage with zero transactions. I guess this might be a TCP-socket-related issue. I often observed TCP sockets in CLOSE_WAIT on the passive close side (i.e., Cassandra nodes A, B, and C) and in TIME_WAIT on the active close side (Cassandra node D).
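For context, the percentile-based trigger being tested ('80 percentile', '20 ms', etc.) amounts to: fire a second read at another replica once the first request has been outstanding longer than the table's recorded latency percentile. A minimal sketch of that decision follows; the names are illustrative and this is not Cassandra's actual AbstractReadExecutor:

```java
import java.util.*;

// Illustrative sketch of a percentile-based speculative-retry trigger:
// record past read latencies, and speculate when the elapsed time for an
// in-flight read exceeds the chosen percentile of the recorded history.
public class SpeculativeRetrySketch {
    private final List<Long> latenciesMicros = new ArrayList<>();

    void recordLatency(long micros) { latenciesMicros.add(micros); }

    // Nearest-rank percentile over the recorded read latencies.
    long percentileMicros(double p) {
        List<Long> sorted = new ArrayList<>(latenciesMicros);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.size());
        return sorted.get(Math.max(0, rank - 1));
    }

    // Decide whether to fire the extra read to another replica.
    boolean shouldSpeculate(long elapsedMicros, double p) {
        return elapsedMicros > percentileMicros(p);
    }

    public static void main(String[] args) {
        SpeculativeRetrySketch s = new SpeculativeRetrySketch();
        for (long l : new long[] {100, 200, 300, 400, 500}) s.recordLatency(l);
        // p80 over {100..500} is 400us by nearest rank, so:
        System.out.println(s.shouldSpeculate(450, 80)); // prints true
        System.out.println(s.shouldSpeculate(350, 80)); // prints false
    }
}
```

Note this trigger only helps when the coordinator still believes the dead replica is up and routes reads to it; the follow-up comment below points to the related ticket.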
Speculative execution for reads / eager retries --- Key: CASSANDRA-4705 URL: https://issues.apache.org/jira/browse/CASSANDRA-4705 Project: Cassandra Issue Type: Improvement Reporter: Vijay Assignee: Vijay Fix For: 2.0 beta 1 Attachments: 0001-CASSANDRA-4705.patch, 0001-CASSANDRA-4705-v2.patch, 0001-CASSANDRA-4705-v3.patch, 0001-Refactor-to-introduce-AbstractReadExecutor.patch, 0002-Add-Speculative-execution-for-Row-Reads.patch When read_repair is not 1.0, we send the request to one node for some of the requests. When a node goes down or when a node is too busy the client has to wait for the timeout before it can retry. It would be nice to watch for latency and execute an additional request to a different node, if the response is not received within average/99% of the response times recorded in the past. CASSANDRA-2540 might be able to solve the variance when read_repair is set to 1.0 1) May be we need to use metrics-core to record various Percentiles 2) Modify ReadCallback.get to execute additional request speculatively. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see:
[jira] [Commented] (CASSANDRA-4705) Speculative execution for reads / eager retries
[ https://issues.apache.org/jira/browse/CASSANDRA-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764885#comment-13764885 ] Aleksey Yeschenko commented on CASSANDRA-4705: -- [~lizou] See CASSANDRA-5932 Speculative execution for reads / eager retries --- Key: CASSANDRA-4705 URL: https://issues.apache.org/jira/browse/CASSANDRA-4705 Project: Cassandra Issue Type: Improvement Reporter: Vijay Assignee: Vijay Fix For: 2.0 beta 1 Attachments: 0001-CASSANDRA-4705.patch, 0001-CASSANDRA-4705-v2.patch, 0001-CASSANDRA-4705-v3.patch, 0001-Refactor-to-introduce-AbstractReadExecutor.patch, 0002-Add-Speculative-execution-for-Row-Reads.patch When read_repair is not 1.0, we send the request to one node for some of the requests. When a node goes down or when a node is too busy the client has to wait for the timeout before it can retry. It would be nice to watch for latency and execute an additional request to a different node, if the response is not received within average/99% of the response times recorded in the past. CASSANDRA-2540 might be able to solve the variance when read_repair is set to 1.0 1) May be we need to use metrics-core to record various Percentiles 2) Modify ReadCallback.get to execute additional request speculatively. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing
[ https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13764847#comment-13764847 ] John Carrino commented on CASSANDRA-6008: - Now that I think about this more: Doesn't this new cleanup code make it hard to restore a CF from a backup? If there was a compaction for this CF in progress when you took down the system, when you bring it back up with new sstables for this CF, then this check will prevent you from starting. Getting 'This should never happen' error at startup due to sstables missing --- Key: CASSANDRA-6008 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008 Project: Cassandra Issue Type: Bug Components: Core Reporter: John Carrino Fix For: 2.0.1 Exception encountered during startup: Unfinished compactions reference missing sstables. This should never happen since compactions are marked finished before we start removing the old sstables This happens when sstables that have been compacted away are removed, but they still have entries in the system.compactions_in_progress table. Normally this should not happen because the entries in system.compactions_in_progress are deleted before the old sstables are deleted. However at startup recovery time, old sstables are deleted (NOT BEFORE they are removed from the compactions_in_progress table) and then after that is done it does a truncate using SystemKeyspace.discardCompactionsInProgress We ran into a case where the disk filled up and the node died and was bounced and then failed to truncate this table on startup, and then got stuck hitting this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers. Maybe on startup we can delete from this table incrementally as we clean stuff up in the same way that compactions delete from this table before they delete old sstables. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-6007) guava dependency mismatch between datastax/driver 2.0 and cassandra 2.0
[ https://issues.apache.org/jira/browse/CASSANDRA-6007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765185#comment-13765185 ] Mikhail Mazursky commented on CASSANDRA-6007: - We use cassandra-maven-plugin 1.2.1, Astyanax 1.56.42 with cassandra jar version 1.2.5 (not possible to update due to [1] and [2]). With Guava 14.0.1 everything works fine, but if I update to Guava 15.0 then this exception happens when building project (integration tests fail): {code} [INFO ] 11:36:09.855 [main][] ERROR CassandraDaemon:430 - Exception encountered during startup [INFO ] java.lang.IllegalAccessError: tried to access method com.google.common.collect.MapMaker.makeComputingMap(Lcom/google/common/base/Function;)Ljava/util/concurrent/ConcurrentMap; from class org.apache.cassandra.service.StorageProxy [INFO ] at org.apache.cassandra.service.StorageProxy.clinit(StorageProxy.java:84) ~[cassandra-all-1.2.5.jar:1.2.5] [INFO ] at java.lang.Class.forName0(Native Method) ~[na:1.7.0_25] [INFO ] at java.lang.Class.forName(Class.java:190) ~[na:1.7.0_25] [INFO ] java.lang.IllegalAccessError: tried to access method com.google.common.collect.MapMaker.makeComputingMap(Lcom/google/common/base/Function;)Ljava/util/concurrent/ConcurrentMap; from class org.apache.cassandra.service.StorageProxy [INFO ] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:466) ~[cassandra-all-1.2.5.jar:1.2.5] [INFO ] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:445) ~[cassandra-all-1.2.5.jar:1.2.5] [INFO ] at org.apache.cassandra.service.StorageProxy.clinit(StorageProxy.java:84) [INFO ] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:325) ~[cassandra-all-1.2.5.jar:1.2.5] [INFO ] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:413) ~[cassandra-all-1.2.5.jar:1.2.5] [INFO ] at java.lang.Class.forName0(Native Method) [INFO ] at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:456) ~[cassandra-all-1.2.5.jar:1.2.5] [INFO ] at java.lang.Class.forName(Class.java:190) [INFO ] at org.codehaus.mojo.cassandra.CassandraMonitor.main(CassandraMonitor.java:148) ~[cassandra-maven-plugin-1.2.1-1.jar:na] [INFO ] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:466) [INFO ] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:445) [INFO ] Exception encountered during startup: tried to access method com.google.common.collect.MapMaker.makeComputingMap(Lcom/google/common/base/Function;)Ljava/util/concurrent/ConcurrentMap; from class org.apache.cassandra.service.StorageProxy [INFO ] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:325) [INFO ] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:413) [INFO ] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:456) [INFO ] at org.codehaus.mojo.cassandra.CassandraMonitor.main(CassandraMonitor.java:148) {code} MapMaker.makeComputingMap() was deprecated some time ago and removed in 15.0; that's why this exception happens. The code in the StorageProxy class can trivially be fixed using Guava's LoadingCache. [1]: https://github.com/Netflix/astyanax/issues/352 [2]: https://github.com/Netflix/astyanax/issues/391 guava dependency mismatch between datastax/driver 2.0 and cassandra 2.0 --- Key: CASSANDRA-6007 URL: https://issues.apache.org/jira/browse/CASSANDRA-6007 Project: Cassandra Issue Type: Bug Components: Core, Drivers Reporter: Max Penet Attempting to load datastax/java-driver 2.0 beta1 and cassandra-all 2.0 in the same jvm causes some issues, mainly because of clashes between guava versions (15.0 in the driver vs 13.0.1 in c*). This makes automated testing using EmbeddedCassandraService problematic, for instance.
Stacktrace from https://github.com/mpenet/alia/tree/2.0 running 'lein test'. Upgrading c* 2.0 to guava 15+ should help fix this issue. {code:java} java.lang.IllegalAccessError: tried to access method com.google.common.collect.MapMaker.makeComputingMap(Lcom/google/common/base/Function;)Ljava/util/concurrent/ConcurrentMap; from class org.apache.cassandra.service.StorageProxy at org.apache.cassandra.service.StorageProxy.clinit(StorageProxy.java:87) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:190) at org.apache.cassandra.service.StorageService.initServer(StorageService.java:447) at
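The fix suggested in the comment above (replacing the removed MapMaker.makeComputingMap with a loading cache) follows a standard pattern. In Cassandra itself this would use Guava's CacheBuilder/CacheLoader; as a dependency-free illustration of the same compute-on-first-access contract, here is a JDK-only sketch using ConcurrentHashMap.computeIfAbsent (names are illustrative, not the actual StorageProxy change):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Illustrative only: the removed MapMaker.makeComputingMap(Function) built a
// map that computed absent values on first access. Guava 15's replacement is
// CacheBuilder.newBuilder().build(CacheLoader); the JDK equivalent of the
// same contract is ConcurrentMap.computeIfAbsent.
public class ComputingMapSketch {
    static <K, V> Function<K, V> memoize(Function<K, V> loader) {
        ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();
        // Each key's value is computed at most once, then served from cache.
        return key -> cache.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        Function<String, Integer> lengths = memoize(String::length);
        System.out.println(lengths.apply("cassandra")); // prints 9
    }
}
```

Unlike the removed MapMaker API, both replacements also allow eviction and size bounds to be configured, which is why the comment calls the StorageProxy fix trivial.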