[jira] [Commented] (CASSANDRA-4411) Assertion with LCS compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421233#comment-13421233 ]

Sylvain Lebresne commented on CASSANDRA-4411:
---------------------------------------------

@Mina Did you run the offline scrub introduced with CASSANDRA-4321? Otherwise, it won't fix the problem. So you need to 1) shut down the node (this is important before running the offline scrub) and 2) run ./bin/sstablescrub. That last step should print some lines indicating it has corrected some problems (otherwise, something is wrong with the scrubbing). If after that you still get an exception, it might be helpful if you could run with 0001-Add-debugging-info-for-LCS.txt applied.

Assertion with LCS compaction
-----------------------------

                Key: CASSANDRA-4411
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4411
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 1.1.2
           Reporter: Anton Winter
           Assignee: Sylvain Lebresne
            Fix For: 1.1.3
        Attachments: 0001-Add-debugging-info-for-LCS.txt, 4411-followup.txt, 4411.txt, assertion-w-more-debugging-info-omid.log, assertion.moreinfo.system.log, system.log

As instructed in CASSANDRA-4321, I have raised this issue as a continuation of that issue, as it appears the problem still exists. I have repeatedly run sstablescrub across all my nodes after the 1.1.2 upgrade until sstablescrub showed no errors. The exceptions described in CASSANDRA-4321 do not occur as frequently now, but the integrity check still throws exceptions on a number of nodes. Once those exceptions occur, compactionstats shows a large number of pending tasks with no progression afterwards.
{code}
ERROR [CompactionExecutor:150] 2012-07-05 04:26:15,570 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[CompactionExecutor:150,1,main]
java.lang.AssertionError
	at org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:214)
	at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:158)
	at org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:531)
	at org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:254)
	at org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:978)
	at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:200)
	at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50)
	at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:150)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:636)
{code}

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
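[Editor's note] The AssertionError at LeveledManifest.promote fires when a compaction's input sstables are not where the leveled manifest thinks they are (e.g. after manifest/data-directory disagreement that scrubbing was supposed to repair). A minimal sketch of that kind of invariant, with illustrative names only — this is not Cassandra's actual implementation:

```java
import java.util.*;

// Toy model of a leveled manifest: generations.get(i) holds the sstable ids
// currently tracked at level i. promote() moves compaction inputs out of a
// level and the compaction output into the next one, asserting first that
// every input is actually tracked -- the analogue of the failing check above.
public class ManifestSketch {
    private final List<Set<String>> generations = new ArrayList<>();

    public ManifestSketch(int levels) {
        for (int i = 0; i < levels; i++)
            generations.add(new HashSet<>());
    }

    public void add(int level, String sstable) {
        generations.get(level).add(sstable);
    }

    public void promote(int level, Collection<String> compacted, String output) {
        // Invariant: compaction inputs must still be tracked by the manifest.
        for (String s : compacted)
            if (!generations.get(level).contains(s))
                throw new AssertionError("sstable " + s + " not in manifest at L" + level);
        generations.get(level).removeAll(compacted);
        generations.get(level + 1).add(output);
    }

    public Set<String> level(int i) { return generations.get(i); }

    public static void main(String[] args) {
        ManifestSketch m = new ManifestSketch(3);
        m.add(0, "sstable-1");
        m.add(0, "sstable-2");
        m.promote(0, Arrays.asList("sstable-1", "sstable-2"), "sstable-3");
        System.out.println(m.level(1)); // the compaction output now lives at L1
        try {
            m.promote(0, Arrays.asList("sstable-9"), "sstable-10"); // untracked input
        } catch (AssertionError e) {
            System.out.println("invariant violated: " + e.getMessage());
        }
    }
}
```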
git commit: (CQL3) Allow definitions with only a PK
Updated Branches:
  refs/heads/trunk d0d1b2cf3 -> 795174611

(CQL3) Allow definitions with only a PK

patch by slebresne; reviewed by jbellis for CASSANDRA-4361

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/79517461
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/79517461
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/79517461

Branch: refs/heads/trunk
Commit: 795174611b7a9bea4ee2d64f8640c8bfebe07cfb
Parents: d0d1b2c
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Tue Jul 24 09:31:56 2012 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Tue Jul 24 09:31:56 2012 +0200

----------------------------------------------------------------------
 CHANGES.txt                                        |    1 +
 .../org/apache/cassandra/config/CFMetaData.java    |    4 +-
 .../org/apache/cassandra/cql3/CFDefinition.java    |   22 --
 .../cassandra/cql3/operations/ColumnOperation.java |   18 -
 .../cassandra/cql3/statements/ColumnGroupMap.java  |    6 --
 .../statements/CreateColumnFamilyStatement.java    |   66 +-
 .../cassandra/cql3/statements/SelectStatement.java |   51 ++-
 .../cassandra/cql3/statements/UpdateStatement.java |   43 --
 8 files changed, 145 insertions(+), 66 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79517461/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 8d61f64..979e3ef 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -30,6 +30,7 @@
  * (cql3) Always use composite types by default (CASSANDRA-4329)
  * (cql3) Add support for set, map and list (CASSANDRA-3647)
  * Validate date type correctly (CASSANDRA-4441)
+ * (cql3) Allow definitions with only a PK (CASSANDRA-4361)
 1.1.3

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79517461/src/java/org/apache/cassandra/config/CFMetaData.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java b/src/java/org/apache/cassandra/config/CFMetaData.java
index 3cf41d7..906017c 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1080,8 +1080,6 @@ public final class CFMetaData
         if (alias != null)
         {
-            if (!alias.hasRemaining())
-                throw new ConfigurationException(msg + " alias may not be empty");
             try
             {
                 UTF8Type.instance.validate(alias);
@@ -1259,9 +1257,9 @@ public final class CFMetaData
         cfm.caching(Caching.valueOf(result.getString("caching")));
         cfm.compactionStrategyClass(createCompactionStrategy(result.getString("compaction_strategy_class")));
         cfm.compressionParameters(CompressionParameters.create(fromJsonMap(result.getString("compression_parameters"))));
+        cfm.columnAliases(columnAliasesFromStrings(fromJsonList(result.getString("column_aliases"))));
         if (result.has("value_alias"))
             cfm.valueAlias(result.getBytes("value_alias"));
-        cfm.columnAliases(columnAliasesFromStrings(fromJsonList(result.getString("column_aliases"))));
         cfm.compactionStrategyOptions(fromJsonMap(result.getString("compaction_strategy_options")));
         return cfm;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79517461/src/java/org/apache/cassandra/cql3/CFDefinition.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/cql3/CFDefinition.java b/src/java/org/apache/cassandra/cql3/CFDefinition.java
index 23edd62..d960a79 100644
--- a/src/java/org/apache/cassandra/cql3/CFDefinition.java
+++ b/src/java/org/apache/cassandra/cql3/CFDefinition.java
@@ -51,8 +51,8 @@ public class CFDefinition implements Iterable<CFDefinition.Name>
     public final Map<ColumnIdentifier, Name> metadata = new TreeMap<ColumnIdentifier, Name>();
     public final boolean isComposite;
-    // Note that isCompact means here that no componet of the comparator correspond to the column names
-    // defined in the CREATE TABLE QUERY. This is not exactly equivalent to the 'WITH COMPACT STORAGE'
+    // Note that isCompact means here that no component of the comparator correspond to the column names
+    // defined in the CREATE TABLE QUERY. This is not exactly equivalent to using the 'WITH COMPACT STORAGE'
     // option when creating a table in that static CF without a composite type will have isCompact == false
     // even though one must use 'WITH COMPACT STORAGE' to declare them.
     public final boolean isCompact;
@@ -66,7 +66,7 @@ public class CFDefinition implements Iterable<CFDefinition.Name>
     {
         this.isComposite = true;
[jira] [Updated] (CASSANDRA-4436) Counters in columns don't preserve correct values after cluster restart
[ https://issues.apache.org/jira/browse/CASSANDRA-4436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-4436:
----------------------------------------

    Attachment: 4436-1.1-2.txt
                4436-1.0-2.txt

bq. Looks like skipCompacted in Directories.SSTableLister can be removed (since we scrubDataDirectories on startup and no new compacted components will be created).

True, though there is the (arguably remote) possibility that people call loadNewSSTables() (or the offline scrub from CASSANDRA-4441) on sstables having some -Compacted components. So I would prefer leaving it in 1.1 and removing it during the merge to trunk, just to make sure minor upgrades are as minimally disruptive as possible.

bq. Using a List means we can add an ancestor multiple times. Suggest using a Set instead.

But we won't have the same ancestor multiple times. Otherwise that would be a bug (and at least for counters, a particularly bad one). For sanity, though, I've added an assertion to check this doesn't happen. I've kept a List, however: since the list will be small, the difference between List.contains() and Set.contains() will be negligible, and it's only checked in an assertion, once, at sstable creation. On the other hand, Lists have a smaller memory footprint. Though I admit that in either case we're talking about minor differences.

bq. would prefer Ancestor to LiveAncestor, since we only check liveness at creation time, so Live is misleading when iterating over them later.

Renamed.

bq. the deleting code feels more at home in CFS constructor than addInitialSSTables.

Moved.

bq. tracker parameter is unused now in SSTR.open

Removed. I realized that setTrackedBy was already always called through DataTracker.addNewSSTablesSize, so I also removed the call duplication.
Counters in columns don't preserve correct values after cluster restart
-----------------------------------------------------------------------

                Key: CASSANDRA-4436
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4436
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 1.0.10
           Reporter: Peter Velas
           Assignee: Sylvain Lebresne
            Fix For: 1.1.3
        Attachments: 4436-1.0-2.txt, 4436-1.0.txt, 4436-1.1-2.txt, 4436-1.1.txt, increments.cql.gz

Similar to #3821, but affecting normal columns. Set up a 2-node cluster with rf=2.

1. Create a counter column family and increment 100 keys in a loop 5000 times.
2. Then do a rolling restart of the cluster.
3. Again increment another 5000 times.
4. Do a rolling restart of the cluster.
5. Again increment another 5000 times.
6. Do a rolling restart of the cluster.

After step 6 we were able to reproduce the bug with bad counter values. Expected values were 15,000. Values returned from the cluster were higher than 15,000 by some random amount. Rolling restarts are done with nodetool drain, always waiting until the second node discovers it's down before killing the java process.
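[Editor's note] The review exchange above weighs a List plus a one-time contains() assertion against a Set for the ancestor bookkeeping. A minimal sketch of that design choice, with illustrative names only — not Cassandra's actual code:

```java
import java.util.*;

// Sketch of the "no duplicate ancestors" invariant discussed above: keep an
// ArrayList (smaller footprint than a HashSet) and document the invariant
// with an assertion at creation time. For a handful of elements,
// List.contains() is effectively as cheap as Set.contains(), and the check
// runs only once per ancestor, when the sstable metadata is built.
public class AncestorsSketch {
    private final List<Integer> ancestors = new ArrayList<>();

    public void addAncestor(int generation) {
        // Fires only with -ea, like any Java assertion; duplicates would
        // indicate a bug upstream (for counters, a particularly bad one).
        assert !ancestors.contains(generation) : "duplicate ancestor " + generation;
        ancestors.add(generation);
    }

    public List<Integer> ancestors() {
        return Collections.unmodifiableList(ancestors);
    }

    public static void main(String[] args) {
        AncestorsSketch s = new AncestorsSketch();
        s.addAncestor(12);
        s.addAncestor(17);
        System.out.println(s.ancestors()); // [12, 17]
    }
}
```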
[jira] [Created] (CASSANDRA-4461) Schema no longer modifiable
Marco Matarazzo created CASSANDRA-4461:
---------------------------------------

            Summary: Schema no longer modifiable
                Key: CASSANDRA-4461
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4461
            Project: Cassandra
         Issue Type: Bug
   Affects Versions: 1.1.2
        Environment: Ubuntu LTS 12.04, Java Sun Oracle 6 (jre1.6.0_33)
           Reporter: Marco Matarazzo

It seems that after a while, our keyspaces can't be modified. They accept data, and reads, writes, and truncates all work; it's just that the keyspace, and every column family inside it, can't be dropped, created or altered. This happens both on our 3-node test cluster and on our single-node dev cluster. With 1.1.1, a rolling restart of the cluster seemed to solve the issue. That workaround no longer helps with 1.1.2.

user@server:~$ cqlsh -3 -k goh_master cassandra1
Connected to GOH Cluster at cassandra1:9160.
[cqlsh 2.2.0 | Cassandra 1.1.2 | CQL spec 3.0.0 | Thrift protocol 19.32.0]
Use HELP for help.
cqlsh:goh_master> drop columnfamily agents_blueprints;
cqlsh:goh_master>

user@server:~$ cqlsh -3 -k goh_master cassandra1
Connected to GOH Cluster at cassandra1:9160.
[cqlsh 2.2.0 | Cassandra 1.1.2 | CQL spec 3.0.0 | Thrift protocol 19.32.0]
Use HELP for help.
cqlsh:goh_master> DESCRIBE COLUMNFAMILY agents_blueprints

CREATE TABLE agents_blueprints (
  agent_id ascii,
  archetype ascii,
  proto_id ascii,
  PRIMARY KEY (agent_id, archetype)
) WITH COMPACT STORAGE AND
  comment='' AND
  caching='KEYS_ONLY' AND
  read_repair_chance=0.10 AND
  gc_grace_seconds=864000 AND
  min_compaction_threshold=4 AND
  max_compaction_threshold=32 AND
  replicate_on_write='true' AND
  compaction_strategy_class='SizeTieredCompactionStrategy' AND
  compression_parameters:sstable_compression='SnappyCompressor';

cqlsh:goh_master>
[jira] [Updated] (CASSANDRA-4461) Schema no longer modifiable
[ https://issues.apache.org/jira/browse/CASSANDRA-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marco Matarazzo updated CASSANDRA-4461:
---------------------------------------

    Attachment: system.log
[jira] [Commented] (CASSANDRA-4461) Schema no longer modifiable
[ https://issues.apache.org/jira/browse/CASSANDRA-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421332#comment-13421332 ]

Marco Matarazzo commented on CASSANDRA-4461:
--------------------------------------------

Update: on the 3-node cluster, executing the same DROP on a second node does the magic. It works with CREATE and ALTER too. It may be useful to know that the keyspace has a replication factor of 3.
[jira] [Commented] (CASSANDRA-4459) pig driver casts ints as bytearray
[ https://issues.apache.org/jira/browse/CASSANDRA-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421338#comment-13421338 ]

Pavel Yaskevich commented on CASSANDRA-4459:
--------------------------------------------

I agree with Jonathan on this: IntegerType could be larger than int32.

pig driver casts ints as bytearray
----------------------------------

                Key: CASSANDRA-4459
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4459
            Project: Cassandra
         Issue Type: Bug
        Environment: C* 1.1.2 embedded in DSE
           Reporter: Cathy Daw
           Assignee: Brandon Williams
            Fix For: 1.1.3
        Attachments: 4459.txt

We seem to be auto-mapping C* int columns to bytearray in Pig, and further down I can't seem to find a way to cast that to int and do an average.

{code}
grunt> cassandra_users = LOAD 'cassandra://cqldb/users' USING CassandraStorage();
grunt> dump cassandra_users;
(bobhatter,(act,22),(fname,bob),(gender,m),(highSchool,Cal High),(lname,hatter),(sat,500),(state,CA),{})
(alicesmith,(act,27),(fname,alice),(gender,f),(highSchool,Tuscon High),(lname,smith),(sat,650),(state,AZ),{})

// notice sat and act columns are bytearray values
grunt> describe cassandra_users;
cassandra_users: {key: chararray,act: (name: chararray,value: bytearray),fname: (name: chararray,value: chararray), gender: (name: chararray,value: chararray),highSchool: (name: chararray,value: chararray),lname: (name: chararray,value: chararray), sat: (name: chararray,value: bytearray),state: (name: chararray,value: chararray),columns: {(name: chararray,value: chararray)}}

grunt> users_by_state = GROUP cassandra_users BY state;
grunt> dump users_by_state;
((state,AX),{(aoakley,(highSchool,Phoenix High),(lname,Oakley),state,(act,22),(sat,500),(gender,m),(fname,Anne),{})})
((state,AZ),{(gjames,(highSchool,Tuscon High),(lname,James),state,(act,24),(sat,650),(gender,f),(fname,Geronomo),{})})
((state,CA),{(philton,(highSchool,Beverly High),(lname,Hilton),state,(act,37),(sat,220),(gender,m),(fname,Paris),{}),(jbrown,(highSchool,Cal High),(lname,Brown),state,(act,20),(sat,700),(gender,m),(fname,Jerry),{})})

// Error - use explicit cast
grunt> user_avg = FOREACH users_by_state GENERATE cassandra_users.state, AVG(cassandra_users.sat);
grunt> dump user_avg;
2012-07-22 17:15:04,361 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1045: Could not infer the matching function for org.apache.pig.builtin.AVG as multiple or none of them fit. Please use an explicit cast.

// Unable to cast as int
grunt> user_avg = FOREACH users_by_state GENERATE cassandra_users.state, AVG((int)cassandra_users.sat);
grunt> dump user_avg;
2012-07-22 17:07:39,217 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1052: Cannot cast bag with schema sat: bag({name: chararray,value: bytearray}) to int
{code}

*Seed data in CQL*

{code}
CREATE KEYSPACE cqldb with strategy_class = 'org.apache.cassandra.locator.SimpleStrategy'
  and strategy_options:replication_factor=3;

use cqldb;

CREATE COLUMNFAMILY users (
  KEY text PRIMARY KEY,
  fname text,
  lname text,
  gender varchar,
  act int,
  sat int,
  highSchool text,
  state varchar);

insert into users (KEY, fname, lname, gender, act, sat, highSchool, state) values (gjames, Geronomo, James, f, 24, 650, 'Tuscon High', 'AZ');
insert into users (KEY, fname, lname, gender, act, sat, highSchool, state) values (aoakley, Anne, Oakley, m, 22, 500, 'Phoenix High', 'AX');
insert into users (KEY, fname, lname, gender, act, sat, highSchool, state) values (jbrown, Jerry, Brown, m, 20, 700, 'Cal High', 'CA');
insert into users (KEY, fname, lname, gender, act, sat, highSchool, state) values (philton, Paris, Hilton, m, 37, 220, 'Beverly High', 'CA');

select * from users;
{code}
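[Editor's note] The comment above points at why a blind bytearray-to-int cast is unsafe: Cassandra's IntegerType is an arbitrary-precision, variable-length, big-endian two's-complement integer (the same encoding BigInteger.toByteArray() produces), so a stored value may simply not fit in a 32-bit Pig int. A sketch of the decoding, not the Pig driver's actual code:

```java
import java.math.BigInteger;

// Decode a variable-length big-endian two's-complement byte array, which is
// how IntegerType serializes values. A value decoded this way can exceed
// Integer.MAX_VALUE, so a fixed int32 mapping would silently be wrong.
public class VarintSketch {
    static BigInteger decode(byte[] raw) {
        return new BigInteger(raw); // big-endian two's-complement, any length
    }

    public static void main(String[] args) {
        byte[] small = BigInteger.valueOf(650).toByteArray();      // fits in int32
        byte[] huge  = BigInteger.valueOf(1L << 40).toByteArray(); // does not

        System.out.println(decode(small));                 // 650
        System.out.println(decode(huge).bitLength() > 31); // true: overflows int32
    }
}
```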
[jira] [Commented] (CASSANDRA-4411) Assertion with LCS compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421350#comment-13421350 ]

Mina Naguib commented on CASSANDRA-4411:
----------------------------------------

I ran the scrub in online mode. I just took down a node and am now running it in offline mode. Will report back.

BTW, the default sstablescrub does not respect the memory limits set in cassandra.in.sh, so it failed for me with:

{code}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at sun.security.provider.DigestBase.engineDigest(DigestBase.java:146)
	at java.security.MessageDigest$Delegate.engineDigest(MessageDigest.java:546)
	at java.security.MessageDigest.digest(MessageDigest.java:323)
	at org.apache.cassandra.utils.FBUtilities.hash(FBUtilities.java:229)
	at org.apache.cassandra.utils.FBUtilities.hashToBigInteger(FBUtilities.java:213)
	at org.apache.cassandra.dht.RandomPartitioner.getToken(RandomPartitioner.java:154)
	at org.apache.cassandra.dht.RandomPartitioner.decorateKey(RandomPartitioner.java:47)
	at org.apache.cassandra.cache.AutoSavingCache.readSaved(AutoSavingCache.java:118)
	at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:230)
	at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:341)
	at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:313)
	at org.apache.cassandra.db.Table.initCf(Table.java:371)
	at org.apache.cassandra.db.Table.<init>(Table.java:304)
	at org.apache.cassandra.db.Table.open(Table.java:119)
	at org.apache.cassandra.db.Table.openWithoutSSTables(Table.java:102)
	at org.apache.cassandra.tools.StandaloneScrubber.main(StandaloneScrubber.java:65)
{code}

I edited it to change the hardcoded limit of 256MB to a more reasonable value (the same as in my cassandra.in.sh) to allow it to run without crashing.
[jira] [Updated] (CASSANDRA-4461) Schema no longer modifiable
[ https://issues.apache.org/jira/browse/CASSANDRA-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-4461:
--------------------------------------

    Fix Version/s: 1.1.3
         Assignee: Pavel Yaskevich
[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected
[ https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421432#comment-13421432 ]

Michael Theroux commented on CASSANDRA-4417:
--------------------------------------------

I just hit this myself on 1.1.2 on two nodes of a six-node cluster. The cluster had been stable for a couple of weeks. If it makes any difference, we recently enabled row caching.

{code}
...
 INFO [AntiEntropyStage:1] 2012-07-24 11:05:55,537 AntiEntropyService.java (line 206) [repair #b9355020-d57e-11e1--7c4549350fdf] Received merkle tree for caches from /10.29.214.111
ERROR [CompactionExecutor:183] 2012-07-24 11:05:58,532 CounterContext.java (line 381) invalid counter shard detected; (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, -1) and (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, 1) differ only in count; will pick highest to self-heal; this indicates a bug or corruption generated a bad counter shard
ERROR [CompactionExecutor:183] 2012-07-24 11:05:58,533 CounterContext.java (line 381) invalid counter shard detected; (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, 1) and (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, -1) differ only in count; will pick highest to self-heal; this indicates a bug or corruption generated a bad counter shard
ERROR [CompactionExecutor:183] 2012-07-24 11:05:58,534 CounterContext.java (line 381) invalid counter shard detected; (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, -1) and (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, 1) differ only in count; will pick highest to self-heal; this indicates a bug or corruption generated a bad counter shard
ERROR [CompactionExecutor:183] 2012-07-24 11:05:58,534 CounterContext.java (line 381) invalid counter shard detected; (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, 1) and (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, -1) differ only in count; will pick highest to self-heal; this indicates a bug or corruption generated a bad counter shard
ERROR [CompactionExecutor:183] 2012-07-24 11:05:58,534 CounterContext.java (line 381) invalid counter shard detected; (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, -1) and (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, 1) differ only in count; will pick highest to self-heal; this indicates a bug or corruption generated a bad counter shard
ERROR [CompactionExecutor:183] 2012-07-24 11:05:58,535 CounterContext.java (line 381) invalid counter shard detected; (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, 1) and (6be74ab0-6cc6-11e1--242d50cf1fd7, 1, -1) differ only in count; will pick highest to self-heal; this indicates a bug or corruption generated a bad counter shard
 INFO [AntiEntropyStage:1] 2012-07-24 11:06:05,541 AntiEntropyService.java (line 206) [repair #b9355020-d57e-11e1--7c4549350fdf] Received merkle tree for caches from /10.144.15.6
...
{code}

invalid counter shard detected
------------------------------

                Key: CASSANDRA-4417
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4417
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 1.1.1
        Environment: Amazon Linux
           Reporter: Senthilvel Rangaswamy

Seeing errors like these:

2012-07-06_07:00:27.22662 ERROR 07:00:27,226 invalid counter shard detected; (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 13) and (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 1) differ only in count; will pick highest to self-heal; this indicates a bug or corruption generated a bad counter shard

What does it mean?
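[Editor's note] The log line reports two shards of the same counter — shown as (counter id, clock, count) tuples — that agree on id and clock but disagree on count, which should be impossible; as the message says, the higher count is kept to self-heal. A toy model of that reconciliation rule (illustrative only, not Cassandra's CounterContext code):

```java
// Sketch of counter-shard reconciliation: a shard is (counterId, clock,
// count). Normally the shard with the higher clock wins; two shards with
// equal clocks should carry equal counts, so a mismatch is the "invalid
// counter shard" case, healed by keeping the highest count.
public class ShardSketch {
    record Shard(String counterId, long clock, long count) {}

    static Shard merge(Shard a, Shard b) {
        if (!a.counterId().equals(b.counterId()))
            throw new IllegalArgumentException("shards belong to different counters");
        if (a.clock() != b.clock())
            return a.clock() > b.clock() ? a : b;     // normal case: higher clock wins
        if (a.count() != b.count())                   // equal clocks, unequal counts:
            System.err.println("invalid counter shard detected; will pick highest to self-heal");
        return a.count() >= b.count() ? a : b;        // self-heal: keep highest count
    }

    public static void main(String[] args) {
        Shard x = new Shard("6be74ab0", 1, -1);
        Shard y = new Shard("6be74ab0", 1, 1);
        System.out.println(merge(x, y).count()); // 1
    }
}
```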
[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected
[ https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421440#comment-13421440 ]

Michael Theroux commented on CASSANDRA-4417:
--------------------------------------------

Ignore that comment about row caching. I see these errors in the log dating back to the 11th of July (long before we enabled row caching).
[jira] [Commented] (CASSANDRA-4461) Schema no longer modifiable
[ https://issues.apache.org/jira/browse/CASSANDRA-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421441#comment-13421441 ] Pavel Yaskevich commented on CASSANDRA-4461: Marco, can you try to apply CASSANDRA-4432 and test? I bet that is the problem... Schema no longer modifiable --- Key: CASSANDRA-4461 URL: https://issues.apache.org/jira/browse/CASSANDRA-4461 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.2 Environment: Ubuntu LTS 12.04, Java Sun Oracle 6 (jre1.6.0_33) Reporter: Marco Matarazzo Assignee: Pavel Yaskevich Fix For: 1.1.3 Attachments: system.log It seems that after a while, our keyspaces can't be modified. They accept data, read and write works, truncate works, just keyspace and every column family inside it can't be dropped, created or altered. This happens both on our 3 nodes test cluster and on our single-node dev cluster. With 1.1.1, a rolling restart of the cluster seemed to solve the issue. This no longer happens with 1.1.2 . user@server:~$ cqlsh -3 -k goh_master cassandra1 Connected to GOH Cluster at cassandra1:9160. [cqlsh 2.2.0 | Cassandra 1.1.2 | CQL spec 3.0.0 | Thrift protocol 19.32.0] Use HELP for help. cqlsh:goh_master drop columnfamily agents_blueprints; cqlsh:goh_master user@server:~$ cqlsh -3 -k goh_master cassandra1 Connected to GOH Cluster at cassandra1:9160. [cqlsh 2.2.0 | Cassandra 1.1.2 | CQL spec 3.0.0 | Thrift protocol 19.32.0] Use HELP for help. 
cqlsh:goh_master DESCRIBE COLUMNFAMILY agents_blueprints CREATE TABLE agents_blueprints ( agent_id ascii, archetype ascii, proto_id ascii, PRIMARY KEY (agent_id, archetype) ) WITH COMPACT STORAGE AND comment='' AND caching='KEYS_ONLY' AND read_repair_chance=0.10 AND gc_grace_seconds=864000 AND min_compaction_threshold=4 AND max_compaction_threshold=32 AND replicate_on_write='true' AND compaction_strategy_class='SizeTieredCompactionStrategy' AND compression_parameters:sstable_compression='SnappyCompressor'; cqlsh:goh_master
[jira] [Updated] (CASSANDRA-2116) Separate out filesystem errors from generic IOErrors
[ https://issues.apache.org/jira/browse/CASSANDRA-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-2116: - Attachment: CASSANDRA-2116-v4.patch I reverted the change to the SSTII. Can't reliably throw any FSE from there, nor from the calling code. SSTW methods' callers that used to catch IOE and do writer.abort() now catch FSWE and do the same. Any call that wasn't performing any cleanup was left as it was - without the cleanup. And most FSRE were converted to RTE since in most cases you can't be sure that the FS was indeed the reason for the IOE. This has to be all this time. Separate out filesystem errors from generic IOErrors Key: CASSANDRA-2116 URL: https://issues.apache.org/jira/browse/CASSANDRA-2116 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Chris Goffinet Assignee: Aleksey Yeschenko Fix For: 1.2 Attachments: 0001-Issue-2116-Replace-some-IOErrors-with-more-informati.patch, 0001-Separate-out-filesystem-errors-from-generic-IOErrors.patch, CASSANDRA-2116-v3.patch, CASSANDRA-2116-v4.patch We throw IOErrors everywhere today in the codebase. We should separate out specific errors such as (reading, writing) from filesystem into FSReadError and FSWriteError. This makes it possible in the next ticket to allow certain failure modes (kill the server if reads or writes fail to disk).
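The caller pattern described above - code that used to catch IOException and call writer.abort() now catches a filesystem-specific write error and does the same - can be sketched as follows. This is an illustrative Python model, not Cassandra's Java code; FSWriteError and SSTableWriter here are stand-ins for the real classes:

```python
# Sketch of wrapping low-level I/O failures in a filesystem-specific error
# type, so callers can tell "the disk is failing" apart from generic I/O
# problems and react (e.g. kill the server). Names are illustrative.

class FSWriteError(Exception):
    """Raised when a write fails for a filesystem-attributable reason."""
    def __init__(self, cause, path):
        super().__init__(f"failed writing {path}: {cause}")
        self.path = path

class SSTableWriter:
    def __init__(self, path):
        self.path = path
        self.aborted = False

    def append(self, data):
        try:
            with open(self.path, "ab") as f:
                f.write(data)
        except OSError as e:  # was: a bare IOError propagating upward
            raise FSWriteError(e, self.path) from e

    def abort(self):
        # Clean up partially written state; callers invoke this on failure.
        self.aborted = True

def write_sstable(writer, rows):
    try:
        for row in rows:
            writer.append(row)
    except FSWriteError:
        writer.abort()  # same cleanup that used to run on IOException
        raise
```

The point of the dedicated type is that only call sites which genuinely know the filesystem is at fault raise it; ambiguous failures stay generic, matching the FSRE-to-RTE conversions mentioned in the comment.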
[jira] [Comment Edited] (CASSANDRA-2116) Separate out filesystem errors from generic IOErrors
[ https://issues.apache.org/jira/browse/CASSANDRA-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421444#comment-13421444 ] Aleksey Yeschenko edited comment on CASSANDRA-2116 at 7/24/12 2:38 PM: --- I reverted changes to the SSTII constructor. Can't reliably throw any FSE from there, from the calling code either. SSTW methods' callers that used to catch IOE and do writer.abort() now catch FSWE and do the same. Any call that wasn't performing any cleanup was left as it was - without the cleanup. And most FSRE were converted to RTE since in most cases you can't be sure that FS was indeed the reason for the IOE. This has to be all this time. was (Author: iamaleksey): I reverted changes to the SSTII constuctor. Can't reliably throw any FSE from there, from the calling code either. SSTW methods' callers that used to catch IOE and do writer.abort() now catch FSWE and do the same. Any call that wasn't performing any cleanup was left as it was - without the cleanup. And most FSRE were converted to RTE since in most cases you can't be sure that FS was indeed the reason for the IOE. This hast to be all this time.
[jira] [Commented] (CASSANDRA-4461) Schema no longer modifiable
[ https://issues.apache.org/jira/browse/CASSANDRA-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421462#comment-13421462 ] Marco Matarazzo commented on CASSANDRA-4461: It seems that updating the schema from a second node (and maybe the millions of nodetool repair/cleanup/everything) unlocked everything. As soon as the problem happens again (and I expect it soon), I will more than happily try to apply the patch and see if it solves the problem.
[jira] [Commented] (CASSANDRA-4461) Schema no longer modifiable
[ https://issues.apache.org/jira/browse/CASSANDRA-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421471#comment-13421471 ] Pavel Yaskevich commented on CASSANDRA-4461: How about we resolve this one for now and you re-open it if it happens again?
[jira] [Commented] (CASSANDRA-4461) Schema no longer modifiable
[ https://issues.apache.org/jira/browse/CASSANDRA-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421472#comment-13421472 ] Marco Matarazzo commented on CASSANDRA-4461: Absolutely OK for me.
[jira] [Resolved] (CASSANDRA-4461) Schema no longer modifiable
[ https://issues.apache.org/jira/browse/CASSANDRA-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Yaskevich resolved CASSANDRA-4461. Resolution: Duplicate Resolving as a duplicate of CASSANDRA-4432 until further notice.
[jira] [Commented] (CASSANDRA-4444) Failure to delete column families
[ https://issues.apache.org/jira/browse/CASSANDRA-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421485#comment-13421485 ] Pavel Yaskevich commented on CASSANDRA-4444: Any news on this one, David? Failure to delete column families - Key: CASSANDRA-4444 URL: https://issues.apache.org/jira/browse/CASSANDRA-4444 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.1.2 Environment: 2 node cluster running on Ubuntu Precise Reporter: David B Assignee: Pavel Yaskevich I have a two node cluster, and one keyspace defined as follows: create keyspace SampleKeyspace with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = {replication_factor:2}; I then create a column family as follows: create column family SampleFamily with caching = 'keys_only' and key_validation_class = 'LongType' and compression_options = { sstable_compression: SnappyCompressor, chunk_length_kb: 64 } I stream SSTables through SStableLoader. After the load is complete, compaction begins. During this time, I request a drop of the family through cassandra-cli using drop column family SampleFamily. Cassandra-cli responds that schemas are in agreement. Looking on the file system, however, the full set of data files are still found under data/SampleFamily (in addition to the snapshot created on drop family). There are no errors in either system or output logs.
[jira] [Commented] (CASSANDRA-4411) Assertion with LCS compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421487#comment-13421487 ] Mina Naguib commented on CASSANDRA-4411: Things appear better after an offline scrub. While the scrubbing itself was uneventful, at the very end, during "Checking leveled manifest", it found 14 sstables in level 3 and level 4 that were problematic and moved them back to level 0. I started the node back up and all (+/- 10) compactions ran successfully. I'll keep an eye on it, and if it stays well I'll do the same to the other nodes. Perhaps I'll try my luck with sstablescrub --manifest-check to see if I can keep the downtime to a minimum.
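The scrub's "moved them back to level 0" step follows from the LCS invariant: in every level above L0, sstables must cover disjoint key ranges, while L0 alone may contain overlapping tables. A rough sketch of such a manifest check (illustrative Python; the real LeveledManifest logic differs in detail):

```python
# Illustrative check of the LCS invariant: in every level above L0, the
# sstables' key ranges must not overlap. A scrub-style repair demotes
# violators to level 0, where overlap is allowed.

def check_manifest(levels):
    """levels: dict mapping level -> list of (first_key, last_key) ranges.
    Returns (level, range) pairs violating the non-overlap invariant."""
    demoted = []
    for level, ranges in levels.items():
        if level == 0:
            continue  # L0 sstables may overlap by design
        prev_last = None
        for first, last in sorted(ranges):
            if prev_last is not None and first <= prev_last:
                # Overlaps the previously accepted range: flag for demotion,
                # and keep prev_last at the last accepted range's end.
                demoted.append((level, (first, last)))
            else:
                prev_last = last
    return demoted
```

An assertion during promote is essentially this invariant being enforced at compaction time, which is why a manifest with overlapping tables in L3/L4 trips it.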
[jira] [Commented] (CASSANDRA-4459) pig driver casts ints as bytearray
[ https://issues.apache.org/jira/browse/CASSANDRA-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421502#comment-13421502 ] Jeremy Hanna commented on CASSANDRA-4459: - fwiw - see https://issues.apache.org/jira/browse/PIG-2764 for the addition of BigInteger and BigDecimal as built-in pig data types. Also, I'm not sure how much of an issue it is for users to use pig ints for now because I don't know how many users know that the cassandra IntegerType is actually a BigInteger and not just a regular Integer. That's not to say that it's not dangerous to try to put a BigInteger value into an Integer type. It's just that I don't know if it's common knowledge that Cassandra uses a BigInteger underneath. pig driver casts ints as bytearray -- Key: CASSANDRA-4459 URL: https://issues.apache.org/jira/browse/CASSANDRA-4459 Project: Cassandra Issue Type: Bug Environment: C* 1.1.2 embedded in DSE Reporter: Cathy Daw Assignee: Brandon Williams Fix For: 1.1.3 Attachments: 4459.txt we seem to be auto-mapping C* int columns to bytearray in Pig, and farther down I can't seem to find a way to cast that to int and do an average. 
{code} grunt cassandra_users = LOAD 'cassandra://cqldb/users' USING CassandraStorage(); grunt dump cassandra_users; (bobhatter,(act,22),(fname,bob),(gender,m),(highSchool,Cal High),(lname,hatter),(sat,500),(state,CA),{}) (alicesmith,(act,27),(fname,alice),(gender,f),(highSchool,Tuscon High),(lname,smith),(sat,650),(state,AZ),{}) // notice sat and act columns are bytearray values grunt describe cassandra_users; cassandra_users: {key: chararray,act: (name: chararray,value: bytearray),fname: (name: chararray,value: chararray), gender: (name: chararray,value: chararray),highSchool: (name: chararray,value: chararray),lname: (name: chararray,value: chararray), sat: (name: chararray,value: bytearray),state: (name: chararray,value: chararray),columns: {(name: chararray,value: chararray)}} grunt users_by_state = GROUP cassandra_users BY state; grunt dump users_by_state; ((state,AX),{(aoakley,(highSchool,Phoenix High),(lname,Oakley),state,(act,22),(sat,500),(gender,m),(fname,Anne),{})}) ((state,AZ),{(gjames,(highSchool,Tuscon High),(lname,James),state,(act,24),(sat,650),(gender,f),(fname,Geronomo),{})}) ((state,CA),{(philton,(highSchool,Beverly High),(lname,Hilton),state,(act,37),(sat,220),(gender,m),(fname,Paris),{}),(jbrown,(highSchool,Cal High),(lname,Brown),state,(act,20),(sat,700),(gender,m),(fname,Jerry),{})}) // Error - use explicit cast grunt user_avg = FOREACH users_by_state GENERATE cassandra_users.state, AVG(cassandra_users.sat); grunt dump user_avg; 2012-07-22 17:15:04,361 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1045: Could not infer the matching function for org.apache.pig.builtin.AVG as multiple or none of them fit. Please use an explicit cast. 
// Unable to cast as int grunt user_avg = FOREACH users_by_state GENERATE cassandra_users.state, AVG((int)cassandra_users.sat); grunt dump user_avg; 2012-07-22 17:07:39,217 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1052: Cannot cast bag with schema sat: bag({name: chararray,value: bytearray}) to int {code} *Seed data in CQL* {code} CREATE KEYSPACE cqldb with strategy_class = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options:replication_factor=3; use cqldb; CREATE COLUMNFAMILY users ( KEY text PRIMARY KEY, fname text, lname text, gender varchar, act int, sat int, highSchool text, state varchar); insert into users (KEY, fname, lname, gender, act, sat, highSchool, state) values (gjames, Geronomo, James, f, 24, 650, 'Tuscon High', 'AZ'); insert into users (KEY, fname, lname, gender, act, sat, highSchool, state) values (aoakley, Anne, Oakley, m , 22, 500, 'Phoenix High', 'AX'); insert into users (KEY, fname, lname, gender, act, sat, highSchool, state) values (jbrown, Jerry, Brown, m , 20, 700, 'Cal High', 'CA'); insert into users (KEY, fname, lname, gender, act, sat, highSchool, state) values (philton, Paris, Hilton, m , 37, 220, 'Beverly High', 'CA'); select * from users; {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4459) pig driver casts ints as bytearray
[ https://issues.apache.org/jira/browse/CASSANDRA-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421508#comment-13421508 ] Brandon Williams commented on CASSANDRA-4459: - bq. casting IntegerType to pig's [32bit] int sounds broken to me. I agree, but the conundrum we're in now is that I'm almost certain someone is relying on the current behavior and always using ints under 2**31, so changing it now would break things for them. Really, the only danger is exceeding that limit, which I suspect people who create int columns never intend to do (or they'd make them longs.) So I propose that when Pig has a BigInteger, we switch to that, allowing a smooth transition (unless you're exceeding 2**31 already, which to my knowledge no one is.)
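The 2**31 concern comes from a representation gap: Cassandra's IntegerType holds a variable-length two's-complement big-endian integer (a Java BigInteger), while Pig's int is a fixed 32-bit type. A small illustrative sketch in Python (function names are hypothetical, not from either codebase):

```python
# Cassandra's IntegerType stores a variable-length two's-complement
# big-endian integer (a Java BigInteger), so its values are not bounded
# by 32 bits. Sketch of decoding, and of the hazard in mapping it to
# Pig's 32-bit int. Function names are illustrative.

def decode_integer_type(raw: bytes) -> int:
    """Decode IntegerType bytes into an arbitrary-precision integer."""
    return int.from_bytes(raw, byteorder="big", signed=True)

def fits_pig_int(value: int) -> bool:
    """Pig's int is a signed 32-bit integer."""
    return -2**31 <= value <= 2**31 - 1

small = decode_integer_type((650).to_bytes(2, "big", signed=True))
big = decode_integer_type((2**31).to_bytes(5, "big", signed=True))
# `small` fits a Pig int; `big` does not, which is why an unconditional
# IntegerType -> int mapping would silently go wrong past 2**31 - 1.
```

This is why mapping IntegerType to a Pig bytearray (the current behavior) is lossless but awkward, and mapping it to Pig's int is convenient but only safe below the 32-bit limit.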
[jira] [Comment Edited] (CASSANDRA-4459) pig driver casts ints as bytearray
[ https://issues.apache.org/jira/browse/CASSANDRA-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421508#comment-13421508 ] Brandon Williams edited comment on CASSANDRA-4459 at 7/24/12 4:10 PM: -- bq. casting IntegerType to pig's [32bit] int sounds broken to me. I agree, but the conundrum we're in now is, I'm almost certain someone is relying on the current behavior to work and always using ints under 2**31, so changing it now would break things for them, and really the only danger is exceeding that limit, which I suspect people who create int columns never intend to do (or they'd make them longs.) So, I propose that when pig has a BigInteger, we switch to that, allowing a smooth transition (unless you're exceeding 2**31 already, which to my knowledge no one is.) was (Author: brandon.williams): bq. casting IntegerType to pig's [32bit] int sounds broken to me. I agree, but the conundrum we're in now is, I'm almost certain someone is relying on the current behavior to work and always using ints under 2**31, so changing it now would break things for them, and really the only danger is exceeding that limit, which I suspect people who create int columns never intend to do (or they'd make them longs.) So, I propose that when pig has a BigInteger, we use switch to that, allowing a smooth transition (unless you're exceeding 2**31 already, which to my knowledge no one is.) pig driver casts ints as bytearray -- Key: CASSANDRA-4459 URL: https://issues.apache.org/jira/browse/CASSANDRA-4459 Project: Cassandra Issue Type: Bug Environment: C* 1.1.2 embedded in DSE Reporter: Cathy Daw Assignee: Brandon Williams Fix For: 1.1.3 Attachments: 4459.txt we seem to be auto-mapping C* int columns to bytearray in Pig, and farther down I can't seem to find a way to cast that to int and do an average. 
[jira] [Commented] (CASSANDRA-4459) pig driver casts ints as bytearray
[ https://issues.apache.org/jira/browse/CASSANDRA-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421510#comment-13421510 ] Jonathan Ellis commented on CASSANDRA-4459: --- That's reasonable, but we should still use int32type for cli population :) pig driver casts ints as bytearray -- Key: CASSANDRA-4459 URL: https://issues.apache.org/jira/browse/CASSANDRA-4459 Project: Cassandra Issue Type: Bug Environment: C* 1.1.2 embedded in DSE Reporter: Cathy Daw Assignee: Brandon Williams Fix For: 1.1.3 Attachments: 4459.txt we seem to be auto-mapping C* int columns to bytearray in Pig, and farther down I can't seem to find a way to cast that to int and do an average. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
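Brandon's 2**31 concern can be made concrete. A sketch — Python stands in here, with `struct`'s `'i'` playing the role of Pig's 32-bit int — of what happens when an arbitrary-precision IntegerType value is forced through a 32-bit cast:

```python
import struct

def cast_to_pig_int32(n: int) -> int:
    """Illustrates the hazard discussed above: forcing an arbitrary-precision
    value through a 32-bit signed int, as a cast to Pig's 'int' would.
    This is a sketch, not the driver's code."""
    return struct.unpack(">i", struct.pack(">I", n & 0xFFFFFFFF))[0]

assert cast_to_pig_int32(650) == 650              # everyday values survive
assert cast_to_pig_int32(2**31 - 1) == 2**31 - 1  # largest safe value
assert cast_to_pig_int32(2**31) == -(2**31)       # silently wraps negative
```

Anyone staying under 2**31 never notices, which is exactly why switching to an arbitrary-precision type later (BigInteger in Pig) would be a smooth transition.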
[jira] [Commented] (CASSANDRA-3974) Per-CF TTL
[ https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421558#comment-13421558 ] Kirk True commented on CASSANDRA-3974: -- Sylvain - the main issue is that the code isn't structured in such a way that a CFMetaData object is available. Neither the code for QueryFilter.isRelevant nor its callers have access to a CFMetaData. Can you think of a way to get the CFMetaData in there or a different way to structure the code in general? Per-CF TTL -- Key: CASSANDRA-3974 URL: https://issues.apache.org/jira/browse/CASSANDRA-3974 Project: Cassandra Issue Type: New Feature Affects Versions: 1.2 Reporter: Jonathan Ellis Assignee: Kirk True Priority: Minor Fix For: 1.2 Attachments: trunk-3974.txt, trunk-3974v2.txt, trunk-3974v3.txt Per-CF TTL would allow compaction optimizations (drop an entire sstable's worth of expired data) that we can't do with per-column. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
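For context on what QueryFilter.isRelevant would need the CFMetaData for, here is a hypothetical sketch of a per-CF default TTL check; the class and field names are illustrative, not Cassandra's:

```python
# Sketch of per-CF TTL liveness: a column with no TTL of its own falls back
# to a default TTL on the column family's metadata. All names are made up.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CFMetaData:
    default_ttl_seconds: Optional[int]  # None means no per-CF TTL

@dataclass
class Column:
    timestamp: int               # write time, seconds since epoch
    ttl_seconds: Optional[int]   # per-column TTL, None if unset

def is_live(column: Column, cf_meta: CFMetaData, now: int) -> bool:
    # Per-column TTL wins; otherwise fall back to the CF-wide default.
    ttl = column.ttl_seconds if column.ttl_seconds is not None else cf_meta.default_ttl_seconds
    if ttl is None:
        return True  # no TTL at either level: the column never expires
    return now < column.timestamp + ttl

cf = CFMetaData(default_ttl_seconds=3600)
assert is_live(Column(timestamp=1000, ttl_seconds=None), cf, now=2000)      # within CF default
assert not is_live(Column(timestamp=1000, ttl_seconds=None), cf, now=5000)  # expired via CF default
assert not is_live(Column(timestamp=1000, ttl_seconds=60), cf, now=2000)    # column TTL wins
```

The structural problem Kirk describes is precisely that the relevance check only sees the column, not the `cf_meta` argument this sketch takes for granted.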
[jira] [Updated] (CASSANDRA-3974) Per-CF TTL
[ https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirk True updated CASSANDRA-3974: - Attachment: trunk-3974v4.txt Per-CF TTL -- Key: CASSANDRA-3974 URL: https://issues.apache.org/jira/browse/CASSANDRA-3974 Project: Cassandra Issue Type: New Feature Affects Versions: 1.2 Reporter: Jonathan Ellis Assignee: Kirk True Priority: Minor Fix For: 1.2 Attachments: trunk-3974.txt, trunk-3974v2.txt, trunk-3974v3.txt, trunk-3974v4.txt Per-CF TTL would allow compaction optimizations (drop an entire sstable's worth of expired data) that we can't do with per-column. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Alves updated CASSANDRA-3564: --- Attachment: 3564.patch

Patch that adds killing/waiting to the do_stop function of the debian init script. The script does not actually call flushTablesAndExit(); instead, jsvc's stop() hook does, after killing the RPC and native servers. The node is given 100 secs to finish the flush; I'm not sure this is a good default setting.

flush before shutdown so restart is faster
--

Key: CASSANDRA-3564 URL: https://issues.apache.org/jira/browse/CASSANDRA-3564 Project: Cassandra Issue Type: New Feature Components: Packaging Reporter: Jonathan Ellis Assignee: David Alves Priority: Minor Fix For: 1.2 Attachments: 3564.patch, 3564.patch

Cassandra handles flush in its shutdown hook for durable_writes=false CFs (otherwise we're *guaranteed* to lose data) but leaves it up to the operator otherwise. I'd rather leave it that way to offer these semantics:

- cassandra stop = shutdown nicely [explicit flush, then kill -int]
- kill -INT = shutdown faster but don't lose any updates [current behavior]
- kill -KILL = lose most recent writes unless durable_writes=true and batch commits are on [also current behavior]

But if it's not reasonable to use nodetool from the init script then I guess we can just make the shutdown hook flush everything.

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
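The stop-hook behavior described above — flush everything, but give up after a deadline — can be sketched as follows. This is illustrative Python, not the patch itself; `flush_all` stands in for a call like flushTablesAndExit():

```python
# Sketch: run the flush in a worker thread and bound the wait (the patch
# uses 100 seconds) so shutdown can proceed even if a flush hangs.
import threading
import time

def stop_with_flush_deadline(flush_all, deadline_seconds: float) -> bool:
    """Start flush_all and wait up to deadline_seconds for it.
    Returns True if the flush finished within the deadline."""
    worker = threading.Thread(target=flush_all, daemon=True)
    worker.start()
    worker.join(timeout=deadline_seconds)
    return not worker.is_alive()

assert stop_with_flush_deadline(lambda: None, deadline_seconds=5.0) is True          # fast flush finishes
assert stop_with_flush_deadline(lambda: time.sleep(2), deadline_seconds=0.1) is False  # hung flush times out
```

Whether 100 seconds is the right bound is exactly the open question in the comment: too short risks an incomplete flush, too long annoys operators.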
git commit: cqlsh: add a COPY TO command Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4434
Updated Branches:
  refs/heads/cassandra-1.1 41c9ba63d -> 9a6339476

cqlsh: add a COPY TO command

Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4434

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a633947
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a633947
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a633947

Branch: refs/heads/cassandra-1.1
Commit: 9a63394765de28160d576c9285be68587e222a86
Parents: 41c9ba6
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jul 24 13:57:19 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jul 24 13:57:19 2012 -0500

--
 CHANGES.txt |   1 +
 bin/cqlsh   | 126 -
 2 files changed, 105 insertions(+), 22 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a633947/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0885387..638574c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -23,6 +23,7 @@ Merged from 1.0:
  * Fix LCS splitting sstable base on uncompressed size (CASSANDRA-4419)
  * Bootstraps that fail are detected upon restart and will retry safely
    without needing to delete existing data first (CASSANDRA-4427)
+ * (cqlsh) add a COPY TO command to copy a CF to a CSV file (CASSANDRA-4434)
 1.1.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a633947/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 574d49b..c67a818 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -224,7 +224,8 @@ cqlsh_extra_syntax_rules = r'''
 copyCommand ::= COPY cf=columnFamilyName
                   ( ( [colnames]=colname ( , [colnames]=colname )* ) )?
-                  FROM ( fname=stringLiteral | STDIN )
+                  ( dir=FROM ( fname=stringLiteral | STDIN )
+                  | dir=TO ( fname=stringLiteral | STDOUT ) )
                   ( WITH copyOption ( AND copyOption )* )?
               ;
@@ -303,12 +304,16 @@ def complete_copy_column_names(ctxt, cqlsh):
         return [colnames[0]]
     return set(colnames[1:]) - set(existcols)

-COPY_OPTIONS = ('DELIMITER', 'QUOTE', 'ESCAPE', 'HEADER')
+COPY_OPTIONS = ('DELIMITER', 'QUOTE', 'ESCAPE', 'HEADER', 'ENCODING', 'NULL')

 @cqlsh_syntax_completer('copyOption', 'optnames')
 def complete_copy_options(ctxt, cqlsh):
     optnames = map(str.upper, ctxt.get_binding('optnames', ()))
-    return set(COPY_OPTIONS) - set(optnames)
+    direction = ctxt.get_binding('dir').upper()
+    opts = set(COPY_OPTIONS) - set(optnames)
+    if direction == 'FROM':
+        opts -= ('ENCODING', 'NULL')
+    return opts

 @cqlsh_syntax_completer('copyOption', 'optvals')
 def complete_copy_opt_values(ctxt, cqlsh):
@@ -448,13 +453,13 @@ def unix_time_from_uuid1(u):
     return (u.get_time() - 0x01B21DD213814000) / 1000.0

 def format_value(val, casstype, output_encoding, addcolor=False, time_format='',
-                 float_precision=3, colormap=DEFAULT_VALUE_COLORS):
+                 float_precision=3, colormap=DEFAULT_VALUE_COLORS, nullval='null'):
     color = colormap['default']
     coloredval = None
     displaywidth = None
     if val is None:
-        bval = 'null'
+        bval = nullval
         color = colormap['error']
     elif isinstance(val, DecodeError):
         casstype = 'BytesType'
@@ -727,7 +732,7 @@ class Shell(cmd.Cmd):
     def get_column_names(self, ksname, cfname):
         if ksname is None:
             ksname = self.current_keyspace
-        if self.cqlver_atleast(3):
+        if ksname != 'system' and self.cqlver_atleast(3):
             return self.get_column_names_from_layout(ksname, cfname)
         else:
             return self.get_column_names_from_cfdef(ksname, cfname)
@@ -1433,6 +1438,9 @@ class Shell(cmd.Cmd):
         COPY table_name [ ( column [, ...] ) ]
              FROM ( 'filename' | STDIN )
              [ WITH option='value' [AND ...] ];
+        COPY table_name [ ( column [, ...] ) ]
+             TO ( 'filename' | STDOUT )
+             [ WITH option='value' [AND ...] ];

         Available options and defaults:
@@ -1440,6 +1448,8 @@
           QUOTE=''     - quoting character to be used to quote fields
           ESCAPE='\'   - character to appear before the QUOTE char when quoted
           HEADER=false - whether to ignore the first line
+          ENCODING='utf8' - encoding for CSV output (COPY TO only)
+          NULL=''         - string that represents a null value (COPY TO only)

         When entering CSV data on STDIN, you can use the sequence \.
         on a line by itself to
git commit: cqlsh: add a COPY TO command Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4434
Updated Branches:
  refs/heads/trunk 795174611 -> e73b2a68b

cqlsh: add a COPY TO command

Patch by paul cannon, reviewed by brandonwilliams for CASSANDRA-4434

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e73b2a68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e73b2a68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e73b2a68

Branch: refs/heads/trunk
Commit: e73b2a68bf25597b351cbaa52700edad4ed773de
Parents: 7951746
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jul 24 13:57:19 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jul 24 14:01:42 2012 -0500

--
 CHANGES.txt |   1 +
 bin/cqlsh   | 126 -
 2 files changed, 105 insertions(+), 22 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e73b2a68/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 979e3ef..c558c3f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -58,6 +58,7 @@ Merged from 1.0:
  * Fix LCS splitting sstable base on uncompressed size (CASSANDRA-4419)
  * Bootstraps that fail are detected upon restart and will retry safely
    without needing to delete existing data first (CASSANDRA-4427)
+ * (cqlsh) add a COPY TO command to copy a CF to a CSV file (CASSANDRA-4434)
 1.1.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e73b2a68/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 574d49b..c67a818 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -224,7 +224,8 @@ cqlsh_extra_syntax_rules = r'''
 copyCommand ::= COPY cf=columnFamilyName
                   ( ( [colnames]=colname ( , [colnames]=colname )* ) )?
-                  FROM ( fname=stringLiteral | STDIN )
+                  ( dir=FROM ( fname=stringLiteral | STDIN )
+                  | dir=TO ( fname=stringLiteral | STDOUT ) )
                   ( WITH copyOption ( AND copyOption )* )?
               ;
@@ -303,12 +304,16 @@ def complete_copy_column_names(ctxt, cqlsh):
         return [colnames[0]]
     return set(colnames[1:]) - set(existcols)

-COPY_OPTIONS = ('DELIMITER', 'QUOTE', 'ESCAPE', 'HEADER')
+COPY_OPTIONS = ('DELIMITER', 'QUOTE', 'ESCAPE', 'HEADER', 'ENCODING', 'NULL')

 @cqlsh_syntax_completer('copyOption', 'optnames')
 def complete_copy_options(ctxt, cqlsh):
     optnames = map(str.upper, ctxt.get_binding('optnames', ()))
-    return set(COPY_OPTIONS) - set(optnames)
+    direction = ctxt.get_binding('dir').upper()
+    opts = set(COPY_OPTIONS) - set(optnames)
+    if direction == 'FROM':
+        opts -= ('ENCODING', 'NULL')
+    return opts

 @cqlsh_syntax_completer('copyOption', 'optvals')
 def complete_copy_opt_values(ctxt, cqlsh):
@@ -448,13 +453,13 @@ def unix_time_from_uuid1(u):
     return (u.get_time() - 0x01B21DD213814000) / 1000.0

 def format_value(val, casstype, output_encoding, addcolor=False, time_format='',
-                 float_precision=3, colormap=DEFAULT_VALUE_COLORS):
+                 float_precision=3, colormap=DEFAULT_VALUE_COLORS, nullval='null'):
     color = colormap['default']
     coloredval = None
     displaywidth = None
     if val is None:
-        bval = 'null'
+        bval = nullval
         color = colormap['error']
     elif isinstance(val, DecodeError):
         casstype = 'BytesType'
@@ -727,7 +732,7 @@ class Shell(cmd.Cmd):
     def get_column_names(self, ksname, cfname):
         if ksname is None:
             ksname = self.current_keyspace
-        if self.cqlver_atleast(3):
+        if ksname != 'system' and self.cqlver_atleast(3):
             return self.get_column_names_from_layout(ksname, cfname)
         else:
             return self.get_column_names_from_cfdef(ksname, cfname)
@@ -1433,6 +1438,9 @@ class Shell(cmd.Cmd):
         COPY table_name [ ( column [, ...] ) ]
              FROM ( 'filename' | STDIN )
              [ WITH option='value' [AND ...] ];
+        COPY table_name [ ( column [, ...] ) ]
+             TO ( 'filename' | STDOUT )
+             [ WITH option='value' [AND ...] ];

         Available options and defaults:
@@ -1440,6 +1448,8 @@
           QUOTE=''     - quoting character to be used to quote fields
           ESCAPE='\'   - character to appear before the QUOTE char when quoted
           HEADER=false - whether to ignore the first line
+          ENCODING='utf8' - encoding for CSV output (COPY TO only)
+          NULL=''         - string that represents a null value (COPY TO only)

         When entering CSV data on STDIN, you can use the sequence \.
         on a line by itself to end the data
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421691#comment-13421691 ] Brandon Williams commented on CASSANDRA-3564: - Why don't we kill before we sleep on WAIT_FOR_STOP? flush before shutdown so restart is faster -- Key: CASSANDRA-3564 URL: https://issues.apache.org/jira/browse/CASSANDRA-3564 Project: Cassandra Issue Type: New Feature Components: Packaging Reporter: Jonathan Ellis Assignee: David Alves Priority: Minor Fix For: 1.2 Attachments: 3564.patch, 3564.patch Cassandra handles flush in its shutdown hook for durable_writes=false CFs (otherwise we're *guaranteed* to lose data) but leaves it up to the operator otherwise. I'd rather leave it that way to offer these semantics: - cassandra stop = shutdown nicely [explicit flush, then kill -int] - kill -INT = shutdown faster but don't lose any updates [current behavior] - kill -KILL = lose most recent writes unless durable_writes=true and batch commits are on [also current behavior] But if it's not reasonable to use nodetool from the init script then I guess we can just make the shutdown hook flush everything. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2116) Separate out filesystem errors from generic IOErrors
[ https://issues.apache.org/jira/browse/CASSANDRA-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2116: -- Attachment: 2116-v5.txt

v5 attached. I got up through KeyIterator in the review but got bogged down in the compressed reader code, which needs some deeper fixes. (Still lots of bare IOExceptions being thrown in CM.Writer and CRAR.) Here are the changes I made while going through it:

- updated try/catch in CCE.create to wrap the bytesRemaining call specifically
- moved StorageService.getCannonicalPath into FileUtils and added an overload taking a File
- updated CFMD.reload to turn ConfigurationException into RTE instead of IOE
- introduced CorruptSSTableError where we were using RTE. We don't do anything with it yet, but this will make it much easier if we need to down the road, since RTE is used *all* over the place.
- added FSWE around setLength in CommitLogSegment
- removed extra catches in doCleanupCompaction in favor of the existing catch (Exception) block, extended to Throwable
- split FailureDetector RTE into RTE + FSWE
- lots of changes to RandomAccessReader, CompressionMetadata, CRAR (incomplete)

Separate out filesystem errors from generic IOErrors
Key: CASSANDRA-2116 URL: https://issues.apache.org/jira/browse/CASSANDRA-2116 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Chris Goffinet Assignee: Aleksey Yeschenko Fix For: 1.2 Attachments: 0001-Issue-2116-Replace-some-IOErrors-with-more-informati.patch, 0001-Separate-out-filesystem-errors-from-generic-IOErrors.patch, 2116-v5.txt, CASSANDRA-2116-v3.patch, CASSANDRA-2116-v4.patch

We throw IOErrors everywhere today in the codebase. We should separate out specific errors such as (reading, writing) from filesystem into FSReadError and FSWriteError. This makes it possible in the next ticket to allow certain failure modes (kill the server if reads or writes fail to disk).

-- This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
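The read/write split the ticket proposes can be sketched as an exception hierarchy. FSReadError/FSWriteError mirror the names in the patch, but the shape below is an illustration in Python, not Cassandra's Java code:

```python
# Sketch: distinct exception types for filesystem read vs. write failures,
# each carrying the path involved, so callers can pick a failure policy
# (e.g. kill the server when the disk is failing). Names mirror the ticket;
# the implementation here is illustrative only.

class FSError(IOError):
    def __init__(self, path, cause):
        super().__init__(f"{type(self).__name__} on {path}: {cause}")
        self.path = path
        self.cause = cause

class FSReadError(FSError):
    pass

class FSWriteError(FSError):
    pass

def read_sstable_component(path):
    """Hypothetical reader that wraps low-level errors into FSReadError."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except OSError as e:
        raise FSReadError(path, e) from e

try:
    read_sstable_component("/no/such/sstable-Data.db")
except FSReadError as e:
    assert e.path == "/no/such/sstable-Data.db"
```

Because both subclasses still extend the generic I/O error, existing catch-all handlers keep working while new code can match on the specific type.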
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421713#comment-13421713 ] David Alves commented on CASSANDRA-3564: You mean send SIGINT/SIGTERM before waiting and SIGKILL after, right? SIGINT/SIGTERM don't do a global flush because the shutdown hooks still only flush tables that have durable_writes set to false, so either we use nodetool to explicitly call flushAllTables, or do it in the Cassandra daemon, which is being called from jsvc anyway. flush before shutdown so restart is faster -- Key: CASSANDRA-3564 URL: https://issues.apache.org/jira/browse/CASSANDRA-3564 Project: Cassandra Issue Type: New Feature Components: Packaging Reporter: Jonathan Ellis Assignee: David Alves Priority: Minor Fix For: 1.2 Attachments: 3564.patch, 3564.patch Cassandra handles flush in its shutdown hook for durable_writes=false CFs (otherwise we're *guaranteed* to lose data) but leaves it up to the operator otherwise. I'd rather leave it that way to offer these semantics: - cassandra stop = shutdown nicely [explicit flush, then kill -int] - kill -INT = shutdown faster but don't lose any updates [current behavior] - kill -KILL = lose most recent writes unless durable_writes=true and batch commits are on [also current behavior] But if it's not reasonable to use nodetool from the init script then I guess we can just make the shutdown hook flush everything. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421737#comment-13421737 ] Brandon Williams commented on CASSANDRA-3564: - bq. You mean send SIGINT/SIGTERM before waiting and SIGKILL after right? Oops, I misread, disregard that. One thing that occurs to me now though, is that as an admin, I'd probably be annoyed by waiting 100s for something to shutdown (and it's not completely unreasonable for flush to take that long.) Generally I want things to shutdown asap, and startup time be damned, which I understand is counter to the purpose of this ticket, but still I think we should put the WAIT_FOR_STOP in /etc/default/cassandra as well as a flag to enable/disable this behavior. flush before shutdown so restart is faster -- Key: CASSANDRA-3564 URL: https://issues.apache.org/jira/browse/CASSANDRA-3564 Project: Cassandra Issue Type: New Feature Components: Packaging Reporter: Jonathan Ellis Assignee: David Alves Priority: Minor Fix For: 1.2 Attachments: 3564.patch, 3564.patch Cassandra handles flush in its shutdown hook for durable_writes=false CFs (otherwise we're *guaranteed* to lose data) but leaves it up to the operator otherwise. I'd rather leave it that way to offer these semantics: - cassandra stop = shutdown nicely [explicit flush, then kill -int] - kill -INT = shutdown faster but don't lose any updates [current behavior] - kill -KILL = lose most recent writes unless durable_writes=true and batch commits are on [also current behavior] But if it's not reasonable to use nodetool from the init script then I guess we can just make the shutdown hook flush everything. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4460) SystemTable.setBootstrapState always sets bootstrap state to true
[ https://issues.apache.org/jira/browse/CASSANDRA-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421741#comment-13421741 ] Brandon Williams commented on CASSANDRA-4460: - Actually, it breaks, and buildbot knows it:

{noformat}
ERROR [main] 2012-07-24 14:31:19,156 CassandraDaemon.java (line 335) Exception encountered during startup
java.lang.IndexOutOfBoundsException
	at java.nio.Buffer.checkIndex(Buffer.java:520)
	at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:340)
	at org.apache.cassandra.utils.ByteBufferUtil.toInt(ByteBufferUtil.java:414)
	at org.apache.cassandra.cql.jdbc.JdbcInt32.compose(JdbcInt32.java:94)
	at org.apache.cassandra.db.marshal.Int32Type.compose(Int32Type.java:33)
	at org.apache.cassandra.cql3.UntypedResultSet$Row.getInt(UntypedResultSet.java:104)
	at org.apache.cassandra.db.SystemTable.getBootstrapState(SystemTable.java:375)
	at org.apache.cassandra.db.SystemTable.setBootstrapState(SystemTable.java:391)
	at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:691)
	at org.apache.cassandra.service.StorageService.initServer(StorageService.java:476)
	at org.apache.cassandra.service.StorageService.initServer(StorageService.java:367)
	at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:228)
	at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:318)
	at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:361)
{noformat}

Obviously we should be using %i instead.
SystemTable.setBootstrapState always sets bootstrap state to true - Key: CASSANDRA-4460 URL: https://issues.apache.org/jira/browse/CASSANDRA-4460 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2 Reporter: Dave Brosius Priority: Trivial

{code}
public static void setBootstrapState(BootstrapState state)
{
    String req = "INSERT INTO system.%s (key, bootstrapped) VALUES ('%s', '%b')";
    processInternal(String.format(req, LOCAL_CF, LOCAL_KEY, getBootstrapState()));
    forceBlockingFlush(LOCAL_CF);
}
{code}

The third format argument, %b, is fed from getBootstrapState(), which returns an enum; %b only distinguishes null from non-null, so it collapses the enum to a null/non-null check. This would seem then to always set it to true.

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
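Java's %b prints "false" only when the argument is null or Boolean.FALSE; any other object, including an enum constant, prints "true". A Python rendition of the same mistake (the names below are hypothetical, not the SystemTable code):

```python
# Sketch of the %b failure mode: formatting an enum through a conversion
# that only checks null/non-null throws away which state it actually was.
from enum import Enum

class BootstrapState(Enum):
    NEEDS_BOOTSTRAP = 0
    COMPLETED = 1

def buggy_bootstrapped_literal(state):
    # mimics Java's %b: collapses the argument to a null/non-null check
    return "true" if state is not None else "false"

def fixed_bootstrapped_literal(state):
    # what the INSERT actually needs: a value derived from the specific state
    return "true" if state is BootstrapState.COMPLETED else "false"

# Every non-null state formats identically under the buggy version:
assert buggy_bootstrapped_literal(BootstrapState.NEEDS_BOOTSTRAP) == "true"
assert buggy_bootstrapped_literal(BootstrapState.COMPLETED) == "true"
# ...whereas the fix distinguishes them:
assert fixed_bootstrapped_literal(BootstrapState.NEEDS_BOOTSTRAP) == "false"
```

Note the snippet in the report also passes getBootstrapState() (the stored state) rather than the state argument, so even a correct format specifier would write the wrong value.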
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421757#comment-13421757 ] David Alves commented on CASSANDRA-3564: I suggest that we add a flag that, when enabled, waits for a full flush of all tables; maybe we don't even need the wait period, since the user explicitly said he wanted a full flush and should be willing to wait. When the flag is disabled, we revert to the current behavior (flush only non-durable tables). Even in the default setting (say the admin remembered he wanted to flush everything *after* the daemon started), he can always use nodetool to do so. flush before shutdown so restart is faster -- Key: CASSANDRA-3564 URL: https://issues.apache.org/jira/browse/CASSANDRA-3564 Project: Cassandra Issue Type: New Feature Components: Packaging Reporter: Jonathan Ellis Assignee: David Alves Priority: Minor Fix For: 1.2 Attachments: 3564.patch, 3564.patch Cassandra handles flush in its shutdown hook for durable_writes=false CFs (otherwise we're *guaranteed* to lose data) but leaves it up to the operator otherwise. I'd rather leave it that way to offer these semantics: - cassandra stop = shutdown nicely [explicit flush, then kill -int] - kill -INT = shutdown faster but don't lose any updates [current behavior] - kill -KILL = lose most recent writes unless durable_writes=true and batch commits are on [also current behavior] But if it's not reasonable to use nodetool from the init script then I guess we can just make the shutdown hook flush everything. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421764#comment-13421764 ] Brandon Williams commented on CASSANDRA-3564:

bq. I suggest that we add a flag that when enabled waits for the full flush for all tables and maybe we don't even need the wait period since the user explicitly told that he wanted a full flush and should be willing to wait. When the flag is disabled it reverts to the current setting (flush only non-durable tables).

That sounds reasonable, but I think we should have *some* kind of ceiling (10 minutes or something) where we kill -9 it, just in case we ever have a bug that causes us not to exit (we've had them before), so we don't hang the shutdown of the entire machine forever.
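The ceiling described above amounts to a bounded wait: give the flush a deadline, and fall back to a hard kill if it is exceeded. A hypothetical sketch (the names and the short demo timeout are assumptions for illustration; the real ceiling discussed is 10 minutes):

```java
import java.util.concurrent.*;

// Sketch: wait for a flush task up to a ceiling, then give up so the
// caller can hard-kill the process instead of hanging machine shutdown.
public class BoundedFlushWait {
    // Returns true if the flush finished within the ceiling, false if the
    // deadline was hit and the caller should force shutdown (kill -9).
    public static boolean awaitFlush(Future<?> flush, long ceiling, TimeUnit unit) {
        try {
            flush.get(ceiling, unit);
            return true;
        } catch (TimeoutException e) {
            flush.cancel(true); // stand-in for the forced kill in the init script
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // A "flush" that hangs: with a short ceiling we hit the kill path.
        Future<?> hungFlush = pool.submit(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
        });
        System.out.println(awaitFlush(hungFlush, 100, TimeUnit.MILLISECONDS)); // false
        pool.shutdownNow();
    }
}
```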
[jira] [Created] (CASSANDRA-4462) upgradesstables strips active data from sstables
Mike Heffner created CASSANDRA-4462:
Summary: upgradesstables strips active data from sstables
Key: CASSANDRA-4462
URL: https://issues.apache.org/jira/browse/CASSANDRA-4462
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.1.2
Environment: Ubuntu 11.04 64-bit
Reporter: Mike Heffner

From the discussion here: http://mail-archives.apache.org/mod_mbox/cassandra-user/201207.mbox/%3CCAOac0GCtyDqS6ocuHOuQqre4re5wKj3o-ZpUZGkGsjCHzDVbTA%40mail.gmail.com%3E

We are trying to migrate a 0.8.8 cluster to 1.1.2 by migrating the sstables from the 0.8.8 ring to a parallel 1.1.2 ring. However, every time we run the `nodetool upgradesstables` step we find it removes active data from our CFs -- leading to lost data in our application. The steps we took were:

1. Bring up a 1.1.2 ring in the same AZ/data center configuration with tokens matching the corresponding nodes in the 0.8.8 ring.
2. Create the same keyspace on 1.1.2.
3. Create each CF in the keyspace on 1.1.2.
4. Flush each node of the 0.8.8 ring.
5. Rsync each non-compacted sstable from 0.8.8 to the corresponding node in 1.1.2.
6. Move each 0.8.8 sstable into the 1.1.2 directory structure by renaming the file to the /cassandra/data/keyspace/cf/keyspace-cf... format. For example, for the keyspace Metrics and CF epochs_60 we get: cassandra/data/Metrics/epochs_60/Metrics-epochs_60-g-941-Data.db.
7. On each 1.1.2 node run `nodetool -h localhost refresh Metrics CF` for each CF in the keyspace. We notice that storage load jumps accordingly.
8. On each 1.1.2 node run `nodetool -h localhost upgradesstables`. This takes a while but appears to correctly rewrite each sstable in the new 1.1.x format. Storage load drops as sstables are compressed.

With further testing we found that we could successfully use scrub to convert our sstables.
However, any invocation of upgradesstables causes active data to be culled from the sstables:

{code}
INFO [CompactionExecutor:4] 2012-07-24 04:27:36,837 CompactionTask.java (line 109) Compacting [SSTableReader(path='/raid0/cassandra/data/Metrics/metrics_900/Metrics-metrics_900-hd-51-Data.db')]
INFO [CompactionExecutor:4] 2012-07-24 04:27:51,090 CompactionTask.java (line 221) Compacted to [/raid0/cassandra/data/Metrics/metrics_900/Metrics-metrics_900-hd-58-Data.db,]. 60,449,155 to 2,578,102 (~4% of original) bytes for 4,002 keys at 0.172562MB/s. Time: 14,248ms.
{code}

These are the steps we've tried:

WORKS: refresh - scrub
WORKS: refresh - scrub - major compaction
WORKS: refresh - scrub - cleanup
WORKS: refresh - scrub - repair
FAILS: refresh - upgradesstables
FAILS: refresh - scrub - upgradesstables
FAILS: refresh - scrub - repair - upgradesstables
FAILS: refresh - scrub - major compaction - upgradesstables

We have fewer than 143 million row keys in the CFs we're testing and none of the *-Filter.db files are 10MB, so I don't believe this is our problem: https://issues.apache.org/jira/browse/CASSANDRA-3820

The keyspace is defined as:

Keyspace: Metrics:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
  Options: [us-east:3]

And the column family that we tested with is defined as:

ColumnFamily: metrics_900
  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
  GC grace seconds: 0
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
  Compression Options:
  sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

All rows have a TTL of 30 days and gc_grace=0, so it's possible that a small number of older columns would be removed during a compaction/scrub/upgradesstables step. However, the majority should still be kept as their TTLs have not expired yet.
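The TTL reasoning in the report can be made concrete with a simplified purge predicate. This models the expected semantics only (it is not Cassandra's actual compaction code): with gc_grace_seconds = 0, a column becomes purgeable at compaction exactly when its TTL elapses, so unexpired columns must survive.

```java
// Simplified model of when an expiring column may be purged at compaction:
// only once its TTL has elapsed and gc_grace_seconds has also passed.
public class PurgeModel {
    public static boolean purgeableOnCompaction(long writtenAtSec, int ttlSec,
                                                int gcGraceSec, long nowSec) {
        long expiresAtSec = writtenAtSec + ttlSec; // when the TTL runs out
        return nowSec >= expiresAtSec + gcGraceSec;
    }

    public static void main(String[] args) {
        int thirtyDays = 30 * 24 * 3600;
        long writtenAt = 1_000_000L;
        // 10 days after write: TTL not expired, must not be purged.
        System.out.println(purgeableOnCompaction(writtenAt, thirtyDays, 0,
                writtenAt + 10L * 24 * 3600)); // false
        // 31 days after write: expired, purgeable even with gc_grace = 0.
        System.out.println(purgeableOnCompaction(writtenAt, thirtyDays, 0,
                writtenAt + 31L * 24 * 3600)); // true
    }
}
```

Under this model the reported behavior (most of the data culled at once) is inconsistent with a 30-day TTL on recently written rows, which is the reporter's point.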
[jira] [Updated] (CASSANDRA-4462) upgradesstables strips active data from sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Heffner updated CASSANDRA-4462:

Description:

From the discussion here: http://mail-archives.apache.org/mod_mbox/cassandra-user/201207.mbox/%3CCAOac0GCtyDqS6ocuHOuQqre4re5wKj3o-ZpUZGkGsjCHzDVbTA%40mail.gmail.com%3E

We are trying to migrate a 0.8.8 cluster to 1.1.2 by migrating the sstables from the 0.8.8 ring to a parallel 1.1.2 ring. However, every time we run the `nodetool upgradesstables` step we find it removes active data from our CFs -- leading to lost data in our application. The steps we took were:

1. Bring up a 1.1.2 ring in the same AZ/data center configuration with tokens matching the corresponding nodes in the 0.8.8 ring.
2. Create the same keyspace on 1.1.2.
3. Create each CF in the keyspace on 1.1.2.
4. Flush each node of the 0.8.8 ring.
5. Rsync each non-compacted sstable from 0.8.8 to the corresponding node in 1.1.2.
6. Move each 0.8.8 sstable into the 1.1.2 directory structure by renaming the file to the /cassandra/data/keyspace/cf/keyspace-cf... format. For example, for the keyspace Metrics and CF epochs_60 we get: cassandra/data/Metrics/epochs_60/Metrics-epochs_60-g-941-Data.db.
7. On each 1.1.2 node run `nodetool -h localhost refresh Metrics CF` for each CF in the keyspace. We notice that storage load jumps accordingly.
8. On each 1.1.2 node run `nodetool -h localhost upgradesstables`.

Afterwards we would test the validity of the data by comparing it with data from the original 0.8.8 ring. After an upgradesstables command the data was always incorrect. With further testing we found that we could successfully use scrub to convert our sstables without data loss.
However, any invocation of upgradesstables causes active data to be culled from the sstables:

{code}
INFO [CompactionExecutor:4] 2012-07-24 04:27:36,837 CompactionTask.java (line 109) Compacting [SSTableReader(path='/raid0/cassandra/data/Metrics/metrics_900/Metrics-metrics_900-hd-51-Data.db')]
INFO [CompactionExecutor:4] 2012-07-24 04:27:51,090 CompactionTask.java (line 221) Compacted to [/raid0/cassandra/data/Metrics/metrics_900/Metrics-metrics_900-hd-58-Data.db,]. 60,449,155 to 2,578,102 (~4% of original) bytes for 4,002 keys at 0.172562MB/s. Time: 14,248ms.
{code}

These are the steps we've tried:

WORKS: refresh - scrub
WORKS: refresh - scrub - major compaction
WORKS: refresh - scrub - cleanup
WORKS: refresh - scrub - repair
FAILS: refresh - upgradesstables
FAILS: refresh - scrub - upgradesstables
FAILS: refresh - scrub - repair - upgradesstables
FAILS: refresh - scrub - major compaction - upgradesstables

We have fewer than 143 million row keys in the CFs we're testing and none of the *-Filter.db files are 10MB, so I don't believe this is our problem: https://issues.apache.org/jira/browse/CASSANDRA-3820

The keyspace is defined as:

Keyspace: Metrics:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
  Options: [us-east:3]

And the column family that we tested with is defined as:

ColumnFamily: metrics_900
  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
  GC grace seconds: 0
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
  Compression Options:
  sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

All rows have a TTL of 30 days and gc_grace=0, so it's possible that a small number of older columns would be removed during a compaction/scrub/upgradesstables step. However, the majority should still be kept as their TTLs have not expired yet.
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421770#comment-13421770 ] David Alves commented on CASSANDRA-3564:

makes sense, will add a 10 min wait period. Still, what should be the default setting: flush everything or just the non-durable tables?
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421778#comment-13421778 ] Brandon Williams commented on CASSANDRA-3564:

bq. makes sense, will add a 10 min wait period, still what should be the default setting? flush everything or just the non durable tables?

Let's go ahead and flush everything, since startup time is an existing complaint, and maybe I'm in the minority with preferring quick shutdown :) As long as there's a way to change it, that's fine.
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421792#comment-13421792 ] David Alves commented on CASSANDRA-3564:

One last thing... since we're only changing debian/init, this will work only on Debian-flavored deployments. Maybe we should instead place the flag in cassandra.yaml and make the shutdown hook follow that option; this would make the option valid for all deployments (I'd still add the automated wait/kill for Debian deployments; non-Debian deployments would have to take care of killing themselves). wdyt?
[jira] [Created] (CASSANDRA-4463) Assertion Error on Serializing Cache provider on restart
Arya Goudarzi created CASSANDRA-4463:
Summary: Assertion Error on Serializing Cache provider on restart
Key: CASSANDRA-4463
URL: https://issues.apache.org/jira/browse/CASSANDRA-4463
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.1.2
Environment: Ubuntu 12.04 Precise, Cassandra 1.1.2, Oracle Java 6
Reporter: Arya Goudarzi

I stopped Cassandra on one of our 1.1.2 nodes and I couldn't start it any more. system.log didn't have much useful info, but output.log had this:

{code}
java.lang.AssertionError
at org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:43)
at org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:39)
at org.apache.cassandra.cache.SerializingCache.serialize(SerializingCache.java:116)
at org.apache.cassandra.cache.SerializingCache.put(SerializingCache.java:174)
at org.apache.cassandra.cache.InstrumentingCache.put(InstrumentingCache.java:45)
at org.apache.cassandra.db.ColumnFamilyStore.initRowCache(ColumnFamilyStore.java:430)
at org.apache.cassandra.db.Table.open(Table.java:124)
at org.apache.cassandra.db.Table.open(Table.java:97)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:204)
at org.apache.cassandra.service.AbstractCassandraDaemon.init(AbstractCassandraDaemon.java:254)
at com.netflix.priam.cassandra.NFThinCassandraDaemon.init(NFThinCassandraDaemon.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:212)
Cannot load daemon
Service exit with a return value of 3
{code}

Deleting the stuff in the saved_caches folder fixed the problem.
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421818#comment-13421818 ] Jonathan Ellis commented on CASSANDRA-3564:

bq. this will work only on debian flavored deployments

That's a good point... rpm is close enough, but Windows is not. I don't want to make the shutdown hook flush everything by default, because it's harder to undo than the other way around. That is to say, with the status quo, if you want to flush before shutdown, you call {{nodetool flush}}. Not a big deal. But if we made it flush-everything-by-default, then to make it NOT flush our options include:
- a yaml option as discussed above. This sucks because if you start it up with the flag set to flush-first, then you discover you want no-flush-first, it's too late. You're stuck waiting (or with kill -9, which can cause data loss in the durable_writes=false case).
- a new JMX method so we can call {{nodetool shutdown-without-flush}}. Ugh.
- non-portable options by platform (e.g. explicit signal handling for Linux)

Maybe the best thing to do is to leave the status quo and close this as wontfix.
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421820#comment-13421820 ] Brandon Williams commented on CASSANDRA-3564:

I'd prefer to do it in packaging, *but* reinventing all that logic on windows is painful, so I guess we'll go with yaml.
[jira] [Updated] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-3564:

Comment: was deleted (was: I'd prefer to do it in packaging, *but* reinventing all that logic on windows is painful, so I guess we'll go with yaml.)
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421826#comment-13421826 ] Brandon Williams commented on CASSANDRA-3564:

bq. Maybe the best thing to do is to leave the status quo and close this as wontfix.

I'm kind of leaning toward that, but we could still improve the (debian) packaging to have a 'call flush for me before shutdown' flag at least, giving you most of the spirit of the ticket if you'd like it.
[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster
[ https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421828#comment-13421828 ] David Alves commented on CASSANDRA-3564:

the patch includes the nodetool functionality to flush everything; we'd still want that, right? I can trim it to just that (and open a different ticket?).
[jira] [Updated] (CASSANDRA-4452) remove RangeKeySample from attributes in jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-4452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Prach updated CASSANDRA-4452:

Comment: was deleted (was: Sample is also a verb. We can rename it from getRangeKeySample() to sampleKeyRange(). Ok?)

remove RangeKeySample from attributes in jmx
Key: CASSANDRA-4452
URL: https://issues.apache.org/jira/browse/CASSANDRA-4452
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.1.2
Reporter: Jan Prach

RangeKeySample in the org.apache.cassandra.db:type=StorageService MBean can be really huge (over 200MB in our case). That's a problem for monitoring tools, as they're not built for that. The recommended and often-used mx4j may be a killer in this situation. It would be good enough to make RangeKeySample an operation instead of an attribute in JMX. Looking at how MBeanServer.registerMBean() works, we can do one of the following:
a) add some dummy parameter to getRangeKeySample
b) name it differently - not like a getter (next time somebody will rename it back)
c) implement an MXBean instead of an MBean (a lot of work)
Any of those work. All of them are hacks. Any better idea? BTW: It's a blocker for some installations. Our update to 1.1.2 caused downtime, downgrade back to 1.0.x, repairs, etc.
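The attribute-vs-operation distinction behind option (b) comes from standard MBean introspection: a no-arg method named getX() is exposed as attribute "X", while a method that doesn't follow the getter pattern becomes an operation, which monitoring tools won't fetch automatically. A self-contained sketch (the class and ObjectName here are illustrative, not Cassandra's StorageService):

```java
import java.lang.management.ManagementFactory;
import java.util.*;
import javax.management.*;

// Shows why renaming getRangeKeySample() to sampleKeyRange() moves it
// from a JMX attribute to a JMX operation under standard MBean rules.
public class JmxNamingDemo {
    public interface DemoMBean {
        List<String> getRangeKeySample(); // getter pattern -> attribute "RangeKeySample"
        List<String> sampleKeyRange();    // not a getter -> operation
    }

    public static class Demo implements DemoMBean {
        public List<String> getRangeKeySample() { return Arrays.asList("a", "b"); }
        public List<String> sampleKeyRange()    { return Arrays.asList("a", "b"); }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Demo");
        server.registerMBean(new Demo(), name);
        MBeanInfo info = server.getMBeanInfo(name);
        for (MBeanAttributeInfo a : info.getAttributes())
            System.out.println("attribute: " + a.getName()); // RangeKeySample
        for (MBeanOperationInfo o : info.getOperations())
            System.out.println("operation: " + o.getName()); // sampleKeyRange
    }
}
```

Generic JMX browsers and pollers read all attributes of an MBean by default, which is why a 200MB attribute hurts them while an equally large operation result is only paid for when explicitly invoked.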
[jira] [Updated] (CASSANDRA-4452) remove RangeKeySample from attributes in jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-4452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Prach updated CASSANDRA-4452:

Attachment: cassandra-1.1.2-4452.txt

Sample is also a verb. We can rename it from getRangeKeySample() to sampleKeyRange(). Ok?
[jira] [Updated] (CASSANDRA-4460) SystemTable.setBootstrapState always sets bootstrap state to true
[ https://issues.apache.org/jira/browse/CASSANDRA-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dave Brosius updated CASSANDRA-4460:

Attachment: use_bootstrap_enum_strings.txt

Switch the SystemTable bootstrap field to be text and hold BootstrapState.name(), so that the schema is more readable and easier to mutate in the future if needed. I'm not sure how upgrades from the old schema are generally handled; that part still needs to be added to the patch.
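The approach described above (storing the enum as text) round-trips cleanly via name()/valueOf(), unlike %b formatting, which collapses every enum value to "true". A minimal sketch with an illustrative enum (not the patch itself):

```java
// Sketch: persisting an enum by name() is human-readable and round-trips
// exactly, making it a safe on-disk representation for the state.
public class EnumAsText {
    // Illustrative stand-in for SystemTable's bootstrap-state enum
    public enum BootstrapState { NEEDS_BOOTSTRAP, IN_PROGRESS, COMPLETED }

    public static void main(String[] args) {
        String stored = BootstrapState.COMPLETED.name();          // what would go in the column
        BootstrapState restored = BootstrapState.valueOf(stored); // what reading it back yields
        System.out.println(stored);                               // COMPLETED
        System.out.println(restored == BootstrapState.COMPLETED); // true
    }
}
```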
[jira] [Updated] (CASSANDRA-4460) SystemTable.setBootstrapState always sets bootstrap state to true
[ https://issues.apache.org/jira/browse/CASSANDRA-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dave Brosius updated CASSANDRA-4460: Attachment: (was: use_bootstrap_enum_strings.txt)
[jira] [Updated] (CASSANDRA-4460) SystemTable.setBootstrapState always sets bootstrap state to true
[ https://issues.apache.org/jira/browse/CASSANDRA-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dave Brosius updated CASSANDRA-4460: Attachment: use_bootstrap_enum_strings.txt
[jira] [Updated] (CASSANDRA-4406) Update stress for CQL3
[ https://issues.apache.org/jira/browse/CASSANDRA-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Alves updated CASSANDRA-4406: --- Attachment: 4406.patch this patch does the following: - updates the cql version to 3.0 - adds column names and types to the cfdef - uses those column names to build the statement (the prepared statement previously used ?=? pairs for columns; it now uses C.. = ? pairs) Update stress for CQL3 -- Key: CASSANDRA-4406 URL: https://issues.apache.org/jira/browse/CASSANDRA-4406 Project: Cassandra Issue Type: Improvement Components: Tools Affects Versions: 1.1.0 Reporter: Sylvain Lebresne Assignee: David Alves Labels: stress Fix For: 1.2 Attachments: 4406.patch Stress does not support CQL3. We should add support for it so that: # we can benchmark CQL3 # we can benchmark CASSANDRA-2478
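The change from "? = ?" pairs to "name = ?" pairs reflects a CQL3 constraint: bind markers can stand in for values but not for column identifiers, so the statement must be built with column names baked in. A hedged sketch of the idea (class and method names assumed, not the stress tool's actual code; "C0", "C1" mimic stress's generated column naming):

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;

public class Cql3StatementBuilder {
    /**
     * Build a CQL3 prepared UPDATE: each column contributes a
     * "name = ?" pair; only the values remain as bind markers.
     */
    public static String buildUpdate(String table, String keyColumn, List<String> columns) {
        StringJoiner pairs = new StringJoiner(", ");
        for (String c : columns)
            pairs.add(c + " = ?");  // column name is literal, value is bound
        return "UPDATE " + table + " SET " + pairs + " WHERE " + keyColumn + " = ?";
    }

    public static void main(String[] args) {
        System.out.println(buildUpdate("Standard1", "key", Arrays.asList("C0", "C1")));
        // UPDATE Standard1 SET C0 = ?, C1 = ? WHERE key = ?
    }
}
```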
[jira] [Commented] (CASSANDRA-4406) Update stress for CQL3
[ https://issues.apache.org/jira/browse/CASSANDRA-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421931#comment-13421931 ] David Alves commented on CASSANDRA-4406: What I couldn't figure out was how to use UUID column names (required when the TimeUUID comparator is set with the U option); that option currently throws an UnsupportedOperationException.
[jira] [Created] (CASSANDRA-4464) expose 2I CFs to the rest of nodetool
Jonathan Ellis created CASSANDRA-4464: - Summary: expose 2I CFs to the rest of nodetool Key: CASSANDRA-4464 URL: https://issues.apache.org/jira/browse/CASSANDRA-4464 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Jonathan Ellis Assignee: Brandon Williams Priority: Minor Fix For: 1.1.4 This was begun in CASSANDRA-4063. We should extend it to scrub as well, and probably compact, since any sane way to do it for scrub should give us the other for free. Not sure how easy these will be, since they go through CompactionManager via StorageProxy. I think getValidColumnFamilies could be updated to check for index CFs with dot notation. (Other operations like flush or snapshot don't make sense for 2I CFs in isolation from their parent.)
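The dot-notation idea amounts to letting nodetool arguments name an index CF as "parent.index" and splitting that apart before resolving. A hypothetical sketch, assuming invented names (this is not getValidColumnFamilies itself, just the parsing step it would need):

```java
public class IndexCfResolver {
    /**
     * Split a nodetool CF argument: "users.by_email" yields the parent CF
     * and the index name; a plain "users" yields a null index component.
     */
    public static String[] resolve(String cfArg) {
        int dot = cfArg.indexOf('.');
        if (dot < 0)
            return new String[] { cfArg, null };           // ordinary CF
        return new String[] { cfArg.substring(0, dot),     // parent CF
                              cfArg.substring(dot + 1) };  // index CF name
    }

    public static void main(String[] args) {
        String[] r = resolve("users.by_email");
        System.out.println(r[0] + " / " + r[1]); // users / by_email
    }
}
```

With a split like this, scrub and compact could map the index component onto the parent CF's secondary-index stores while plain CF names keep their current behavior.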